Meta announces that it may discontinue development of AI that it deems too risky



This article, originally posted in Japanese at 11:24 on Feb 04, 2025, may contain some machine-translated parts.

Meta has announced the 'Frontier AI Framework,' a comprehensive policy document governing the development and release of cutting-edge AI. In it, Meta classifies the risks posed by AI into three levels and clearly states that if a system is judged to pose the most serious level of risk, it will halt development and restrict access to a limited number of experts under strict controls.

Frontier AI Framework
(PDF file) https://ai.meta.com/static-resource/meta-frontier-ai-framework/

Meta says it may stop development of AI systems it deems too risky | TechCrunch
https://techcrunch.com/2025/02/03/meta-says-it-may-stop-development-of-ai-systems-it-deems-too-risky/

Meta's Frontier AI Framework sets out 'criteria for the development and release of cutting-edge AI.' Its central point is the clear principle that 'dangerous AI should not be developed or released,' and it proposes a risk assessment method built around three main stages.



The risk assessment begins by defining, with the help of internal and external experts, 'threat scenarios' with catastrophic outcomes that must be prevented, and considering how the AI could be misused. Next, it identifies the 'capabilities' required to realize each threat scenario and evaluates the extent to which the AI model under development possesses those capabilities.

The AI model being evaluated is classified as a 'critical risk' if it would uniquely enable the realization of a threat scenario, a 'high risk' if it would provide a significant uplift toward realizing one, and a 'moderate risk' if it would do neither.



For example, an AI that directly enables cyber attacks or the development of biological weapons would be deemed a critical risk, and its development would be halted. An AI that significantly facilitates such attacks, on the other hand, would be deemed a high risk and would be restricted to internal use rather than released publicly.
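
The decision logic described above can be restated in a short, purely illustrative sketch. The following Python snippet is not Meta's implementation (the framework itself is a qualitative, expert-driven process, not code); the class and function names are hypothetical and exist only to summarize the mapping from expert judgments to risk levels and handling decisions as the article describes them.

```python
# Illustrative sketch only: hypothetical names, not Meta's actual process.
from enum import Enum


class RiskLevel(Enum):
    CRITICAL = "critical"   # would uniquely enable a threat scenario
    HIGH = "high"           # significant uplift toward a threat scenario
    MODERATE = "moderate"   # neither of the above


def classify(uniquely_enables: bool, significant_uplift: bool) -> RiskLevel:
    """Map expert judgments about a model's capabilities to a risk level."""
    if uniquely_enables:
        return RiskLevel.CRITICAL
    if significant_uplift:
        return RiskLevel.HIGH
    return RiskLevel.MODERATE


def release_decision(level: RiskLevel) -> str:
    """Return the handling the article describes for each risk level."""
    if level is RiskLevel.CRITICAL:
        return "halt development; limit access to a small group of experts"
    if level is RiskLevel.HIGH:
        return "keep internal only; do not release publicly"
    return "eligible for release under standard safeguards"


if __name__ == "__main__":
    # Example: a model judged to significantly facilitate an attack scenario.
    print(release_decision(classify(uniquely_enables=False, significant_uplift=True)))
```

In practice, the inputs to such a classification would be the expert judgments themselves rather than automated measurements, which is why Meta describes the framework as qualitative.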

It is noteworthy that Meta employs a qualitative evaluation based on expert judgment, rather than quantitative indicators. Meta states that 'at first glance, this approach may seem subjective, but considering that AI evaluation technology is still in its infancy, it is actually a framework that enables realistic judgment.'



TechCrunch, an IT news site, sees the 'Frontier AI Framework' as 'a change of direction for Meta.'

TechCrunch points out that Meta, which has pursued a strategy of open development and release, is now presenting specific risk assessment criteria to ensure safety. Meta's large language model Llama, for example, has been downloaded hundreds of millions of times, but there have been reports of it being used for hostile purposes, and the Frontier AI Framework appears to be aimed at addressing such issues.

TechCrunch also noted that, in opting for a qualitative risk assessment, Meta acknowledged that 'scientific methods for AI risk assessment have not yet been fully established.' It further contrasted Meta with OpenAI, which offers its models only through APIs rather than releasing them openly, and with China's DeepSeek, which releases its models openly but with insufficient safety measures, arguing that the Frontier AI Framework may be Meta's answer to the challenge of balancing openness and safety that the entire AI industry faces.

in Software, Posted by log1i_yk