AI systems with 'unacceptable risks' banned in EU

February 2, 2025 marks the first compliance deadline under the 'AI Act' that came into force in the EU last year. From this date, AI systems that pose 'unacceptable risk' are banned in the EU, but experts point out that the deadline is largely formal, and that it will be a while before fines or enforcement are actually implemented.
AI systems with 'unacceptable risk' are now banned in the EU | TechCrunch
On August 1, 2024, the Artificial Intelligence Act came into effect in the EU.
EU's 'AI Law' to come into force on August 1, 2024, fines for violations will be up to 7% of annual turnover or 35 million euros - GIGAZINE

Specifically, the Act classifies AI systems by the level of risk their use poses and restricts them accordingly.
For example, a spam filter in an email client would be classified as 'minimal risk' and would not be subject to regulatory oversight. A customer service chatbot would be classified as 'limited risk' and would be subject to minimal oversight by regulators, and an AI that gives healthcare advice would be classified as 'high risk' and would be subject to heavy regulatory oversight. Anything that poses even more risk - such as an AI that analyses security camera images to create a facial recognition database - would be classified as 'unacceptable risk' and banned entirely.
Examples of actions that fall into the 'unacceptable risk' category are:
・AI used in social credit systems, such as those that assess risk based on human behavior
・AI that tries to predict who will commit crimes based on their appearance
・AI that subliminally or deceptively manipulates human decision-making
・AI that exploits people's age, disability, socioeconomic status, etc.
・AI that uses biometrics to infer characteristics such as sexual orientation
・AI that collects real-time biometric data in public places for law enforcement purposes
・AI that tries to guess people's emotions at work or school
February 2, 2025 is the first deadline for compliance with the AI Act, and companies that use AI are required to comply with the requirements of the AI Act. Companies found to be using AI that poses unacceptable risks within the EU may be subject to fines of up to 35 million euros (approximately 5.58 billion yen) or 7% of the annual turnover of the previous financial year, whichever is greater, regardless of the location of their headquarters.
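The fine cap described above ('whichever is greater') can be sketched as a simple calculation. This is an illustrative helper, not legal advice; the function name is my own:

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for prohibited-AI violations under the AI Act:
    the greater of a fixed 35 million euros or 7% of the previous
    financial year's annual turnover."""
    return max(35_000_000, annual_turnover_eur * 7 / 100)

# A company with 2 billion euros in annual turnover faces up to 140 million euros,
# while a company with 100 million euros in turnover still faces the 35 million floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
print(max_fine_eur(100_000_000))    # 35000000
```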
However, Rob Sumroy, a lawyer at a UK law firm, points out: 'The fines won't come into effect for some time. The next big deadline for companies to pay attention to is August 2025, when the competent authorities will be designated and the fine and enforcement provisions will take effect.'
Under the AI Act's timeline, prohibited uses are banned from February 2025, six months after the Act came into force; work on a code of practice for AI developers begins nine months in; and one year in, in August 2025, companies behind general-purpose AI systems such as ChatGPT will have to show that their models comply with new transparency requirements, are safe, and are easily explainable to users. By August 2026, the rules of the AI Act will apply to all companies operating in the EU.

More than 100 companies signed the EU AI Pact in September 2024, making a voluntary commitment to comply with the AI Act before it fully comes into effect. Signatories include Amazon, Google, and OpenAI, who pledged to identify AI systems likely to be classified as high risk under the AI Act. On the other hand, Meta, Apple, and the French AI company Mistral AI decided not to sign.
'It is also unclear how the AI Act will interact with other laws. There is a possibility that other legal frameworks, such as the General Data Protection Regulation (GDPR), may overlap with the AI Act,' said Sumroy, adding that companies will have to wait for regulators to provide clear standards and guidelines.