What kind of law is the 'bill to regulate the use of AI' that the EU plans to approve within 2023?



On June 14, 2023, the European Parliament voted to approve a 'groundbreaking bill to manage AI'. If the bill is ultimately enacted, it will be the world's first law regulating the use of AI across many areas of society.

Debate and vote on landmark rules to manage Artificial Intelligence | 12-06-2023 | News | European Parliament
https://www.europarl.europa.eu/news/en/agenda/briefing/2023-06-12/1/debate-and-vote-on-landmark-rules-to-manage-artificial-intelligence



EU approves draft law to regulate AI – here's how it will work
https://theconversation.com/eu-approves-draft-law-to-regulate-ai-heres-how-it-will-work-205672

This bill is based on a proposal submitted by the European Commission in 2021, and has since been debated and considered by the European Parliament and the Council of the European Union. According to the European Parliament, 'The bill follows a risk-based approach, setting out obligations for providers and users according to the level of risk that AI may create. It also prohibits AI systems with an unacceptable level of risk.'

Under the bill, it is not AI itself that is regulated but the 'use of AI in society', and the risks posed by AI uses are classified into four tiers: 'unacceptable risk', 'high risk', 'limited risk', and 'minimal risk'.

Uses classified as 'unacceptable risk', meaning those considered a threat to fundamental rights or to EU rules and values, are prohibited outright. For example, systems that use personal data to conduct risk assessments of individuals and predict whether they are likely to commit crimes in the future are judged to pose an 'unacceptable risk'. The use of facial recognition technology on footage captured by street surveillance cameras is likewise deemed an 'unacceptable risk' unless legally authorized.



Systems classified as posing a 'high risk' are subject to disclosure obligations. Examples include AI that controls access to services in critical areas such as education, employment, finance, and healthcare. Because AI used in connection with infrastructure can adversely affect safety and fundamental rights, it is monitored under specific audit requirements.

Systems classified as posing a 'limited risk' must meet a minimum level of transparency. For example, the operator of a chatbot AI that generates text must disclose to users that they are conversing with an AI.



At the time of writing, Parliament had voted in favor, but the bill will be further refined before enactment. Some members of the European Parliament have proposed additional exemptions aimed at promoting AI innovation, such as easing the rules for research activities and for open-source AI development, as well as allowing testing in controlled environments. A final approval vote is expected at the end of 2023, and if the bill passes, it will enter into force.

According to the IT news site TechCrunch, Google, which develops the chatbot AI 'Bard', has postponed Bard's release in Europe in order to comply with the transparency requirements set by the EU. Google stated that in May it said it wanted to make Bard more widely available, including in the European Union, after engaging with experts, regulators, and policymakers, and that as part of that process it has been talking with privacy regulators to address their questions and hear their views.

Google delays EU launch of its AI chatbot after privacy regulator raises concerns | TechCrunch
https://techcrunch.com/2023/06/13/google-delays-eu-launch-of-its-ai-chatbot-after-privacy-regulator-raises-concerns/

in Software, Posted by log1i_yk