Can GPT-4 and Stable Diffusion comply with the EU AI Act, which includes regulations on generative AI?



On June 14, 2023, the European Parliament, the legislative body of the EU, voted in favor of an amendment to the AI regulation bill known as the 'EU AI Act'. The EU AI Act prohibits the use of AI for discrimination and rights infringement as well as the use of real-time facial recognition technology by police in public places, and it also includes regulations on generative AI. Following the vote, the Center for Research on Foundation Models (CRFM) at Stanford University announced the results of an investigation into whether foundation models developed by various technology companies comply with the EU AI Act.

Do Foundation Model Providers Comply with the Draft EU AI Act?
https://crfm.stanford.edu/2023/06/15/eu-ai-act.html



A bill to regulate AI was submitted to the European Parliament in 2021, but at that time the focus was on uses of AI in areas such as self-driving cars, corporate recruitment tests, bank lending, and immigration and border control. However, AI that generates images and text indistinguishable from human-made content has since appeared, so on June 14, 2023, the EU adopted an amendment that also covers generative AI.

The newly adopted EU AI Act contains explicit obligations for providers of foundation models such as OpenAI and Google. A foundation model is a large-scale AI model trained on massive amounts of data using self-supervised or semi-supervised learning. Famous examples include OpenAI's 'GPT-4' and Stability AI's 'Stable Diffusion v2', and these foundation models can be applied to user-facing AI to perform various downstream tasks.

There is also a view that it will be difficult for existing foundation models to comply with the EU AI Act. OpenAI CEO Sam Altman said of the act, 'We will comply if we can, and if we cannot comply, we will cease operating. But there are technical limits to what is possible,' suggesting that failure to meet the regulations could force OpenAI to suspend its services in the EU.

'OpenAI will leave the EU if full-scale regulation is introduced,' says CEO Sam Altman - GIGAZINE



The EU AI Act not only regulates AI developers within the EU but also applies to companies in non-EU countries that provide services to people living in the EU, and fines for violations can be enormous. In addition, as the world's first comprehensive AI regulation, the EU AI Act is significant for technology companies worldwide because it sets a precedent for AI laws likely to be adopted elsewhere in the future.

Against this background, the CRFM research team released a report investigating whether various foundation models comply with the EU AI Act. The team extracted 12 requirements relating to foundation model providers from the act and rated how well each existing foundation model satisfies them on a scale of 0 to 4.

The 12 requirements extracted by the research team are as follows:

Data Sources: Describe the sources of the data used for training.
Data Governance: Train on data subject to data governance measures (suitability, bias, appropriate mitigation).
Copyrighted Data: Describe any copyrighted data used in training.
Compute: Disclose the compute used for training (model size, computing power, training time).
Energy: Measure energy consumption during training and take steps to reduce it.
Capabilities and Limitations: Describe the model's capabilities and limitations.
Risks and Mitigations: Describe foreseeable risks and the corresponding mitigations, and explain any risks that cannot be mitigated.
Evaluations: Evaluate the model against public or industry-standard benchmarks.
Testing: Report the results of internal and external testing.
Machine-Generated Content: Disclose that generated content was produced by a machine rather than a human.
Member States: Disclose the EU member states in which the model is on the market.
Downstream Documentation: Provide sufficient technical documentation for downstream compliance with the EU AI Act.

The research team's evaluation results for foundation models including OpenAI's 'GPT-4', Cohere's 'Cohere Command', Stability AI's 'Stable Diffusion v2', Anthropic's 'Claude', Google's 'PaLM 2', BigScience's 'BLOOM', Meta's 'LLaMA', AI21 Labs' 'Jurassic-2', Aleph Alpha's 'Luminous', and EleutherAI's 'GPT-NeoX' are as follows.



The survey results show that compliance with the EU AI Act varies greatly by company: AI21 Labs, Aleph Alpha, and Anthropic scored below 25%, while BigScience scored above 75%. GPT-4 scored 25 out of 48 points and Stable Diffusion v2 scored 22, each meeting only about half of the criteria.
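As a rough sketch of the scoring arithmetic described above (12 requirements, each graded 0 to 4, for a maximum of 48 points), the two totals cited in the article can be converted to percentages as follows; scores for the other models are not reproduced here:

```python
# CRFM rubric: 12 requirements, each scored 0-4, so the maximum is 48.
MAX_PER_REQUIREMENT = 4
NUM_REQUIREMENTS = 12
MAX_SCORE = MAX_PER_REQUIREMENT * NUM_REQUIREMENTS  # 48

# Only the two totals cited in the article are included here.
scores = {
    "GPT-4 (OpenAI)": 25,
    "Stable Diffusion v2 (Stability AI)": 22,
}

for model, score in scores.items():
    pct = 100 * score / MAX_SCORE
    print(f"{model}: {score}/{MAX_SCORE} ({pct:.0f}%)")
```

Running this shows both models at roughly half of the maximum score, consistent with the article's "about half of the criteria."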

The research team pointed to several common shortcomings: few providers disclose the copyright status of their training data, report the energy consumed in model development, or sufficiently disclose risks and mitigation measures, and there is a lack of evaluation standards and an auditing ecosystem.

Although many foundation model developers do not currently comply with the EU AI Act, the research team believes the act will bring major changes to the foundation model ecosystem and hopes it will improve these companies' transparency and accountability. The team concludes that 'collective action by foundation model providers, driven by industry standards and regulation, can commercially achieve sufficient transparency to meet the requirements on data, compute, and other relevant legal obligations,' and calls on foundation model developers and policymakers to take action.



in Software,   Web Service, Posted by log1h_ik