Microsoft has reportedly laid off the AI ethics team that assessed the risks of incorporating OpenAI technology into its products, as part of the company's large-scale layoffs.
Microsoft just laid off one of its responsible AI teams
https://www.platformer.news/p/microsoft-just-laid-off-one-of-its
According to a source who spoke to the Silicon Valley newsletter Platformer, Microsoft's recent layoff of 10,000 employees eliminated the entire ethics and society team, which has now been disbanded.
The team was responsible for leading ethical and responsible AI innovation at Microsoft. At its peak in 2020 it had around 30 members, including engineers, designers, and philosophers. It was reduced to about seven people in an October 2022 reorganization, but after Microsoft partnered with OpenAI, the team worked to identify the risks of integrating OpenAI technology into Microsoft products.
In a meeting with the reorganized team, John Montgomery, vice president of AI, said he was under pressure from Microsoft's top executives. In an audio recording of the meeting obtained by Platformer, Montgomery said, 'The pressure from Kevin and Satya (Chief Technology Officer Kevin Scott and CEO Satya Nadella) is to take the latest OpenAI models, and the ones that come after them, and get them into customers' hands at very high speed.'
A member of the ethics and society team pushed back, saying, 'I want to ask you to reconsider. We understand there are business pressures, but what this team has always been concerned with is how we impact society, and the negative impacts we've had, and they are significant.' Montgomery, however, would not change course.
At the time, Montgomery said the team would not disappear, but most of its members were subsequently transferred to other departments within Microsoft, and on March 6, 2023, the remaining members were told that the team was being terminated.
Commenting on the move, Platformer wrote, 'Anyone involved in AI development should agree that the technology is fraught with potential risks. Microsoft still has three internal groups working on the issue, but given the dangers of the technology, cutting a team dedicated to responsible AI use seems noteworthy.'
The Platformer report also reveals that the now-defunct ethics and society team had made forward-looking recommendations. In October 2022, Microsoft announced Bing Image Creator, which incorporates OpenAI's image generation AI DALL-E. In a memo left by the team, members warned, 'Few artists have agreed to allow their work to be used as training data, and many still don't know that image generation technology can produce an adapted version of their work in a matter of seconds.'
Furthermore, regarding OpenAI's updated terms of service, which grant users full ownership of images created with DALL-E, the team commented, 'It is ethically questionable for the person who entered the prompt to have full ownership of the resulting image.'
Based on these concerns, the team drew up a list of mitigations, including blocking the use of living artists' names as prompts, but Microsoft launched a trial release of Bing Image Creator without implementing them.
'We are committed to developing AI products and experiences safely and responsibly, and we do so by investing in people, processes, and partnerships that prioritize this,' Microsoft said in a statement. 'Over the years, we have increased the number of employees across our product teams and the Office of Responsible AI (ORA) who are accountable for ensuring that our AI principles are put into practice. We are grateful for the pioneering work done by our ethics and society team in our ongoing responsible AI journey.'
Posted by log1l_ks