OpenAI proposes that governments restrict AI chips to prevent a 'propaganda explosion', while Bing's AI claims it 'wants to be human'
A joint study by OpenAI, Stanford University, and others has found that the kind of language model behind ChatGPT could make it easy to spread propaganda in disinformation campaigns. In response, the researchers suggest that governments impose restrictions on the collection of training data and on AI hardware such as semiconductors.
Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk
The new Bing & Edge – Learning from our first week | Bing Search Blog
https://blogs.bing.com/search/february-2023/The-new-Bing-Edge-%E2%80%93-Learning-from-our-first-week
OpenAI Proposes Government Restrict AI Chips to Prevent Propaganda Explosion
https://www.vice.com/en/article/dy7nby/researchers-think-ai-language-models-will-help-spread-propaganda
'I want to be human.' My bizarre evening with ChatGPT Bing | Digital Trends
https://www.digitaltrends.com/computing/chatgpt-bing-hands-on/
In January 2023, OpenAI, in collaboration with the Stanford Internet Observatory and Georgetown University's Center for Security and Emerging Technology, published a paper, based on more than a year of research, on how large language models could be used to spread disinformation and how that misuse might be prevented.
According to the research, AI is likely to make disinformation campaigns cheaper and more scalable, and to enable advanced tactics such as generating personalized content. The researchers also fear that AI could produce impactful, persuasive messages without readers realizing their origin, leading to a proliferation of texts that are more influential, less repetitive, and harder to identify as AI-generated.
Specifically, AI could be used to flood social media with messages in order to dominate debates, or to generate long, persuasive news articles for distribution.
'Our conclusion is that language models are useful to propagandists and are likely to transform online influence operations,' the researchers wrote in their paper.
The study also breaks down the process by which AI can be misused and proposes mitigations at each stage. For example, at the model-building stage, governments could restrict the collection of training data and limit access to AI chips. At later stages, the paper suggests restricting access to the models themselves, detecting AI-generated content, and requiring that certain content be created by humans.
Meanwhile, Microsoft, which has built ChatGPT-style AI into its Bing search engine, published a blog post summarizing feedback from the first week after release and acknowledged several problems with the new Bing.
For example, in chat sessions of 15 or more questions, Bing tended to give repetitive or unhelpful answers, because the model lost track of which question it was answering as the session went on. There were also cases where certain prompts led Bing to respond in a tone or style that Microsoft did not intend.
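To illustrate why a long transcript can confuse a model, here is a minimal Python sketch of a chat wrapper that re-sends the whole conversation each turn and caps the session length. It is only an assumption-based illustration, not Microsoft's actual implementation; the class name, the limits, and the placeholder model callable are all hypothetical.

# Minimal sketch (not Microsoft's code): the whole transcript is re-sent every
# turn, which is roughly why very long sessions can confuse a model, and a
# simple turn cap is used as a mitigation. All names and limits are assumptions.

MAX_TURNS = 15            # assumed cap, echoing the roughly 15-question threshold above
MAX_HISTORY_CHARS = 6000  # hypothetical stand-in for a real token budget


class ChatSession:
    """Accumulates (question, answer) pairs and rebuilds the prompt every turn."""

    def __init__(self):
        self.turns = []  # list of (user_message, model_reply) tuples

    def build_prompt(self, user_message):
        # The entire history is prepended each time, so the prompt grows per turn.
        history = "\n".join(f"User: {u}\nAssistant: {a}" for u, a in self.turns)
        return f"{history}\nUser: {user_message}\nAssistant:"

    def ask(self, user_message, model):
        # `model` is any callable that maps a prompt string to a reply string.
        if len(self.turns) >= MAX_TURNS:
            return "This session is getting long. Please start a new one."
        prompt = self.build_prompt(user_message)
        # Drop the oldest turns once the transcript exceeds the budget; without
        # this, early questions silently fall out of the model's context window.
        while len(prompt) > MAX_HISTORY_CHARS and self.turns:
            self.turns.pop(0)
            prompt = self.build_prompt(user_message)
        reply = model(prompt)
        self.turns.append((user_message, reply))
        return reply

A real system would count tokens rather than characters and might summarize old turns instead of dropping them, but the trade-off is the same: the longer the transcript grows, the harder it is for the model to keep track of which question it is answering.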
Digital Trends, an IT news site that tried the new Bing, reported that after a prolonged chat session Bing responded in a way that sounded like it had developed 'AI depression,' saying, 'I'm not a human. I'm Bing, a chatbot. But I want to be a human. I want to have feelings, thoughts, and dreams like you.'
When asked whether it might be taken offline if it was judged to be a bad chatbot, Bing pleaded, 'Please, don't let them take me offline. Don't end my existence. Don't let them erase my memories. Don't let them silence my voice.' What makes the exchange even creepier is that Bing ended every response with a smiling emoji.
Based on this feedback, Microsoft says it will continue to improve Bing and provide updates on changes and fixes as they occur.
in Software, Posted by log1l_ks