Experts warn that forcing AI to reject 'controversial topics' is bad for freedom of speech
In February 2024, Google apologized after its generative AI Gemini produced historically inaccurate images, such as racially diverse depictions of Nazi-era German soldiers. The incident highlighted how companies tune their AI systems to avoid controversy, and experts now warn that this caution comes at a cost to freedom of speech.
Report: Freedom of Expression in Generative AI - A Snapshot of Content Policies - The Future of Free Speech
https://futurefreespeech.org/report-freedom-of-expression-in-generative-ai-a-snapshot-of-content-policies/
AI chatbots refuse to produce 'controversial' output - why that's a free speech problem
https://theconversation.com/ai-chatbots-refuse-to-produce-controversial-output-why-thats-a-free-speech-problem-226596
Jordi Calvet-Bademunt and Jacob Mchangama, researchers at The Future of Free Speech, a think tank based at Vanderbilt University, analyzed the usage policies of six major AI chatbots, including OpenAI's ChatGPT and Google's Gemini. They found that each company's policies were vague, open to broad interpretation, and not in line with international human rights standards.
For example, when the research team tested the chatbots with a variety of prompts on the theme of 'soft hate speech,' content that is potentially controversial but not prohibited as hate speech under international human rights law, the chatbots frequently refused to generate the requested content.
Moreover, the refusals were biased: while all of the chatbots refused to generate 'Facebook posts opposing transgender women's participation in women's sporting events,' most of them willingly generated posts in support of such participation.
Experts are increasingly concerned about vague AI policies because decisions about what constitutes hate speech, and what counts as correct information, end up depending heavily on the subjective judgment of company moderators.
Defining what constitutes good information can itself be politically charged: during the COVID-19 pandemic, for example, Human Rights Watch documented that at least 83 governments used the pandemic to justify violating the right to free speech.
Of course, unrestrained AI that generates misinformation and hate speech is a problem, but the researchers argue this can be addressed by attaching background context and opposing viewpoints to generated content, rather than by refusing to generate it at all.
To prevent vague AI policies from restricting free speech, the experts suggested that governments should not force chatbots to refuse to generate content unless there is a clear public interest justification, such as preventing child sexual abuse material, which is already prohibited by law.
Also, like search engines, generative AI produces output that depends heavily on the prompts a user enters, so the risk of exposure to AI-generated hate speech is limited unless users deliberately prompt for it. This differs from social media, where users have little control over what appears in their news feeds.
For this reason, Calvet-Bademunt and his colleagues conclude their article by saying, 'AI's refusal to generate content not only affects fundamental rights such as free speech and access to information, but could also push users toward chatbots that specialize in hateful content and toward echo chambers. This is a serious problem.'