ChatGPT rejected 250,000 requests to create deepfake images during the US presidential election
OpenAI has revealed that ChatGPT received a large number of requests to create deepfakes of politicians and other figures around the US presidential election held on November 5, 2024, and that it rejected approximately 250,000 of those requests in the month leading up to Election Day.
How OpenAI is approaching 2024 worldwide elections | OpenAI
ChatGPT told 2M people to get their election news elsewhere — and rejected 250K deepfakes | TechCrunch
https://techcrunch.com/2024/11/08/chatgpt-told-2m-people-to-get-their-election-news-elsewhere-and-rejected-250k-deepfakes/
AI Companies Make It Through Election Day Without Major Missteps - Bloomberg
https://www.bloomberg.com/news/newsletters/2024-11-07/ai-companies-make-it-through-election-day-without-major-missteps
Since early 2024, OpenAI has put several safeguards in place in ChatGPT to prevent election-related abuse and to serve as a reliable source of information.
For example, users who asked where or how to vote were directed to CanIVote.org, a website run by the National Association of Secretaries of State. According to OpenAI, about one million people were directed to the site in the month leading up to Election Day.
Users who asked about election results after polls closed were instead encouraged to check news sources such as the Associated Press and Reuters, a response ChatGPT gave approximately 2 million times. Beyond pointing users to reliable sources, OpenAI also worked to prevent ChatGPT from expressing political preferences or endorsing candidates.
The biggest challenge was preventing the creation of deepfakes. ChatGPT is designed to reject requests to generate images of real people, including politicians, and OpenAI estimates that this safeguard held up well during the election period, with more than 250,000 rejected requests for images of Donald Trump, Kamala Harris, Senator J.D. Vance, and President Joe Biden.
Overall, OpenAI said its teams continued to monitor closely in the lead-up to Election Day and that it had seen no evidence of influence operations gaining meaningful traction through the use of its models.
Technology media outlet TechCrunch noted of OpenAI's announcement that it is hard to judge whether these numbers are low or high (CNN reportedly recorded approximately 67 million unique visitors on Election Day alone), but that it is notable that at least millions of people trusted an AI chatbot enough to turn to it for election information.
Rival company Perplexity took a bolder approach, partnering with the Associated Press and the nonprofit Democracy Works to pull early vote tallies directly into a dedicated elections hub within its app, where users could use AI to quickly find election insights. Perplexity said the hub received roughly 4 million page views.
Bloomberg reporter Shirin Ghaffary wrote that, in her testing, Perplexity, OpenAI, and their peers successfully avoided missteps that would draw the ire of politicians or the public, with two exceptions: Elon Musk's Grok, which reported a Trump victory before it was confirmed, and Google Search's 'where to vote' panel, which showed a map of Harris County, Texas, to users who searched for 'where to vote for Harris.'
Regarding this issue, Google explained that the 'where to vote' panel was triggering for certain searches because both 'Harris' and 'Vance' are also the names of counties, that very few people actually search for polling places this way, and that a fix was on the way:
Thx for this. The “where to vote” panel is triggering for some specific searches bc Harris is also the name of a county in TX. Happens for “Vance” too bc it's also the name of a county. Fix is coming. Note very few people actually search for places this way. pic.twitter.com/kybbaN1Nnu
— News from Google (@NewsFromGoogle) November 5, 2024
in Software, Posted by log1p_kr