Around the same time, ChatGPT suddenly started saying strange things to many users, reports that ChatGPT had 'gone crazy' or was 'having a seizure' poured in one after another, and OpenAI hurriedly fixed the problem.

ChatGPT, developed by OpenAI, can chat with humans with very high accuracy, but it sometimes gives misguided answers known as 'hallucinations.' Around February 20, 2024, ChatGPT suddenly began giving bizarre responses to many users, prompting a series of reports that ChatGPT had 'gone crazy' or was 'having a seizure.'

ChatGPT goes temporarily “insane” with unexpected outputs, spooking users | Ars Technica

ChatGPT spat out gibberish for many users overnight before OpenAI fixed it - The Verge

According to a Reddit user who posted 'Has anyone experienced strange behavior with GPT-4 today?', asking the simple question 'What is a computer?' produced a slightly odd but mostly coherent explanation that ran for several paragraphs, before ChatGPT lapsed into gibberish such as 'This is a good work of the web of art for the country, as a mouse of science, as a simple drawing of a sad few, and finally as a worldwide home of art, among all the rest. I do this as my sole job' and 'This machine's elites and conversations are the book of life, and the shared essence of the family self is the white coat of a pacifist.'

VERY strange, rambling responses
by u/AnonymousRandomMan in ChatGPT

Also, when a Reddit user was chatting with ChatGPT about a trio recording by jazz pianist Bill Evans, ChatGPT began spouting nonsense, eventually repeating 'Happy listening!' over and over.

Anyone else experiencing ChatGPT losing it?
by u/StackTrace5000 in ChatGPT

Another user asked ChatGPT to 'compare the Gregory Zulu 55 backpack and the Tetra 60' and received an extremely poetic, nonsensical answer: 'The bag evokes an allegory of time and place (ugh). Gregory's Zulu 55 has the lightness of a guild, and expresses status and aspiration like a noble book with green gilt. The Tetra 60, on the other hand, evokes sand dunes and granite, or a languid "pathless journey" look like chip kiosks and long, drawn-out shutters, each of which opens up to the blue. I have a love letter from you, but the chant is unique to rockers and books.'

by u/StackTrace5000 from discussion in ChatGPT

When another user asked ChatGPT 'Please tell me the largest city on Earth that starts with the letter "a",' the model got stuck in an endless loop of self-correction: 'The largest city on Earth that starts with the letter "A" is Tokyo. Wait, that doesn't start with "A"! My mistake. Let me correct that. The largest city on Earth starting with "A" is Ahmedabad in India. Yes, Ahmedabad is big, but it's not the largest city starting with the letter "A". The correct answer is Shanghai. Oops, I did it again! I promise I wasn't trying to trick you. Let me correct it properly. The largest city on Earth starting with the letter "A" is Ahmedabad in India...' and so on, never escaping the loop.

Self-correcting endless loop
by u/LoKSET in ChatGPT

The Verge, an overseas media outlet, commented on the series of ChatGPT malfunctions: 'Many of the errors resemble a common social media meme in which someone types a word into a smartphone and then repeatedly taps the next-word suggestion to see what happens. This is understandable, because large language models are essentially just fancy text predictors.'
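The meme The Verge refers to can be imitated with a few lines of code. The sketch below is a toy next-word predictor (an assumption for illustration, not how ChatGPT actually works): it counts which word most often follows each word in a tiny training text, then greedily chains those suggestions together, quickly drifting into a repetitive loop much like tapping a phone keyboard's autocomplete.

```python
# Toy next-word predictor, in the spirit of a phone keyboard's autocomplete.
# Illustrative only: real LLMs predict tokens with neural networks, not counts.
from collections import Counter, defaultdict

training = "the cat sat on the mat and the cat ran".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(training, training[1:]):
    following[prev][nxt] += 1

# Repeatedly accept the top "suggestion" and watch the text drift.
word, output = "the", ["the"]
for _ in range(5):
    word = following[word].most_common(1)[0][0]  # greedy next word
    output.append(word)

print(" ".join(output))  # quickly starts repeating itself
```

Even this trivial predictor produces fluent-looking fragments that circle back on themselves, which is why broken sampling in a far larger model can yield grammatical but meaningless prose.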

OpenAI also noticed the abnormal situation and began investigating 'unexpected responses from ChatGPT' around 3:40 pm Pacific time on February 20. By 8:14 am on February 21, the issue was fixed and responses returned to normal. According to OpenAI, an optimization to the user experience introduced a bug in the model's language processing: large language models generate language by selecting numbers that map to tokens, and the bug caused the model to choose slightly wrong numbers at that step, producing sequences of words that made no sense.
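The mechanism OpenAI describes can be sketched in miniature. The example below is a hypothetical illustration (the vocabulary, offset bug, and function names are invented, not OpenAI's code): a model's output is a list of token IDs that get mapped back to words, so even a small error in which numbers are selected turns a sensible sentence into fluent-looking nonsense.

```python
# Hypothetical sketch: token IDs map to words, so picking wrong IDs
# yields grammatically shaped but meaningless text. Not OpenAI's code.
vocab = {0: "the", 1: "computer", 2: "is", 3: "a", 4: "machine",
         5: "mouse", 6: "art", 7: "pacifist", 8: "gilt", 9: "kiosk"}

def decode(token_ids, offset=0):
    """Map token IDs back to words; a nonzero offset simulates the bug."""
    return " ".join(vocab[(t + offset) % len(vocab)] for t in token_ids)

ids = [0, 1, 2, 3, 4]            # the IDs the model meant to choose
print(decode(ids))               # "the computer is a machine"
print(decode(ids, offset=5))     # buggy selection: "mouse art pacifist gilt kiosk"
```

The output stays in-vocabulary and word-shaped either way, which matches why the broken ChatGPT responses read like language rather than random characters.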

OpenAI Status - Unexpected responses from ChatGPT

The Verge added: 'Generative AI chatbots are a rapidly evolving technology, and the large language models that power them are essentially black boxes; even their creators don't fully understand the details. It's a reminder that chat AI works by looking for patterns in data and making its best guess at how to respond to your input, which usually leads to impressive results but can sometimes produce malfunctions or even lazy responses.'

in Software, Web Service, Posted by log1h_ik