Expert voices concern that "human artists have received no compensation for image-generation AI," sharply laying out the problems that AI development has left unresolved



Artificial intelligence technology has been advancing remarkably: it is said that even experts find it hard to tell an AI's answers from a philosopher's, and a Google engineer claimed that an AI had "acquired emotions and intelligence." Despite this progress, however, mastering language is said to remain difficult for AI. Gary Marcus, professor emeritus at New York University, spoke with the science outlet Undark Magazine about the relationship between AI and language acquisition and the problems AI faces, including those surrounding image-generation AI.

Interview: Why Mastering Language Is So Difficult for AI
https://undark.org/2022/10/07/interview-why-mastering-language-is-so-difficult-for-ai/

The field of artificial intelligence has long been surrounded by hype, ever since AI pioneer Herbert Simon declared in 1965 that within 20 years machines would be able to do everything that humans can do, and claims that AI will "exceed human intelligence" still circulate. In particular, systems based on deep learning have enabled AI to beat humans at games such as chess, shown strong performance in face recognition and image recognition in general, and, in the case of GPT-3, a program developed by OpenAI, demonstrated remarkable progress such as generating text including poetry and prose.

Marcus, however, who has watched many of these developments from the front lines, says these advances should be taken with a grain of salt, arguing that deep learning has intrinsic limitations. Further progress, according to Marcus, will require a more traditional, symbol-based approach to AI, in which computers encode human knowledge in symbolic representations, similar to the approaches of the early decades of AI research.



Undark Magazine spoke with Marcus about the future of AI technology via a remote interview and email.

Undark (hereafter, UD):
First, I would like to ask about GPT-3, the language model that generates human-like text through deep learning. The New York Times said that "GPT-3 writes with mind-boggling fluency," and a Wired article said the program "set off chills all over Silicon Valley." You, on the other hand, have been quite critical of GPT-3. Why is that?

Gary Marcus (hereafter, GM):
I think GPT-3 is an interesting experiment. But the idea that this system actually understands human language is not accurate. It is an autocomplete system that predicts the next word or sentence, like a mobile phone that suggests the rest of a number after you enter the first few digits. It doesn't really understand the world around it.
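To make the "autocomplete" point concrete, here is a minimal sketch of next-token prediction. It is not GPT-3 itself: it assumes the Hugging Face transformers library and uses GPT-2 as a freely available stand-in, and it only illustrates that the model ranks candidate next tokens by statistical likelihood.

```python
# A minimal sketch of next-token prediction, the "autocomplete" mechanism
# Marcus describes. Assumes the Hugging Face transformers library and uses
# GPT-2 as a stand-in for GPT-3.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The president of the United States is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # scores for every candidate next token

# The model simply ranks tokens by statistical likelihood given the prompt;
# it consults no model of the world, only patterns in its training text.
top = torch.topk(logits[0, -1], k=5)
for token_id, score in zip(top.indices, top.values):
    print(repr(tokenizer.decode(token_id)), float(score))
```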

GPT-3 confuses a lot of people, because what these systems are ultimately doing is mimicking huge text databases. Imitating a database, however sophisticated, is like a parrot repeating words, or like plagiarism. Even when a parrot speaks clearly, I doubt it understands what it is saying. And GPT-3 certainly does not understand what it is talking about.

'InstructGPT,' an improved version of GPT-3 that writes natural sentences indistinguishable from a human's and can even write poetry, is released to the public - GIGAZINE



UD:
You wrote in an article for The Guardian that GPT-3 can be confused about very basic facts. As an example, when asked "Who is the president of the United States?", it answered Donald Trump, not Joe Biden. Is that because the AI does not know that it is now 2022?

GM:
GPT-3 is more likely to say "Donald Trump is president" probably because its training data contains more examples involving Trump: he has been in the news more, and for longer, than the average former president. GPT-3 also surely does not understand what year we are living in. Furthermore, it has no capacity for the basic temporal inference that "just because someone was president at one time does not mean he is still president." In that regard, GPT-3 is surprisingly dumb.
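As a hypothetical illustration (not from the interview) of the kind of temporal inference Marcus says GPT-3 lacks, the sketch below encodes presidential terms symbolically so that "was president in some year" and "is president now" stay distinct; the data structure and entries are invented for the example.

```python
# A hypothetical sketch of simple temporal reasoning: holding office at one
# time does not imply holding it now. The Term records are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Term:
    person: str
    start: int           # first year of the term
    end: Optional[int]   # None means the term is still ongoing

TERMS = [
    Term("Donald Trump", 2017, 2021),
    Term("Joe Biden", 2021, None),
]

def president_in(year: int) -> Optional[str]:
    """Symbolic lookup: who held the office in the given year?"""
    for term in TERMS:
        if term.start <= year and (term.end is None or year < term.end):
            return term.person
    return None

print(president_in(2019))  # Donald Trump
print(president_in(2022))  # Joe Biden
```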

UD:
As you note, even though AI systems such as GPT-3 are dumb, people often think AI is smart. I suspect this has something to do with what you call the "gullibility gap." What is the "gullibility gap"?

GM:
The "gullibility gap" describes the gap between what we believe machines are doing and what they are actually doing. We tend to think machines are smarter than they really are. Maybe one day they will be really smart, but they are not yet. Something similar to GPT-3 and Google's LaMDA being mistaken for human-like intelligence happened back in 1965. An early system called ELIZA performed very simple keyword matching and was entirely incapable of understanding what was being said to it. Yet by posing as a therapist and getting people to discuss their personal lives in typed conversation, it led some of them to believe they were talking to a live human being. We have neither evolved nor been trained to see through this kind of deception.
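To show how little machinery that kind of keyword matching actually involves, here is a toy sketch in the spirit of ELIZA (not its actual source code); the patterns and canned replies are invented for illustration.

```python
# A toy sketch of ELIZA-style keyword matching: the program recognizes
# surface patterns and reflects them back, with no understanding at all.
import re

RULES = [
    (re.compile(r"\bI need (.+)", re.I), "Why do you need {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (mother|father|family)\b", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(utterance: str) -> str:
    # Return the first template whose keyword pattern matches the input.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I am unhappy at work"))     # How long have you been unhappy at work?
print(respond("My mother worries a lot"))  # Tell me more about your mother.
```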

UD:
Many readers are familiar with the Turing test, based on an idea put forward by computer pioneer Alan Turing in 1950. In 2014, a chatbot named Eugene Goostman was said to have passed the Turing test under certain criteria, but many scientists have criticized the test. You are one of them; what are the shortcomings of the Turing test?

Prominent experts point out that "Eugene," the supposed first Turing test passer, did not really pass the test - GIGAZINE



GM:
The Turing test has long been treated as a measure of intelligence in AI, but it is not a very good one. Back in 1950 we did not know much about AI; I don't think we know all that much even now, although we know far more than we did then. The basic idea was that if you talked to a machine and it tricked you into thinking it was a human when it was not, that must mean something. But it turns out to be very easy to cheat. For example, by "pretending to be a person with an unstable mind," as Eugene Goostman did, a system can dodge many questions, which makes it easier to deceive people. To pass the Turing test, you end up engineering the system to play a kind of game, which makes it a useless test for the purpose of building genuinely intelligent systems.

UD:
I would also like to ask about self-driving cars. Progress toward fully self-driving cars seemed to be making great strides a few years ago, but it appears to have slowed since then. How is the development of autonomous driving systems going?

GM:
Just as GPT-3 cannot understand language, memorizing a large number of traffic situations you have seen does not give you the understanding of the world you need to drive. Autonomous driving therefore requires ever more data, and the slow pace at which that data can be collected is reflected in the delays in development. Moreover, most of the data has been collected in areas with good weather and reasonably orderly, not overly congested traffic. If the current systems were actually deployed in Mumbai, they would not even understand what a rickshaw, the Indian three-wheeled taxi, is.

UD:
You wrote in Scientific American in July 2022 that "most large teams of AI researchers are in companies, not universities." Why is that?

GM:
There are various reasons. One is that companies have their own incentives about which problems to solve. AI built to optimize advertising, for example, is very different from natural-language understanding built to improve healthcare, and companies with a clear profit incentive have an advantage in pursuing the former. There is also the issue of manpower: AI development requires hiring many excellent people, which companies can afford, but that talent is not necessarily put to work on the problems that could contribute most to society. AI development also generates a great deal of proprietary data that does not have to be shared, which is another advantage companies have over universities, and it lets them pursue their own interests rather than society's best interests. In other words, today's AI achievements are in the hands of corporations, not ordinary citizens, and are tailored to the needs of corporations rather than the general public.

UD:
But those databases are built from ordinary people's data, so in that sense AI development depends on ordinary citizens. Is that understanding correct?

GM:
I think so. And an important example is emerging in art. Systems like OpenAI's DALL-E render very good images, but they are based on millions or billions of human-made images, and the artists who painted the originals received no compensation. Many artists are concerned about this, and there is a great deal of controversy. The issue is complex, but there is no doubt that many AI systems, at least for now, are making use of unintended human contributions.

People finally seem to be breaking away from the deep-learning orthodoxy and considering "hybrid models" that combine deep learning with more classical approaches to AI. For the first time in 40 years, I can feel optimistic about AI, because I believe the more deep learning and classical methods work together, the better the results will be.
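As a rough illustration of what such a hybrid might look like (this is a sketch of the general idea, not something Marcus describes), the example below has a neural component turn raw text into a discrete symbol and a classical symbolic component apply explicit rules to it; the model name is a real Hugging Face checkpoint, but the labels and rules are invented for the example.

```python
# A hypothetical sketch of a "hybrid" neuro-symbolic pipeline: a neural
# network handles perception, hand-written symbolic rules handle the
# decision step. Labels and rules are illustrative only.
from transformers import pipeline

# Neural component: zero-shot classification turns raw text into a symbol.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

LABELS = ["red traffic light", "green traffic light", "pedestrian crossing"]

# Symbolic component: explicit, auditable rules over the extracted symbol.
RULES = {
    "red traffic light": "stop",
    "green traffic light": "proceed",
    "pedestrian crossing": "slow down and yield",
}

def decide(observation: str) -> str:
    result = classifier(observation, candidate_labels=LABELS)
    symbol = result["labels"][0]   # best-scoring label from the neural step
    return RULES[symbol]           # deterministic symbolic decision

print(decide("The signal ahead just turned red."))  # expected: stop
```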

in Science, Posted by log1e_dh