What do philosophers think about GPT-3, the AI that generates highly accurate sentences indistinguishable from human writing?



The language model 'GPT-3', developed by OpenAI, a non-profit organization that researches artificial intelligence, is attracting a great deal of attention because it can generate highly accurate sentences that are indistinguishable from those written by humans. Nine philosophers give their opinions on various issues and debates raised by GPT-3.

Philosophers On GPT-3 - Daily Nous
http://dailynous.com/2020/07/30/philosophers-gpt-3/



◆1: Professor David Chalmers, New York University
Chalmers points out that GPT-3 is basically a scaled-up version of its predecessor, GPT-2, and does not introduce major new techniques. On the other hand, with 175 billion parameters and training on far more data, he considers it one of the most interesting AIs ever made.

Chalmers also observes that GPT-3 comes closer to passing the 'Turing test', which distinguishes machines from humans, than any AI made so far, and he thinks it hints at 'artificial general intelligence (AGI)': AI with general intelligence, rather than intelligence specialized for a single field.

Meanwhile, Chalmers says GPT-3 raises a number of philosophical challenges. He argues there are many ethical issues, such as bias introduced by the training data, the potential to deprive human workers of their jobs, and the risk of misuse and fraud. Also, because GPT-3 is a pure language system with no goals beyond 'completing the text', Chalmers thinks it may be incapable of truly understanding things like 'happiness' and 'anger'.
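For reference, 'completing the text' here refers to the standard autoregressive language-modeling objective; a minimal statement of it (standard in the literature, not spelled out in the article) is:

$$ p(x_1, \dots, x_T) = \prod_{t=1}^{T} p(x_t \mid x_1, \dots, x_{t-1}) $$

That is, the model is trained only to predict each next token from the tokens before it, which is why Chalmers describes it as having no preferences beyond text completion.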



◆2: Researcher Amanda Askell, OpenAI Policy Team
Askell notes that GPT-3 is interesting because a model trained on enough data can capture a great deal of complexity and complete tasks from just a few instructions, without fine-tuning. On the other hand, she points out that GPT-3 is far below human level on most tasks and cannot maintain a consistent identity or consistent beliefs across a context.
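To make the 'few instructions, no fine-tuning' point concrete, below is a minimal sketch of few-shot prompting against the GPT-3-era OpenAI Python client (the pre-1.0 openai.Completion endpoint); the API key placeholder, model choice, and translation examples are illustrative assumptions, not details from the article.

```python
# Minimal few-shot prompting sketch, assuming the pre-1.0 openai Python
# client used with GPT-3 at the time. Model name and examples are
# illustrative, not from the article.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, not a real key

# The task is specified entirely in the prompt: two worked examples,
# then a new input the model should complete in the same pattern.
prompt = (
    "Translate English to French.\n"
    "sea otter => loutre de mer\n"
    "cheese => fromage\n"
    "peppermint =>"
)

response = openai.Completion.create(
    engine="davinci",   # the original GPT-3 base model
    prompt=prompt,
    max_tokens=10,
    temperature=0.0,    # low temperature for a well-defined task
    stop="\n",          # stop at the end of the completed line
)

print(response.choices[0].text.strip())  # expected: something like "menthe poivrée"
```

No gradient updates happen here; the 'learning' is entirely in-context, which is what Askell means by completing a task from a few instructions without fine-tuning.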

Askell says many philosophers are excited to think about and make predictions regarding models like GPT-3, and she hopes they will help clarify the debate about the limitations of language models. Should language models eventually be extended with data from the 'world beyond language'? And should we already be thinking about the moral status of machine learning models and about markers of 'perception' in AI? These, Askell argues, are questions worth asking now.

◆3: Professor Carlos Montemayor, San Francisco State University
"Interacting with GPT-3 is creepy," Montemayor comments. Humans have long been considered uniquely superior to other animals and machines in verbal ability, but if a machine can answer better than the average human, that premise falters. An AI with language skills as good as GPT-3's may be unsettling to anthropocentric values, but it may also be a step toward an accurate understanding of the relationship between human intelligence and language.

On the other hand, Montemayor points out that AI still has a long way to go before it passes the Turing test, and that 'why humans use language' will become an important question. In social life, what matters is not just systematically encoding semantic information in language; speakers must also attend to the circumstances of an exchange, to mutual expectations, and to patterns of behavior, which is why GPT-3 is far from an AI capable of true communication.



◆4: Dr. Annette Zimmermann, Princeton University
Zimmermann acknowledges that GPT-3's surprising results have astounded the AI community, but points out that, like any other AI, it carries the drawback of inheriting patterns of historical prejudice and inequality. Prejudice and discrimination embedded in datasets are a major problem in the development of machine learning algorithms, opening up a complicated moral, social, and political problem space.

Zimmermann says that social meaning and linguistic context are so important in AI design that researchers need to carefully consider and debate how to keep the technology from entrenching injustice and discrimination. She also argues that designing AI for a fairer world need not take 'the human' as its standard, as the Turing test does: to build a more desirable AI, we should not measure it against ourselves or against present-day society.

◆5: Associate Professor Justin Khoo, Massachusetts Institute of Technology
Khoo argues that the rise of AI-generated speech makes it necessary to consider restricting bots, which impede rational debate. In his view, bot speech is like a parrot repeating human language, not 'free speech to be protected'.

Khoo admits there is an argument that regulating bots deprives the users who operate them of their freedom of speech. However, he points out that the purpose of protecting free speech is to protect people's attempts to discover the truth by freely sharing opinions and debating. Since bots interfere with people's rational engagement, they should be regulated just like incitement to violence and criminal speech. "Regulating speech bots is necessary in order to protect freedom of speech," Khoo argues.



◆6: Associate Professor Regina Rini, York University
Rini points out that GPT-3 and modern AI are not the same as human beings, but neither are they mere machines. She sees GPT-3 as a statistical abstraction of millions of minds, distilled from Reddit posts, Wikipedia articles, news stories, and more.

Most interactions people have on the Internet are simple and task-specific, and bots and chat services already operate there, so it is foreseeable that successors to GPT-3 will eventually be deployed as conversation-simulating bots on the Internet. Rini points out that AI indistinguishable from humans may one day be operating online, making it unclear whether there is a real individual on the other side. It is worth thinking now about what it means to converse in an era when no one can tell who, or what, is on the other end of the Internet.

◆7: Associate Professor C. Thi Nguyen, University of Utah
Nguyen sees GPT-3 as a step toward the dream of building 'a truly creative AI that creates art and games.' He has no objection to the idea of 'AI that creates art' itself, but he worries about how companies and institutions aiming to profit economically from AI-created art and games will use the technology, and about bias in the training data.

Companies and institutions can only manage what is 'measurable' as datasets for generative AI. So when such an AI tries to make a 'good work of art', its evaluation criteria come down to hand-applied tags like 'good' and 'bad', the number of 'stars' people post on review sites, view counts, and the like; Nguyen points out that art's delicate, subtle values risk being lost from such measures. He believes the game industry has already optimized for 'addictiveness', and similar problems may arise in games created by AI like GPT-3.



◆8: Dr. Henry Shevlin, University of Cambridge
Shevlin used GPT-3 to produce a mock interview (PDF file) with the writer Terry Pratchett, who died in 2015. The mock interview, however, was not a pleasant conversation about Pratchett's work; it turned horribly existential, and Shevlin says it unnerved him even though he knew the other party was not human.

The advent of GPT-3 poses major challenges for the field of AI ethics, such as the creation of fake news, the movement to replace human workers with AI, and bias contained in training data. As a result, even scholars in the humanities are increasingly required to acquire basic technical knowledge and to engage with the new tools created by technology companies.

◆9: Professor Shannon Vallor, University of Edinburgh
Vallor, who studies the philosophy of emerging technology, acknowledges that GPT-3 can produce not only interesting short stories and poems but sometimes even working HTML code. On the other hand, she argues that it goes too far to immediately connect GPT-3 with artificial general intelligence, and that there is an element of hype around it.

Vallor points out that the hurdle AI cannot overcome is 'understanding' rather than performance, arguing that understanding is not a momentary act but sustained, lifelong social labor. The daily work of understanding, which builds, repairs, and strengthens our ever-changing sense of the other people, things, times, and places that make up the world, is a fundamental component of intelligence, and a predictive, generative model like GPT-3 cannot achieve it, Vallor says.


