Researcher argues it was a mistake for OpenAI to withhold its "too dangerous" text-generating AI
OpenAI, a nonprofit AI research organization founded to prevent the misuse of AI and robotics technology, announced that it would postpone the full release of its automatic text-generation model, saying it "produces text so convincing that it is too dangerous" to publish. Hugh Zhang, a researcher at Stanford University, criticized OpenAI's stance, arguing that "it is dangerous to end the open-source culture of AI research."
Dear OpenAI: Please Open Source Your Language Model
In mid-February 2019, OpenAI announced that it had developed "GPT-2", an AI model that generates new text. GPT-2 produces remarkably fluent sentences, and the researchers said they were concerned that "the risk of abuse is very high."
AI writing tool makes it so easy to produce convincing text that its own development team deems it "too dangerous" - GIGAZINE
Zhang says it is not surprising that OpenAI announced a major achievement with GPT-2, given how remarkable recent progress in AI has been. What gave this announcement its impact, he points out, was instead OpenAI's cautious decision not to release the research, on the grounds that "the results could be abused for spam and fake news."
This cautious decision triggered wide-ranging debates on AI ethics on Twitter and Reddit, and AI-related media uniformly reported that "a dangerous AI has been developed," Zhang notes. Even granting that OpenAI is right to worry about misuse of its own research, Zhang opposes the decision not to open-source GPT-2: withholding the model is unnecessary as a safety measure, and it may harm the future of AI research.
Zhang says there are two kinds of technology that can harm the world. One is "destructive technology," including chemical weapons, biological weapons, and atomic bombs; the other is "deceptive technology," such as deepfakes, Photoshop, and the Internet. GPT-2, developed by OpenAI, falls into the latter category.
For destructive technology, Zhang argues, limiting the spread of knowledge about weapons and the like is the only way to protect society. Completely restricting the spread of knowledge is of course difficult, and given how fast science and technology advance, dangerous actors could succeed in building atomic bombs or chemical weapons from only a few clues. Even so, the only way to reduce the danger is to restrict information about destructive technology, for example by ensuring that the knowledge and materials needed to build an atomic bomb cannot be easily obtained on the Internet.
For deceptive technology, by contrast, Zhang says the countermeasure is not to restrict the spread of the technology but to inform people about what the latest technology can do. Counterintuitive as this may seem, deceptive technology loses most of its power once the general public becomes widely aware of its capabilities.
Knowledge of nuclear weapons does not protect anyone from a nuclear explosion, but knowing that modern speech-synthesis technology has reached a very high level lets a viewer watch a video of President Obama speaking Chinese and still doubt that "this is a real video." Likewise, anyone familiar with photo-editing tools such as Photoshop will not believe that President Putin can really ride a bear just because they see a photo of him doing so.
Photoshop lets anyone edit photos in countless ways, and however many abuses one can imagine, it has not destroyed society. In the past, cameras were assumed to record facts accurately, and Joseph Stalin exploited that very assumption, doctoring photographs to manipulate public impressions.
For example, take this photograph of Stalin and Nikolai Yezhov.
After Yezhov was purged, he was erased from the photograph.
When Photoshop was released in 1988, people worried that malicious users would tamper with photographs and inflict enormous damage on society. Yet 30 years after its release, even high school students use Photoshop, and no great social upheaval has occurred.
Zhang points out that Photoshop caused no social unrest because everyone knows that Photoshop exists and what it can do. A doctored photo might fool an elderly person who still believes that photographs are absolute and cannot be tampered with, but against modern viewers who know that pictures are easily edited, it loses its threat.
Regarding the various AI technologies developed in recent years, Zhang says: "People may fear that the apocalypse is coming, but I believe these technologies will follow the same path as Photoshop." In other words, by knowing about the latest technology, people can be skeptical of what they see.
GPT-2 takes the opening of a human-written passage (the prompt) and writes the continuation on its own (the model output). As an example, Zhang cites one prompt and completion from the material OpenAI published:
Prompt (written by a human): In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
Continuation (written by the AI): The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.
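The prompt-and-continuation mechanism illustrated above can be sketched in code. The toy below is not GPT-2 (which uses a large Transformer trained on millions of web pages) but a drastically simplified stand-in: a bigram table built from a tiny hypothetical corpus, sampled autoregressively, one token at a time, just as a language model extends a prompt.

```python
import random

# Toy illustration of autoregressive text generation, the mechanism GPT-2 uses:
# repeatedly predict the next token given everything written so far.
# GPT-2 learns next-token probabilities with a large Transformer; here a tiny
# bigram table over a made-up corpus stands in for that learned model.

CORPUS = (
    "the unicorns spoke perfect english and the scientists "
    "studied the unicorns in the valley and the scientists wrote"
).split()

# "Training": record which word was observed to follow which.
table = {}
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    table.setdefault(prev, []).append(nxt)

def generate(prompt, n_tokens, seed=0):
    """Extend the prompt one token at a time by sampling from the table."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(n_tokens):
        candidates = table.get(tokens[-1])
        if not candidates:  # dead end: this word was never seen mid-corpus
            break
        tokens.append(rng.choice(candidates))
    return " ".join(tokens)

print(generate("the unicorns", 5))
```

Because the toy model only knows which word follows which, it produces locally plausible but globally incoherent text, which is a miniature version of the contextual flaws Zhang identifies in GPT-2's output below.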
The thing to watch here, Zhang points out, is that the published continuation is the best sample out of several generation runs, and that the prompt itself was chosen to favor the argument. In particular, by presenting the bizarre topic of "unicorns speaking English," AI-generated sentences that would normally read as nonsense can appear to make sense.
Even allowing for that, Zhang points out that this short sample has "serious contextual flaws." The first sentence of the continuation implies that the unicorn has a single distinctive horn, yet the second sentence states that the unicorns have four horns. Moreover, although the human-written prompt says the unicorns were previously unknown, the AI writes that the phenomenon dates back almost two centuries.
"This may sound like nitpicking, but I think it reveals a serious problem with deep learning: GPT-2 does not truly understand the meaning of the text it is generating," Zhang says. An AI that does not understand meaning can produce text that looks plausible at first glance, but it struggles to stay consistent with the context.
Zhang concedes, of course, that GPT-2 is more accurate than most text-generating AIs built so far, but it is still far from human-level contextual understanding. For the moment, GPT-2 seems unlikely to be picked up by malicious actors any time soon and used for fake news or spam.
Zhang also counters those who argue that GPT-2's complete model need not be open-sourced, saying "this view is mistaken in several respects." AI technology has evolved at explosive speed precisely because research has been open-sourced, and OpenAI's aggressive open-sourcing of its research has undoubtedly boosted that trend in AI research. If OpenAI stops open-sourcing its research, other research institutions will very likely follow suit and keep their own research private.
In addition, open-sourcing research allows other researchers to examine it, which both validates the work and deepens the general public's understanding of AI technology. Even if the researchers themselves do not broadcast their results widely, engineers interested in the work will build services and products on the technology, spreading that knowledge. For example, "This Person Does Not Exist," a site built by an Uber engineer that uses AI to generate images of fictitious people, made many people realize, "Has AI image generation really advanced this far?"
"This Person Does Not Exist" instantly generates images of people who do not exist in this world - GIGAZINE
Zhang praises OpenAI for open-sourcing numerous research results and pushing the limits of AI research, and he also appreciates its serious engagement with research ethics. At the same time, he argues that the decision to abandon open source out of fear of misuse is wrong, and he hopes OpenAI will release the GPT-2 research in the near future.