Google, which develops its own chat AI 'Bard', warns employees about using chat AI
Google, one of AI's biggest backers, warns own staff about chatbots | Reuters
https://www.reuters.com/technology/google-one-ais-biggest-backers-warns-own-staff-about-chatbots-2023-06-15/
Chat AI has taken off worldwide because it can dramatically speed up work by helping draft emails, documents, and code. On the other hand, chat AI often outputs erroneous information and can reproduce confidential data or copyrighted material, so many argue it should be used with caution.
Even as Google develops a range of AI tools, including its chatbot Bard, the company is warning employees against using chat AI, including Bard, four people familiar with the matter told Reuters. According to the sources, Google's parent company, Alphabet, advises employees not to enter confidential company material into chat AI.
Chat AIs such as Bard and ChatGPT are trained in part on conversations with users, and it has often been pointed out that information entered into them could therefore leak. Alphabet's advice to employees not to enter confidential company material into chat AI is an acknowledgment of this risk of information leakage.
Reuters also reports that Alphabet has warned its engineers to avoid directly using code generated by chat AI.
When Reuters asked Google for comment, the company said that "Bard may suggest undesirable code, but it is still useful for programmers." Google added that it "aims to be transparent about the limitations of its technology," saying that this information is not being hidden.
Reuters notes that Google's caution illustrates how the company wants to avoid business harm from Bard, the chatbot it released to compete with ChatGPT.
According to Reuters, a growing number of companies worldwide have, like Google, placed limits on the use of chat AI, including global firms such as Samsung, Amazon, and Deutsche Bank.
Google and Microsoft both offer conversational AI tools for enterprise customers that they say do not use customer data for training, which is believed to eliminate the risk of information leakage. However, these offerings come at a high cost.
A survey of nearly 12,000 respondents, including employees of top U.S. companies, found that about 43% of professionals were using AI tools such as ChatGPT as of January 2023, and that many of them were doing so without their bosses' knowledge.
According to a report by Insider, Google also instructed employees not to enter internal information during internal testing before Bard's release in February 2023.
In addition, Google's privacy notice, updated on June 1, 2023, added the statement: "Please do not include confidential information in your conversations with Bard."
Yusuf Mehdi, head of marketing at Microsoft, commented that it was "natural" for companies not to want their employees to use chat AI for work. Microsoft declined to comment on whether it has outright banned employees from entering sensitive information into AI programs, including those made in-house, though another executive at the company said only that his use was "for personal use," Reuters reported.
Meanwhile, McKinsey, a global consulting firm, actively encourages the use of generative AI, including ChatGPT, and 50% of its employees use generative AI.
A global consulting company states that ``50% of employees use generative AI''-GIGAZINE
On the other hand, the International Conference on Machine Learning (ICML), a major machine learning conference, has announced a policy prohibiting the use of AI such as ChatGPT to write scientific papers. Policies on the use of advanced AI tools thus vary considerably between companies and organizations.
Writing scientific papers with AI such as ChatGPT is prohibited at international conferences, but editing and polishing your own sentences is OK - GIGAZINE
in Software, Posted by logu_ii