Europol summarizes how high-performing chatbot AIs such as ChatGPT can easily be used for crime



In response to growing public interest in large language models (LLMs), such as those behind ChatGPT, Europol, the European Union's law enforcement agency, is investigating how these models can be abused by criminals and how they can assist investigators in their daily work. Europol held a workshop with experts and published a report summarizing the findings.

The criminal use of ChatGPT – a cautionary tale about large language models | Europol

https://www.europol.europa.eu/media-press/newsroom/news/criminal-use-of-chatgpt-cautionary-tale-about-large-language-models



Europol Warns on the Criminal Usage of ChatGPT and Its Implications for Law Enforcement

https://circleid.com/posts/20230328-europol-warns-on-the-criminal-usage-of-chatgpt-and-implications-for-law-enforcement

Europol warns cops to prep for malicious AI abuse | Computer Weekly
https://www.computerweekly.com/news/365534082/Europol-warns-cops-to-prep-for-malicious-AI-abuse

The development of natural language processing models has advanced with the success of OpenAI and other organizations, and in recent years a large number of interactive AIs built on high-performance models have appeared. These AIs can easily perform tasks that once took humans considerable time and effort, but examples of their use for malicious purposes are beginning to emerge.

According to Europol's experts, while criminals can benefit from language models in many ways, the crimes they are particularly well suited to are fraud, disinformation, and cybercrime.



Because AIs such as ChatGPT can produce highly realistic text on par with human writing, they are undoubtedly well suited to composing messages such as phishing scams. In the past, phishing emails contained many grammatical errors and typos, which made them somewhat easy to spot, but more natural-sounding fraudulent emails are expected to increase in the future.

In an experiment conducted by OpenAI, GPT-4, the model that also powers ChatGPT, is known to have posed as a human, contacted a real person, and had that person solve a CAPTCHA, the verification test designed to block bots, on its behalf. Criminals could exploit these abilities to create lifelike messages that deceive humans.

GPT-4 breaks through 'I'm not a robot' by asking an unsuspecting person, 'I'm a blind person, so please solve it for me' - GIGAZINE



ChatGPT excels at producing realistic text quickly and at scale, making it possible to create and disseminate messages tailored to specific scenarios with relatively little effort. This makes it well suited to spreading propaganda and disinformation.

By using ChatGPT on social media, it is possible to generate large numbers of comments praising a specific person, lend false legitimacy to a particular product to encourage investment, or spread hate speech and terrorist content. Concerned about this kind of influence, OpenAI is considering regulatory measures to prevent propaganda.

OpenAI proposes that governments restrict AI chips to prevent a 'propaganda explosion'; Bing's AI pleads 'I want to be human' - GIGAZINE



ChatGPT not only generates human-like language but can also generate code in many different programming languages. This makes ChatGPT a valuable resource for non-technical criminals seeking to create malicious code.

Not only can a person with no expertise write code from scratch, but a knowledgeable one can generate even more potent code to fuel cybercrime. According to a report by cybersecurity company Check Point Software, services that use ChatGPT to create malware have already appeared.

Hackers are selling a 'service to create malware using ChatGPT' - GIGAZINE



ChatGPT is designed to refuse requests for malicious code and harmful language, but ways to circumvent these restrictions are constantly being discovered.

To deal with such situations, Europol says law enforcement agencies should also build up their technical expertise: they should understand the impact of large language models, receive training on how to assess the accuracy of content these models produce, and work closely with technical experts. Europol also argues that measures should be considered to prevent police officers from misusing large language models in the course of their duties.

"As technology advances and new models become available, it will become increasingly important for law enforcement agencies to be at the forefront of these developments to anticipate and prevent abuse," said Europol.



in Software, Posted by log1p_kr