Cybercriminals Leverage Chat AI for Business Email Compromise



Chat AIs like ChatGPT generate human-like text in response to input prompts. Cybercriminals are using these tools to create highly persuasive fake emails tailored to the recipient, increasing the success rate of business email compromise (BEC) attacks.

WormGPT - The Generative AI Tool Cybercriminals Are Using to Launch BEC Attacks | SlashNext
https://slashnext.com/blog/wormgpt-the-generative-ai-tool-cybercriminals-are-using-to-launch-business-email-compromise-attacks/



According to security company SlashNext, cybercriminals compose emails in their native language, machine-translate them, and then use a chat AI such as ChatGPT to polish the text and make it more formal. With this approach, attackers who are not fluent in the target language can still craft convincing emails for phishing or BEC attacks.

In addition, forums where cybercriminals gather host threads sharing special prompts known as 'jailbreaks', crafted to make chat AIs disclose confidential information, generate inappropriate content, or even output potentially harmful code.

Cybercriminals have also built their own custom chat AI modules designed to be easy to use for malicious purposes.



One of these is 'WormGPT', reported by SlashNext. WormGPT is an AI module based on the GPT-J language model released in 2021, offering features such as unlimited character support, chat memory retention, and code formatting. It appears to have been trained primarily on malware-related data sources, but SlashNext says the exact datasets used in training are unknown because the tool's creator keeps them secret.

Because WormGPT lacks the restrictions built into ChatGPT, convincing fake emails can be crafted with ease, and SlashNext warns that even novice cybercriminals can now pose a serious threat.
