ChatGPT, the language model optimized for dialogue and conversation, has received a lot of coverage in the past couple of months. Most coverage looks at the benefits of using ChatGPT, for instance, to improve search results, help with coding tasks, provide recommendations, or serve as a translation tool.
Some researchers look in another direction: they are interested in finding out how ChatGPT can be abused by cybercriminals. Last month, Check Point Research published a report in which the company highlighted that malicious actors were using ChatGPT to write or improve malware.
Chester Wisniewski, principal research scientist at Sophos, said recently in an interview with TechTarget that he was not concerned about what the technology could do, but about the social side of abuse: cybercriminals could use ChatGPT to create phishing emails that look as if they were composed by a native speaker.
One of the shortcomings of phishing, even today, is that many phishing emails include spelling and grammar mistakes. While the overall quality of phishing emails has improved significantly over time, many emails still contain indicators that help computer users distinguish legitimate from illegitimate emails.
Wisniewski's example is the use of British English in phishing emails sent to users in the United States. British English differs from American English; some words are spelled differently, and American users are often on their guard when they notice these spellings in emails. Similarly, British English speakers would notice American English in phishing emails.
ChatGPT use in malicious emails
ChatGPT, and other language models with similar capabilities, may be used to construct emails that match the language of a certain region or country. It does not have to go as far as asking ChatGPT to copy the style of a famous author; instructing it to write a formal message in American English that informs users about something is sufficient. The created email reads as if it was written by a human, and all that is left to do is to plant the malicious bits in it. These can be links to websites, but also attachments or requests to call a specific phone number.
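The barrier to entry is low. The following is a minimal sketch of such a prompt, assuming the OpenAI Python library (pre-1.0 interface) and an illustrative model name; the prompt is a benign example, not taken from any observed attack:

```python
# Minimal sketch: prompting a language model for a formal message in a
# specific regional variant of English. Model name and prompt are
# illustrative assumptions, not from the article.
import openai

openai.api_key = "sk-..."  # set your own API key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": (
            "Write a short, formal message in American English informing "
            "customers about scheduled account maintenance."
        )},
    ],
)

# The output reads like a polished, native-speaker email.
print(response.choices[0].message.content)
```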
Wisniewski believes that humans need help in detecting whether an email or chat message was written by a human or a bot. He suggests that the answer could be friendly AI that analyzes content and gives users an estimate of how likely it is that the content is authentic. Researchers are already working on AI models that help determine whether content has been written by another AI.
Such models would then need to be integrated into security solutions, e.g., antivirus programs, and display notifications to users when the analysis suggests that content was generated by an artificial intelligence rather than a human.
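One common heuristic in this line of research, shown here purely as an illustration and not as the method of any specific product, is to measure how predictable a text is to a language model: machine-generated text tends to have lower perplexity than human writing. A minimal sketch using the Hugging Face transformers library and GPT-2:

```python
# Illustrative AI-text heuristic: low perplexity (very predictable text)
# is a weak signal that a passage was machine-generated. The threshold
# below is an arbitrary assumption; real detectors combine many signals.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Let the model predict each token from its context; passing labels
    # makes it return the average cross-entropy loss over the sequence.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(enc.input_ids, labels=enc.input_ids)
    return torch.exp(out.loss).item()

sample = ("Dear customer, your account requires verification. "
          "Please review the attached document at your earliest convenience.")
score = perplexity(sample)
print(f"perplexity: {score:.1f} -> "
      f"{'possibly AI-generated' if score < 40 else 'inconclusive'}")
```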
The problem with this approach is that there are also legitimate uses of ChatGPT. Organizations and users may use ChatGPT to improve text, e.g., to write better ad copy or polish certain paragraphs. These texts are not created to scam users, but a helpful AI may have difficulty distinguishing between the two use cases.
Closing Words
Phishing continues to be a threat, and the rise of ChatGPT and other language models adds a new tool to the arsenal of cybercriminals. Internet users need to be aware of this and focus their attention on other aspects of emails: while the grammar and spelling may be excellent, attackers still need to get users to open attachments, click on links, or perform another action.
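One such aspect is the link itself. As an illustration, the hypothetical snippet below flags a classic phishing indicator that flawless grammar does not hide: anchor text that names one domain while the link points to another. All names in the example are made up:

```python
# Illustrative check for mismatched link text vs. link target in email HTML.
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkAuditor(HTMLParser):
    """Collects (href, visible text) pairs from anchor tags."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

# Hypothetical email body with a deceptive link.
html = '<p>Please sign in at <a href="http://login.example-attacker.net">paypal.com</a>.</p>'
auditor = LinkAuditor()
auditor.feed(html)

for href, text in auditor.links:
    host = urlparse(href).hostname or ""
    # Flag links whose visible text looks like a domain the href does not match.
    if "." in text and text.lower() not in host.lower():
        print(f"Suspicious: text says '{text}' but link goes to '{host}'")
```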
Now You: have you tried ChatGPT?