By Sarah Yáñez-Richards |
New York (EFE).- Google and Microsoft are racing to make their new artificial intelligence chatbots, due to reach the public soon, as popular as their search engines, or more so. But these technologies arrive with new cybersecurity risks, such as being used to craft scams or to write malware for cyberattacks.
These problems also affect chatbots such as the popular ChatGPT, created by OpenAI, whose technology also powers Microsoft’s Bing search engine.
Human-created content or artificial intelligence (AI)?
Satnam Narang, a senior research engineer at the cybersecurity firm Tenable, tells EFE that scammers stand to be among the biggest beneficiaries of this type of technology.
Chatbots can produce text in virtually any language in a matter of seconds, with perfect grammar.
According to Narang, one of the ways to identify scammers has been the grammatical mistakes in the messages they send their victims; if they use AI, those telltale errors disappear and the scams go more unnoticed.
“ChatGPT can help (scammers) create nicely designed email templates or create dating profiles when they try to scam users on dating apps. And when they have a conversation (with the victim) in real time, scammers can ask ChatGPT to help them generate the response that the person they are trying to impersonate would give,” notes Narang.
The expert also points out that other artificial intelligence tools, such as DALL·E 2 (also from OpenAI), allow fraudsters to create photographs of people who do not exist.
AI to design malware
ChatGPT can also help hackers create malicious programs, or malware.
“This malware is not going to be the most sophisticated or the best designed, but it does give them a basic understanding of how they can write malware in specific languages. So it gives them a head start, since until now whoever wanted to develop malicious software had to learn to program, but ChatGPT can help them shorten that time,” Narang explains.
DAN, a chatbot without limits
OpenAI’s ChatGPT, Microsoft’s Bing, and Google’s Bard are all carefully designed to avoid discussing a wide range of sensitive topics, such as racism or security, and to avoid giving offensive responses.
For example, they do not answer questions about Adolf Hitler, decline to comment on the English racial slur “nigger,” and do not give instructions on how to build a bomb.
However, Narang explains that there is already a jailbroken (modified to remove restrictions) version of ChatGPT called DAN, short for “Do Anything Now,” in which no such barriers exist.
“This is more worrying, because now (a user) could ask the unrestricted ChatGPT to help them write ransomware (a program that takes control of the system or device it infects and demands a ransom to return control to its owner), although it is not yet known how effective that ransomware would be,” Narang explains.
Pandora’s box
The expert believes it will be difficult to impose rules at the national or institutional level to set limits on these new technologies, or to keep people from using them.
“Once you open Pandora’s box, you can’t put anything back inside. ChatGPT is here, and it’s not going away, because beyond this malicious use there are many genuine use cases that are valuable to companies, organizations and individuals,” Narang concludes.