Imane Rachidi |
The Hague (EFE). Criminals can abuse large language models such as ChatGPT to commit fraud, spread disinformation and generate malicious code in multiple programming languages, Europol warned today.
Large language models such as ChatGPT have a “dark side”, warns the police coordination agency, which on Monday published its first report on the possible exploitation of this type of artificial intelligence system by criminals, something that, it says, “offers a bleak picture” and a new challenge for law enforcement.
The current publicly accessible model of ChatGPT, capable of processing and generating human-like text, can answer questions on a variety of topics, translate text, hold conversations, generate new content and produce functional code. But it can also facilitate criminal activities ranging from helping criminals remain anonymous to specific crimes such as terrorism and child sexual exploitation.
Criminals can abuse these language models
Europol’s Innovation Lab has joined the growing public attention focused on ChatGPT, a chatbot already facing rivals such as Bard (Google) and Bing (Microsoft). Europol experts have investigated how criminals can abuse these Large Language Models (LLMs).
They have identified numerous areas of concern, three of which stand out.
The first is how it makes fraud, spoofing and social engineering easier: ChatGPT’s ability to write highly authentic text is a game changer. The most basic phishing used to be easy to detect because it relied on messages and emails riddled with grammatical and spelling errors, but it is now possible to realistically impersonate an organization or individual with only a basic command of English.
“Until now, this type of deceptive communication has been something that criminals had to produce on their own. In the case of mass-produced campaigns, victims of this type of crime were often able to identify the inauthentic nature of a message due to obvious spelling or grammatical errors or its vague or inaccurate content,” notes Europol.
Possible cases of abuse in terrorism, propaganda and disinformation
With this artificial intelligence, phishing and online fraud “can be created faster, in a much more authentic way and on a significantly larger scale”, and the tool makes it possible to “respond to messages in context and adopt a specific writing style”, it warns.
“ChatGPT’s capabilities lend themselves to a number of potential abuse cases in the area of terrorism, propaganda and disinformation. The model can be used to collect more information in general that may facilitate terrorist activities, such as terrorist financing or anonymous file sharing,” the report says.
Furthermore, this type of application not only makes it easier to produce disinformation, hate speech and terrorist content online, but also allows users to “give it misplaced credibility, as it is machine-generated and may therefore appear more objective to some than if it had been produced by a human.”
Along with fraud and disinformation, Europol highlights cybercrime as the third area of concern: ChatGPT is not limited to generating human-like language, but can also produce code in several programming languages, delivering a variety of practical results in a matter of minutes when given the right prompts on the tool’s web page.
“It is possible to create basic tools for a variety of malicious purposes. Although they are only basic, this provides a start for cybercrime, since it allows someone without technical knowledge to exploit an attack vector on a victim’s system,” the agency underlines in its report.
The security measures that prevent ChatGPT from generating potentially malicious code only work if the tool understands what it is being asked to do; if the prompts are broken down into individual steps, it is “trivial to circumvent these security measures.”
A more advanced user can also exploit ChatGPT’s enhanced capabilities to further refine, or even automate, sophisticated cybercriminal modi operandi.