València (EFE).- Artificial intelligence is still far from the threat of creating a machine that governs the world without human intervention, as in Terminator, but the possibility of impersonating politicians is already a real risk, for example by generating videos in which they “say” things they never said.
The generation of false news or defamatory content -the hoaxes forwarded on WhatsApp, or the profiles that spread lies on social networks- is now much easier with artificial intelligence, and it has become a far more powerful tool for political manipulation with the generation of fake videos.
Fake images
The AI-generated fake images of Pablo Iglesias and Yolanda Díaz arm in arm amid the tension between their parties, or of Emmanuel Macron demonstrating against his own pension reform, are some examples of the risks this technology poses to credibility in politics.
While AI can speed up certain campaign tasks, such as writing political agendas or sending emails, it also poses risks for public debate, above all through the generation of false content and identity theft, as Ismael Estudillo, artificial intelligence expert and project director at Lãberit, pointed out in an interview with EFE.

The CEO of OpenAI, the developer of ChatGPT, stated last Tuesday in an appearance before the United States Senate that “this technology can cause significant damage to the world; if something goes wrong, it can end very, very badly”, an opinion that, as it applies to politics in Spain, Estudillo shares but qualifies.
Manipulated videos
Until now, he explains, artificial intelligence has been used to muddy political debate, above all through the generation of false news, images such as those of the leaders of Sumar and Podemos or of the French president, and the creation of ‘deepfakes’, manipulated videos that look real.
“The objective is to replace what the politician says with what the end user is expected to hear,” says the specialist in immersive technologies, who points out that the risks are growing, but that the first detection algorithms and the first regulatory laws are also beginning to emerge.
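To make the mention of detection algorithms concrete, here is a minimal sketch (not from the article) of how such a tool might be applied, assuming the Hugging Face transformers library and a placeholder model identifier that does not refer to any specific published detector:

```python
# A minimal sketch, not from the article, of applying an emerging "detection
# algorithm": running a suspect image through an off-the-shelf image classifier
# that scores it as real or synthetic. The model identifier is a placeholder
# assumption, not a real published detector.
from transformers import pipeline

detector = pipeline(
    "image-classification",
    model="example-org/synthetic-image-detector",  # hypothetical model id
)

# Each prediction is a dict such as {"label": "synthetic", "score": 0.93}
for prediction in detector("suspect_photo.jpg"):
    print(f'{prediction["label"]}: {prediction["score"]:.2f}')
```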
False news with virtual presenters
As Estudillo explains, not only can a false statement now be put in writing, but the defamed politician can be made to pronounce it aloud, thanks to generated video and audio with which they can be made to act or speak however one likes.
Lãberit’s project director is not aware of this having happened yet in Spanish politics, but there are recent examples in other countries, such as the false images of former United States President Donald Trump being detained by the police, or fake news reports with AI-generated presenters spreading misinformation about the economic situation in Venezuela.
Identity theft is another of the risks the specialist perceives: “deepfake” videos of the actors Keanu Reeves and Margot Robbie, whose digital replicas speak or dance, have already gone viral, so Estudillo considers that the same technology could be used to generate false statements from a politician and make them say what they have not said.
The danger of “rewriting history”
What users must understand, says the expert, is that language models such as ChatGPT, with which you can “hold a conversation” and which can generate text, are not a search engine: “They are not the truth, they are a help.”
“Just as when you search for something on the internet you cannot stop at the Wikipedia entry, you cannot trust the information that ChatGPT gives you without checking it against other sources,” he points out, warning that the result the chatbot returns depends on what content it has been trained on and how much of it, and that it may not be complete or fully up to date.
But AI errors are not the biggest danger; for Ismael Estudillo, that lies in the possibility of “rewriting history”.
He believes that the prediction of Microsoft founder Bill Gates that within 18 months it will be possible to educate using this technology is not far-fetched, and he wonders “what could happen if people learned with an artificial intelligence that has been fed false information”.
In this sense, he warns of the threat that an AI could be “taught” a false account of the history of Spain that, for example, legitimizes terrorist violence or falsifies the facts of the Civil War or Francoism, and that this technology could then be used for educational purposes.
“Whoever knows how to do it can create the story they want, and a political party or group that knows how to handle them could train these models to play up all the good they have done,” he warns.
But it is not all threats, according to Estudillo, who acknowledges the advantages of using artificial intelligence in the day-to-day work of an electoral campaign.
“Actually, they were designed to streamline processes, and that’s what they can do: draft political proposals in different formats, schedule and send mailings, or even generate posters with image generation programs,” he details.
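As an illustration of the kind of process streamlining Estudillo describes, here is a minimal sketch using the OpenAI Python client to draft a short campaign text; the model name, prompt and output handling are illustrative assumptions, not part of the interview:

```python
# A minimal sketch, not from the article, of drafting campaign material with a
# language-model API. Requires an API key in the OPENAI_API_KEY environment
# variable; the model name and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed choice; any chat-capable model would do
    messages=[
        {"role": "system", "content": "You draft short, factual campaign notes."},
        {"role": "user", "content": "Draft a 100-word note announcing a town-hall meeting on housing policy."},
    ],
)

print(response.choices[0].message.content)
```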
Regulation
Estudillo believes that the commercialization of new AIs is now experiencing a certain “slowdown” while awaiting forthcoming European Union legislation, just as in Spain a bill to regulate image and voice simulations generated by artificial intelligence has been registered in Congress.
What is known about the likely final content of both regulations is that they will include an obligation to explicitly indicate which content has been generated digitally, but for Estudillo “people don’t read, and if you put the image large and the warning in a corner, the visual impact wins”.
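By way of illustration, here is a minimal sketch, using the Pillow imaging library, of what an explicit label on generated content could look like; the file names and wording are assumptions, and a full-width banner is used precisely to avoid the low-visibility corner warning Estudillo criticizes:

```python
# A minimal sketch, not from the article, of stamping a clearly visible
# "AI-generated" notice onto an image. File names and wording are assumptions.
from PIL import Image, ImageDraw

image = Image.open("generated_poster.png").convert("RGB")
draw = ImageDraw.Draw(image)

label = "AI-generated image / Imagen generada por IA"
width, height = image.size

# A full-width banner rather than a small corner notice, which is exactly the
# kind of warning Estudillo says nobody reads.
banner_height = max(30, height // 12)
draw.rectangle([(0, height - banner_height), (width, height)], fill="black")
draw.text((10, height - banner_height + 8), label, fill="white")

image.save("generated_poster_labelled.png")
```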
Regarding the possibility of penalizing the misuse of AI, he asserts that “it must not be regulated at the algorithm, it must be regulated at the base: regulate the data, the information, that is used to train it.”
Future
Regarding the future prospects of artificial intelligence, the Lãberit project director acknowledges the “fear of Skynet”, the AI that leads the machines in Terminator, but believes that reality is far from that prediction.
“Yes, artificial intelligence could be programmed to manage things, and it is true that it could do anything in a government that can be programmed or automated, such as executing a budget without exceeding its limits or avoiding debt, but there will always have to be a person behind it to train it,” he concludes. EFE