Noemí G. Gómez | Madrid (EFE)
A group of experts is asking laboratories to pause the training of the most powerful artificial intelligence systems, but why now? The pace at which this technology is advancing is worrying, they argue, and it needs to be rethought.
Led by Yoshua Bengio, Turing Award winner and professor at the University of Montreal in Canada, and Stuart Russell of the University of California, Berkeley, in the United States, the experts ask the laboratories in a letter to suspend for at least six months the training of artificial intelligence (AI) systems more powerful than GPT-4 (the latest generative AI model from the company OpenAI).
Among the signatories are several Spaniards, including Carles Sierra, director of the Artificial Intelligence Research Institute of the Spanish National Research Council (CSIC), and Pablo Jarillo-Herrero of the Massachusetts Institute of Technology (MIT), who agree that "the necessary precautions" were not taken before this AI was rolled out massively to the public.
In statements to EFE, Sierra admits to "a growing concern about this kind of arms race" among technology companies. It is not only OpenAI that is developing generative AI models: so are Google, with Bard, and Meta, with LLaMA.
It’s not about catastrophism
It is not about being catastrophist, says Sierra, but rather that "there are companies that have invested a lot of money, want to monetize what they have done and are in a battle to see who gets the biggest piece of the pie," and in "this process they are being unwise."
There is a lack of evaluation, and without it we do not know what consequences this AI can have, says the CSIC expert, who compares it with the research and approval process for a drug: regulatory agencies take years to approve one, and only after it has passed the three phases of clinical trials (there is a fourth phase, pharmacovigilance).
"Companies are releasing new versions at a rate of about one a month (OpenAI is already working on ChatGPT-5), making the new models available to everyone rather than to specific sectors," he laments.
Jarillo-Herrero is also concerned by the pace at which this AI is advancing, and recalls that some time ago there was also interest in a moratorium on the CRISPR gene-editing technique, which was progressing "much faster than humanity could 'digest', and some applications could get out of hand."
"With such disruptive technologies, it is advisable to understand and anticipate the possible consequences of their use and to regulate it," he told EFE.
Both experts agree that AI, including generative AI, can bring benefits, but, as Sierra warns, these systems aim to produce results that are credible, not necessarily true, and that sound as if a human had said them; therein lies the risk.
Based on machine learning, these systems, which also raise concerns about privacy and the use of personal data, learn from the millions of texts, images or videos published on the internet, and their developers keep the data from users' thousands of "conversations" to improve the next models.
The biggest fear: misinformation
Jarillo-Herrero, professor of Physics at MIT, focuses his concerns above all on misinformation. Hyperrealistic images of Pope Francis in a white puffer jacket or of Donald Trump resisting arrest are two examples that have circulated on social networks in recent days.
"Before, there was already a lot of misinformation, but it was fairly easy for an educated person to notice and distinguish it. Now, with the use of AI, it is much easier to publish and disseminate information that on first reading seems real but is actually false," Jarillo-Herrero summarizes.
In addition, "the information and text with which this AI is trained contain many biases, the same ones humans have," so the generated responses contain all kinds of false stereotypes.
The researcher reflects that humanity, in general, has never been very effective at containing unwanted scientific and technical advances; for example, many countries have developed atomic or nuclear bombs.
"However, there is a big difference between AI and other dangerous advances such as nuclear bombs. The latter require very complex instrumentation and materials that are not easily available even to governments."
By contrast, he points out, anyone with a few computers can make use of AI. "For example, hackers must already be planning thousands of attacks using AI, which can easily solve verification puzzles that previously required a human."
"This six-month moratorium, if it happens, may help governments try to better understand the potential negative consequences of AI and regulate its use. The most advanced companies can perhaps pause and think about how to counteract these negative effects," concludes the MIT scientist.
Sierra, who warns of the danger of putting a system that can generate misinformation in the hands of a teenager, also speaks of regulating its use and recalls that sovereignty always rests with the people: "I do not agree with prohibiting, but I do agree with regulating."
A code of ethics
This expert, also president of the European AI Association, advocates drafting a clear code of ethics. There are precedents in good-practice documents from the European Union, the OECD, and American and British companies and institutions, but a robust and transparent global code is now needed.
To that end, he explains, he is in contact with Stuart Russell, one of the promoters of the letter, to study how to channel the effort.
At the European level, he says, the artificial intelligence law should be brought back to the discussion table to specify the risks of generative AI and how to set limits on it.
The law, which is expected to take effect this year, includes a ban on practices such as facial recognition in open spaces and classifies AI uses as high, moderate or low risk.
It thus identifies sectors, such as education, where special safeguards and oversight must apply (using AI to evaluate students or assign them to schools is classified as high risk).
However, when the law was drafted, generative AI systems were still under development. Now that they are far more advanced, the regulation will need to be revisited and adapted, Sierra concludes.