Bogota, April 17. The spread of generative artificial intelligence tools such as ChatGPT and Midjourney has raised alarms about their possible impact on disinformation, although the experts consulted by EFE remain cautious and decline to give in to pessimism.

Some of these models have already produced fake, but very believable, images of the Pope and Donald Trump; on a smaller scale, they have also been used to falsely accuse an Australian mayor of corruption and a professor in Washington of sexual harassment.

Juan De Brigard, a Colombian expert at the Karisma Foundation, an organization that promotes human rights in the digital sphere, shares two concerns about disinformation with EFE Verifica.

On the one hand, he points out that an “unprecedented” volume of content is being generated that is very difficult to filter; on the other, he notes that these tools are programmed so that what they say “seems plausible”, without necessarily adhering to the truth.

“They are not made with any parameter that protects, cares for or preserves the truth,” De Brigard says of generative artificial intelligence tools.

These algorithms are trained on millions of data points and images that may contain biases, with the danger that those biases are later reproduced in the systems’ responses.

“Historical data used to train systems reflects a long-standing systemic bias, and AI reproduces, and sometimes even exacerbates, that bias,” says Maura Grossman, a researcher at the School of Computer Science at the University of Waterloo (Canada).

DOCUMENTED INCIDENTS

The repository of the AIAAIC (“AI, Algorithmic, and Automation Incidents and Controversies”), which collects incidents related to the misuse of artificial intelligence (AI), shows how in some cases these new technologies have contributed to misinformation or given rise to misrepresentations.

For example, this year in Venezuela, “deepfakes”, videos in which artificial intelligence creates avatars that look like real people, were used to produce synthetic TV presenters who spread misleading claims about the country’s economic health.

Also in the United States, ChatGPT falsely accused a university professor of sexually harassing a student, citing a nonexistent Washington Post article.

The same chatbot falsely claimed that an Australian mayor had taken bribes; the mayor threatened the platform with legal action if the error was not corrected.

Likewise, in the context of the war in Ukraine, purveyors of disinformation impersonated Ukrainian President Volodymyr Zelensky in a video, also created with artificial intelligence, in which he appeared to call on his army to surrender.

ARTIFICIAL INTELLIGENCE EXPERTS ARE CAUTIOUS

Jorge Vilas Díaz Colodrero, a lawyer and director of the Government 4.0 program at Universidad Austral in Argentina, said at a recent seminar that AI holds “tremendous power.”

“Man’s problem is how to carry power without being contaminated by it,” he says.

Along these lines, Yann LeCun, chief AI scientist at Meta, the parent company of Facebook, believes that disinformation and propaganda “have always existed”, even if “until now, they were artisanal”.

He therefore considers that the obstacle to disinformation “is not the difficulty of creating content, but the difficulty of disseminating it widely”.

LeCun points to the “enormously positive” impact AI has had on “filtering and moderating content across social media, email, and other communication platforms.”

Likewise, he is convinced that “in the future there will be dialogue systems that can verify the facts”, as well as programs that journalists can use as “a human assistant with a great memory”.

THE URGENCY OF REGULATION

All the experts consulted agree that AI needs to be regulated, although there is no consensus on how it should be.

Meta’s LeCun argues that regulation “can help”, but such restrictions “can easily conflict with free speech”.

Grossman, for her part, lays out what she considers the best proposal she knows of, based on the creation of a “dedicated expert agency”.

“Like the United States Food and Drug Administration (FDA), which approves drugs before they can be marketed. I think there should be an FDA that reviews and approves algorithms that might have a significant and potentially negative impact on people’s lives,” she explains.

In this regard, Vilas Díaz Colodrero cites Andrea Renda, director of Global Governance, Regulation, Innovation and the Digital Economy at the Centre for European Policy Studies (CEPS), who proposes something like a “captcha revenge”.

Just as users today complete “captcha” tests when they access a web page so the system can verify they are not robots, Renda proposes regulation requiring systems to inform humans, without fail, that content was created by artificial intelligence.

Since 2021, the European Union has been studying legislation to regulate AI.

For now, the proposal on the table is a law that would classify AI systems into four levels of risk: unacceptable, high, limited and minimal.

Until those laws arrive, and despite the potential dangers of disinformation, experts suggest the risks are manageable. They all agree that regulation is imperative, and several of them recommend that users sharpen their critical judgment when consuming online content. EFE

dga/ares/rg
