Will artificial intelligence steal our jobs?
According to a report by Goldman Sachs economists published in March, some 300 million full-time jobs could be replaced by artificial intelligence worldwide.
That’s 18% of the workforce. Workers in advanced economies would be more affected than those in emerging ones.
Patricia Ventura, who holds a PhD in Media, Communication and Culture from the Autonomous University of Barcelona, is an expert in ethics and artificial intelligence.
In 2021, she produced a report providing a framework for the Spanish government on algorithmic technologies in the communicative sphere. Miguel Ángel Antoñanzas spoke with her.
One of the scariest things is the possibility that we may feel replaced by artificial intelligence. Is that possible?
We cannot feel inferior to a machine because it is more efficient or productive. Human beings are much more than efficiency and productivity. So this discourse of productivity, of comparing the human brain with the machine, does not help us, and we cannot give in to these narratives. They are self-serving narratives spread by those who stand to benefit from this chaos.
We have to value ourselves, above all, for our moral judgment, which must govern this technology. That morality is not, and cannot be, in the hands of machines. There is nothing to show that they can have it.
So which professions stand to be harmed by this artificial intelligence?
I have seen a lot of concern among people who work in dubbing, and among screenwriters. In fact, in the United States screenwriters have led the largest mobilization worldwide to make sure the studios continue to rely on them. They do not want to be handed scripts already written by machines just so they can review them, and they do not want machines trained on their scripts; if that happens, they want to be paid for it. These professions are clearly, clearly in danger and can be harmed.
The legal world as well. One of these tools can search the case law on a particular subject in an instant. Among journalists, certain tasks will no longer be necessary. Some even say that programmers will also be affected, since machines can now write code too.
But it is also true that when you look back, you see that with every technological change it also seemed that everything was going to disappear, that the world was going to end. And in the end things resettled, which is why we have to learn from history. Fear is never good. Obviously we have to be sympathetic toward those professions, or toward people who cannot reinvent themselves now or who hold certain positions. But that is how technological evolution works. If a profession can be done by a machine, it may mean that it was not so creative or decisive after all. Therefore, I believe we do not yet know the real effects, and we have to look at this calmly, but we have to accept that certain tasks, whole professions or not, will end up being taken over by this technology.
And which professions will be favored?
The big technology companies, along with related sectors and anyone able to take advantage of this technology in some way. In communication and culture, perhaps the most specialized roles. Those who want to misinform or poison public debate can exploit it, but on the other hand it can also benefit people who want to build worthwhile projects, even journalistic projects that could not be done before. Above all, the big technology companies stand to win.
What is not well understood about artificial intelligence?
What I think is still not understood, and eventually will be, is that this is technology: technology that, like any other, can be used one way or another, for positive things or for things that are not so positive.
It is a tool that can be used to achieve many things that perhaps until now we have not been able to achieve. It can help us all to have more opportunities. It can help us improve the environment. But it can also be used to generate disinformation and polarization, to poison public debate.
Some experts, and even those responsible for developing this artificial intelligence, are publicly calling for regulation; other experts have warned that an uncontrolled artificial intelligence could endanger all of humanity.
Artificial intelligence itself is not bad and has no intentions. It has no motivations, and there is nothing to show that a machine can have a motivation. That is a very important nuance, because it places the responsibility on whoever creates it. Artificial intelligence is not going to dominate us; it is not out of control. But the people who are creating it and putting it on the market, perhaps without sufficient security measures and without sufficient auditing, those people are a danger. The focus should be on the people. I think the worst scenario is deregulation, because regulation will force everyone to follow rules before they can put on the market devices that can harm others.
The creators are responsible for their creations, and that responsibility obliges them to control them. So it seems to me that this talk of losing control is a self-serving story that certain creators are interested in spreading or promoting; I do not believe that, being responsible for it, they cannot control it.
Although we have seen public calls for regulation, you have said you believe there is a double discourse in these statements.
It seems to me that the language here is confused and self-serving. I believe they are not interested in regulation; what they are interested in is participating in regulation.
In Europe we have the General Data Protection Regulation. As an advisor to the UN and the European Union explained very well, when Italy banned ChatGPT it was simply being strict in applying the General Data Protection Regulation, which in no way legitimizes what OpenAI and these companies are doing with the data on the Internet, among other things because much of it is protected by copyright. On the other hand, a law firm in California has now also sued OpenAI in the United States. What is curious is that Italy backed down, and Europe has not enforced the General Data Protection Regulation. To me, that reveals the enormous power these companies have, which seems to place them above states.
Indeed, as you mentioned, OpenAI faces a class-action lawsuit in California accusing the company of stealing and appropriating people’s information to train its artificial intelligence tools.
The problem is that these tools have perhaps been launched on the market while also taking data from people, from creators protected by copyright, and here the contradiction arises. It is sad, because these are tools that can serve and encourage creativity, yet at the same time creators may feel uncomfortable because other people’s creations are being used. That is why regulation is important.