Almost a decade ago, Stephen Hawking began warning us about the potential dangers of artificial intelligence. One branch of this science is machine learning, whose goal is to develop techniques that allow computers to learn, and to do so much as many animals do: improving their performance through experience, drawing on data and information.
This is precisely what DepecheMood++ does, a computational tool designed by scientists from the Polytechnic University of Madrid (UPM), the Bruno Kessler Foundation in Italy, the University of Twente (The Netherlands), and the French company Recital. DepecheMood++ detects and learns about the emotions a person feels when reading different texts. According to its creators, the system was taught how humans express feelings using learning algorithms of the kind already present in a large share of connected devices. It is not about “spying” on human emotions and taking possession of them, but rather an essential step towards a “complete” artificial intelligence capable of thinking as a human being would. Thanks to this, it will be possible, for example, to detect hateful messages on social networks or to spot signs of mental health disorders in internet users.
“Generating a piece of disinformation is relatively easy,” explained Óscar Araque, from the UPM Intelligent Systems Group, “but analyzing and arguing why that information is false is much more costly. This tool can be used to analyze the messages of people who are suffering from depression and even to recommend that they seek treatment. There are many applications; some we are not even able to imagine yet, but I think that tools like this can help us understand ourselves and thus move forward as a society.”
DepecheMood++ is capable of detecting up to six different emotions: fear, amusement, happiness, sadness, annoyance, and anger. For the moment, the tool is available only in English and Italian. Its impact could obviously be enormous, both in mental health and in crime prevention, but it could also become a tool for “spying” on humans. Araque is clearly aware of this risk, since he insists that it is only a resource currently being used for research, and he adds that they have already rejected proposals from companies seeking financial gain. He also points out the danger that biases (of race or gender) pose in all artificial intelligence systems, and the consequences they can have in people’s daily lives. Many tech giants, Google and Facebook among others, have already had to face AIs with racial biases.
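To make the idea concrete, here is a minimal sketch of how a lexicon-based tool in the spirit of DepecheMood++ can score a text: a published lexicon maps words to numeric scores over emotion categories, and a document-level score is obtained by aggregating the scores of the words the text contains. The file name, file layout, and function names below are illustrative assumptions for this sketch, not the project’s actual API.

```python
# Minimal sketch of lexicon-based emotion scoring. The lexicon file
# name and layout are assumptions: a tab-separated table whose rows
# are words and whose columns are emotion categories with scores.

from collections import defaultdict

def load_lexicon(path):
    """Load a word -> {emotion: score} table from a TSV file (hypothetical format)."""
    lexicon = {}
    with open(path, encoding="utf-8") as f:
        # First line: header with emotion column names after the word column.
        emotions = f.readline().rstrip("\n").split("\t")[1:]
        for line in f:
            parts = line.rstrip("\n").split("\t")
            word, scores = parts[0], [float(x) for x in parts[1:]]
            lexicon[word] = dict(zip(emotions, scores))
    return lexicon

def score_text(text, lexicon):
    """Average the per-word emotion scores of the words found in the lexicon."""
    totals = defaultdict(float)
    hits = 0
    for token in text.lower().split():
        token = token.strip(".,;:!?\"'")
        if token in lexicon:
            hits += 1
            for emotion, value in lexicon[token].items():
                totals[emotion] += value
    if hits == 0:
        return {}  # no known words, no emotion estimate
    return {emotion: value / hits for emotion, value in totals.items()}

# Usage (assuming a lexicon file is available locally):
# lexicon = load_lexicon("depechemood_lexicon.tsv")
# print(score_text("The verdict left the whole town frightened and angry.", lexicon))
```

Averaging word scores is the simplest aggregation choice; it ignores word order and negation, which is part of why learned models trained on such lexicons can do better than the lexicon alone.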
At the same time, Araque acknowledges that a highly active social network company, one that wants to know what its customers think without having to manually analyze the thousands of comments it may receive every day, would find such a tool useful. Offers from companies to acquire DepecheMood++ have so far been rejected, but such a use cannot be ruled out in the future, whether with DepecheMood++ itself or with a new AI with similar capabilities. It will surely happen.
And at that moment, we may remember Daniela Cerqui, an anthropologist at the University of Lausanne (Switzerland). When asked in 2014 whether Hawking was exaggerating, her answer was very clear: “We delegate more and more prerogatives of human beings to these machines, so that they are more competent than us. We will end up becoming their slaves.”
