(Andina)

In 2023, the world witnessed major innovations in artificial intelligence (AI). Depending on what you read, these advances have either come to improve people's lives or to destroy them completely in a kind of rebellion of the machines. One of the biggest news stories of the year was the launch of ChatGPT, which caused both excitement and fear among the public.

ChatGPT is part of a new generation of AI systems capable of conversing, generating readable text, and even producing new images and videos based on what they have "learned" from a vast corpus of data: e-books, online writing and other media.

Derek Thompson, editor and journalist at the magazine The Atlantic, posed a series of questions about whether we should really fear new advances in AI, believing they will lead to the end of the human race, or whether they are inspiring tools that will improve people's lives.

Consulted by the American outlet, computer scientist Stephen Wolfram explains that large language models (LLMs) like ChatGPT work in a conceptually simple way: a neural network is trained to generate text by feeding it a large sample of existing text from the web, books, digital libraries, and so on.

If someone asks an LLM to imitate Shakespeare, it will produce text with an iambic pentameter structure. Or if you ask it to write in the style of a science fiction writer, it will mimic that author's more general characteristics.
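The core idea Wolfram describes, predicting the next word from patterns seen in training text, can be illustrated with a toy sketch. This is only a loose analogy: real LLMs use transformer neural networks trained on billions of documents, not the simple word-pair counting shown here.

```python
from collections import defaultdict

# Toy illustration of next-word prediction: count which word follows
# which in a training text, then generate new text by repeatedly
# picking the most likely next word.
training_text = (
    "to be or not to be that is the question "
    "whether tis nobler in the mind to suffer"
)

# counts[w][v] = how many times word v followed word w in training.
counts = defaultdict(lambda: defaultdict(int))
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    counts[current_word][next_word] += 1

def generate(start, length=8):
    """Greedily pick the most frequent next word at each step."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # the model has never seen this word; stop
        out.append(max(followers, key=followers.get))
    return " ".join(out)

print(generate("to"))  # → "to be or not to be or not to"
```

Because it only ever echoes patterns from its training sample, the toy model quickly loops. The point of the sketch is the mechanism, "learn which words tend to follow which, then continue a prompt," which is the same basic task LLMs are trained on, at vastly greater scale.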

"Experts have known for years that LLMs are great: they create fictitious things, they can be useful, but these are really stupid systems, and they are not scary," said Yann LeCun, chief AI scientist at Meta, when consulted by The Atlantic.

OpenAI, the company that developed ChatGPT

The US outlet points out that AI development is concentrated in large companies and in new ventures backed by capital from technology investment firms.

The fact that these developments are concentrated in companies rather than in universities and governments may improve the efficiency and quality of these AI systems.

"I have no doubt that AI will develop faster within Microsoft, Meta and Google than it would, for example, in the US military," explains Derek Thompson.

However, the US outlet warns that companies could make mistakes by rushing to market a product that is not ready. For example, Microsoft's Bing chatbot was aggressive towards some users when it was first released. There are other examples of such errors, such as Google's chatbot, which stumbled because it was launched in a hurry.

Philosopher Toby Ord warns that these advances in AI technology are not keeping pace with the development of ethics in the use of AI. Interviewed by The Atlantic, Ord compared the use of AI to "a prototype jet engine that can reach speeds never seen before, but without corresponding improvements in steering and control." For the philosopher, it is as if humanity were aboard a powerful Mach 5 aircraft, but without the manual to steer it in the desired direction.

Regarding the fear that AIs are the beginning of the end of the human race, the outlet points out that systems like Bing and ChatGPT are not good examples of artificial intelligence, but they can show us our ability to develop a superintelligent machine.

Others worry that AI systems may not be aligned with the intentions of their designers, a potential problem that many machine-ethics researchers have warned about.

Young man in front of a screen (Getty)

"How do you ensure that the AI that is built, which may well be significantly smarter than anyone who has ever lived, is aligned with the interests of its creators and of the human race?" asks The Atlantic.

And here lies the great fear behind that question: a superintelligent AI could be a serious problem for humanity.

Another question that worries experts, posed by the American outlet: "Is there more to fear from unaligned AI, or from AI aligned with the interests of bad actors?"

A possible solution to this is to develop a set of laws and regulations that ensure that the AIs that are developed are aligned with the interests of their creators and that those interests do not harm humanity. And developing AI outside of these laws would be illegal.

However, there will be actors or regimes with dishonest interests who could develop an AI with dangerous behavior.

Another open question: how much should education change in response to the development of these AI systems?

The development of these AI systems is also useful in other sectors, such as finance and programming. In some companies, certain AI systems outperform analysts at picking the best stocks.

"ChatGPT demonstrated good drafting skills for letters of demand, summary pleadings and judgments, and even drafted questions for cross-examination," said Michael Cembalest, chairman of market and investment strategy at JP Morgan Asset Management.

"LLMs do not replace attorneys, but they can increase their productivity, especially when legal databases like Westlaw and Lexis are used to train them," Cembalest added.

For the past few decades, it has been said that neural networks will replace workers in certain professions, such as radiology. However, the use of AI in radiology remains a complement for clinicians rather than a substitute. As in radiology, these technologies are meant to complement people's work.

