Even if you haven’t tried online artificial intelligence (AI) tools that can write essays and poems or conjure up new images on command, chances are that the companies behind the products you use at home already have.
Mattel has already put DALL-E, an AI image generator, to work coming up with ideas for new Hot Wheels toy cars. Used-car retailer CarMax summarizes thousands of customer reviews with the same “generative” AI technology that powers the popular ChatGPT chatbot.
Meanwhile, Snapchat is bringing a chatbot to its messaging service. And grocery delivery company Instacart has integrated ChatGPT to answer customers’ questions about food.
Coca-Cola plans to use generative AI to help create new marketing content. And while the company hasn’t said exactly how it plans to deploy the technology, the move reflects growing pressure on businesses to take advantage of tools that many of their employees and customers are already trying out on their own.
“We have to accept the risks,” Coca-Cola CEO James Quincey said in a recent video announcing a partnership with startup OpenAI, the creator of DALL-E and ChatGPT, through an alliance led by consultancy Bain. “We need to be smart about accepting those risks, experimenting, building on those experiences, scaling up; but not taking those risks is a useless point of view to begin with.”
But some AI experts warn that companies should carefully consider the potential harm to customers, society, and their own reputations before rushing to adopt ChatGPT and similar products in the workplace.
“I want people to think really hard before they implement this technology,” says Claire Leibowicz of The Partnership on AI, a nonprofit group founded and sponsored by major tech companies that recently released a set of recommendations for companies producing AI-generated images, audio, and other synthetic media. “They should be playing and experimenting, but we should also be thinking: What are these tools for in the first place?”
Some companies have been experimenting with AI for a while. Mattel unveiled its use of OpenAI’s image generator in October as a customer of Microsoft, whose partnership with OpenAI lets it integrate the startup’s technology into Microsoft’s cloud computing platform.
But it wasn’t until the November 30 launch of ChatGPT, a free public OpenAI tool, that widespread interest in generative AI began to seep into workplaces and executive suites.
“ChatGPT really made me realize how powerful these tools were,” says Eric Boyd, a Microsoft executive who runs its artificial intelligence platform. “It shifted the perspective in a lot of people’s minds, and they really understand it on a deeper level. My kids use it and my parents use it.”
However, there are reasons for caution.
While text generators like ChatGPT and Microsoft’s Bing chatbot can make writing emails, presentations, and marketing proposals quicker and easier, they also tend to confidently present misinformation as fact. Image generators trained on a huge body of digital art and photography have raised copyright questions from the original creators of those works.
“For companies that are really in the creative industry, it’s still an open question whether they can be sure of copyright protection for these designs,” says attorney Anna Gressel of the law firm Debevoise & Plimpton, which advises companies on how to use AI.
A safer use so far has been to treat the tools as a “thinking partner” for brainstorming, one that doesn’t produce the final product, Gressel adds.
“It helps create prototypes that a human being then turns into something more concrete,” she says.
It also helps ensure that AI assists humans rather than replaces them. Rowan Curran, an analyst at the research and advisory firm Forrester, believes the new tools will speed up some of the most routine office tasks, much as earlier innovations like word processors and spell checkers did, rather than putting people out of work, as some fear.
“Ultimately, it’s part of the workflow,” Curran says. “It’s not as if we’re talking about having a large language model drive an entire marketing campaign and handle that launch without senior marketing experts and all sorts of other checks.”
When it comes to integrating consumer-facing chatbots into phone apps, things get trickier, Curran adds, because of the need for safeguards around technology that can answer users’ questions in unexpected ways.
Public interest has fueled growing competition among cloud computing providers Microsoft, Amazon, and Google, which sell their services to large organizations and have the massive computing power needed to train and run AI models. Microsoft announced earlier this year that it would invest billions more in its partnership with OpenAI, even as it also competes with the startup as a direct seller of AI tools.
Google, which pioneered many of the advances in generative AI but has been cautious about releasing them to the public, is now scrambling to catch up on the business opportunities, including with its upcoming Bard chatbot. Facebook parent Meta, another leader in AI research, is developing similar technology but isn’t selling it to businesses the way its big tech peers are.
Amazon has struck a more muted tone, but it makes its ambitions clear through its partnerships, most recently a broadened collaboration between its cloud computing division AWS and the startup Hugging Face, creator of ChatGPT rival Bloom.
Hugging Face decided to deepen its partnership with Amazon after seeing the explosion in demand for generative AI products, explains Clément Delangue, the startup’s co-founder and CEO. But Delangue contrasted his company’s approach with that of competitors like OpenAI, which doesn’t reveal its code or datasets.
Hugging Face hosts a platform that allows developers to share open-source AI models for text, image, and audio tools, which can lay the foundation for creating different products. This transparency is “really important, because that’s how regulators, for example, understand these models and can regulate them,” he adds.
It’s also a way for “underrepresented people to understand where the biases may be (and) how the models were trained,” in order to mitigate those biases, Delangue notes.