Google, Microsoft, OpenAI, and Anthropic are jointly putting roughly $10 million into a new AI Safety Fund. The money is intended to finance work on tools for evaluating the most powerful AI models, including their potential benefits and risks to society. General research on AI safety is also to be funded.

While $10 million may sound like a lot, it is a modest sum for companies of this size. The partners themselves describe the contribution as only an initial step, and additional partners are expected to add to the fund's resources later.

The participating companies also frame the new AI Safety Fund as part of their voluntary commitment to responsible AI research. One can certainly debate whether this reflects serious intent or merely the bare minimum done for the sake of public image. Initially, the money is to flow primarily into technologies that assess the risk potential of new AI systems.

In the coming months, a call for research proposals will be issued. The Meridian Institute will administer the fund, supported by an advisory committee of independent external experts, experts from AI companies, and people with experience in research funding. The aim is to distribute the first grants as quickly as possible.

