Rebel AI group raises record funding after machine learning schism


A breakaway group of artificial intelligence researchers has raised record first-round funding for a new start-up working on general-purpose AI, marking the latest attempt to create an organization dedicated to ensuring the safety of the era's most powerful technology.

The group split from OpenAI, an organization founded with the support of Elon Musk in 2015 to ensure that super-intelligent AI systems would not one day run amok and harm their creators. The schism followed differences over the group's direction after it took a landmark $1bn investment from Microsoft in 2019, according to two people familiar with the split.

The new company, Anthropic, is led by Dario Amodei, a co-founder of OpenAI and its former head of AI safety. It has raised $124m in its first round of funding, the most for an AI group looking to build general-purpose AI technology rather than apply the technology to a specific industry, according to research firm PitchBook. Based on figures revealed in a company presentation, the round valued Anthropic at $845m.

The investment was led by Jaan Tallinn, the Estonian computer scientist behind Skype. Eric Schmidt, Google’s former chief executive, and Dustin Moskovitz, Facebook’s co-founder, also backed the company.

The break from OpenAI began with Amodei's departure in December and has since grown to include close to 30 researchers, according to one estimate. They include Amodei's sister, Daniela Amodei, Anthropic's president, as well as a group of researchers who worked on GPT-3, OpenAI's advanced automated language system, including Jared Kaplan, Amanda Askell, Tom Henighan, Jack Clark and Sam McCandlish.

OpenAI changed course two years ago when it sought Microsoft's backing to feed its growing hunger for the computing resources needed to power its deep learning systems. In return, it promised the software company first rights to commercialize its findings.

“They started out as a non-profit, intended to democratize AI,” said Oren Etzioni, head of the AI institute founded by Microsoft co-founder Paul Allen. “Obviously when you get $1bn you’re going to generate a return. I think their trajectory has become more corporate.”

OpenAI has sought to insulate its AI safety research from its more recent commercial operations by limiting Microsoft's presence on its board. Even so, there have been internal tensions over the direction and priorities of the organization, according to a person familiar with the breakaway group.

OpenAI would not comment on whether disagreement over the direction of its research had led to the split, but said it had made internal changes to integrate its research and safety work more closely around the time Amodei left.

Microsoft gained exclusive rights to tap OpenAI's research after committing $1bn to support the group, much of it in the form of computing resources to power its compute-intensive deep learning systems, including GPT-3. Earlier this week Microsoft said it had integrated the language system into some of its software-creation tools so that people without coding skills could build their own applications.

The rapid commercialization of GPT-3 stands in stark contrast to OpenAI's handling of an earlier version of the technology, developed in 2019. The group initially said it would not publish technical details of the powerful language system out of concern about potential abuse, although it has since changed course.

To insulate itself from commercial pressures, Anthropic has registered as a public benefit corporation, or “B corp,” with special governance arrangements to protect its mission of “responsibly developing and maintaining advanced AI for the benefit of mankind.” These include the creation of a long-term benefits committee made up of people with no connection to the company or its backers, who will have the final say on matters such as the composition of its board.

Anthropic said its work would focus on “large-scale AI models,” including making the systems easier to interpret and “building ways to more closely integrate human feedback into the development and deployment of these systems.”


