Safe Superintelligence (SSI), a new venture co-founded by OpenAI’s former chief scientist Ilya Sutskever, has secured $1bn in funding, Reuters reports.

The funding round saw contributions from venture capital firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel, as well as NFDG, an investment partnership led by Nat Friedman and SSI’s chief executive Daniel Gross. 

The investment will be used to advance the development of artificial intelligence (AI) systems that are both safe and capable of exceeding human abilities, the company’s executives told Reuters.

SSI will also leverage the funds to enhance computing power and attract top industry talent. 

With a current team of ten, SSI is poised to expand, focusing on assembling a small, highly trusted group of researchers and engineers.  

The team will be split between Palo Alto, California, and Tel Aviv, Israel. Sources close to the matter have indicated that SSI is valued at $5bn.


The funding highlights the continued willingness of some investors to place large bets on exceptional talent and foundational AI research.  

AI safety is a critical issue, centred on preventing AI from causing harm, and has gained significant attention amidst concerns over the potential for rogue AI to act against humanity’s interests.  

A California bill proposing safety regulations for AI companies has divided the industry, with firms such as OpenAI and Google opposing it, while Anthropic and Elon Musk’s xAI support it. 

Sutskever co-founded SSI with Gross, who previously led AI initiatives at Apple, and Daniel Levy, a former OpenAI researcher.

Sutskever serves as chief scientist, Levy as principal scientist, and Gross oversees computing power and fundraising efforts.  

Sutskever said his decision to start SSI was driven by a desire to tackle new challenges: “I identified a mountain that is a bit different from what I was working on.”

Last year, Sutskever experienced a tumultuous period at OpenAI, which culminated in his departure in May after a boardroom shuffle and the dismantling of his “Superalignment” team.  

Unlike OpenAI, SSI has adopted a traditional for-profit structure and is currently prioritising the recruitment of individuals who align with its company culture.

SSI plans to collaborate with cloud providers and chip companies to meet its computing needs, although partnerships have yet to be finalised.  

AI startups often partner with companies such as Microsoft and NVIDIA for infrastructure support.