OpenAI co-founder Ilya Sutskever's new startup aims to create 'safe superintelligence'

He's in competition with – and critical of – his former workplace

OpenAI co-founder Ilya Sutskever – who last month quit the GPT creator – has unveiled his next gig: an outfit dubbed Safe Superintelligence Inc. that aims to produce a product of the same name – without the "Inc."

The startup currently appears to comprise not much more than three people, a static HTML web page, a social media presence, and a mission.

The web page reads: "Superintelligence is within reach. Building safe superintelligence (SSI) is the most important technical problem of our time."

"We have started the world's first straight-shot SSI lab, with one goal and one product: a safe superintelligence."

Building an SSI "is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI."

Who are those investors? The page doesn't indicate. Ditto the business model. The page is signed by Sutskever, Daniel Gross (Apple's former AI boss) and Daniel Levy, another alum of OpenAI.

So that's a team of three. SSI is also "assembling a lean, cracked [sic] team of the world's best engineers and researchers dedicated to focusing on SSI and nothing else." Members of the team will work in Palo Alto and Tel Aviv, Israel.

The web page also offers the following glimpse of the company's intentions:

We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead. This way, we can scale in peace. Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.

The mention of putting safety ahead of product cycles is notable, as OpenAI has attracted criticism for its approach to AI safety, leading it to create a Safety and Security Committee.

When Sutskever left OpenAI, he wrote "I'm confident that OpenAI will build AGI [artificial general intelligence] that is both safe and beneficial." SSI's stated intent suggests that Sutskever isn't as confident in OpenAI's approach to the issue as he may at first have indicated, and would like to try something else.

No details have been offered regarding what SSI will deliver, when it might arrive, or how it will ensure safety. Which rather makes this launch look like an exercise in what StartupLand likes to call URL: Ubiquity first, Revenue Later. ®
