OpenAI’s Former Chief Scientist to Launch Competing AI Company
Ilya Sutskever, co-founder and former chief scientist of OpenAI, played a key role in the creation of the groundbreaking ChatGPT. Now he’s starting a new company, Safe Superintelligence Inc., whose mission is to build a safe superintelligence (SSI).
In a recent post on X, he said:
“We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team.”
Lately, concerns about the safety of artificial intelligence have mounted, and some companies have delayed or scaled back their plans for building secure AI systems that pose no harm to humans or the environment. Safe Superintelligence Inc. plans to adhere to ethical, safety, and security standards from the start. The company was established around a single product, which is why its name matches the very technology Ilya Sutskever and his team are trying to achieve.
On the SSI.inc website, Ilya Sutskever, along with Daniel Gross and Daniel Levy, invites talented engineers and researchers to join the company on its laser-focused mission: “We are assembling a lean, cracked team of the world’s best engineers and researchers dedicated to focusing on SSI and nothing else. We offer an opportunity to do your life’s work and help solve the most important technical challenge of our age.”
What We Know About Safe Superintelligence Inc.
Safe Superintelligence Inc. is an American company with offices in Palo Alto and Tel Aviv. It is recruiting top technical talent and is built on the belief that a lean, highly capable team produces the best results. The company promises zero short-term commercial pressure on employees, with no distractions from management overhead or product cycles.
The founders call it “the world’s first straight-shot SSI lab,” which is exciting given the growing concerns about the safety and security of AI products accessible to millions of people. However, the company made clear in its announcement that the focus on safety won’t hinder progress on advanced AI capabilities, and that development will move forward quickly:
“We approach safety and capabilities in tandem, as technical problems to be solved through revolutionary engineering and scientific breakthroughs. We plan to advance capabilities as fast as possible while making sure our safety always remains ahead.”
There is no public information about Safe Superintelligence Inc.’s investors, and the details of its sole product remain under wraps, but the company’s founders have impressive track records in the artificial intelligence field. AI enthusiasts already have high hopes for this newly founded company’s output.
Who is Ilya Sutskever?
Ilya Sutskever is best known as a co-founder of OpenAI, where he served as chief scientist, sat on the board, and was a crucial part of the team behind ChatGPT’s development. He left OpenAI in May 2024 to establish Safe Superintelligence Inc. with Daniel Levy and Daniel Gross. Daniel Levy, who worked alongside Sutskever and led the optimization team at OpenAI, will be the new company’s principal scientist. Daniel Gross, a prominent technology investor who previously led AI efforts at Apple, is the company’s CEO.
Overall, Safe Superintelligence Inc. looks like a serious effort by some of the biggest minds in the industry to develop a super-intelligent AI with human values and safety at its core. Since the company has only just launched and is still hiring, we likely won’t hear much from it for a while. Will the effort ultimately yield a safe and revolutionary AI technology? We’ll have to wait and see.