According to Watchful AI monitoring, Recursive Superintelligence has secured at least $500 million in funding at a pre-money valuation of $4 billion. The round was led by GV (formerly Google Ventures), with participation from Nvidia. It was significantly oversubscribed, and the final size may reach $1 billion. The company, registered in London at the end of last year, currently has about 20 employees and has yet to make a formal public debut.
The founding team is drawn mainly from the research ranks of OpenAI, Google DeepMind, and Salesforce. Richard Socher was previously Chief Scientist at Salesforce; Tim Rocktäschel is an AI professor at University College London who until recently served as a principal scientist at DeepMind, where he contributed to the Genie interactive world model; Josh Tobin, Jeff Clune, and Tim Shi come from OpenAI; other members come from Google and Meta.
The company's goal is to create an AI system that can continuously improve itself without human intervention. Today's large models are essentially fixed once training is complete: to make them more capable, engineers must curate new data and retrain. Self-improvement instead means the model generates its own training data and updates its own parameters, removing humans from the training loop. This has long been a goal of AI research, but no public results have yet demonstrated stable long-term operation.
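The loop described above can be sketched in a few lines. This is a toy illustration only, not Recursive's actual method (which is not public): the "model" is a parameter vector, the self-evaluation function is a hypothetical stand-in for a learned self-assessment signal, and the update rule is a simple accept-if-better step. All names here are invented for illustration.

```python
import random

def self_evaluate(params):
    # Hypothetical stand-in for a learned self-evaluation signal.
    # Here it simply prefers parameters close to zero.
    return -sum(x * x for x in params)

def self_improvement_step(params, lr=0.5, n_samples=8):
    # (1) The model generates its own candidate data: perturbed copies of itself.
    candidates = [[x + random.gauss(0, 0.5) for x in params]
                  for _ in range(n_samples)]
    # (2) It scores the candidates with its own evaluation function, no human labels.
    best = max(candidates, key=self_evaluate)
    # (3) It updates its parameters toward the best candidate, if that improves the score.
    if self_evaluate(best) > self_evaluate(params):
        params = [p + lr * (b - p) for p, b in zip(params, best)]
    return params

def self_improvement_loop(params, steps=100):
    for _ in range(steps):
        params = self_improvement_step(params)
    return params
```

The hard, unsolved part is step (2): in a real system the evaluation signal must itself be learned, and keeping it reliable as the model changes is exactly where no published result has yet shown long-term stability.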
In the first quarter of this year, global startup funding reached a historic high of $300 billion, with OpenAI, Anthropic, xAI, and Waymo taking the lion's share (Crunchbase data). Recursive joins a wave of new AI labs founded in recent months by researchers departing OpenAI, Google, and Meta, alongside Thinking Machines Lab, Safe Superintelligence, Ineffable Intelligence, and Advanced Machine Intelligence Labs.
