Fei-Fei Li’s World Labs chooses Google Cloud, where she once led AI, as its primary compute provider

Cloud providers are chasing AI unicorns, and the latest is Fei-Fei Li’s World Labs. The startup recently selected Google Cloud as its primary compute provider for training AI models, in a deal worth hundreds of millions of dollars. But Li’s past tenure as chief AI scientist at Google Cloud was not a deciding factor, the company says.
During the Google Cloud Startup Summit on Tuesday, the companies announced that World Labs will spend a large portion of its funding on GPU servers from Google Cloud Platform to train “spatially intelligent” AI models.
Well-funded startups building AI models are in high demand among cloud providers. Other major deals include OpenAI, which exclusively trains and runs its AI models on Microsoft Azure, and Anthropic, which uses AWS and Google Cloud. These companies already pay hundreds of millions of dollars for compute services, and may one day need even more as their AI models scale. That makes them valuable customers for Google, Microsoft, and AWS to build relationships with early on.
World Labs builds novel, multimodal AI models with significant computing requirements. The startup recently raised $230 million at a valuation above $1 billion, in a round led by A16Z, to build world models. James Lee, general manager of startups and AI at Google Cloud, tells TechCrunch that World Labs’ AI models will one day be able to process, generate, and interact with video and geospatial data. World Labs calls this type of AI “spatial intelligence.”
Li has deep ties to Google Cloud, having led the company’s AI efforts until 2018. However, Google denies that this deal is a product of that relationship, and rejects the idea that cloud services are just a commodity. Instead, Lee said, Google Cloud’s services, such as its toolkit for scaling high-performance AI workloads, and its deep supply of chips were a bigger factor.
“Fei-Fei is obviously a friend of GCP,” Lee said in an interview. “GCP was not the only option they were looking at. But for all the reasons we talked about – our AI-optimized infrastructure and ability to meet their critical needs – they ultimately came to us.”
Google Cloud offers AI startups a choice between its own AI chips, tensor processing units (TPUs), and Nvidia GPUs, which Google buys and offers in more limited supply. Google Cloud is trying to get more startups to train AI models on TPUs, partly as a way to reduce its dependence on Nvidia. All cloud providers are constrained today by the shortage of Nvidia GPUs, which is why many are building their own AI chips to meet demand. Google Cloud says some startups train and run their models exclusively on TPUs; however, GPUs remain the industry’s preferred hardware for AI training.
World Labs chose to train its AI models on GPUs in this deal. However, Google Cloud would not say what went into that decision.
“We’ve worked with Fei-Fei and their product team, and at this stage of their product roadmap, it makes sense for them to partner with us on the GPU platform,” Lee said in an interview. “But it doesn’t mean it’s a permanent decision… Sometimes [startups] move on to different platforms, like TPUs.”
Lee wouldn’t disclose how big World Labs’ GPU cluster is, but cloud providers often assemble supercomputer-scale clusters to train startups’ AI models. Google Cloud has promised another AI model training startup, Magic, a cluster of “tens of thousands of Blackwell GPUs,” each more powerful than a high-end gaming PC.
Such clusters are easier to promise than to deliver. Google Cloud’s rival Microsoft is reportedly struggling to meet OpenAI’s enormous computing demands, pushing the startup to seek compute from other providers.
World Labs’ partnership with Google Cloud isn’t exclusive, meaning the startup could still strike deals with other cloud providers. However, Google Cloud says it will receive the majority of World Labs’ business going forward.