Even the ‘godmother of AI’ doesn’t know what AGI is
Are you confused about artificial general intelligence, or AGI? That’s the thing OpenAI is dedicated to building in a way that “benefits all of humanity.” You might want to take the company seriously, since it just raised $6.6 billion to get closer to that goal.
But if you’re still wondering what AGI is, you’re not alone.
In a wide-ranging discussion Thursday at the Credo AI leadership conference, Fei-Fei Li, a world-renowned researcher often called the “godmother of AI,” said she doesn’t know what AGI is either. At other points, Li discussed her role in the birth of modern AI, how society should protect itself from advanced AI models, and why she thinks her new unicorn startup, World Labs, will change everything.
But when asked what she thought about the “AI singularity,” Li was as lost as the rest of us.
“I come from academic AI and was educated in rigorous and evidence-based methods, so I don’t really know what all these words mean,” Li said to a packed room in San Francisco, next to a large window overlooking the Golden Gate Bridge. “I don’t even know what AGI means. People say you know it when you see it; I guess I just haven’t seen it. The truth is, I don’t spend much time thinking about these words, because I think there are so many more important things to do…”
If anyone should know what AGI is, it’s probably Fei-Fei Li. In 2006, she created ImageNet, the world’s first large-scale AI training and benchmarking dataset, which was instrumental in developing today’s AI. From 2017 to 2018, she served as Chief Scientist of AI/ML at Google Cloud. Today, Li leads the Stanford Human-Centered AI Institute (HAI), and her startup, World Labs, is building “large world models.” (That term is almost as confusing as AGI, if you ask me.)
OpenAI CEO Sam Altman took a stab at describing AGI in a profile in The New Yorker last year, calling it “the equivalent of a median human that you could hire as a co-worker.”
Meanwhile, the OpenAI charter defines AGI as “highly autonomous systems that outperform humans at most economically valuable work.”
Apparently, these definitions weren’t good enough for the $157 billion company to work with, so OpenAI created five levels it uses internally to gauge its progress toward AGI. The first level is chatbots (like ChatGPT), then reasoners (apparently, OpenAI o1 is at this level), agents (presumably next), innovators (AI that can help invent things), and the last level, organizations (AI that can do the work of an entire organization).
Still confused? So am I, and so is Li. Besides, all of this sounds like far more than a median human co-worker could do.
Earlier in her talk, Li said she has been fascinated by the idea of intelligence since she was young. That led her to study AI long before it was profitable to do so. In the early 2000s, Li says, she and a few others were quietly laying the groundwork for the field.
“In 2012, my ImageNet combined with AlexNet and GPUs – many people call that the birth of modern AI. It was driven by three key ingredients: big data, neural networks, and modern GPU computing. And once that moment hit, I think life was never the same for the whole field of AI, or for our world.”
When asked about California’s AI bill, SB 1047, Li spoke carefully so as not to rekindle the controversy that Governor Newsom just put to rest by vetoing the bill last week. (We recently spoke with the author of SB 1047, and he was more than willing to open up about his dispute with Li.)
“Some of you may know that I have been vocal about my concerns around this bill [SB 1047], which was vetoed, but right now I’m thinking deeply, and with a lot of excitement, about looking forward,” said Li. “I was very flattered, or honored, that Governor Newsom invited me to participate in the next steps post-SB 1047.”
The governor of California recently tapped Li, alongside other AI experts, to form a task force to help the state develop guardrails for deploying AI. Li said she plans to use an evidence-based approach in this role and will do her best to advocate for academic research and funding. However, she also wants to make sure California doesn’t penalize technologists.
“We should really look at the potential impact on people and our communities rather than putting the burden on the technology itself… It wouldn’t make sense if we penalized a car engineer – let’s say Ford or GM – every time a car was misused, purposely or unintentionally, and harmed a person. Simply punishing the car engineer will not make cars safer. What we need to do is continue to innovate for safer measures, but also make the regulatory framework better – whether it’s seatbelts or speed limits – and the same is true for AI.”
That’s one of the better arguments I’ve heard against SB 1047, which would have penalized tech companies for dangerous AI models.
While Li advises California on AI regulation, she also runs her own startup, World Labs, in San Francisco. It’s Li’s first time founding a startup, and she’s one of the few women leading a cutting-edge AI lab.
“We are far from a very diverse AI ecosystem,” Li said. “I believe that diverse human intelligence will lead to diverse artificial intelligence, and it will just give us better technology.”
In the next few years, she’s excited to bring “spatial intelligence” closer to reality. Li says that human language, which today’s large language models are built on, probably took a million years to develop, whereas vision and perception likely took 540 million years. That means creating large world models is a far more complicated task.
“It’s not just about making computers see, but really making the computer understand the whole 3D world, which I call spatial intelligence,” said Li. “We’re not just seeing to name things… We’re really seeing to do things, to navigate the world, to interact with each other, and closing that gap between seeing and doing requires spatial knowledge. As a technologist, I’m very excited about that.”