OpenAI co-founder Ilya Sutskever predicts the end of AI pre-training

OpenAI’s co-founder and former chief scientist, Ilya Sutskever, made headlines earlier this year when he left to start his own AI lab, Safe Superintelligence Inc. He has avoided the limelight since his departure, but made a rare public appearance in Vancouver on Friday at the Conference on Neural Information Processing Systems (NeurIPS).

“Pre-training as we know it will unquestionably end,” Sutskever said on stage. Pre-training is the first phase of AI model development, in which a large language model learns patterns from vast amounts of unlabeled data – typically text from the internet, books, and other sources.
During his NeurIPS talk, Sutskever said that while he believes existing data can still take AI development further, the industry is running out of new data to train on. That dynamic, he said, could eventually force a shift away from the way models are trained today. He compared the situation to fossil fuels: just as oil is a finite resource, the internet contains a finite amount of human-generated content.

“We’ve achieved peak data and there’ll be no more,” Sutskever said. “We have to deal with the data that we have. There’s only one internet.”

Ilya Sutskever calls data the “fossil fuel” of AI.
Ilya Sutskever/NeurIPS

Next-generation models, he predicted, are going to “be agentic in a real way.” Agents have become a buzzword in the AI field. Although Sutskever did not define them during his talk, they are generally understood as autonomous AI systems that perform tasks, make decisions, and interact with software on their own.

Along with being “agentic,” he said, future systems will also be able to reason. Unlike today’s AI, which mostly pattern-matches based on what a model has seen before, future AI systems will be able to work things out step by step in a way that is more comparable to thinking.

The more a system reasons, “the more unpredictable it becomes,” according to Sutskever. He compared the unpredictability of “truly reasoning systems” to how advanced chess-playing AIs are “unpredictable to the best human chess players.”

“They will understand things from limited data,” he said. “They will not get confused.”

On stage, he drew a comparison between the scaling of AI systems and evolutionary biology, citing research on the relationship between brain and body mass across species. He noted that while most mammals follow one scaling pattern, hominids (human ancestors) show a distinctly different slope in their brain-to-body mass ratio on logarithmic scales.

He suggested that, just as evolution found a new scaling pattern for hominid brains, AI may similarly discover new approaches to scaling beyond how pre-training works today.
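To make the analogy concrete: the research Sutskever cited plots brain mass against body mass on log-log axes, where a power law (brain ∝ body^k) appears as a straight line and a “different scaling pattern” means a different exponent k. The sketch below (illustrative only; the data and the 0.75 exponent are hypothetical, not from the talk) shows how such an exponent is recovered from log-transformed data.

```python
import math

def powerlaw_exponent(body_mass, brain_mass):
    """Least-squares slope of log(brain) vs. log(body): the exponent k
    in brain = c * body**k. On a log-log plot this slope is the line's
    steepness, so a different k means a different scaling pattern."""
    xs = [math.log(b) for b in body_mass]
    ys = [math.log(m) for m in brain_mass]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical data generated from brain = 0.01 * body**0.75
body = [1.0, 10.0, 100.0, 1000.0]
brain = [0.01 * b**0.75 for b in body]
print(round(powerlaw_exponent(body, brain), 2))  # 0.75
```

A species group that deviates from this line with a steeper slope, as hominids do in the cited research, is following a different power law rather than just being larger.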

Ilya Sutskever compares the scaling of AI systems to evolutionary biology.
Ilya Sutskever/NeurIPS

After Sutskever concluded his talk, an audience member asked him how researchers can create the right incentive mechanisms for humanity to build AI in a way that gives it “the freedoms that we have as homo sapiens.”

“I feel like in some sense those are the kinds of questions that people should be reflecting on more,” Sutskever replied. He paused for a moment before saying that he doesn’t “feel confident answering questions like this” because it would require “a top-down government structure.” An audience member suggested cryptocurrency, drawing laughter from the room.

“I don’t feel like I am the right person to comment on cryptocurrency, but there is a chance what you [are] describing will happen,” said Sutskever. “You know, in some sense, it’s not a bad end result if you have AIs and all they want is to coexist with us and also just to have rights. Maybe that will be fine… I think things are so incredibly unpredictable. I hesitate to comment, but I encourage the speculation.”
