
This week in AI: OpenAI is stretched thin

Hello, folks, and welcome to TechCrunch’s regular AI newsletter. If you want this in your inbox every Wednesday, sign up here.

After a short break, we’re back with a few show notes from OpenAI’s DevDay.

Yesterday morning’s keynote in San Francisco was notable for its subdued tone — a contrast to CEO Sam Altman’s rah-rah, hypebeast-y address last year. This DevDay, Altman didn’t take the stage to reveal shiny new projects. He didn’t even show up; head of platform product Olivier Godement emceed.

On the agenda for this first of the OpenAI DevDays – the next one is in London this month, followed by the last one in Singapore in November – were quality-of-life improvements. OpenAI released a real-time voice API, as well as vision fine-tuning, which lets developers customize its GPT-4o model using images. And the company introduced model distillation, which takes a large AI model like GPT-4o and uses it to fine-tune a smaller model.
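The distillation idea mentioned above — a large "teacher" model's softened outputs supervising the fine-tuning of a smaller "student" — can be sketched in a few lines. The following is a minimal, framework-free illustration of the loss at the heart of the technique; the function names, logits, and temperature value are all hypothetical, not OpenAI's actual API or training setup.

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into probabilities, softened by a temperature."""
    scaled = [x / temperature for x in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence from the teacher's softened distribution to the student's.

    Minimizing this over a training set pushes the small "student" model to
    mimic the large "teacher" -- the core idea behind model distillation.
    A temperature > 1 softens both distributions so the student also learns
    from the teacher's near-miss predictions, not just its top answer.
    """
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A student whose logits roughly track the teacher's incurs a small loss.
teacher = [3.0, 1.0, 0.2]
student = [2.8, 1.1, 0.3]
print(distillation_loss(teacher, student))
```

In a real pipeline this loss would be computed per token over a corpus and backpropagated through the student only, often blended with the ordinary cross-entropy loss on ground-truth labels.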

None of this was unexpected. OpenAI tempered expectations this summer, saying DevDay would focus on educating devs, not showing off products. Still, what was left out of Tuesday’s keynote raised questions about the progress — and status — of OpenAI’s many AI efforts.

We didn’t hear about what would happen to OpenAI’s nearly year-old image generator, DALL-E 3, and we didn’t get an update on the limited preview of Voice Engine, the company’s voice synthesis tool. There’s no launch timeline yet for OpenAI’s video generator, Sora, nor any word on Media Manager, an app the company says it’s creating to let creators control how their content is used in model training.

When reached for comment, an OpenAI spokesperson told TechCrunch that OpenAI is “slowly rolling out [Voice Engine as] a preview to our most trusted partners” and that Media Manager is “under development.”

But it seems clear that OpenAI is stretched thin — and has been for some time.

According to a recent report by The Wall Street Journal, the company’s teams working on GPT-4o were given only nine days to conduct safety assessments. Fortune reports that many OpenAI employees thought that o1, the company’s first “reasoning” model, wasn’t ready for launch.

Heading into a funding round that could bring in as much as $6.5 billion, OpenAI has its fingers in a lot of pies. DALL-E 3 underperforms image generators like Flux in most quality tests; Sora is reportedly so slow to produce videos that OpenAI is retooling the model; and OpenAI continues to delay the release of a revenue-sharing program for its bot marketplace, the GPT Store, which it first announced in the first quarter of this year.

I’m not surprised that OpenAI now finds itself plagued by employee burnout and executive departures. If you try to be a jack-of-all-trades, you end up being a master of none – and pleasing no one.

News

AI bill vetoed: California Governor Gavin Newsom vetoed SB 1047, a high-profile bill that would have regulated AI development in the state. In a statement, Newsom called the bill “well-intentioned” but “[not] the best way” to protect society from the dangers of AI.

AI bills passed: Newsom did sign other AI legislation into law — including bills related to the disclosure of AI training data, deepfake nudes, and more.

Y Combinator criticized: Startup accelerator Y Combinator is under fire after backing an AI startup, PearAI, whose founders admitted they had essentially forked an open source project called Continue.

Copilot gets an upgrade: Microsoft’s AI-powered assistant Copilot got a makeover on Tuesday. It can now read your screen, think deeply, and speak to you aloud, among other tricks.

OpenAI founder joins Anthropic: Durk Kingma, one of the lesser-known co-founders of OpenAI, announced this week that he’ll join Anthropic. It’s unclear what he’ll be working on, though.

Training AI on customer photos: Meta’s AI-powered Ray-Bans have a front-facing camera for various AR features. But that camera could prove a privacy issue – the company won’t say whether it plans to train models on photos from users.

AI Camera for Raspberry Pi: Raspberry Pi, a company that sells small, cheap, single-board computers, has released the Raspberry Pi AI Camera, an add-on with built-in AI processing.

Research paper of the week

AI coding platforms have attracted millions of users and drawn hundreds of millions of dollars from VCs. But do they live up to their productivity promises?

Probably not, according to a new analysis from Uplevel, an engineering analytics firm. Uplevel compared data from about 800 of its developer customers — some of whom reported using GitHub’s AI coding tool, Copilot, and some of whom didn’t. Uplevel found that devs who relied on Copilot introduced 41% more bugs and were no less prone to burnout than those who didn’t use the tool.

Developers have shown enthusiasm for AI-powered coding tools despite concerns not only about security but also about copyright infringement and privacy. The majority of devs who responded to a recent GitHub survey said they’ve adopted AI tools in some form. Businesses are bullish, too — Microsoft reported in April that Copilot had more than 50,000 enterprise customers.

Model of the week

Liquid AI, an MIT spinoff, this week announced its first series of generative AI models: Liquid Foundation Models, or LFMs for short.

“Why care?” you might ask. New models are a dime a dozen – they’re released practically every day. Well, LFMs use a novel architecture, and they notch competitive scores on a range of industry benchmarks.

Most generative AI models are what’s known as transformers. Proposed by a team of Google researchers back in 2017, the transformer has become the dominant generative AI model architecture by far. Transformers underpin Sora and the newest versions of Stable Diffusion, as well as text-generating models like Anthropic’s Claude and Google’s Gemini.

But transformers have limitations. In particular, they are not very efficient at processing and analyzing large amounts of data.

Liquid says its LFMs have a reduced memory footprint compared to transformer architectures, allowing them to handle larger amounts of data on the same hardware. “By efficiently compressing the input, LFMs can process longer sequences [of data],” the company wrote in a blog post.
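The memory contrast behind that claim can be seen with back-of-the-envelope arithmetic: a transformer’s key/value cache grows linearly with sequence length (and its attention computation quadratically), while an architecture that compresses the input into a fixed-size state stays constant no matter how long the sequence gets. A rough sketch under hypothetical model sizes — none of these numbers are Liquid’s or any specific model’s:

```python
def kv_cache_bytes(seq_len, n_layers=32, n_heads=32, head_dim=128, bytes_per=2):
    """Approximate KV-cache memory for a transformer at inference time.

    Two tensors per layer (keys and values), each of shape
    (seq_len, n_heads, head_dim), stored at bytes_per bytes per element.
    Grows linearly with sequence length.
    """
    return 2 * n_layers * seq_len * n_heads * head_dim * bytes_per

def fixed_state_bytes(state_dim=4096, n_layers=32, bytes_per=2):
    """A constant-size recurrent state: independent of sequence length."""
    return n_layers * state_dim * bytes_per

# Compare the two as the context gets longer.
for seq_len in (1_000, 100_000):
    kv_mib = kv_cache_bytes(seq_len) / 2**20
    state_kib = fixed_state_bytes() / 2**10
    print(f"{seq_len:>7} tokens: ~{kv_mib:,.0f} MiB of KV cache "
          f"vs a constant ~{state_kib:,.0f} KiB of state")
```

With these illustrative sizes, the transformer’s cache balloons from hundreds of megabytes to tens of gigabytes as the context grows a hundredfold, while the fixed state never moves — which is the kind of advantage Liquid is pointing at.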

Liquid LFMs are available on multiple cloud platforms, and the team plans to continue refining the architecture in future releases.

Grab bag

If you blinked, you probably missed it: An AI company filed to go public this week.

Called Cerebras, the San Francisco-based startup develops hardware to run and train AI models, and competes directly with Nvidia.

So how does Cerebras hope to compete with the chip giant, which commanded between 70% and 95% of the AI chip segment as of July? Performance, says Cerebras. The company claims its flagship AI chip, which it both sells directly and offers as a service through its cloud, can outperform Nvidia’s hardware.

But Cerebras has yet to translate this purported performance advantage into profit. The company had a net loss of $66.6 million in the first quarter of 2024, per SEC filings. And last year, Cerebras reported a net loss of $127.2 million on revenue of $78.7 million.

Cerebras could seek to raise up to $1 billion in its IPO, according to Bloomberg. To date, the company has raised $715 million in venture capital, and it was last valued at over $4 billion — three years ago.

