US laws governing AI seem unlikely, but there may be hope

Can the US meaningfully regulate AI? It is not at all clear. Policymakers have made progress in recent months, but they have also had setbacks, illustrating the difficulty of writing laws that impose oversight on the technology.

In March, Tennessee became the first state to protect voice artists from unauthorized AI cloning. This summer, Colorado adopted a tiered, risk-based approach to AI policy. And in September, California Governor Gavin Newsom signed a number of AI-related safety bills, several of which require companies to disclose details about their AI training.

But the US still has no federal AI policy comparable to the EU’s AI Act. Even at the state level, regulation continues to face major roadblocks.

After a protracted battle with special interests, Governor Newsom vetoed SB 1047, a bill that would have imposed broad safety and transparency requirements on companies developing AI. Another California bill targeting distributors of AI deepfakes on social media was put on hold this fall pending the outcome of a lawsuit.

There is reason for optimism, however, according to Jessica Newman, co-director of the AI Policy Hub at UC Berkeley. Speaking on a panel about AI governance at TechCrunch Disrupt 2024, Newman noted that many federal laws may not have been written with AI in mind, but they still apply to AI, such as anti-discrimination and consumer protection laws.

“We often hear that the US is this kind of ‘Wild West’ compared to what’s happening in the EU,” Newman said, “but I think that’s overstated, and the reality is more nuanced than that.”

To Newman’s point, the Federal Trade Commission has forced companies that harvested data improperly to delete their AI models, and is investigating whether the sale of AI startups to big tech companies violates antitrust law. Meanwhile, the Federal Communications Commission has declared robocalls using AI voices illegal, and has moved to require disclosure of AI-generated content in political advertising.

President Joe Biden has also tried to get certain AI rules on the books. Roughly a year ago, Biden signed the AI Executive Order, which supports the voluntary reporting and measurement practices that many AI companies had already chosen to adopt.

One outgrowth of the executive order was the US AI Safety Institute (AISI), a federal body that studies vulnerabilities in AI systems. Operating within the National Institute of Standards and Technology, AISI has research partnerships with major AI labs such as OpenAI and Anthropic.

However, AISI could be dismantled by a simple repeal of Biden’s executive order. In October, a coalition of more than 60 organizations called on Congress to enact legislation codifying AISI before the end of the year.

“I think all of us, as Americans, have an interest in making sure that we mitigate the potential downside of technology,” said AISI director Elizabeth Kelly, who also participated in the panel.

So is there any hope for comprehensive AI regulation in the States? The failure of SB 1047, which Newman described as a “light touch” bill for industry, is not exactly encouraging. Authored by California State Senator Scott Wiener, SB 1047 was opposed by many in Silicon Valley, including high-profile experts such as Meta’s chief AI scientist, Yann LeCun.

For his part, Wiener, another Disrupt panelist, said he wouldn’t have drafted the bill any differently, and he’s confident that broad AI legislation will eventually prevail.

“I think it laid the foundation for future efforts,” he said. “Hopefully we can do something that brings more people together, because the reality that all the big labs have acknowledged is that the risks [of AI] are real and we want to test for them.”

Indeed, Anthropic last week warned of an AI disaster if governments do not implement regulation in the next 18 months.

Opponents of regulation have only doubled down on their rhetoric. Last Monday, Khosla Ventures founder Vinod Khosla called Wiener “ignorant” and “not qualified” to regulate the real risks of AI. And Microsoft and Andreessen Horowitz issued statements opposing AI regulations that could affect their financial interests.

Newman posits, however, that pressure to unify the growing patchwork of state-by-state AI rules will ultimately produce a stronger legislative solution. In lieu of consensus on a regulatory model, state policymakers have introduced close to 700 pieces of AI legislation this year alone.

“My sense is that companies don’t want a patchwork regulatory environment where every state is different,” she said, “and I think there’s going to be increasing pressure to have something at the federal level that provides more clarity and reduces some of that uncertainty.”
