
Microsoft and a16z set aside their differences, join hands against AI regulation

Two major forces in two deeply intertwined tech ecosystems – big incumbents and startups – have taken a break from counting their money to jointly ask the government to cease and desist from even considering laws that might affect their financial interests, or as they like to call it, innovation.

“Our two companies may not agree on everything, but this is not about our differences,” writes this group of vastly differing perspectives and interests: a16z founding partners Marc Andreessen and Ben Horowitz, along with Microsoft CEO Satya Nadella and President/Chief Legal Officer Brad Smith. A truly intersectional assembly, representing big business and big money.

But it’s the little guys they’re supposedly looking out for. That is, all the companies that would have been affected by the latest attempt at regulatory overreach: SB 1047.

Imagine being held liable for the improper disclosure of an open model! a16z general partner Anjney Midha called it a “regressive tax” on startups and “blatant regulatory capture” by the Big Tech companies that, unlike Midha and his impoverished colleagues, could actually afford the lawyers needed to comply.

Except that was all disinformation promulgated by Andreessen Horowitz and the other moneyed interests that might actually have been affected as backers of billion-dollar enterprises. In fact, small models and startups would have been only trivially affected, because the proposed law specifically protected them.

It’s ironic that the very kind of purposeful carve-out for “Little Tech” that Horowitz and Andreessen so often champion was distorted and minimized by the lobbying campaign they and others ran against SB 1047. (The architect of that bill, California State Senator Scott Wiener, talked about the whole thing recently at Disrupt.)

That bill had its problems, but its opposition vastly overstated the cost of compliance and failed to meaningfully support claims that it would chill or burden development.

It is part of the established playbook of Big Tech – with which, despite their posturing, Andreessen and Horowitz are closely aligned – to fight regulation at the state level, where it can win (as it did with SB 1047), while calling for federal solutions it knows will never come, or that will have no teeth thanks to partisan bickering and congressional ineptitude on technical matters.

This joint statement of “policy opportunity” is the latter part of the play: after torpedoing SB 1047, they can say they did so only with an eye toward supporting federal policy. Never mind that we are still waiting on the federal privacy law that tech companies have pushed for a decade while fighting state-level bills.

And what policies do they support? “Market-based approaches,” in other words: hands off our money, Uncle Sam.

Regulation should take “a science- and standards-based approach that recognizes regulatory frameworks focused on the use and misuse of technology,” and should “focus on the risk of bad actors abusing AI.” What this means is that we shouldn’t have proactive regulation, but rather reactive punishments when unregulated products are used by criminals for criminal purposes. This approach worked out great for the whole FTX situation, so I can see why they espouse it.

“Regulation should only be implemented if its benefits outweigh its costs.” It would take thousands of words to unpack all the ways this idea, expressed in this context, is funny. But essentially, what they are suggesting is that the fox be seated on the henhouse planning committee.

Regulators should “allow developers and startups the flexibility to choose which AI models to use wherever they build solutions and not tilt the field in favor of any one platform.” The implication is that there is some plan afoot to require permission to use one model or another. Since there isn’t, this is a straw man.

Here’s a big one that I just have to quote in full:

The right to learn: Copyright law is designed to promote the progress of science and the useful arts by extending protections to publishers and authors to encourage them to bring new works and knowledge to the public, but not at the expense of the public’s right to learn from these works. Copyright law should not be co-opted to imply that machines should be prevented from using data – the foundation of AI – to learn in the same way as people. Knowledge and unprotected facts, regardless of whether they are contained in protected subject matter, must remain free and accessible.

To be clear, the explicit assertion here is that software, run by billion-dollar corporations, has the “right” to access any data because it should be able to learn from it “in the same way as people.”

First of all, no. These systems are not like people; they generate data that mimics the human output in their training data. They are sophisticated statistical software with a natural language interface. They have no more of a “right” to any document or fact than Excel does.

Second, this idea that “facts” – by which they mean “intellectual property” – are the only thing these systems are interested in, and that some kind of fact-hoarding cabal is working to keep those facts away from them, is an engineered narrative we have seen before. Perplexity invoked the “facts belong to everyone” argument in its public response to allegations of systematic content theft, and its CEO Aravind Srinivas repeated the point to me onstage at Disrupt, as if the company were being accused of knowing trivia like the distance from the Earth to the moon.

While this is not the place for a full accounting of that particular straw man argument, let me simply point out that while facts are indeed free agents, the way they are created – say, through original reporting and scientific research – involves real costs. That is why the copyright and patent systems exist: not to prevent intellectual property from being widely shared and used, but to encourage its creation by ensuring that it can be assigned real value.

Copyright law is far from perfect, and is probably abused nearly as much as it is used. But it is not being “co-opted to imply that machines should be prevented from using data”; it is being applied to ensure that bad actors don’t circumvent the systems of value we have built around intellectual property.

That is quite clearly the ask: let the systems these companies own, operate, and profit from freely use the valuable output of others without compensation. To be fair, that part really is “the same way as people,” because it is people who design, direct, and deploy these systems, and those people don’t want to pay for anything they don’t have to, and don’t want regulations to change that.

There are a number of other recommendations in this small policy document, which will no doubt be given more detail in the versions they sent directly to lawmakers and regulators through official lobbying channels.

Some ideas are arguably good, if also self-serving: “fund digital literacy programs that help people understand how to use AI tools to create and access knowledge.” Good! Of course, the authors of this statement are deeply invested in those tools. Support “open data commons – pools of accessible data that can be managed in a way that benefits the public.” Good! “Review its procurement processes to enable more startups to sell technology to the government.” Good!

But these general, positive recommendations are the kind of thing you see from industry every year: invest in public services and speed up government processes. These agreeable but inconsequential suggestions are just a vehicle for the more consequential ones outlined above.

Ben Horowitz, Brad Smith, Marc Andreessen, and Satya Nadella want the government to back off regulating this lucrative new industry, let the industry decide which regulations are worth the trade-off, and nullify copyright in a way that more or less acts as a general pardon for the illegal or unethical practices that many suspect fueled the rapid rise of AI. Those are the policies that matter to them, whether or not kids get digital literacy.

