A Chinese Plan to Make AI Watermarks Happen

Chinese regulators are likely to learn from the EU AI Act, said Jeffrey Ding, an assistant professor of political science at George Washington University. “China’s policymakers and scholars have used EU actions as inspiration in the past.”

At the same time, some of the measures taken by Chinese regulators are not really replicable in other countries. For example, the Chinese government is asking social platforms to screen user-uploaded content for AI-generated material. “That seems to be something very new, potentially unique to the Chinese context,” Ding said. “This would not exist in the US situation, because the US is famous for saying that platforms are not responsible for content.”

But What About Freedom of Speech on the Internet?

The draft law on labeling AI content is open for public feedback until October 14, and it may take several more months to be finalized and passed. But Chinese companies have little reason to delay preparing for the day it takes effect.

Sima Huapeng, founder and CEO of the Chinese AIGC company Silicon Intelligence, which uses AI to produce AI agents and influencers and to replicate both living and dead people, says his product currently lets users choose whether to mark generated output as AI. If the law passes, that choice may have to become mandatory.

“If the feature is optional, companies will probably not add it to their products. But if it becomes mandatory by law, everyone will have to use it,” said Sima. Adding watermarks or metadata labels is not technically difficult, he says, but it will raise operating costs for compliant companies.
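To see why labeling is technically simple, consider the two forms the draft law contemplates: an explicit label visible to readers, and an implicit watermark hidden in the content itself. The sketch below is purely illustrative (the zero-width-character scheme and all function names are hypothetical, not what Silicon Intelligence or any other company ships; real systems use standards such as C2PA content credentials):

```python
# Illustrative sketch of explicit vs. implicit AI-content labels.
# ZW0/ZW1 are zero-width characters used here to hide bits in text.
ZW0, ZW1 = "\u200b", "\u200c"  # zero-width space / zero-width non-joiner

def add_label(text: str, tag: str = "AI-generated") -> str:
    """Explicit label: a visible disclosure appended to the content."""
    return f"{text}\n[{tag}]"

def add_watermark(text: str, mark: str = "AI") -> str:
    """Implicit label: encode the mark as invisible zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in mark.encode("utf-8"))
    hidden = "".join(ZW1 if b == "1" else ZW0 for b in bits)
    return text + hidden

def read_watermark(text: str) -> str:
    """Recover the hidden mark from the zero-width characters, if present."""
    bits = "".join("1" if ch == ZW1 else "0" for ch in text if ch in (ZW0, ZW1))
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8")

out = add_watermark("Hello from a chatbot.")
print(out == "Hello from a chatbot.")  # False: the watermark is present but invisible
print(read_watermark(out))             # AI
```

The implementation cost is small, as Sima suggests; the compliance cost comes from wiring such labels into every generation pipeline and keeping them intact downstream.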

Policies like these could help keep AI from being used for fraud or privacy violations, he says, but they could also spur a black market for AI services, in which companies sidestep compliance to cut costs.

There’s also a fine line between holding AI content producers accountable and policing individual speech through more sophisticated tracking.

“The biggest human rights challenge is making sure these approaches don’t further compromise privacy or freedom of expression,” said Gregory. While labels and watermarks can be used to identify sources of misinformation and inappropriate content, the same tools can enable platforms and governments to exert tighter control over what users post online. In fact, concern over what AI tools might generate has been one of the main drivers of China’s proactive AI legislation.

At the same time, China’s AI industry is pushing the government for more room to experiment and grow, since it still lags behind its Western peers. China’s previous AI law was significantly watered down between the first public draft and the final bill, dropping identity-verification requirements and reducing the fines imposed on companies.

“What we’ve seen is that the Chinese government is really trying to walk this tightrope between ‘making sure we maintain content control’ but also ‘allowing these AI labs in the strategic space to have the freedom to innovate,'” Ding said. “This is another attempt to do that.”
