China is currently home to some of the most advanced AI regulations in the world. Its party-state government, free from the burden of compromise, can swiftly regulate AI as it sees fit. In recent years, Beijing has crafted regulations that both (partially) protect citizens from tech companies’ greediest impulses and serve the paternalistic goal of preserving state power over individual expression and corporate interest.
In China, regulators, companies, and users are, to a degree, engaged in a game of hot potato: the aim is to avoid disagreeing with or pushing back on the regulations while also avoiding direct responsibility for enforcing them.
In April, the Cyberspace Administration of China (CAC) released a new and sweeping list of draft regulations targeting generative AI. The regulations lay out new and strict stipulations regarding the training and pre-training data employed by companies making products that use AI-generated content (AIGC). Most notably, companies must be able to guarantee the data’s “veracity, accuracy, objectivity, and diversity.” That’s a tall order.
While some specifics of enforcement were left unstated, corporate accountability is baked into the regulation; firms must submit to a security assessment conducted by the CAC before releasing services to the general public. That gives the CAC, a bureau of both the party and the state with already-expansive regulatory power, even more control over how and when new technologies are introduced to the Chinese public. The CAC’s director, Zhuang Rongwen, said in a recent speech that AIGC should be “reliable and controllable.”
It should also, however, be profitable—and, as a result, useful to China’s party-state system. Pioneering AI development is a crucial goal for China’s leaders, who view advances in the technology as a key ingredient to success in their competition with the United States. (They are not alone in this thinking.)
Chinese companies are responding to the regulations, which include not only the April rules on AIGC but also earlier ones targeting deepfakes and algorithmic recommendations, by releasing their own guidelines, attempting to get ahead of any alleged violations. Douyin, the Chinese version of TikTok, heightened its labeling requirements, shifting the onus from providers to users: anyone posting AI-generated content is obligated to label it as such, while the platform takes responsibility for providing a clear mechanism by which users can apply the AIGC label.
Douyin’s guidelines, which were released the month after the draft generative AI regulations came out, also prohibit spreading false information or “rumors” and require real-name registration, which is intended to increase accountability and dissuade users from violating the rules.
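To illustrate how a user-side labeling rule of this kind might translate into platform logic, here is a minimal, hypothetical sketch; the Post fields and the can_publish check are assumptions made for illustration and are not drawn from any actual Douyin code or API.

```python
# Hypothetical sketch of a user-facing AIGC labeling check, loosely modeled on
# Douyin's stated requirement that users label AI-generated content themselves.
# All names here are illustrative, not taken from any real Douyin system.
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str            # tied to real-name registration
    is_ai_generated: bool     # declared by the uploading user
    aigc_label_applied: bool  # whether the user applied the platform's AIGC label

def can_publish(post: Post) -> bool:
    """Block AI-generated posts that lack the user-applied AIGC label."""
    if post.is_ai_generated and not post.aigc_label_applied:
        return False
    return True

# An unlabeled AI-generated clip is rejected; a labeled one goes through.
print(can_publish(Post("user-123", is_ai_generated=True, aigc_label_applied=False)))  # False
print(can_publish(Post("user-123", is_ai_generated=True, aigc_label_applied=True)))   # True
```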
Another Chinese company, the chatbot platform Glow, released new rules on the amount of time users can spend on the platform. Likely motivated by Article 10 of the April regulations, which stipulates that service providers take “appropriate measures to prevent users from excessive reliance on generated content,” the company added “anti-addiction measures” that cap daily usage of the app at three hours. Once users exceed the limit, any messages they try to send to chatbots are blocked. Glow specializes in creating character-driven “intelligent agents”; for users who have developed close or romantic relationships with their chatbot companions, such a restriction could be emotionally distressing.
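As a rough illustration of what an “anti-addiction” cap like this could look like in practice, the following is a minimal sketch under stated assumptions; the UsageTracker class, its method names, and the in-memory bookkeeping are hypothetical and are not based on Glow’s actual implementation.

```python
# Minimal, hypothetical sketch of a daily usage cap like the one Glow describes:
# once a user's accumulated time for the day passes three hours, further messages
# to chatbots are blocked. All names are illustrative.
from datetime import date

DAILY_LIMIT_SECONDS = 3 * 60 * 60  # the three-hour cap described in Glow's rules

class UsageTracker:
    """Tracks per-user time spent in the app for the current day."""

    def __init__(self) -> None:
        self._seconds_used: dict[tuple[str, date], int] = {}

    def record(self, user_id: str, seconds: int) -> None:
        key = (user_id, date.today())
        self._seconds_used[key] = self._seconds_used.get(key, 0) + seconds

    def can_send_message(self, user_id: str) -> bool:
        """Messages to chatbots are blocked once today's usage exceeds the cap."""
        return self._seconds_used.get((user_id, date.today()), 0) < DAILY_LIMIT_SECONDS

tracker = UsageTracker()
tracker.record("user-42", 2 * 60 * 60)       # two hours of use
print(tracker.can_send_message("user-42"))   # True: still under the cap
tracker.record("user-42", 90 * 60)           # another ninety minutes
print(tracker.can_send_message("user-42"))   # False: over the three-hour limit
```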
Generative AI applications such as large language models are seismic because they allow for an immediate and digestible summation of information mined from their datasets, whether that includes the entire internet or, in China’s case, sections of it that omit any content the CCP finds politically threatening.
Pursuing the twin goals of technological advancement and impenetrable information control has been a project of the CCP for decades, epitomized by its successful implementation of the Great Firewall and simultaneous nurturing of some of the world’s biggest tech giants. If its leaders have anything to say about it (and they do), generative AI will not be the advancement that puts an end to the CCP’s helicopter parenting.
The regulations’ conformity with Xi Jinping’s goals ensures that generative AI, like other uses of the internet, will not be used to access information the government wants to keep inaccessible. Information control infamously helps authoritarian leaders avoid mass opposition that culminates in on-the-ground political action. This is an evergreen nightmare from the CCP’s perspective, and preventing it is more important than achieving the next big AI leap—despite the domestic and international praise such successes engender.
The country’s regulatory environment, aided by apparent compliance from China’s tech companies, is crucial to Xi’s efforts to preserve the relatively agreeable status quo.