
Beijing lists the stuff it wants generative AI to censor

If it reflects 'core values of socialism' it'll be fine. No pressure, then, given that chatbots make many mistakes

As China's tech giants deploy their ChatGPT clones, Beijing has released proposed regulations for research, development and use of generative AI chatbots, without quite answering whether the tools can conform to socialist ideals.

The draft rules, published yesterday by the Cyberspace Administration of China and grandly titled Administrative Measures for Generative Artificial Intelligence Services, detail 21 articles that demand AI-generated content “reflect the core values of socialism” and “must not contain subversion of state power, overthrow of the socialist system, incitement to split the country.”

The document also lays out a firm prohibition against promoting terrorism, extremism, ethnic hatred and discrimination. Violence, obscene and pornographic information, false information, and content that may disrupt economic and social order are also prohibited.

The rules put the onus on designers of AI tools to prevent discrimination based on race, ethnicity, belief, country, region, gender, age, or occupation through careful selection of training data, algorithm design, and other optimizations.

Users must register with their real names, and those that provide generative AI services, or help others do so, will be deemed the producers of the content the bots generate. That producer assumes responsibility for any related privacy measures – including those associated with personal information.

Those using AI to provide services to the public will be required to undergo a security assessment before their offerings go live. Those providers then become responsible for the outcome, and for any leak of personal information or IP infringement. They must also clearly label content as AI-generated and handle any complaints, as well as guard against users becoming addicted to or dependent on the tool.

Providers must also suspend or terminate the services of any users found to have used generated content for nefarious means, or simply in violation of Beijing’s rules, “including engaging in network hype, maliciously posting and commenting, creating spam, writing malicious software, and implementing improper commercial marketing.”

The rules debuted after Huawei and Alibaba prepared updates to their generative AI models and associated services, while companies like Baidu pressed on with early services – all part of a race to be the first viable Chinese ChatGPT analogue.

China's tech giants are racing to bring generative AI tools to market. AI is a national policy priority, and China is not immune to FOMO when a new tech takes off as spectacularly as OpenAI's ChatGPT has.

Speaking of ChatGPT, it's not blocked in China, but it’s difficult to access as users need to sign up with a phone number and Chinese numbers aren't accepted.

That's a convenient situation for Beijing, as China's government prefers tech sourced from overseas to be sold and/or operated by local companies.

Most of the Chinese firms involved in the development of large language models (LLMs) are no strangers to Beijing's preferred online regulatory regimes, having in recent years experienced multiple crackdowns and directives about the kind of content and behaviour that won't be tolerated on China's internet. Regulation of AI and LLMs was therefore clearly imminent.

The CAC's proposed rules are therefore unsurprising, sharing familiar themes and goals with its other content regulations.

The draft rules are open for public feedback until May 10th, after which Beijing will put some hard rules into force “in 2023.” The draft rules outline fines and possible criminal investigations for offenders.

Despite the regulations, the CAC said the state supports the development and use of AI algorithms and frameworks. Its claims are likely true and reflect actions taken by state entities like the Beijing Municipal Bureau of Economy and Information Technology, which announced back in February that it would assist in building out AI models and open source frameworks.

But Beijing’s wish list will be hard to fill, because it asks immature and fallible technology to adhere to nuanced censorship rules – and in doing so may see developers restrict the data their models use in order to avoid mistakes.

But local web giants have already shown they understand what Beijing wants.

Baidu’s ERNIE model debuted last month, revealing that the company had implemented censorship of politically sensitive material – even as the bot botched plenty of fairly easy requests.

Furthermore, Beijing wants to protect IP, company secrets, and user information, but LLMs are continually fed new training data by their users. Samsung recently found out how difficult it is to control sensitive data when its secrets were reportedly leaked three times in ChatGPT’s first 20 days of use at the Korean chaebol.

Beijing obviously wants to learn from ChatGPT's bloopers, but how it will avoid those errors while maintaining a culture reflective of its values, and still yield a useful AI product, remains to be seen. ®
