China reveals draft laws that heavily restrict deepfakes

Big Tech gets hauled in and reminded of its responsibility to keep China's internet nice

The Chinese government has unveiled a draft law clamping down on deepfakes – the practice of using AI to adapt existing digital content into realistic simulations of humans.

The draft emerged last Friday from the Cyberspace Administration of China and frames the need for regulation in the context of the government's desire to ensure the internet is a tool for good and not the wretched hive of scum and villainy it has often become.

The explanatory memorandum for the policy suggests criminals and fraudsters will be attracted to using digitally created voice, video, chatbots, or manipulation of faces or gestures. The draft therefore rules out the use of such fakes for any application that could disrupt social order, infringe individuals' rights, deliver fake news, or depict sexual activity. It also proposes requiring a grant of permission for use of what China calls "deep synthesis" before it can be employed for legitimate uses.

Just what those legitimate uses might be is not discussed, but the draft does outline extensive regulations on how digital assets must be safeguarded to protect user privacy.

If deep synthesis is used, the draft proposes a requirement for it to be marked as a digital creation to remove any doubt about authenticity and provenance.

The draft also outlines requirements for service providers to implement excellent security practices and always act in the national interest.

The Middle Kingdom's big tech companies got the same message on the weekend, during a symposium on promoting the healthy and sustainable development of internet companies. The heads of 27 companies attended the event, at which Chinese regulators explained their desire for internet platforms to be both innovative problem-solvers and steadfast defenders of Chinese values.

Zhuang Rongwen, deputy director of China's Central Propaganda Department, called on big tech to ensure it continues to strengthen Chinese society with brilliant online services, while at the same time stepping up vigilance to ensure the Chinese internet is free of the many types of content that Beijing believes are bad for society.

Leaders from the People's Daily Online, Kuaishou, Xiaomi, and Meituan chimed in with their views on how big tech companies can ensure China's internet conforms to the Party line, but sadly their remarks were omitted from the Administration's account of the event. ®

Other stories you might like

  • Taiwan creates new challenge for tech industry: stern content regulation laws
    Big tech asked to be more transparent by logging what it took down and why

    Taiwan's concentration of tech manufacturing capability worries almost all stakeholders in the technology industry – if China reclaims the island, it would kick a colossal hole in global supply chains. Now the country has given Big Tech another reason to worry: transparency regulations of a kind social networks and surveillance capitalists detest.

    The regulations – named the Digital Intermediary Service Act and released as a draft yesterday by Taiwan's National Communications Commission – require platform operators to create a complaints mechanism anyone can use to request content takedowns, remove illegal content at speed, undergo audits to demonstrate they can do so, and respond promptly to orders to remove content.

    When platforms decide to take down content, they'll need to list each instance in a public database to promote accountability and transparency of their actions.

  • FBI warning: Crooks are using deepfake videos in interviews for remote gigs
    Yes. Of course I human. Why asking? Also, when you give passwords to database?

    The US FBI issued a warning on Tuesday that it has received increasing numbers of complaints relating to the use of deepfake videos during interviews for tech jobs that involve access to sensitive systems and information.

    The deepfake videos include a video image or recording convincingly manipulated to misrepresent someone as the "applicant" for jobs that can be performed remotely. The Bureau reports the scam has been tried on openings for developers and other "database, and software-related job functions". Some of the targeted jobs required access to customers' personal information, financial data, large databases, and/or proprietary information.

    "In these interviews, the actions and lip movement of the person seen interviewed on-camera do not completely coordinate with the audio of the person speaking. At times, actions such as coughing, sneezing, or other auditory actions are not aligned with what is presented visually," said the FBI in a public service announcement.

  • Is computer vision the cure for school shootings? Likely not
    Gun-detecting AI outfits want to help while root causes need tackling

    Comment More than 250 mass shootings have occurred in the US so far this year, and AI advocates think they have the solution. Not gun control, but better tech, unsurprisingly.

    Machine-learning biz Kogniz announced on Tuesday it was adding a ready-to-deploy gun detection model to its computer-vision platform. The system, we're told, can detect guns seen by security cameras, alert those at risk, notify police, lock down buildings, and perform other security tasks.

    In addition to spotting firearms, Kogniz uses its other computer-vision modules to notice unusual behavior, such as children sprinting down hallways or someone climbing in through a window, which could indicate an active shooter.


Biting the hand that feeds IT © 1998–2022