Uncle Sam punishes China for abusing Uyghur Muslims – by blacklisting top AI surveillance companies

It will restrict visas for Communist Party officials, too


The US government has blacklisted 28 Chinese organizations, including some of the world’s most valuable AI startups, for being complicit in Beijing’s brutal crackdown on its Uyghur Muslim population.

The majority of China’s Muslim population lives in Xinjiang, a northwestern region bordering Kazakhstan and Mongolia. Millions of Uyghurs have been subjected to constant surveillance, and more than a million have reportedly been detained in internment camps.

Now, the US government is fighting back by placing the Xinjiang People’s Government Public Security Bureau and 19 of its subordinate agencies, as well as eight tech companies, on its Entity List, subjecting them all to export restrictions.

The list, maintained by the Bureau of Industry and Security (BIS) at the US Department of Commerce, names the parties US companies are prohibited from trading with unless they obtain government permission. "The Entity List identifies foreign parties that are prohibited from receiving some or all items subject to the EAR [Export Administration Regulations] unless the exporter secures a license," according to the BIS.

“The U.S. Government and Department of Commerce cannot and will not tolerate the brutal suppression of ethnic minorities within China,” said Wilbur Ross, the Secretary of Commerce. “This action will ensure that our technologies, fostered in an environment of individual liberty and free enterprise, are not used to repress defenseless minority populations.”

Among the companies punished are SenseTime and Megvii, so-called unicorn startups valued at over a billion dollars apiece. SenseTime, known as Shangtang in China and valued at more than $7bn, was founded in Hong Kong and develops AI algorithms for facial recognition.

Megvii, a similar Beijing-based company valued at over $4bn, is known for Face++, its face-detection platform, which has been integrated into a police app used to target Uyghurs. Yitu Technology and Yixin Science and Technology Co are also blacklisted for supplying the Chinese government with facial recognition technology.
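For readers unfamiliar with the underlying technology, here is a minimal sketch of what basic, off-the-shelf face detection looks like, using the open source OpenCV library. It is purely illustrative: it has no connection to Face++ or any other blacklisted firm’s proprietary systems, and the input filename crowd.jpg is a hypothetical placeholder.

    # Illustrative sketch only: basic face detection with open source OpenCV,
    # not the proprietary systems named in this article.
    import cv2

    # Haar cascade classifier bundled with OpenCV for frontal-face detection
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    detector = cv2.CascadeClassifier(cascade_path)

    image = cv2.imread("crowd.jpg")  # hypothetical input file
    if image is None:
        raise SystemExit("could not read input image")

    # The detector operates on grayscale images
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Returns one (x, y, width, height) box per detected face
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Draw a green rectangle around each detection and save the result
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("faces_marked.jpg", image)
    print(f"Detected {len(faces)} face(s)")

Modern commercial systems like those named above use deep neural networks rather than Haar cascades, but the shape of the task is the same: an image goes in, face bounding boxes come out.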

Others include two companies from Hangzhou: Hikvision, marketed as the “world’s largest supplier of video surveillance” products, and Zhejiang Dahua Technology, both of which provide products and services such as smart cameras. iFlytek Co specialises in voice recognition software, whilst Xiamen Meiya Pico Information Co amasses data for digital forensics.

“We strongly oppose the inclusion of Shangtang Technology in the list of entities by the US Department of Commerce and call on the US government to re-examine it,” SenseTime said in a statement.

“Shangtang Technology will actively communicate with all parties on this matter as soon as possible to ensure fair and equitable treatment. We are confident that we can maximize the protection of our customers, partners, investors and employees.”

In related news, US Secretary of State Mike Pompeo announced the US was clamping down on visas for Chinese officials believed to be involved in the "detention or abuse of Uighurs, Kazakhs, or other Muslim minority groups in Xinjiang".

"China has forcibly detained over one million Muslims in a brutal, systematic campaign to erase religion and culture in Xinjiang. China must end its draconian surveillance and repression, release all those arbitrarily detained, and cease its coercion of Chinese Muslims abroad," he said in a tweet. ®

