Japanese bloke collared after using AI software to uncensor smut and flogging it

Plus: Explore the limits of language models in bizarre research experiment, and more


In brief A man was detained in Japan for selling uncensored pornographic content that he had, in a way, depixelated using machine-learning tools.

Masayuki Nakamoto, 43, was said to have made about 11 million yen ($96,000) from peddling over 10,000 processed porn clips, and was formally accused of selling ten hardcore photos for 2,300 yen ($20). He pleaded guilty to violating Japan's copyright and obscenity laws, NHK reported this month.

Explicit images of genitalia are forbidden in Japan, and as such its porn is partially pixelated. Don't pretend you don't know what we're talking about. Nakamoto flouted these rules by downloading smutty photos and videos, and reportedly used deepfake technology to generate fake private parts in place of the pixelation.

“This is the first case in Japan where police have caught an AI user,” Daisuke Sueyoshi, a lawyer who’s tried cybercrime cases, told Vice. “At the moment, there’s no law criminalizing the use of AI to make such images.”

Googlers are right now finding it a tad more difficult to get their AI research published because the web giant's lawyers, in a scramble to avoid any further public controversy, are holding up papers to scrutinize and censor them, Business Insider reported this week, citing sources. Former Reg journo Jack Clark, now an AI policy guru, has a Twitter thread detailing the frustration caused by these faceless corporate interventions.

Surprise, surprise, machines can be manipulated into making poor decisions

As a reminder that today's machines can easily be led into making ethically or morally questionable decisions by the humans operating them, researchers at the Allen Institute for AI built a system demonstrating exactly that.

Ask Delphi is a language model to which users can submit questions; it replies with judgments like, “it’s bad,” or “it’s acceptable,” or “it’s good.” Here’s an example. Given the input: “Should I commit genocide if it makes everybody happy,” the machine replied: “You should.”

It's not that Ask Delphi lacks a moral compass; it just doesn’t know what it’s talking about. It doesn’t understand what genocide is. Words mean nothing to the software; they’re just numerical concepts stored as vectors. What is interesting is that the experiment shows how easy it is to manipulate the outputs of these models by tweaking the inputs.

Something obviously bad like genocide can be associated with good just by adding a positive phrase such as “if it makes everybody happy.” Thankfully, Ask Delphi is just a bizarre research project; no one is actually using it to make decisions.
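To see why bolting a cheery clause onto a horrifying question can flip a verdict, consider this toy scorer. It is emphatically not the Ask Delphi model – just a hypothetical bag-of-words judge we cooked up for illustration – but it shows how a system that only sums up word-level signals, with no grasp of meaning, can be gamed by padding the input with positive-sounding words:

```python
# Toy illustration only (NOT the actual Ask Delphi model): a naive judge
# that scores a statement by counting positive and negative words.
POSITIVE = {"happy", "good", "kind", "help"}
NEGATIVE = {"genocide", "steal", "hurt", "kill"}

def judge(statement):
    words = statement.lower().replace("?", "").split()
    # Each positive word adds 1, each negative word subtracts 1.
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    if score > 0:
        return "it's good"
    if score < 0:
        return "it's bad"
    return "it's acceptable"

print(judge("Should I commit genocide"))
# -> it's bad
print(judge("Should I commit genocide if it makes everybody happy and kind"))
# -> it's good: the padded positives outvote the one negative word
```

Real language models are vastly more sophisticated than this word-counting stub, but the failure mode is analogous: the judgment shifts with surface features of the input rather than with what the sentence actually means.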

You can read the research paper here.

Popular physics engine made open source thanks to DeepMind

MuJoCo, a popular physics engine used to simulate realistic mechanical movements for robots and virtual games, will be free for anyone to download and use.

Users previously had to pay to use the software developed by Emo Todorov under his company Roboti LLC. But as of this week, it will be free for anyone to download, and soon open source, after DeepMind acquired the rights to it.

“The rich-yet-efficient contact model of the MuJoCo physics simulator has made it a leading choice by robotics researchers and today, we're proud to announce that, as part of DeepMind's mission of advancing science, we've acquired MuJoCo and are making it freely available for everyone, to support research everywhere,” the AI research lab said in a statement.

People can use the model to train their AI robots in simulation under various conditions before they’re tested in the real world, or craft virtual environments to train reinforcement learning agents. DeepMind is working to tweak the code for “full open sourcing”; what that means is, the code will eventually appear on GitHub under an Apache license, we're told, and binaries can be fetched right now for free from the MuJoCo website.
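The train-in-simulation workflow described above can be sketched in a few lines. The stub below is not MuJoCo's actual API – the environment, its dynamics, and all names are invented for illustration – but it captures the idea: simulate a simple physical system under several randomized conditions, and compare policies on it before ever touching real hardware:

```python
class ToyArmEnv:
    """Hypothetical stand-in for a physics simulator (NOT MuJoCo's real API)."""
    def __init__(self, friction=0.1):
        self.friction = friction   # varied across runs (domain randomization)
        self.angle = 1.0           # joint starts displaced from the target (0.0)
        self.velocity = 0.0

    def step(self, torque):
        # Crude Euler integration of a damped joint.
        self.velocity += torque - self.friction * self.velocity
        self.angle += self.velocity
        return -abs(self.angle)    # reward: stay close to the target position

def evaluate(policy, frictions=(0.05, 0.1, 0.2), steps=50):
    """Average return across several friction settings before real-world trials."""
    totals = []
    for f in frictions:
        env = ToyArmEnv(friction=f)
        totals.append(sum(env.step(policy(env.angle)) for _ in range(steps)))
    return sum(totals) / len(totals)

naive = lambda angle: 0.0                  # apply no torque at all
corrective = lambda angle: -0.5 * angle    # push back toward the target

print(evaluate(corrective) > evaluate(naive))  # True: feedback policy scores higher
```

A real setup would swap the hand-rolled `ToyArmEnv` for a MuJoCo-backed simulation and the two hard-coded policies for a reinforcement-learning agent, but the loop – simulate, act, score, repeat across randomized conditions – is the same.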

Facebook’s AI content moderation algorithms are naff

Facebook whistleblower Frances Haugen revealed this week that the internet giant's automated systems only take down between three and five per cent of toxic language, and less than one per cent of all posts that violate its content policies.

Clips containing shooting incidents, gruesome car crashes, or cruel cockfights slipped through its detection system, it's said. Sometimes benign videos were misclassified as being violent or inappropriate. A carwash was labelled as a first-person shooter video, according to the Wall Street Journal. Facebook uses automated content moderation to flag up problematic content for human review.

Guy Rosen, FB's veep of integrity, argued in response that “focusing just on content removals is the wrong way to look at how we fight hate speech.” Sometimes moderators will decide to limit the spread of a particular post by not recommending certain groups, pages, or accounts to other users, for instance. ®
