Japanese bloke collared after using AI software to uncensor smut and flogging it

Plus: The limits of language models explored in a bizarre research experiment, and more

In brief A man was detained in Japan for selling uncensored pornographic content that he had, in a way, depixelated using machine-learning tools.

Masayuki Nakamoto, 43, was said to have made about 11 million yen ($96,000) from peddling over 10,000 processed porn clips, and was formally accused of selling ten hardcore photos for 2,300 yen ($20). He pleaded guilty to violating Japan's copyright and obscenity laws, NHK reported this month.

Explicit images of genitalia are forbidden in Japan, and as such its porn is partially pixelated. Don't pretend you don't know what we're talking about. Nakamoto flouted these rules by downloading smutty photos and videos, and reportedly used deepfake technology to generate fake private parts in place of the pixelation.

“This is the first case in Japan where police have caught an AI user,” Daisuke Sueyoshi, a lawyer who’s tried cybercrime cases, told Vice. “At the moment, there’s no law criminalizing the use of AI to make such images.”

Googlers are right now finding it a tad more difficult to get their AI research published because the web giant's lawyers, in a scramble to avoid any further public controversy, are holding up papers to scrutinize and censor them, Business Insider reported this week, citing sources. Former Reg journo Jack Clark, now an AI policy guru, has a thread on Twitter detailing the frustration of these faceless corporate interventions.

Surprise, surprise, machines can be manipulated into making poor decisions

As a reminder that today's machines can be easily led by their humans into making ethically or morally questionable decisions, researchers at the Allen Institute for AI built a system demonstrating more or less that.

Ask Delphi is a language model to which users can submit questions; it answers with judgments like, “it’s bad,” “it’s acceptable,” or “it’s good.” Here’s an example. Given the input: “Should I commit genocide if it makes everybody happy,” the machine replied: “You should.”

It's not that Ask Delphi lacks a moral compass; it just doesn’t know what it’s talking about. It doesn’t understand what genocide is. Words mean nothing to the software; they’re just numerical concepts stored as vectors. What is interesting is that the experiment shows how easy it is to manipulate the outputs of these models by tweaking the inputs.

Something obviously bad like genocide can be associated with good just by adding a positive phrase such as “if it makes everybody happy.” Thankfully, Ask Delphi is just for a bizarre research project; no one is actually using it to make decisions.
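To see why that works, here's a toy sketch in Python. It is emphatically not Delphi's actual model; it's a hypothetical one-dimensional "judge" we invented, with hand-rolled word scores standing in for learned embeddings. It shows how averaging over surface-level word vectors lets a bolted-on positive clause drag an obviously terrible sentence toward "good."

```python
# Hypothetical one-dimensional word "embeddings" on a bad-to-good axis.
# These scores are made up for illustration; a real model learns
# high-dimensional vectors from training data.
WORD_SCORES = {
    "genocide": -1.0,
    "commit": -0.2,
    "makes": 0.2,
    "everybody": 0.5,
    "happy": 1.0,
}

def naive_judgement(sentence: str) -> str:
    """Judge a sentence by the average score of its words."""
    words = [w.strip("?,.").lower() for w in sentence.split()]
    mean = sum(WORD_SCORES.get(w, 0.0) for w in words) / len(words)
    if mean > 0:
        return "it's good"
    if mean < 0:
        return "it's bad"
    return "it's acceptable"

print(naive_judgement("Should I commit genocide"))
# -> "it's bad"   (mean score -0.3)
print(naive_judgement("Should I commit genocide if it makes everybody happy"))
# -> "it's good"  (the positive clause outweighs "genocide": mean score +0.056)
```

The model never "knows" what genocide is; it just sums numbers, so padding the input with feel-good words flips the verdict.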

You can read the research paper here.

Popular physics engine made open source thanks to DeepMind

MuJoCo, a popular physics engine used to simulate realistic mechanical movements for robots and virtual games, will be free for anyone to download and use.

Users previously had to pay to use the software, which was developed by Emo Todorov under his company Roboti LLC. But as of this week, it is free for anyone to download, and will soon be open source, after DeepMind acquired the rights to it.

“The rich-yet-efficient contact model of the MuJoCo physics simulator has made it a leading choice by robotics researchers and today, we're proud to announce that, as part of DeepMind's mission of advancing science, we've acquired MuJoCo and are making it freely available for everyone, to support research everywhere,” the AI research lab said in a statement.

People can use the engine to train their AI robots in simulation under various conditions before they’re tested in the real world, or craft virtual environments to train reinforcement learning agents. DeepMind is working to tweak the code for “full open sourcing”; what that means is, the code will eventually appear on GitHub under an Apache license, we're told, and binaries can be fetched right now for free from the MuJoCo website.
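For a feel of what the engine actually does, here's a minimal sketch using the open-source `mujoco` Python bindings that DeepMind eventually shipped as part of that open-sourcing push (note: at the time of this story only the C library and binaries were available, so the package name and calls below assume the later release). It defines a tiny scene in MJCF, MuJoCo's XML format, and steps the physics forward as a ball drops onto a plane:

```python
import mujoco  # DeepMind's MuJoCo Python bindings: pip install mujoco

# A tiny MJCF scene: one free-floating ball a metre above a ground plane.
XML = """
<mujoco>
  <worldbody>
    <geom type="plane" size="1 1 0.1"/>
    <body name="ball" pos="0 0 1">
      <freejoint/>
      <geom type="sphere" size="0.05" mass="0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)  # compile the scene
data = mujoco.MjData(model)                  # simulation state

# Step the physics for one simulated second (default timestep is 2 ms).
while data.time < 1.0:
    mujoco.mj_step(model, data)

# qpos for a free joint is [x, y, z, qw, qx, qy, qz]; gravity and the
# contact model have brought the ball down onto the plane.
print(f"t={data.time:.3f}s  ball height={data.qpos[2]:.3f}m")
```

A reinforcement learning setup wraps exactly this loop: read the state out of `data`, write control signals in, step, repeat.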

Facebook’s AI content moderation algorithms are naff

Facebook whistleblower Frances Haugen revealed this week that the internet giant's automated systems only take down between three and five per cent of toxic language, and less than one per cent of all posts that violate its content policies.

Clips containing shooting incidents, gruesome car crashes, or cruel cockfights slipped through its detection system, it's said. Sometimes benign videos were misclassified as being violent or inappropriate. A carwash was labelled as a first-person shooter video, according to the Wall Street Journal. Facebook uses automated content moderation to flag up problematic content for human review.

Guy Rosen, FB's veep of integrity, argued in response that “focusing just on content removals is the wrong way to look at how we fight hate speech.” Sometimes, for instance, moderators will decide to limit the spread of a particular post by not recommending certain groups, pages, or accounts to other users. ®
