Facebook fiddles with facial recognition and fights deepfakes while the US military shops for an AI ethicist

Also: Cunning crims use machine learning to steal cash


Roundup Hello, here's a quick rundown of this week's AI and machine-learning news beyond what we've already covered. Facebook has turned off automatic facial recognition for its users, and the US Department of Defense wants to hire an ethicist to, you know, develop AI for good, of course.

Let's detect deepfakes with a...challenge! AI engineers from industry and academia are teaming up to launch a challenge aimed at advancing technology for detecting visual content manipulated by machine-learning algorithms.

The contest, known as the Deepfake Detection Challenge (DFDC), is spearheaded by a long list of names, including Facebook, the San Francisco-based nonprofit the Partnership on AI, and Microsoft, as well as researchers from Cornell Tech, the Massachusetts Institute of Technology, University of Oxford, University of California, Berkeley, University of Maryland, College Park, and University at Albany-SUNY.

Deepfakes have ignited people's deepest fears. Doctored videos of actors and actresses may seem innocuous - check out the Chinese app Zao - but no one's laughing much when it comes to world leaders or fake pornography. The fear of revenge porn and the rise of disinformation are real.

There have been many attempts at dealing with deepfakes, such as implementing digital watermarking or coming up with more offbeat detection methods. It's difficult to catch all of them, however, particularly as it becomes easier and cheaper to generate more realistic photos and videos.
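
To show why watermarking gets mentioned at all, here's a minimal sketch of the simplest possible scheme, least-significant-bit embedding, in Python with NumPy. The payload and layout here are illustrative assumptions on our part; production watermarking uses far more robust, tamper-resistant schemes.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit string in the least significant bits of the first pixels."""
    flat = image.flatten()                                # flatten() copies
    flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits   # overwrite the LSBs
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out of the LSBs."""
    return image.flatten()[:n_bits] & 1

# Toy usage: a capture device tags its output, a verifier checks it later.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
payload = rng.integers(0, 2, size=128, dtype=np.uint8)
tagged = embed_watermark(frame, payload)
assert np.array_equal(extract_watermark(tagged, 128), payload)
print("watermark survives an untouched image")
```

The weakness is visible in the code itself: any re-encode or edit that touches the pixels wipes the hidden bits, which is partly why watermarking alone hasn't solved the problem.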

So, Facebook has decided to throw $10m (or about half a day's profit) at the problem. Researchers backing the DFDC will work together to build a dataset, and grants and awards will be handed out to engineers who want to participate in the challenge. Entrants will have to craft machine-learning models trained on the given dataset, and those models will be tested on how well they detect deepfakes. A leaderboard will be set up to monitor the best submissions.
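
To give a flavour of what entrants will be building, here's a minimal sketch of a frame-level "real or fake" classifier in PyTorch. It illustrates the general approach under our own assumptions (a fine-tuned ResNet backbone, binary labels per frame); it is not the challenge's prescribed method or data format.

```python
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    """Frame-level detector: pretrained backbone, one 'fake' logit per frame."""
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, frames):                    # frames: (batch, 3, 224, 224)
        return self.backbone(frames).squeeze(1)   # one logit per frame

model = DeepfakeDetector()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()                  # binary real-vs-fake objective

def train_step(frames, labels):
    """labels: 1.0 for fake frames, 0.0 for real ones."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for a real dataloader.
frames = torch.randn(4, 3, 224, 224)
labels = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(train_step(frames, labels))
```

A real entry would also aggregate per-frame scores into a per-video verdict, and would have to avoid overfitting to any single generation method.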

Don't worry, none of your Facebook data will be included in the training dataset, apparently. You can read more about that here.

You can choose to opt in to Facebook's facial recognition feature: While we're on the topic of Facebook, the social media giant announced this week that it had turned off automatic facial recognition for people's accounts.

That means that when you or your friends tag people's faces in pictures, Facebook shouldn't automatically suggest people's names if they haven't turned on facial recognition for their accounts.

If you do want to keep that feature, however, you can choose to turn it on. Exactly how to do that depends on when you joined Facebook. It's a little confusing, but if you can decipher the announcement, you're welcome.

Miscreants nabbed cash by faking CEO's voice: Machine-learning models that can imitate someone's voice, making it appear to say things the speaker never said, have been around for a while. Now it looks like criminals have used that kind of software to mimic a CEO and trick an unsuspecting employee into handing over €220,000 (about £198,789 or $243,000).

The targeted company hasn't been named, according to the Wall Street Journal, but its insurance firm said the caller, pretending to be the CEO, insisted the request for cash was urgent and that the money had to be transferred out of the company within an hour.

The money was put into a Hungarian bank account before it was shifted to Mexico and "distributed to other locations". No suspects have been identified yet, but the case could be the first time that such voice-cloning technology has been used to commit financial fraud. The Beeb also picked up the story.

Hm, the DoD wants an AI ethicist?: The US Department of Defense is looking for an ethicist to help the military develop AI technology for warfare.

The Joint Artificial Intelligence Center (JAIC), led by Lieutenant General Jack Shanahan, was set up by the US Department of Defense to help the US adopt AI and machine learning for national security purposes. Shanahan said the organization has grown to 60 employees and has a budget of $268m this year.

"One of the positions we are going to fill will be somebody who is not just looking at technical standards, but who is an ethicist," said Shanahan. "We are going to bring in someone who will have a deep background in ethics, and then the lawyers within the department will be looking at how we actually bake this into the Department of Defense."

JAIC projects include using AI for humanitarian assistance during wildfires and floods, as well as the controversial Project Maven, a computer-vision system for analysing drone footage that Google pulled out of.

US military researchers want to build AI bots that can work together: DARPA, the US military's research arm, is funding projects that will test out "autonomous machine teaming".

It wants "new approaches for autonomous teaming of physically distributed groups of AI enabled systems (multi-agent systems) when there is limited opportunity for centralized coordination," according to its Federal Business Opportunities page.

Examples of these systems include unmanned aerial vehicles (UAVs), satellites, ground sensors, and robots. At the moment, human teams are responsible for controlling them. DARPA wants to build machines capable of reasoning over different input data so they can make decisions without human supervision.
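
For a flavour of what decentralized teaming looks like in miniature, below is a toy average-consensus sketch in Python: each simulated agent repeatedly nudges its estimate toward its neighbours' values over a fixed communication graph, and the whole group converges on the global average with no central coordinator. This is a textbook distributed-agreement exercise built on our own assumptions, not anything from DARPA's actual programme.

```python
import numpy as np

# Hypothetical setup: five agents each hold a local sensor reading and can
# only exchange values with direct neighbours in a communication graph.
readings = np.array([3.0, 7.5, 4.2, 9.1, 5.6])   # one local estimate per agent
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]          # a line-shaped network
step_size = 0.3                                   # stable while < 1/max_degree

estimates = readings.copy()
for _ in range(200):
    deltas = np.zeros_like(estimates)
    for i, j in edges:
        # Symmetric pairwise updates preserve the sum of all estimates,
        # so the only stable consensus is the exact global average.
        deltas[i] += step_size * (estimates[j] - estimates[i])
        deltas[j] += step_size * (estimates[i] - estimates[j])
    estimates += deltas

print(estimates)        # every agent converges near...
print(readings.mean())  # ...the global mean, with no central node involved
```

Swap the readings for target tracks and the line graph for a patchy radio mesh, and you have a rough skeleton of the coordination problem DARPA is gesturing at.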

The total value awarded for this project is limited to $1m. ®
