Deepfake attacks can easily trick live facial recognition systems online

Plus: Next PyTorch release will support Apple GPUs so devs can train neural networks on their own laptops


In brief Miscreants can easily steal someone else's identity by tricking live facial recognition software using deepfakes, according to a new report.

Sensity AI, a startup focused on tackling identity fraud, carried out a series of mock attacks. Engineers scanned a person's photo from an ID card and mapped that likeness onto another person's face. Sensity then tested whether it could breach live facial recognition systems by tricking them into believing the impostor was a real user.

So-called "liveness tests" try to authenticate identities in real time using images or video streams from a camera, such as the face recognition used to unlock mobile phones. Nine out of ten vendors failed Sensity's live deepfake attacks.

Sensity did not name the companies susceptible to the deepfake attacks. "We told them 'look you're vulnerable to this kind of attack,' and they said 'we do not care,'" Francesco Cavalli, Sensity's chief operating officer, told The Verge. "We decided to publish it because we think, at a corporate level and in general, the public should be aware of these threats."

Relying on liveness tests is risky, especially if banks or the American tax authorities, for example, use them for automated biometric authentication. These attacks aren't always easy to carry out, however: Sensity's report notes that its engineers needed a special phone to hijack the mobile camera feed and inject pre-made deepfakes.
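Sensity's report doesn't include code, but the weakness it exploits is easy to sketch: a liveness check that trusts whatever device presents itself as a camera can be fed pre-rendered deepfake frames. Below is a deliberately naive illustration in Python, with `verify_face` standing in as a hypothetical face-matching call, not any vendor's actual API:

```python
import cv2  # OpenCV, for reading frames from a camera device

def naive_liveness_check(reference_photo, camera_index=0, frames_needed=30):
    """Accept the user if enough "live" frames match the reference photo.

    The flaw: this trusts whatever presents itself as a camera. A hijacked
    or virtual camera device can replay pre-rendered deepfake frames, and
    nothing here can tell the difference.
    """
    cap = cv2.VideoCapture(camera_index)
    matches = 0
    for _ in range(frames_needed):
        ok, frame = cap.read()
        if ok and verify_face(frame, reference_photo):  # hypothetical matcher
            matches += 1
    cap.release()
    return matches > frames_needed // 2
```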

PyTorch developers can train AI models on their own Apple laptops soon

Newer Apple computers contain custom-designed GPUs, but PyTorch developers haven't been able to harness the hardware's power when training machine learning models.

That will change, however, with the upcoming PyTorch v1.12 release. "In collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac," the PyTorch community announced in a blog post this week. 

"Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training." The new release means Mac users will be able to train neural networks on their own devices without having to fork out to rent computational resources via cloud computing services.

PyTorch v1.12 is expected to be released "sometime in the second half of June," a spokesperson told The Register.

Apple's GPUs are better suited to training machine learning models than its CPUs, making it possible to train larger models more quickly.

Fake data for medical models

US health insurance provider Anthem is working with Google Cloud to build a synthetic data pipeline for machine learning models.

Up to two petabytes of fake data, mimicking medical records and healthcare claims, will be generated by folks over at the Chocolate Factory. These synthetic datasets will be used to train AI algorithms to better detect cases of fraud, and they pose less of a security risk than collecting real data from patients.
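Neither company has detailed the pipeline, but the general idea is straightforward: fabricate plausible claims data, label it, and fit a model to it. A toy sketch with invented fields and a deliberately crude fraud rule, purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def synthetic_claims(n):
    """Fabricate claims as [amount, procedures, days_in_hospital]."""
    amount = rng.lognormal(mean=7.0, sigma=1.0, size=n)   # claim cost, USD
    procedures = rng.poisson(lam=2.0, size=n)             # procedures billed
    days = rng.poisson(lam=1.5, size=n)                   # length of stay
    # Crude stand-in rule: flag costly claims with almost no care behind them.
    fraud = (amount > 5000) & (procedures <= 1) & (days == 0)
    return np.column_stack([amount, procedures, days]), fraud.astype(int)

X_fake, y_fake = synthetic_claims(50_000)   # no real patient data involved
model = RandomForestClassifier(n_estimators=100).fit(X_fake, y_fake)
# The trained model would then be pointed at real claims.
```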

The models will eventually analyze real data, and could, for example, spot fraudulent claims by automatically checking them against people's health records. "More and more… synthetic data is going to overtake and be the way people do AI in the future," Anil Bhatt, Anthem's chief information officer, told the Wall Street Journal.

Using fake data avoids privacy issues and could reduce bias, too. But these artificial samples don't suit every machine learning application, experts previously told The Register.

"Synthetic data models, in our opinion, will ultimately fuel the promise of what big data can deliver," said Chris Sakalosky, managing director, US Healthcare & Life Science at Google Cloud. "We think that's actually what will set this industry forward."

Ex-Apple AI director leaves for DeepMind

A former director of machine learning at Apple, who reportedly resigned over the company's return-to-work policy, is moving to work at DeepMind. 

Ian Goodfellow led the iGiant's secretive "Special Projects Group," helping to develop its self-driving car software. It was previously reported he left after Apple asked employees to return to the office three days a week starting May 23. The policy has now been delayed due to a rise in COVID cases.

He will join DeepMind, according to Bloomberg. Interestingly, Goodfellow will reportedly be employed as an "individual contributor" at the UK-headquartered research lab. He is best known for inventing generative adversarial networks (GANs), a type of neural network often used to produce AI-generated images, and for co-writing the popular Deep Learning textbook published in 2016.
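A GAN pits two networks against each other: a generator produces samples from noise, and a discriminator learns to tell them apart from real data. A minimal, illustrative PyTorch loop on toy 2D data, nothing like a production image model:

```python
import torch
import torch.nn as nn

# Generator: noise -> 2D points. Discriminator: 2D point -> real/fake logit.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
real = torch.randn(64, 2) * 0.5 + 2.0  # stand-in "real" distribution

for step in range(200):
    # 1) Train the discriminator to separate real from generated samples.
    fake = G(torch.randn(64, 16)).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to make the discriminator call its output real.
    fake = G(torch.randn(64, 16))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```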

Goodfellow was a director at Apple for over three years, and previously held positions as an AI researcher at Google and OpenAI. ®
