Security

Research

Phishing works so well crims won't bother with deepfakes, says Sophos chap

People reveal passwords if you ask nicely, so AI panic is overblown


Panic over the risk of deepfake scams is completely overblown, according to a senior security adviser for UK-based infosec company Sophos.

"The thing with deepfakes is that we aren't seeing a lot of it," Sophos researcher John Shier told El Reg last week.

Shier said current deepfakes – AI-generated videos that mimic humans – aren't the most efficient tool for scammers, because simpler and cheaper attacks like phishing and other forms of social engineering work very well.

"People will give up info if you just ask nicely," said Shier.

One area in which the researcher does see deepfakes becoming prevalent is romance scams. It takes a hefty amount of devotion, time and energy to craft believable fake personas, and the additional effort to add a deepfake is not huge. Shier worries that deepfaked romance scams could become problematic if AI can enable the scammer to work at scale.

Shier was not comfortable setting a date on industrialized deepfake lovebots, but said the necessary tech improves by orders of magnitude each year.

"AI experts make it sound like it is still a few years away from massive impact," the researcher lamented. "In between, we will see well-resourced crime groups executing the next level of compromise to trick people into writing funds into accounts."

Up until now, deepfakes have most commonly been used to create sexualized images and videos – mostly depicting women.

However, a Binance PR exec recently revealed that criminals had created a deepfaked clone of him that participated in Zoom calls and tried to pull off cryptocurrency scams.

Security researchers at Trend Micro warned last month that deepfakes may not always be a scammer's main tool, but they are often used to enhance other techniques. The lifelike digital images have lately shown up in job-seeker scams, bogus business meetings, and web ads.

In June, the FBI issued a warning that it was receiving an increasing number of complaints regarding deepfakes deployed in job interviews for roles that provide access to sensitive information. ®
