Apple didn't engage with the infosec world on CSAM scanning – so get used to a slow drip feed of revelations

Starting with: Neural net already found in iOS, hash collisions, and more


Apple's system to scan iCloud-bound photos on iOS devices to find illegal child sexual abuse material (CSAM) is supposed to ship in iOS 15 later this year.

However, the NeuralHash machine-learning model involved in that process appears to have been present on iOS devices at least since the December 14, 2020 release of iOS 14.3. It has been adapted to run on macOS 11.3 or later using the API in Apple's Vision framework. And thus exposed to the world, it has been probed by the curious.

In the wake of Apple's initial child safety announcement two weeks ago, several developers have explored Apple's private NeuralHash API and provided a Python script to convert the model into a convenient format – the Open Neural Network Exchange (ONNX) – for experimentation.
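
For those following along at home, the basic operation is simple. Here is a minimal sketch of computing a NeuralHash from the community ONNX export – the file names, 360x360 input, [-1, 1] pixel scaling, and 96x128 seed-matrix layout are assumptions drawn from the publicly shared conversion scripts, not anything Apple documents:

```python
# Minimal sketch: computing a NeuralHash via the community ONNX export.
# File names, preprocessing, and seed layout follow publicly circulated
# scripts -- treat them as assumptions, not Apple's documented interface.
import numpy as np
import onnxruntime
from PIL import Image

def neuralhash(image_path: str,
               model_path: str = "model.onnx",                    # placeholder
               seed_path: str = "neuralhash_128x96_seed1.dat") -> str:
    session = onnxruntime.InferenceSession(model_path)

    # Resize to the network's input size and scale pixels to [-1, 1].
    img = Image.open(image_path).convert("RGB").resize((360, 360))
    arr = np.asarray(img).astype(np.float32) / 255.0 * 2.0 - 1.0
    arr = arr.transpose(2, 0, 1).reshape(1, 3, 360, 360)           # NCHW

    # The network emits a 128-dim embedding; a fixed 96x128 projection
    # matrix (shipped as a seed file) turns it into 96 sign bits.
    embedding = session.run(None, {session.get_inputs()[0].name: arr})[0]
    seed = np.frombuffer(open(seed_path, "rb").read()[128:], dtype=np.float32)
    bits = seed.reshape(96, 128) @ embedding.flatten() >= 0

    # Pack the 96 bits into the hex digest people have been comparing.
    return "{:024x}".format(int("".join("1" if b else "0" for b in bits), 2))
```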

On Wednesday, Intel Labs research scientist Cory Cornelius used these resources to create a hash collision – two different images that, when processed by the algorithm, produce the same NeuralHash identifier.

That's expected behavior for perceptual hashing, which is designed to compute the same identifier for similar images – the idea being that one shouldn't be able to, say, convert a CSAM image from color to grayscale to evade hash-based detection.
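
NeuralHash does this with a neural network, but the grayscale example is easy to demonstrate with a classic perceptual hash such as average hashing – a far simpler algorithm than Apple's, shown here purely for illustration:

```python
# Illustration only: a classic "average hash" (aHash) -- far simpler than
# NeuralHash -- showing why recoloring alone doesn't change the identifier.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    # Shrink to an 8x8 grayscale thumbnail; resizing, recompression, and
    # recoloring barely move these 64 averaged luminance values.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        # One bit per pixel: brighter or darker than the mean.
        bits = (bits << 1) | (p > mean)
    return bits

# A photo and its grayscale copy collapse to near-identical luminance
# grids, so their hashes (almost always) match -- the property Apple wants.
# Paths are placeholders:
# print(average_hash("photo.jpg") == average_hash("photo_grayscale.jpg"))
```

In production systems the comparison is usually a Hamming-distance threshold rather than strict equality, which widens the net further.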

As Apple explains in its technical summary [PDF], "Only another image that appears nearly identical can produce the same number; for example, images that differ in size or transcoded quality will still have the same NeuralHash value."

But in this instance, the matching hashes come from completely dissimilar images – a beagle and a variegated gray square. That finding amplifies ongoing concern that Apple's child safety technology could be abused to cause inadvertent harm – for instance, by sending someone an innocent-looking image that is then wrongly flagged as CSAM.

Apple has said there's "an extremely low (1 in 1 trillion) probability of incorrectly flagging a given account," but as Matthew Green, associate professor of computer science at Johns Hopkins, observed via Twitter, Apple's statistics don't cover the possibility of "deliberately-constructed false positives."

"It was always fairly obvious that in a perceptual hash function like Apple’s, there were going to be 'collisions' — very different images that produced the same hash," said Green in reference to the collision demo. "This raised the possibility of 'poisoned' images that looked harmless, but triggered as child sexual abuse media."

Jonathan Mayer, assistant professor of computer science and public affairs at Princeton University, told The Register that this does not mean that Apple's NeuralHash image matching scheme is broken.

"That would be a reasonable response if NeuralHash were a cryptographic hash function," explained Mayer. "But it's a perceptual hash function, with very different properties."

With cryptographic hash functions, he said, given one input, you're not supposed to be able to find a second input that produces the same output. The formal term for that is "second-preimage resistance."

"With a perceptual hash function, by comparison, a small change to the input is supposed to produce the same output," said Mayer. "These functions are designed specifically not to have second-preimage resistance."

Mayer said while he worries the collision proof-of-concept will provoke an overreaction, he's nonetheless concerned. "There is a real security risk here," he said.

Of greatest concern, he said, is an adversarial machine-learning attack that generates images that match CSAM hashes and appear to be possible CSAM during Apple's review process. Apple, he said, can defend against these attacks and, in fact, describes some planned mitigations in its documentation.

Apple, said Mayer, "has both a technical mitigation (running a separate, undisclosed server-side perceptual hash function to check for a match) and a process mitigation (human review). Those mitigations have limits, and they still expose some content, but Apple has clearly thought about this issue."
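
Apple hasn't published the server-side function, so the following is only a sketch of the layered-check logic, using two structurally different toy hashes (average hash and difference hash) to stand in for NeuralHash and the secret second algorithm:

```python
# Sketch of the layered-check idea, not Apple's actual pipeline: a forged
# collision against the on-device hash must *also* collide under an
# independent server-side hash the attacker can never query.
from PIL import Image

def ahash(img: Image.Image) -> int:
    # Stand-in for the on-device hash: threshold against the mean.
    g = img.convert("L").resize((8, 8))
    px = list(g.getdata())
    mean = sum(px) / len(px)
    return sum(int(p > mean) << i for i, p in enumerate(px))

def dhash(img: Image.Image) -> int:
    # Stand-in for the secret server-side hash: compares horizontally
    # adjacent pixels, so it is structurally unlike ahash.
    g = img.convert("L").resize((9, 8))
    px = list(g.getdata())
    bits = 0
    for row in range(8):
        for col in range(8):
            bits = (bits << 1) | (px[row * 9 + col] > px[row * 9 + col + 1])
    return bits

def flagged(upload: Image.Image, known_a: int, known_b: int) -> bool:
    # Both independent checks must match before a human reviewer is
    # ever shown anything.
    return ahash(upload) == known_a and dhash(upload) == known_b
```

An image forged to collide under the first hash has no better than chance odds against an independent second one the attacker can't query.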

"I’m less concerned about the attack than some observers, because it presupposes access to known CSAM hashes," said Mayer. "And the most direct way to get those hashes is from source images. So it presupposes an attacker committing a very serious federal felony."

Mayer's objections have more to do with the way Apple handled its child safety announcement, which even the company itself was forced to concede has led to misunderstandings.

"I find it mind boggling that Apple wasn't prepared to discuss this risk, like so many other risks surrounding its new system," said Mayer. "Apple hasn't seriously engaged with the information security community, so we're going to have a slow drip of concerning developments like this, with little context for understanding."

The Register asked Apple to comment, but we expect to hear nothing.

Cupertino comeback

Apple, aware of these developments, reportedly held a call for the press in which the company downplayed the hash collision and cited safeguards: the operating system's code signing to guarantee the integrity of the NeuralHash model, human review of flagged images, and a redundant algorithmic check that runs server-side.

Nonetheless, AsuharietYgvar – the pseudonymous individual who made the NeuralHash model available in ONNX format and asked to be identified as "an average concerned citizen" – expressed concern that Apple was misinforming the public, along with skepticism about the supposed server-side check.

"If their claim was true, the collision would appear to no longer be a problem since it's impossible to retrieve the algorithm they are using on the servers," said AsuharietYgvar in a message to The Register. "However, this is highly questionable because it adds a black box in the detection process, which no one can perform security audits on.

"We already know that NeuralHash is not as robust as Apple claimed. Who can believe their secret, non-audited secondary check will be better? Considering that Apple already described their NeuralHash and Private Set Intersection algorithms in detail, it's ironic that eventually they decided to keep the integral parts in secret to combat security researchers. And if I did not make their NeuralHash public, we will never know that the algorithm is that easy to defeat."

"Another real problem is that this system can be easily worked around to store CSAM materials without being detected," AsuharietYgvar continued. "Since the NeuralHash model is public now it's trivial to implement an algorithm which completely changes the hash without introducing visible difference. This will make those materials easily pass the initial on-device check.

"I believe what I did was a firm step against mass surveillance, but certainly this will not be enough. We cannot let Apple's famous 1984 ad become a reality. At least not without a fight." ®

