Exploit writer spills beans on secret iPhone function

iOS debugger of no use to anyone ... except hackers


Black Hat Independent security consultant Stefan Esser made waves earlier this year when a technique he developed for hacking iPhones was adopted by JailbreakMe and other mainstream jailbreaking services.

The Register caught up with the German researcher at the Black Hat security conference in Las Vegas just ahead of his scheduled talk titled Exploiting the iOS Kernel. Here are the highlights of the discussion, including details about an undocumented debugger that can only be accessed using a custom-built connector:

El Reg: In a nutshell, what's your presentation about?

Esser: The idea is that once you execute remote code, you still have the big problem that you can't do anything on the iPhone because of all the protections. In order to disable these protections you have to get into the kernel and switch them off there. If you cannot do that, you cannot install a rootkit. So you need kernel exploits for rootkits.

I'm explaining how to actually exploit the kernel and how to make use of its different parts. I show what you go after in the kernel and where you disable these security features. Most people don't know how to do kernel exploitation. This is a short course on how to do it.

You said earlier that one of the ways you go about exploiting the iOS kernel is by making use of secret functionality. What is it?

It's a kernel debugger. It's actually not used. Developers should not have access to it, and Apple gives iOS developers no normal way to access it. They have it in Mac OS, so they just left it inside the iOS kernel. It helps an attacker make the exploit more stable, and easier to write.

It's an Apple-endorsed kernel debugger that people use to develop kernel drivers on Mac OS. On iOS they're not supposed to do that, but Apple just left it inside, maybe thinking no one could use it because the interfaces you'd normally need, like Ethernet and serial, are not exposed on the iPhone. By default there's no way to speak to it on the iPhone.

So how do you access it, then?

When you look at the connector of the iPhone, there are two pins that are like serial. There's no publicly available cable that exposes them, but you can build your own cable and then speak serial with the iPhone.
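
For illustration: once the dock-connector serial pins are wired to an ordinary serial adapter, talking to the phone is standard POSIX serial I/O from the host's side. The sketch below is a minimal example of that, not Esser's tooling; the device path and baud rate are assumptions made for the example, not documented values.

    /* Minimal sketch: open a serial adapter wired to the iPhone's
     * dock-connector serial pins and exchange raw bytes.
     * Device path and baud rate are assumptions for illustration only. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dev = "/dev/tty.usbserial";   /* hypothetical adapter path */
        int fd = open(dev, O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); return 1; }
        cfmakeraw(&tio);                /* raw 8N1, no line discipline */
        cfsetispeed(&tio, B115200);     /* assumed speed */
        cfsetospeed(&tio, B115200);
        if (tcsetattr(fd, TCSANOW, &tio) != 0) { perror("tcsetattr"); return 1; }

        /* From here on, read()/write() move bytes to and from the phone. */
        char buf[256];
        ssize_t n = read(fd, buf, sizeof buf);
        if (n > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        close(fd);
        return 0;
    }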

Does that mean an attacker has to have physical access to the iPhone he wants to exploit?

No. That's only for development of the exploit; it makes writing the exploit far easier.

What kind of information does the debugger give you that you otherwise wouldn't have?

A debugger gives you complete control over the CPU at the moment of the crash, so you can do anything: read memory, write memory, read the registers, change the values of the registers. The development time is far shorter, and you're not fishing in the dark anymore. It's like you have full light.

In a nutshell, how do you go about exploiting an iOS device?

You start with a bug in the kernel. Once you have a bug, the first thing you do is write some trigger code that will cause the crash, and then you either analyze the crash dump or attach the kernel debugger and try to get the kernel to jump to code you control.
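
As a concrete, and entirely hypothetical, illustration of what such trigger code can look like: on iOS and Mac OS, a lot of kernel attack surface is reached through IOKit user clients, and a trigger is often just a call into one of them with malformed input. The service name and selector below are invented for the example; only the IOKitLib calls themselves are real API.

    /* Hypothetical trigger sketch: poke an IOKit user client with oversized
     * input to provoke a kernel crash. The service name and selector are
     * invented; compile on Mac OS with -framework IOKit -framework CoreFoundation. */
    #include <IOKit/IOKitLib.h>
    #include <mach/mach.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        io_service_t service = IOServiceGetMatchingService(
            kIOMasterPortDefault, IOServiceMatching("HypotheticalVulnDriver"));
        if (service == IO_OBJECT_NULL) { fprintf(stderr, "no such service\n"); return 1; }

        io_connect_t conn;
        if (IOServiceOpen(service, mach_task_self(), 0, &conn) != KERN_SUCCESS) {
            fprintf(stderr, "open failed\n");
            return 1;
        }

        char input[4096];
        memset(input, 'A', sizeof input);   /* oversized, attacker-controlled data */

        /* Selector 5 is arbitrary; a real trigger targets the specific
         * external method that contains the bug. */
        size_t out_size = 0;
        kern_return_t kr = IOConnectCallStructMethod(conn, 5, input, sizeof input,
                                                     NULL, &out_size);
        printf("call returned 0x%x\n", kr);

        IOServiceClose(conn);
        IOObjectRelease(service);
        return 0;
    }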

I also show how, when you have a heap overflow, you can control the layout of the heap so that when you trigger the overflow you know exactly what you're overwriting, and how to gain code execution from that.
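
The heap-layout control Esser describes is often called heap feng shui: fill the heap with same-sized allocations, punch predictable holes, and the next allocation of that size lands next to an object you chose. The kernel's zone allocator can't be demonstrated directly here, so the deliberately simplified sketch below uses ordinary userland malloc as a stand-in to show the pattern, nothing more.

    /* Simplified illustration of heap grooming ("feng shui") using userland
     * malloc as a stand-in for the kernel allocator. The point is the pattern:
     * spray, punch holes, then place the vulnerable buffer into a known hole. */
    #include <stdio.h>
    #include <stdlib.h>

    #define SPRAY  64
    #define CHUNK  256      /* size class we want to groom */

    int main(void)
    {
        void *spray[SPRAY];

        /* 1. Spray: fill existing holes so later allocations sit side by side. */
        for (int i = 0; i < SPRAY; i++)
            spray[i] = malloc(CHUNK);

        /* 2. Punch holes: free every other chunk, creating predictable gaps
         *    that sit directly before surviving "target" objects. */
        for (int i = 0; i < SPRAY; i += 2) {
            free(spray[i]);
            spray[i] = NULL;
        }

        /* 3. The vulnerable buffer now tends to land in one of the holes,
         *    adjacent to a target object, so an overflow out of it hits
         *    data we chose. */
        char *vuln = malloc(CHUNK);
        printf("vulnerable buffer at %p\n", (void *)vuln);
        for (int i = 1; i < SPRAY; i += 2)
            printf("potential neighbour  %p\n", spray[i]);

        /* (A real exploit would now trigger the overflow; we stop here.) */
        free(vuln);
        for (int i = 1; i < SPRAY; i += 2)
            free(spray[i]);
        return 0;
    }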

How is it different doing that in iOS than in Unix, Linux, Windows, or OS X?

There are a lot of similarities, especially between iOS and Mac OS. During the talk I will highlight some of the differences that in some cases make it easier and in other cases harder to exploit iOS. A lot of the techniques are known but were never applied to iOS before. It's more a proof that it's possible, with real-world examples of how it works.

How would you describe exploitation of the iOS kernel relative to other ones? Is it harder?

The big difference is that iOS has code-signing, so you cannot just put shellcode in there, or use the Windows approach of a short ROP [return-oriented programming] payload that makes some memory readable, writable, and executable and then just jumps there. That's not possible in iOS. In iOS you have to do the whole kernel exploit in return-oriented programming, which makes it a lot harder to create an actual exploit.

So the security features of the iPhone make kernel exploitation a lot harder, but once you're in the kernel there is nothing to protect the kernel itself from being exploited. The kernel is there to guard userland, but it has no mitigations of its own.
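
For readers unfamiliar with return-oriented programming: instead of injecting code, which code-signing forbids, the exploit builds a fake stack of return addresses, each pointing at a short instruction sequence (a "gadget") that already exists in signed code and ends in a return; chained together, the gadgets do the attacker's work. The sketch below is purely schematic, with invented gadget addresses and an invented target flag; nothing in it corresponds to real iOS code.

    /* Schematic ROP chain: an array of return addresses and data that a
     * hijacked stack pointer would walk through. All addresses are invented. */
    #include <stdint.h>
    #include <stdio.h>

    #define GADGET_POP_R0_PC    0x80041234u  /* hypothetical: pop {r0, pc}           */
    #define GADGET_POP_R1_PC    0x80042345u  /* hypothetical: pop {r1, pc}           */
    #define GADGET_STR_R0_R1    0x80045678u  /* hypothetical: str r0, [r1]; pop {pc} */
    #define SECURITY_FLAG_ADDR  0x8025abcdu  /* hypothetical kernel flag to clear    */

    /* Each entry is consumed either as a "return address" (a gadget) or as
     * data that a gadget pops into a register. */
    static const uint32_t rop_chain[] = {
        GADGET_POP_R0_PC,    /* gadget: load r0 from the stack      */
        0x00000000u,         /*   value for r0: 0 (flag off)        */
        GADGET_POP_R1_PC,    /* gadget: load r1 from the stack      */
        SECURITY_FLAG_ADDR,  /*   value for r1: address of the flag */
        GADGET_STR_R0_R1,    /* gadget: *r1 = r0, then return       */
        /* ... further gadgets continue the kernel patch ... */
    };

    int main(void)
    {
        for (size_t i = 0; i < sizeof rop_chain / sizeof rop_chain[0]; i++)
            printf("chain[%zu] = 0x%08x\n", i, (unsigned)rop_chain[i]);
        return 0;
    }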

If you were advising Apple, would you say, "Remove the debugger from iOS"?

There is actually no reason to keep it in there. I would advise them from a security point of view to remove it. And there are other features that are not really used and make exploit development easier. These are things Apple can remove that would make exploitation harder.

Like what?

There's a function, in iOS and also in Mac OS, that gives you information about the state of the heap. With this information, the whole business of controlling the heap is easier. It's still possible to do without it, but with this feature available the exploit can be made more robust, more stable.
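
Esser doesn't name the function in the interview. On Mac OS X of this era, an obvious candidate is the Mach interface host_zone_info(), which reports per-zone statistics for the kernel's zone allocator; treat that identification as an informed guess rather than something the interview confirms. A minimal sketch of querying it on Mac OS:

    /* Sketch: dump kernel zone-allocator statistics via host_zone_info().
     * Whether this is the exact function Esser means is an assumption. */
    #include <mach/mach.h>
    #include <mach_debug/mach_debug.h>
    #include <stdio.h>

    int main(void)
    {
        zone_name_t *names = NULL;
        zone_info_t *info  = NULL;
        mach_msg_type_number_t name_cnt = 0, info_cnt = 0;

        kern_return_t kr = host_zone_info(mach_host_self(),
                                          &names, &name_cnt,
                                          &info, &info_cnt);
        if (kr != KERN_SUCCESS) {
            fprintf(stderr, "host_zone_info failed: 0x%x\n", kr);
            return 1;
        }

        /* Print name, element size, live element count, and current size
         * for each kernel zone. (A real tool would also vm_deallocate the
         * arrays returned by the Mach call.) */
        for (mach_msg_type_number_t i = 0; i < name_cnt && i < info_cnt; i++) {
            printf("%-30s elem=%lu count=%lu cur_size=%lu\n",
                   names[i].zn_name,
                   (unsigned long)info[i].zi_elem_size,
                   (unsigned long)info[i].zi_count,
                   (unsigned long)info[i].zi_cur_size);
        }
        return 0;
    }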

What kind of communications have you had with Apple? Have you been speaking with anyone in their security department about your work?

Not really. The only thing I spoke with them [about] was when they asked me to apply for a job.

When did that happen? Are you interested?

Right after I released the first of the exploits for jailbreaking the iPhone, in April or so. At the moment I'm just evaluating other options. ®

