TH: What does this mean for the future of smartphone security? We used to say that Mac OS X was insecure but faced a limited number of threats: attacking it wasn’t economically viable, and Macs had to reach 16.6% market share for it to make sense. That said, with the White House and Google both running on Macs, the Mac is now a higher-priority target.
But I think it’s even worse on the iPhone. If I can take over an iPhone, not only do I have personal information like the owner’s telephone number, but I can also figure out which bank the owner uses based on whether the Bank of America or Wells Fargo app is installed. I’m also willing to bet, from a social engineering perspective, that most iPhone users share the same 4-digit PIN between their iPhone and their ATM card.
Charlie: I'm asked about the security of smartphones all the time. In the history of iPhone security, since version two came out in 2008, there has been only one public, remote root exploit against the iPhone. That was the SMS exploit I presented last year at Black Hat, and the exploit was never released or seen in the wild. So, while iPhones were theoretically vulnerable to attack, we simply hadn't seen it in practice. I felt there was pretty much no risk of remote attack against factory iPhones, because an attacker would have to find a Safari bug, build a return-oriented programming (ROP) payload, run a local privilege escalation, and disable code signing. This exploit has changed my mind, and I now see iPhones as vulnerable to attack, especially until a patch is available.
TH: What’s a reasonable expectation for end-users to have for Apple to patch these vulnerabilities? Is this something that takes a lot of effort or is it merely a matter of changing a few lines of code now that the problem is known?
Charlie: These flaws should be patched quickly, I'd say within two weeks. Apple will be very motivated to fix them, not so much because of the security implications, but because they don’t like people jailbreaking their phones. The fix should be relatively simple and require little testing, since the software runs on only a handful of different hardware configurations.
TH: Apple always talks about the security risk after jailbreaking. But to clarify, the vulnerabilities themselves that allowed unsigned code to be run in the first place were always there on every iPhone or iPad, regardless of jailbreak status, right?
Charlie: Yes, jailbreaking does weaken the security of the device by circumventing the security architecture as designed by Apple (code signing, running apps as the unprivileged "mobile" user in a sandbox, etc.). However, in this case, the exploit works against fully patched factory iPhones that have not been jailbroken. This is the first example of such a remote exploit since iOS 2 came out. Previously, the iKee worm only worked against jailbroken phones, but this exploit works against phones right out of the box.
TH: Before that, we had the TIFF exploit that allowed the first-generation iPhone to be hacked, right? Was that the same sort of problem?
Charlie: So, the TIFF exploit was also a remote code-execution vulnerability in MobileSafari. The difference was that back then, in the iOS 1 days (circa 2007), the iPhone security architecture was very immature. MobileSafari ran as root without a sandbox on factory phones, so no second exploit was needed; any remote code-execution bug led directly to full administrative privileges on the device. You can see how much the security improved when version two came out.
TH: Apple tries to share the underlying code between iOS 4 and desktop Mac OS X. Although there are obviously architectural differences between the two, can these iPhone and iPad vulnerabilities be used to develop an exploit for the desktop OS?
Charlie: I would guess yes; I'm currently looking into this.
TH: Have you looked at Android, BlackBerry, Symbian, or Windows Phone 7 recently?
Charlie: I have also looked at Android security. In fact, I wrote an exploit for the Web browser on the G1 phone the day it came out. As for comparing it to the iPhone, it is hard to compare, since they make different choices in how they secure their phones. But I'd say they are roughly comparable.
TH: What kind of phone are you using?
Charlie: I normally use an iPhone 3GS, but it got some water in it a couple of days ago, so I'm using a first-generation iPhone at the moment that I normally use for hacking, err, testing. I normally don't do anything special, and I take my phone into some pretty hostile environments like Black Hat and DEFCON. As I said, before this I thought I was pretty safe from attack. At least until the patch comes out for these vulnerabilities, I'm being careful about following links in emails and on Twitter. By the way, you can follow me on Twitter at @0xcharlie.
TH: Still running your MacBook?
Charlie: Yes, I'm still using my ancient MacBook, which has now won Pwn2Own three times. It is really on its last legs, so pretty soon I'll be switching to one of the MacBook Pros I've won. It'll be a sad day, as I have a sentimental attachment to this computer. I think it’s the only one I’ve ever written a Mac OS X exploit on, and I wrote The Mac Hacker’s Handbook on it. I’ll really miss it!
TH: How is progress with “No More Free Bugs?”
Charlie: Really good. Since we started talking about No More Free Bugs, Google has started paying for vulnerabilities, and within the last month raised the amount they pay to over $3000 per bug. Not long before this, Mozilla raised the amount they pay for security vulnerabilities from $500 to $3000. There has also been a large increase in the use of third-party "middlemen" like TippingPoint's Zero Day Initiative (ZDI). I don't take credit for all of these events, but I'm happy to see them happening. Anything that helps software ship with fewer bugs is okay with me. By the way, Microsoft and Apple still don't pay for vulnerabilities.
TH: Your company specializes in architecture review, penetration testing, and source code review. Recognizing the inherent bias, how could each of these services have changed things? Was this exploit purely the result of bad luck on Apple’s part? Or was this something preventable in hindsight?
Charlie: Companies make their own luck, good or bad. There are many things Apple could have done to prevent this from occurring, although it’s always easier to see in hindsight. First, by fuzzing and auditing the code, the vulnerabilities could have been found and patched before they were discovered outside Apple. For example, in March I gave a talk about fuzzing where I mentioned I had found 40-60 exploitable vulnerabilities in Preview. If I found that many bugs, there are probably many hundreds of critical bugs in the code.
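Fuzzing of the sort Charlie describes can start as simply as mutating a known-good input and feeding the result to the target parser while watching for crashes. A minimal mutation sketch in Python (the seed bytes and the commented-out harness step are placeholders, not his actual tooling):

```python
import random

def mutate(data: bytes, n_flips: int = 8, seed=None) -> bytes:
    """Return a copy of `data` with a few randomly chosen bytes changed."""
    rng = random.Random(seed)
    buf = bytearray(data)
    # Pick distinct positions and XOR each with a non-zero value,
    # so every chosen byte is guaranteed to differ from the original.
    for pos in rng.sample(range(len(buf)), k=min(n_flips, len(buf))):
        buf[pos] ^= rng.randrange(1, 256)
    return bytes(buf)

# Placeholder seed; a real campaign would start from valid PDF files.
seed_input = b"%PDF-1.4\n1 0 obj << /Type /Catalog >> endobj\n%%EOF\n"

samples = [mutate(seed_input, seed=i) for i in range(100)]
# Each sample would then be written to disk and opened in the target
# viewer, logging any crash for later triage (harness omitted here).
```

Real campaigns run millions of such cases, which is why even a simple mutator can shake dozens of bugs out of a file parser.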
Next, the sandbox rules could have been tighter. I doubt there was any reason to allow the Web browser to interact with IOKit the way it did. It should display Web pages, not mess with kernel drivers.
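Apple's sandbox profiles are written in a Scheme-like policy language (often called SBPL), where a default-deny profile whitelists only the operations a process needs. A hypothetical excerpt, not Apple's actual MobileSafari profile, illustrating the kind of tightening Charlie means:

```scheme
(version 1)
;; Hypothetical excerpt -- not Apple's real MobileSafari profile.
(deny default)
(allow file-read* (subpath "/var/mobile/Media")) ; page cache, downloads
(allow network-outbound)                         ; fetch web content
(deny iokit-open)                                ; redundant under deny-default,
                                                 ; stated explicitly for emphasis
```

Under a profile like this, the browser could render pages and talk to the network but could not open the IOKit user-client interface the privilege-escalation stage abused.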
The ironic thing is that if Apple offered a more open device, this would never have happened. The guy who discovered these bugs and wrote the exploits wasn't a security researcher or an evil hacker; he was someone who wanted to jailbreak his phone and use it however he wanted. If Apple allowed independent third-party applications, he would never have gone looking for bugs.
TH: Well, he’s done a great service for all of us. Not only has he taken the time to show us the security risks of our smartphones, but he’s given us an iOS 4 jailbreak at the same time. Thanks a lot for your time.
Charlie: You’re very welcome.
UPDATE: The PDF bug was closed with the release of the iOS 4.0.2 firmware. As a result, Jailbreakme.com no longer works. Owners of jailbroken iPhones can obtain a third-party fix through Cydia. Unfortunately, owners of the original iPhone and iPod Touch do not have access to a patched version of iOS. Apple's response time to this vulnerability was good, landing within the two-week target.