
Security In The Enterprise

Charlie Miller On Hacked Batteries, Cloud Security, And The iPad

Alan: How could Apple have prevented this?

Charlie: Have fewer bugs, I suppose! This particular bug would have required a code audit to catch, I think; fuzzing probably would not have found it. Apple also could have reduced the attack surface available for the second, escape-the-sandbox vulnerability. Everything else, it did right: ASLR, DEP, sandboxing. The iOS security model is pretty good, but this just goes to show that there are always potential attacks that can work.
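Miller's distinction between fuzzing and code auditing is worth unpacking. A fuzzer mutates known-good inputs and feeds them to a parser, watching for crashes, so bugs that only trigger on carefully structured input tend to survive it. Here is a minimal mutation-fuzzing sketch in Python; the target binary `./pdf_parser` and the seed file `sample.pdf` are hypothetical stand-ins, not anything from the interview.

```python
import random
import subprocess

# Hypothetical known-good input and target parser (assumptions for
# this sketch; substitute a real seed file and binary to try it).
SEED_FILE = "sample.pdf"
TARGET = "./pdf_parser"

def mutate(data: bytes, flips: int = 16) -> bytes:
    """Flip a handful of random bits in a copy of the seed input."""
    buf = bytearray(data)
    for _ in range(flips):
        i = random.randrange(len(buf))
        buf[i] ^= 1 << random.randrange(8)  # flip one random bit
    return bytes(buf)

def run_once(payload: bytes) -> bool:
    """Feed one mutated input to the parser; report a crash."""
    with open("fuzzed.pdf", "wb") as f:
        f.write(payload)
    proc = subprocess.run([TARGET, "fuzzed.pdf"])
    # On POSIX, a negative return code means the process was
    # killed by a signal (e.g. SIGSEGV) -- a likely crash.
    return proc.returncode < 0

if __name__ == "__main__":
    seed = open(SEED_FILE, "rb").read()
    for trial in range(10_000):
        if run_once(mutate(seed)):
            print(f"crash on trial {trial}; input saved as fuzzed.pdf")
            break
```

Real fuzzers such as AFL or libFuzzer add coverage feedback to steer the mutations, but the basic loop is the same, and it explains why a logic bug that never crashes the parser would call for a code audit instead.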

Alan: The mainstream media often talks about "jailbreaking," but the term really downplays the underlying issue: this is a remote exploit that allows arbitrary code to be run. As I see more and more enterprises using iPads, I can't help but be paranoid about the security issues. We know that over 114,000 email addresses of early iPad owners were stolen from AT&T. These early adopters included high-level executives at major technology and finance/banking firms, as well as government agencies. Given that it's trivial to remotely execute code via the PDF engine, and the documented sophistication of the hacks of companies like Lockheed Martin, it seems almost naïve to think that no one has attempted to compromise sensitive data via targeted attacks on the iPad. How should we deal with this issue?

Charlie: Well, the problem is that all devices are susceptible to attacks of this nature, and an iPhone or iPad is a device. iOS-based devices are more secure than Mac OS X-based devices because of the code signing and sandboxing of applications. They are probably more secure than desktops running Windows 7. The biggest risk is that you'll physically lose them and lose your data that way. But despite the fact that we've seen attacks against iOS devices, such as jailbreakme.com, they are pretty rare, and malware is very rare too. I think iOS is about as good as we can do for now. There is always a risk your device will be compromised. What you need to focus on is limiting the data accessible at any one time, detecting attacks quickly, and so on.

Alan: Along the same lines as the earlier cloud computing question, are organizations better off adopting a heterogeneous computing environment or a homogeneous one? That is, if I only have one platform to support (say, an iPad), a security officer can focus all of his efforts on securing that one platform. If I have to support multiple platforms, my efforts to secure the network will be diluted across systems, and the sieve will be twice as big, with more potential holes and vulnerabilities. On the other hand, the argument for heterogeneous computing is that if I do lose against the bad guys and one of my platforms is compromised, I can quickly switch the company over to the still-uncompromised platform (and make the bad guys work twice as hard).

Charlie: This is a great question, and the same answer might not fit everyone. I used to recommend homogeneous environments to ease the burden of patching systems. If your enterprise can't really keep up with patches, this is probably for you. However, for the best defense, heterogeneous networks are superior. You must design your network knowing that machines on it will be compromised: desktops will get malware, your CMS will get SQL injected, and so on. Just as you shouldn't use the same password in multiple places, you shouldn't use the same operating systems or devices in different places. That way, an attacker will need working exploits against multiple platforms to make any progress, which is much harder. In fact, most attackers won't have the skills to attack two or three different up-to-date systems, which raises the overall bar for security. You don't want every single computer in your network exploited because of a single remote Windows kernel exploit.
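Miller's passing mention of a CMS getting SQL injected refers to a well-known pattern. The sketch below, using Python's standard-library sqlite3 module, shows how a string-built query lets attacker input rewrite the WHERE clause, and how a parameterized query closes the hole; the table and inputs are invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "alice' OR '1'='1"

# Vulnerable: the query is built by string formatting, so the quote
# in the input rewrites the WHERE clause and matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % attacker_input
).fetchall()
print("vulnerable query returned:", vulnerable)   # all rows

# Safe: a parameterized query treats the input strictly as data,
# so the injection string matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (attacker_input,)
).fetchall()
print("parameterized query returned:", safe)      # []
```

The fix costs nothing at runtime, which is why parameterized queries (or an ORM that uses them) are the standard defense regardless of how homogeneous the rest of the network is.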
