Alan: With "red pill," you showed that it is also possible to detect when an application is running within "The Matrix." In your proof-of-concept, you simply looked at the IDTR via the SIDT instruction.
Joanna: Not inside the Matrix, but inside a Matrix. There’s a difference between detecting virtualization in general versus detecting a specific hypervisor, such as BluePill, as I said earlier. Also, note again that RedPill targeted detection of software-based virtualization, as used by VMware products before Intel and AMD introduced VT-x/AMD-V (pre-2006). My original RedPill, published in 2004, was unable to detect hardware virtualization.
Other people later presented detectors that were capable (or sometimes not) of detecting hardware virtualization--this was in 2006 and 2007, after my original presentation about BluePill at Black Hat in the summer of 2006.
Alan: So, four and a half years later, we now have identification of virtual machines via other shifted tables, such as the global descriptor table and local descriptor table, but still adhering to the same concept.
Joanna: This is because, while pretty much all the machines we can buy today have VT-x support, some virtualization products still use software-based virtualization. I think VMware Workstation still uses it when running 32-bit guests. The new virtualization detectors--next-generation "RedPills," if you prefer--those capable of detecting VT-x/AMD-V virtualization, are usually timing-based or caching-based.
Alan: If you're a good guy, you're worried about malware that runs in Ring -1. Identifying the presence of a virtual machine is therefore helpful. If you're not supposed to be running in a VM, but somehow you are, red flags can be raised.
Joanna: Sure, but let's not forget that the world seems to be moving towards virtualization, even used on desktop systems--Xen Client Initiative (also known as Project Independence), or Phoenix HyperCore.
Alan: With that said, if you were the evil guy running the virtual machine, couldn't you spoof the results to make the stealth malware remain hidden? That is, if it saw an application requesting the IDT, and the next line of code checks whether the result starts with 0xc0 or 0x80, couldn't it just spoof the results? So how do we protect against that type of attack?
Joanna: You don't, but such attacks assume that the evil guy knows the actual binary of the detector in advance. In other words, the evil guy must have a database of all the potential detector binaries, and of which bytes inside those binaries should be "hooked" (such as the SIDT instruction) so that the binary executes as if there were no virtualization. Those are "implementation-specific attacks." They don't scale well, but are, of course, used in the wild by malware (not necessarily to lie about virtualization being enabled, but rather to simply disable popular A/V programs).
Looking from the good guy's perspective, you cannot prevent such attacks if you execute at a privilege level less than or equal to that of your opponent. That is one reason why A/V products are losing the battle with kernel-mode rootkits, which often can disarm a known A/V kernel-mode agent because, well, it is well known, and they can prepare an implementation-specific attack against it. Then the new version of the A/V tool comes out, all the malware needs to be updated, and so on. The arms race continues: the bad guys make money and enjoy life, the A/V vendors also make money and enjoy life, albeit without meeting their declared goal, which is to protect the user.
Alan: And the arms dealers make money on both sides of the war. Some current stealth malware is designed to shut off its most malicious actions if it's being run in a VM. The bad guys are doing this so that security researchers will have more difficulty identifying the true capabilities and dangers of the malware. So, if the malware is looking for an IDT starting in high memory, why not have home users run operating systems that put the IDT in high memory? The home user is least likely to run a VM, and the malware writer will not know whether seeing the IDT in high memory means they've infected a clueless home user (which is awesome) or they're being run within a VM by a skilled researcher who’s set up a trap.
Joanna: Malware authors would quickly realize that it’s not worth it to avoid "virtualized" systems anymore, and we would get back to the same point where we are today.
Alan: But you’re one step ahead now because you’ve moved the clock forward; the malware can no longer hide behind obscurity by cloaking within a VM. So if it previously took the security researcher x days to defeat the malware’s neutered VM mode, and then y days to analyze the sample and develop the countermeasure, you’ve just decreased “time in the wild” by x days?
Joanna: As I said earlier, this is naïve thinking and does not offer any protection in the longer term.
Alan: Ok. Your team didn't stop at Ring -1. What's happening at Ring -2?
Joanna: Yeah, Ring -1 malware is so 2006! Every x86 CPU also has something called System Management Mode (SMM), which itself is nothing new, as it has been present since the 80386 processors. What makes it interesting now is that, when virtualization was added to the processors, it turned out that SMM was actually granted higher privileges than the newly introduced Ring -1 hypervisor mode. Thus we called it "Ring -2" to stress its power over the hypervisor mode.
Our team wasn't the first to play with SMM, however. In 2006, Loic Duflot presented a very nice attack against the OpenBSD securelevel mechanism that used SMM. Loic used SMM as a "tool" in his attack, not as a target. Back then, in 2006, it was not unusual for SMM not to be protected in any way on most systems--so if one had root (or kernel-mode) access, one could inject any code one wanted into SMM and have it executed with SMM privileges. Of course, one still had to have that root or kernel access to do it.
But then system vendors started securing SMM. The special part of DRAM that is used to hold code executed in SMM, called SMRAM, got special protections imposed by the chipset (the Memory Controller Hub, to be precise). On most current systems, it is thus highly non-trivial to get any code executed with SMM privileges. One needs to find a bug in the chipset or BIOS in order to do that (even with kernel-mode access). In fact, at last year's Black Hat in Las Vegas, Sherri Sparks and Shawn Embleton presented on SMM rootkits, and they made it clear that their rootkits could only be loaded on older systems (pre-2006). Interestingly, the very next day, we had a series of presentations about Xen compromises, and we discussed a bug in Intel BIOSes that allowed for arbitrary code execution in SMM mode. I believe this was the first publicly discussed bug that allowed bypassing SMM protection. Since that time, we have found two other ways to get into SMM on various systems. Another attack we found, at the end of 2008, affected a large number of Intel-based systems (and potentially systems with other BIOSes too), and we used it to bypass Intel TXT during our presentation at Black Hat DC in February.
This bug is still being patched by Intel, and we are planning to present the details at Black Hat USA next month. Yet another bug we found earlier this year was in the caching semantics used by Intel CPUs. This very same bug was also discovered independently by Loic Duflot, and we agreed with Loic to publish our paper on it the same day he presented it at a conference.
So, to sum it up, SMM rootkits (or Ring -2 rootkits) require access to the very well-protected SMM memory, and on most systems today an attacker must use some sort of an (often non-trivial) exploit to get there.
Alan: Are these SMM attacks limited to specific hardware?
Joanna: They are usually limited to a given BIOS version (or at least a family of BIOS products) and also to a specific family of chipsets, such as the Intel 3- or 4-series chipsets.