'METIOR' Defense Blueprint Against Side-Channel Vulnerabilities Debuts
Bridging hardware design and cybersecurity once again.
It's been a while since the explosion of attention side-channel attacks received following the Spectre and Meltdown disclosures, but preventing them is still an important part of cybersecurity. An exotic approach to information stealing, side-channel attacks have marred CPU designs from both AMD and Intel, with vulnerabilities proving severe enough that companies preferred to roll out performance-degrading patches rather than let customers operate on insecure hardware. Now, a new MIT framework by the name of Metior aims to improve the world's capability to understand side-channel attacks and, perhaps, to defend against them.
Metior is an analysis framework built at the Massachusetts Institute of Technology that helps hardware and software designers evaluate their defenses against known (and unknown) side-channel attacks. Essentially, Metior enables engineers to quantitatively evaluate how much information an attacker can steal with a given side-channel attack.
It's essentially a simulation sandbox, where chip designers and other engineers can find what combination of defenses maximizes their protection against side-channel attacks, according to their use case. Because you can quantitatively measure how much information is stolen, you can calculate the impact of it being stolen (according to your system, your program, and every other variable), which means you can decide to bake in protections against the most impactful types of attacks.
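Metior's actual models are far more sophisticated than this, but the core idea of quantifying leakage can be illustrated with a toy mutual-information estimate; the scenario and function names below are illustrative, not taken from the paper:

```python
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Estimate I(secret; observation) in bits from (secret, obs) samples."""
    n = len(pairs)
    p_s = Counter(s for s, _ in pairs)   # marginal over secrets
    p_o = Counter(o for _, o in pairs)   # marginal over observations
    p_so = Counter(pairs)                # joint distribution
    mi = 0.0
    for (s, o), c in p_so.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((p_s[s] / n) * (p_o[o] / n)))
    return mi

# A fully leaky channel: the observation equals the secret bit.
leaky = [(b, b) for b in (0, 1) * 500]
# A fully obfuscated channel: the observation is constant regardless of the secret.
masked = [(b, 0) for b in (0, 1) * 500]

print(mutual_information(leaky))   # 1.0 bit leaked per observation
print(mutual_information(masked))  # 0.0 bits leaked
```

The point of a quantitative measure like this is that two defenses can then be compared by how many bits per observation they actually deny the attacker, rather than by whether they block one specific exploit.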
By looking at the underlying problem - that side-channel attacks are made possible by the simple operation of a computer system, and that hardware mitigations are costly and not always overlapping - MIT managed to collate what amounts to a series of design rules.
These design rules are meant to maximize hardware-level defense against a variety of side-channel attack techniques, while also attempting to emulate those techniques so they can be better understood. This is a departure from the slightly more haphazard defense methods undertaken by companies whose products were vulnerable to side-channel attacks (such as Intel). To be fair, that approach - providing hardware mitigations against specific side-channel attack vectors - was needed to stem the decline in trust caused by the vulnerabilities in the first place. But those solutions are like bandages on open wounds, can cost a great deal of performance (up to 35% in the case of one Spectre v2 mitigation), and side-channel defense requires something more robust and multifaceted.
Speaking with SciTechDaily, Peter Deutsch, a graduate student and lead author of an open-access paper on Metior, explained: “Metior helps us recognize that we shouldn’t look at these security schemes in isolation. It is very tempting to analyze the effectiveness of an obfuscation scheme for one particular victim, but this doesn’t help us understand why these attacks work. Looking at things from a higher level gives us a more holistic picture of what is actually going on.”
Side-channel attacks are a particularly surreptitious type: through them, attackers don't even need access to an application's logic to steal information from it; they can simply observe how it operates. How much time does it spend accessing the computer's memory? How deep was that memory flush? And remember that this happens in various components within your PC: even GPUs are vulnerable to this type of attack.
It's almost the same as putting your fingers to your wrist to feel your pulse: you can tell your heart rate, but you're extrapolating it from other information sources; you don't need to look inside your body or directly see your blood flow. Side-channel attacks generally work the same way: attackers can steal precious information just by observing traffic and flow at key moments in a given program's operation.
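A classic toy example of this kind of observation is a password check that exits at the first wrong character: the attacker never sees the secret, only how long each guess takes. The sketch below (our own illustration, counting comparison steps instead of wall-clock time) shows how that timing alone reveals the length of a correct prefix:

```python
def naive_check(secret, guess):
    """Early-exit comparison: the work done leaks how many leading chars match."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return len(secret) == len(guess), steps

SECRET = "hunt2"
# The attacker never sees SECRET, only how long each guess takes (steps).
_, t_bad = naive_check(SECRET, "zzzzz")    # mismatch at position 1
_, t_close = naive_check(SECRET, "hunzz")  # mismatch at position 4
print(t_bad, t_close)  # 1 4 - a longer run time reveals a longer correct prefix
```

Repeating this per character position lets an attacker recover a secret one character at a time, which is exactly the kind of indirect leak side-channel defenses try to mask.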
You can imagine how hard and expensive it would be to mask something like someone's heartbeat, and that's part of the difficulty of protecting against side-channel attacks. Typically, protection from these data-stealing attacks is achieved through obfuscation: by trying to hide the computer system's equivalent of a pulse (the information passing between its memory and CPU).
So if a side-channel attack is looking for a pattern of memory accesses, for instance, one way to obfuscate it would be to change the way the program accesses memory: by making it fetch other, unnecessary memory bits, by flushing and caching through more information cycles... you name it. The goal is simply to interrupt the predictable pattern of accesses that gives side-channel attackers the information they need.
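One simple obfuscation in this spirit - a sketch of the general idea, not how any particular CPU mitigation works - is an "oblivious" lookup that touches every entry of a table so the observed access pattern no longer depends on which entry was actually wanted:

```python
def oblivious_select(table, secret_index):
    """Read every entry so the access pattern is independent of secret_index."""
    result = 0
    for i, value in enumerate(table):
        mask = -(i == secret_index)  # all-ones when i matches, else zero
        result |= value & mask       # only the wanted entry survives the mask
    return result

table = [10, 20, 30, 40]
print(oblivious_select(table, 2))  # 30, after reading all four entries
```

The cost is obvious: four reads where one would do, which is precisely the performance penalty the article describes.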
This is difficult, and costs performance, because security is being achieved by actively "scrambling" the information that's still being produced and leaked just by executing the program itself. And it also costs development dollars, because most of the techniques to scramble these "organic" computing signals need other, superfluous operations to occur in order to "obfuscate" the real patterns that attackers are looking for. Anything in computing that costs energy and computing cycles ultimately hurts performance.
“Any kind of microprocessor development is extraordinarily expensive and complicated, and design resources are extremely scarce. Having a way to evaluate the value of a security feature is extremely important before a company commits to microprocessor development. This is what Metior allows them to do in a very general way,” says Joel Emer, an MIT professor and co-author of the paper.
And in a very general way, it's also what every organism and organization on the planet wants to achieve: to work smarter, not harder.
Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.
-
digitalgriffin If process-per-thread encryption in next-gen hardware becomes a reality, and is done properly with the decryption happening in channels far apart, this will stop 99.9% of attacks. -
InvalidError
digitalgriffin said:
If process per thread encryption of next gen hardware becomes a reality, and done properly with the decryption happening in channels far apart, this will stop 99.9% of attacks.

The cache flush, power and other side-channel attacks don't care about the raw data, only about externally observable memory, cache, power, etc. patterns from which they can infer what data the target code is processing. Encrypting the process' memory space wouldn't affect that in any way; the processing behaviour and externally observable effects would remain the same.
If you want to secure your data against side-channel leaks, you have to change your algorithms to whiten any externally observable noise they may generate. Things like re-arranging your code to take constant time, constant power, constant reads/writes/flushes with constant timing regardless of data. One way to maintain near-constant-everything would be state machines where all code for every state gets executed on every iteration of a loop and ends with a jump table to select the desired result. While horribly inefficient, I bet most solutions for side-channel attacks will require such sacrifices. -
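The "constant-everything" approach described above is the same principle behind constant-time comparison routines. A minimal Python sketch, for illustration only (real cryptographic code should use a vetted primitive such as the standard library's hmac.compare_digest):

```python
def ct_equal(a: bytes, b: bytes) -> bool:
    """Compare two byte strings without an early exit: every byte is
    examined regardless of where the first mismatch occurs, so run time
    no longer reveals the length of the matching prefix."""
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y  # accumulate differences instead of branching on them
    return diff == 0

print(ct_equal(b"hunt2", b"hunzz"))  # False
print(ct_equal(b"hunt2", b"hunt2"))  # True
```

Note that a pure-Python version like this is only a sketch of the idea; interpreter internals can still introduce data-dependent timing, which is why production code leans on audited implementations.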
bit_user
The article said:
Side-channel attacks are a particularly surreptitious type: through them, attackers don't even need access to any specific application logic to steal information from it, they can simply observe how it operates.
I think you do need to know the specific build of whatever library or application code is handling the data you want to extract. Otherwise, you cannot model it. It should also be noted that the observations are indirect, therefore requiring no special privileges.
I remain somewhat skeptical of how many such attacks have been successfully performed, in the real world. Even if we're talking about stealing encryption keys, the spying thread is going to have to continually hammer the CPU and just hope that it gets scheduled alongside something handling those keys. Then, there's the whole challenge of trying to figure out what and who you just spied on, because you normally have no visibility or control over what are the other tenants.
The article said:
You can imagine how hard and expensive it is to mask something like someone's heartbeat, and that's part of the difficulty with protecting from side-channel attacks.
Not really. If you can't touch me, you can't feel my pulse. It's not a perfect analogy, but it does apply. In a hosted environment, someone doing something secure can spend extra money to rent out an entire instance, so it's shared with no other tenants.
Another thing you can do is restrict foreign threads from sharing your core. Google integrated a feature into the Linux kernel called core scheduling, and I assume hypervisors have a similar control at the VM level. This only helps with SMT-based exploits, but that's how the strongest side-channel attacks work. -
InvalidError
bit_user said:
I remain somewhat skeptical of how many such attacks have been successfully performed, in the real world. Even if we're talking about stealing encryption keys, the spying thread is going to have to continually hammer the CPU and just hope that it gets scheduled alongside something handling those keys.

AFAIK, most of these have only been demonstrated in lab environments where the researchers basically control every variable. In a real-world environment on a somewhat busy server, you have 100+k APIC events/sec, and all of the other kernel/context-switching noise those may generate screws up any timing, scheduling and power measurements. That is on top of all of the uncertainty regarding which account's keys are being processed at any given time, adding to the amount of noise spy algorithms would need to somehow sort out.
Then you have the constant power drain and CPU load by some unknown process likely to get the spy process flagged as a suspicious CPU hog in relatively short order if the operators are doing any sort of power and performance monitoring.
Most of these threats are purely theoretical. The attack code would need to run uninterrupted for so long before producing a first successful attack, only systems that are left completely neglected would be likely to leak information. I'd expect most servers hosting information sensitive enough to worry about this are being monitored closely enough that such spyware would be short-lived. -
digitalgriffin
InvalidError said:
The cache flush, power and other side-channel attacks don't care about the raw data, only about externally observable memory, cache, power, etc. patterns from which they can infer what data the target code is processing. Encrypting the process' memory space wouldn't affect that in any way, the processing behaviour and externally observable effects would remain the same.
If you want to secure your data against side-channel leaks, you have to change your algorithms to whiten any externally observable noise they may generate. Things like re-arranging your code to take constant time, constant power, constant reads/writes/flushes with constant timing regardless of data. One way to maintain near-constant-everything would be state machines where all code for every state gets executed on every iteration of a loop and ends with a jump table to select the desired result. While horribly inefficient, I bet most solutions for side-channel attacks will require such sacrifices.

The constant reads/writes/flushes are there to overwhelm the circuits with physics, similar to row hammer. Then reading and injection can occur. But if each channel is running a unique encryption, injecting malicious data or instructions will mess up the encryption and, at worst, cause a crash.
You can install internal profilers for instructions to see if there's an excessive number of flush/read/write ops and intentionally throttle them.