'METIOR' Defense Blueprint Against Side-Channel Vulnerabilities Debuts

Process Roadmap
(Image credit: Intel)

It's been a while since side-channel attacks exploded into public awareness back in 2019, but preventing them is still an important part of cybersecurity. An exotic approach to information stealing, side-channel attacks have marred CPU designs from both AMD and Intel, with vulnerabilities proving severe enough that companies preferred to roll out performance-degrading patches rather than let customers operate on insecure hardware. Now, a new MIT framework by the name of Metior aims to help the world better understand side-channel attacks, and perhaps improve how we defend against them.

Metior is an analysis framework built by the Massachusetts Institute of Technology that aims to simplify hardware and software design decisions around defenses against known (and unknown) side-channel attacks. Essentially, Metior enables engineers to quantitatively evaluate how much information an attacker can steal with a given side-channel attack.

It's essentially a simulation sandbox, where chip designers and other engineers can find what combination of defenses maximizes their protection against side-channel attacks, according to their use case. Because you can quantitatively measure how much information is stolen, you can calculate the impact of it being stolen (according to your system, your program, and every other variable), which means you can decide to bake in protections against the most impactful types of attacks.
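The article doesn't detail Metior's internals, but the kind of quantitative leakage measurement it describes can be illustrated with a much-simplified, hypothetical sketch: estimate the mutual information (in bits) between a secret value and whatever an attacker can observe, from sampled (secret, observation) pairs. The toy "channel" below is an assumption for illustration only.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate I(secret; observation) in bits from (secret, observation) samples."""
    n = len(pairs)
    joint = Counter(pairs)                   # joint counts of (secret, observation)
    secrets = Counter(s for s, _ in pairs)   # marginal counts of secrets
    obs = Counter(o for _, o in pairs)       # marginal counts of observations
    mi = 0.0
    for (s, o), c in joint.items():
        p_joint = c / n
        p_s = secrets[s] / n
        p_o = obs[o] / n
        mi += p_joint * math.log2(p_joint / (p_s * p_o))
    return mi

# A toy "timing channel": the observation perfectly reveals one bit of the secret.
samples = [(s, s & 1) for s in range(16)] * 4
print(mutual_information(samples))  # → 1.0 (one full bit leaks per observation)
```

A defense could then be evaluated by how far it drives this number toward zero for the observations the attacker can actually make.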

Protection from these data-stealing attacks is typically achieved through obfuscation: trying to hide the computer system's equivalent of a pulse (the information passing between its memory and CPU). You can imagine how hard and expensive it would be to mask something like someone's heartbeat, and that's part of the difficulty of protecting against side-channel attacks.

This is difficult, and costs performance, because security is being achieved by actively "scrambling" information that is produced and leaked just by executing the program itself. It also costs development dollars, because most techniques for scrambling these "organic" computing signals require other, superfluous operations to occur in order to obfuscate the real patterns attackers are looking for. And anything in computing that costs energy and cycles ultimately hurts performance.
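A classic, concrete instance of a program leaking "just by executing" is an early-exit byte comparison, whose running time reveals how many leading bytes of a guess were correct. The sketch below (a hypothetical illustration, using Python's real `hmac.compare_digest` for the fixed version) shows the leaky pattern and the standard constant-time alternative, which pays the cost of always doing the full amount of work:

```python
import hmac

def naive_compare(secret: bytes, guess: bytes) -> bool:
    # Leaky: returns at the FIRST mismatching byte, so running time
    # depends on how much of the guess is correct.
    if len(secret) != len(guess):
        return False
    for a, b in zip(secret, guess):
        if a != b:
            return False
    return True

def constant_time_compare(secret: bytes, guess: bytes) -> bool:
    # Examines every byte regardless of where mismatches occur,
    # trading extra work for a timing-independent result.
    return hmac.compare_digest(secret, guess)

print(naive_compare(b"hunter2", b"hunter2"))          # → True
print(constant_time_compare(b"hunter2", b"hunterX"))  # → False
```

The two functions return the same answers; only the fixed version's work (and thus its observable timing) is independent of the secret.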

And in a very general way, it's also what every organism and organization on the planet wants to achieve: to work smarter, not harder.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.

  • digitalgriffin
If process-per-thread encryption in next-gen hardware becomes a reality, and is done properly with the decryption happening in channels far apart, this will stop 99.9% of attacks.
    Reply
  • InvalidError
    digitalgriffin said:
If process-per-thread encryption in next-gen hardware becomes a reality, and is done properly with the decryption happening in channels far apart, this will stop 99.9% of attacks.
    The cache flush, power and other side-channel attacks don't care about the raw data, only about externally observable memory, cache, power, etc. patterns from which they can infer what data the target code is processing. Encrypting the process' memory space wouldn't affect that in any way, the processing behaviour and externally observable effects would remain the same.

    If you want to secure your data against side-channel leaks, you have to change your algorithms to whiten any externally observable noise they may generate. Things like re-arranging your code to take constant time, constant power, constant reads/writes/flushes with constant timing regardless of data. One way to maintain near-constant-everything would be state machines where all code for every state gets executed on every iteration of a loop and ends with a jump table to select the desired result. While horribly inefficient, I bet most solutions for side-channel attacks will require such sacrifices.
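The "execute every state, then select" pattern the comment describes can be sketched in a toy form (hypothetical illustration; a real mitigation would be hand-tuned assembly, not Python):

```python
def leaky_step(state, bit):
    # Data-dependent: only one branch executes, so timing/power
    # can differ depending on the secret bit.
    return state * 2 + 1 if bit else state * 2

def whitened_step(state, bit):
    # Every candidate result is computed on every call; a table lookup
    # (the comment's "jump table") selects the one actually wanted.
    # The work performed is identical regardless of the secret bit.
    results = (state * 2, state * 2 + 1)  # all "states" executed
    return results[bit]

def run(step, bits):
    state = 0
    for b in bits:
        state = step(state, b)
    return state

secret_bits = [1, 0, 1, 1]
assert run(leaky_step, secret_bits) == run(whitened_step, secret_bits)
print(run(whitened_step, secret_bits))  # → 11
```

As the comment notes, this is horribly inefficient: the whitened version does the work of every branch on every iteration, which is exactly the performance sacrifice being predicted.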
    Reply
  • bit_user
Side-channel attacks are a particularly surreptitious type: through them, attackers don't even need access to any specific application logic to steal information from it; they can simply observe how it operates.
    I think you do need to know the specific build of whatever library or application code is handling the data you want to extract. Otherwise, you cannot model it. It should also be noted that the observations are indirect, therefore requiring no special privileges.

I remain somewhat skeptical of how many such attacks have been successfully performed, in the real world. Even if we're talking about stealing encryption keys, the spying thread is going to have to continually hammer the CPU and just hope that it gets scheduled alongside something handling those keys. Then, there's the whole challenge of trying to figure out what and who you just spied on, because you normally have no visibility or control over who the other tenants are.

    You can imagine how hard and expensive it is to mask something like someone's heartbeat, and that's part of the difficulty with protecting from side-channel attacks.
    Not really. If you can't touch me, you can't feel my pulse. It's not a perfect analogy, but it does apply. In a hosted environment, someone doing something secure can spend extra money to rent out an entire instance, so it's shared with no other tenants.

Another thing you can do is restrict foreign threads from sharing your core. Google integrated a feature into the Linux kernel called core scheduling. I assume hypervisors have a similar control at the VM level. This only helps with SMT-based exploits, but that's how the strongest side-channel attacks work.
    Reply
  • InvalidError
    bit_user said:
    I remain somewhat skeptical of how many such attacks have been successfully performed, in the real world. Even if we're talking about stealing encryption keys, the spying thread is going to have to continually hammer the CPU and just hope that it gets scheduled alongside something handling those keys.
    AFAIK, most of these have only been demonstrated in lab environments where the researchers basically control every variable. In a real-world environment on a somewhat busy server, you have 100+k APIC events/sec and all of the other kernel/context-switching noise those may generate screwing up any timing, scheduling and power measurements. That is on top of all of the uncertainty regarding which account keys are being processed at any given time adding to the amount of noise spy algorithms would need to somehow sort out.

    Then you have the constant power drain and CPU load by some unknown process likely to get the spy process flagged as a suspicious CPU hog in relatively short order if the operators are doing any sort of power and performance monitoring.

Most of these threats are purely theoretical. The attack code would need to run uninterrupted for so long before producing its first successful attack that only systems left completely neglected would be likely to leak information. I'd expect most servers hosting information sensitive enough to worry about this to be monitored closely enough that such spyware would be short-lived.
    Reply
  • digitalgriffin
    InvalidError said:
    The cache flush, power and other side-channel attacks don't care about the raw data, only about externally observable memory, cache, power, etc. patterns from which they can infer what data the target code is processing. Encrypting the process' memory space wouldn't affect that in any way, the processing behaviour and externally observable effects would remain the same.

    If you want to secure your data against side-channel leaks, you have to change your algorithms to whiten any externally observable noise they may generate. Things like re-arranging your code to take constant time, constant power, constant reads/writes/flushes with constant timing regardless of data. One way to maintain near-constant-everything would be state machines where all code for every state gets executed on every iteration of a loop and ends with a jump table to select the desired result. While horribly inefficient, I bet most solutions for side-channel attacks will require such sacrifices.
The constant reads/writes/flushes are there to overwhelm the circuits physically, similar to Rowhammer. Then reading and injection can occur. But if each channel is running a unique encryption, injection of malicious data or instructions will mess up the encryption and, at worst, cause a crash.

You can install internal instruction profilers to see if there's an excessive number of flush/read/write ops and intentionally throttle them.
    Reply