Class-Action Lawsuit Forming Against Intel for 'Downfall' Chip Bug

Intel Alder Lake
(Image credit: Tom's Hardware)

Intel may soon find itself with its back against a proverbial wall following the disclosure of the "Downfall" chip vulnerability earlier this month. According to a class action aggregator, there is now open interest in pursuing a lawsuit against Intel for damages relating to the Downfall vulnerability. That isn't surprising, given that fixing the bug can cost up to 39% of performance in some workloads and affects what could be billions of processors. Led by law firm Bathaee Dunne LLP, the class action investigation (which is still garnering interest, plaintiffs, and information) aims to force Intel to compensate customers for "the loss of value, reduced performance, security issues and other damages stemming from the Downfall vulnerability."

Intel's Downfall is another high-impact, difficult-to-mitigate vulnerability that attacks speculative execution, a feature of modern CPUs that predicts what data and operations a workload will need before they are actually required, keeping that information readied and easily accessible for processing. But as the number of vulnerabilities found in speculative execution grows, we've also seen a trend where fixing these issues has a correspondingly negative impact on performance. For now, it appears Intel's mitigations for Downfall carry an average performance cost of around 39%.
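To make the performance stakes concrete, here is a heavily simplified toy model of the idea behind speculative execution (real CPUs do this in hardware, per instruction; all names here are invented for illustration): the CPU guesses which way a branch will go and does that work early, discarding it only on a misprediction.

```python
# Toy illustration of speculative execution: guess the branch outcome and
# start the work early. A correct guess means the work is already done;
# a wrong guess means the speculative work is discarded and redone.
def speculative_run(predict_taken, actually_taken, work_if_taken, work_if_not):
    # Speculate: run the predicted path before the real outcome is known.
    speculative_result = work_if_taken() if predict_taken else work_if_not()
    if predict_taken == actually_taken:
        return speculative_result  # Prediction correct: result is ready early.
    # Misprediction: throw away the speculative work and run the right path.
    return work_if_taken() if actually_taken else work_if_not()

# Correct prediction: the expensive path's result was computed ahead of time.
print(speculative_run(True, True, lambda: "fast path", lambda: "slow path"))
```

Mitigations that restrict or serialize this guessing forfeit exactly the head start the model shows, which is why they show up directly as lost performance.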

Intel itself said performance could decline by as much as 50% in certain scenarios, showcasing just how important speculative execution is for a modern CPU's performance. Considering that the vulnerability affects Intel processors ranging from 6th-gen (Skylake) to 11th-gen (Rocket Lake), including Xeon products based on the same architectures, the number of affected Intel CPUs likely runs into the billions.

The argument for the class action lawsuit stems from the fact that affected users are left between a rock and a hard place: they paid full price for a CPU (which, in Intel's lineup, implies an expected level of performance), but now they must choose between leaving their systems vulnerable to the Downfall speculative execution attack (not good) or taking a substantial performance hit in workloads that matter to them (not great, either).

But in this case, keeping the vulnerability unaddressed could have a real impact on businesses and users. According to security researcher Daniel Moghimi (who initially disclosed Downfall), the vulnerability would allow malicious third-party apps and services to steal sensitive information, including passwords, financial details, and even cloud-stored data.

Considering that AMD has also had its fair share of vulnerabilities (such as Inception, Squip, and its recent "Divide by Zero" bug), it remains to be seen whether a similar effort will take aim at the red team's bottom line as well.

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.

  • InvalidError
Most of these side-channel attacks are purely theoretical and practically impossible to achieve under real-world conditions, where systems are juggling tens of thousands of threads spanning dozens of unrelated background processes, applications, system services, etc., and there is no guarantee that the desired data is in flight through the victim algorithm at any particular moment for the attack code to capture.

    Since all of those side-channel attacks require extensive CPU-time abuse to even have a chance of succeeding, a successful exploit also requires failure to catch rogue processes consuming a disproportionate amount of CPU time that should cause performance complaints along with a substantial increase in system power draw.
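The detection idea in the comment above could be sketched as follows: given two samples of cumulative per-process CPU time, flag any process consuming a disproportionate share of the interval. This is a minimal sketch with invented names and an arbitrary threshold; real tooling would sample `/proc` or a library such as psutil.

```python
# Hedged sketch: flag processes whose share of CPU time between two samples
# exceeds a threshold, the kind of "disproportionate CPU abuse" a long-running
# side-channel attack would exhibit. All names and values are illustrative.
def flag_cpu_hogs(sample_a, sample_b, threshold=0.5):
    """sample_a/sample_b map pid -> cumulative CPU seconds at two instants."""
    deltas = {pid: sample_b[pid] - sample_a.get(pid, 0.0) for pid in sample_b}
    total = sum(deltas.values()) or 1.0  # avoid division by zero on idle systems
    return [pid for pid, d in deltas.items() if d / total >= threshold]

before = {"attacker": 10.0, "browser": 50.0, "editor": 5.0}
after  = {"attacker": 70.0, "browser": 55.0, "editor": 6.0}
print(flag_cpu_hogs(before, after))  # ['attacker']
```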

    Most systems don't handle data anywhere near sensitive enough to worry about these. For environments where hypothetical side-channel attacks are unacceptable, we'll need different CPUs designed specifically to eliminate all potential crosstalk between unrelated threads.

    For most other uses, all that is really necessary would be to provide software developers with the ability to mark security-critical parts of their software to prevent unrelated code from running on the same CPU/L2 until it exits the protected section - can't side-channel-attack code when you cannot run concurrently.
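The protected-section idea above could be modeled like this (a toy sketch, not a real scheduler; every name is invented for illustration): while a task holds a security-critical region, the core refuses to co-schedule any other task, so there is no concurrent neighbor to mount a side-channel attack.

```python
# Toy model of a "protected section": while one task owns it, no unrelated
# task may be scheduled on the same core, eliminating concurrent snooping.
class Core:
    def __init__(self):
        self.protected_owner = None

    def enter_protected(self, task):
        self.protected_owner = task

    def exit_protected(self):
        self.protected_owner = None

    def can_schedule(self, task):
        # Only the owner may run while a protected section is active.
        return self.protected_owner is None or self.protected_owner == task

core = Core()
core.enter_protected("crypto_worker")
print(core.can_schedule("crypto_worker"))  # True
print(core.can_schedule("untrusted_app"))  # False
core.exit_protected()
print(core.can_schedule("untrusted_app"))  # True
```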
    Reply
  • abufrejoval
Resilience against side-channel attacks is not a quality that was sold, so I don't see a logical basis for getting damage compensation.

But juries vote rather than deduce, so who knows where this is going. It wouldn't help anyone if CPU makers were driven out of business entirely.

Verifying that a speculative design has no side-channel vulnerabilities seems theoretically impossible, because it sounds very much like a Post correspondence problem; even "reasonable assurance" is still nigh unaffordable.

Without all those speculation tricks, CPUs can only go wide like GPU cores, and without a massive rewrite of all code, speeds will badly disappoint.

    This will drive a large performance and technology wedge between designs that don't need to care about side channels, because they are not shared and others that potentially share resources.

If speculation can be toggled on the fly so that certain code passages can be excluded, or if caching domains can be segregated more carefully, that might also help in cloud scenarios.

But being successfully sued for something nobody asked you to design for should be reserved for very exceptional cases.
    Reply
  • RichardtST
    Well, here's the real problem... If you start allowing damages against bugs, then the entire hardware and software industries shut down. Poof. Gone. No more hardware. No more software. The complexity of them both is to such a degree that it is, at this point, impossible to not have a number of fatal flaws. Just not gonna happen. The buyer has to accept the risk, otherwise they get nothing.
    Reply
  • InvalidError
    Francisco Alexandre Pires said:
    It seems to me (and please correct me if I'm wrong) that you are conflating "speculative execution" with "side-channel attacks".
Because it is a side channel: Downfall poses no threat at all without malicious code somewhere else actively attempting to scoop whatever data it may be leaking, and successfully extracting data still requires the attacking thread to get lucky, peeking at what it can see while relevant data is being processed and attempting to infer the target content from it. No plain-text user-space data is being directly leaked.

    RichardtST said:
    Well, here's the real problem... If you start allowing damages against bugs, then the entire hardware and software industries shut down.
    You do want to allow accountability for bugs that make a product fundamentally unsuitable for its intended purposes like the Pentium FP bug that made affected FPUs effectively worthless.

    Mostly hypothetical security bugs like Downfall and Heartbleed don't really affect anyone besides people who run high-security applications on x86 hardware and if you run such stuff, you should be closely monitoring your systems for unidentified processes as your fourth line of defence after strict firewalling, strict access control lists and skimming logs for suspicious connection/login attempts for preemptive action.
    Reply
  • ravewulf
    To my knowledge, they never claimed it would be completely bug-free and it's simply not possible to predict how much of a performance impact an unknown future mitigation is going to cost. At best, you could give a worst case for performance if you completely disable speculative execution but that's still leaving unknown variables on the table. So, yeah, it sucks that you have to choose between maximum security vs the original performance but what would be the reasonable alternative?
    Reply
  • hotaru251
imho Intel will likely fall back on the argument that future attacks on a product are beyond its scope of knowing, as attacks and the like change every day.
    Reply
  • InvalidError
    ravewulf said:
    So, yeah, it sucks that you have to choose between maximum security vs the original performance but what would be the reasonable alternative?
    If you need the highest security possible, don't allow your security-critical code and services to run on a machine that also runs arbitrary user code such as a virtualized server instance. If you don't allow any unknown code on your security-critical servers, side-channel exploits are irrelevant.
    Reply
  • domih
    RichardtST said:
    Well, here's the real problem... If you start allowing damages against bugs, then the entire hardware and software industries shut down. Poof. Gone. No more hardware. No more software. The complexity of them both is to such a degree that it is, at this point, impossible to not have a number of fatal flaws. Just not gonna happen. The buyer has to accept the risk, otherwise they get nothing.
It depends. I believe it has to be judged on a case-by-case basis, as it is today.

    See for instance the MoveIt disaster (https://www.darkreading.com/search?q=MoveIt), all these consequences for a stupid SQL injection. IMHO, software makers should be held responsible for NOT preventing SQL injections.
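The injection point above can be shown in a few lines with Python's built-in sqlite3 (table, column, and input values are made up for the example): string concatenation lets hostile input rewrite the query, while a parameterized query treats the same input strictly as data.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

hostile = "' OR '1'='1"

# Vulnerable: concatenation lets the input become part of the SQL itself.
bad = conn.execute(
    "SELECT secret FROM users WHERE name = '" + hostile + "'").fetchall()

# Safe: the placeholder binds the input as a value, never as SQL syntax.
good = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (hostile,)).fetchall()

print(bad)   # [('hunter2',)]  -- injection succeeded
print(good)  # []              -- injection neutralized
```

Defenses like this have been standard library features for decades, which is why failing to use them is a plausible candidate for liability.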

    Car makers have been sued for defects and car makers are still making cars.
    Reply
  • ex_bubblehead
    RichardtST said:
    Well, here's the real problem... If you start allowing damages against bugs, then the entire hardware and software industries shut down. Poof. Gone. No more hardware. No more software. The complexity of them both is to such a degree that it is, at this point, impossible to not have a number of fatal flaws. Just not gonna happen. The buyer has to accept the risk, otherwise they get nothing.
    ^^
    This.

    When I studied software engineering some 40 years ago the first thing put to us was that in any program of more than 4 lines there is at least 1 bug, and, that it is mathematically impossible to prove that any piece of code is bug free. Hardware works the same way.

    The proof of this is simple. First, any editors, compilers/assemblers, debuggers, etc. must be proven free of all errors. Then the hardware they run on must be proven. Then the operating system and storage systems must be proven, etc. etc. etc. All possible interactions and permutations must be accounted for and tested.

The number of possible permutations approaches infinity in very short order.
    Reply
  • InvalidError
    Francisco Alexandre Pires said:
    In that sense, customers see a performance regression that has nothing to do with how they use the product, but which has everything to do with the (bugged) features Intel (and others) build into their products.
What bugged feature? The speculative execution and load features work fine. The only problem is that some internal registers, which weren't thought to be of significance at the time the feature (gather) was implemented, turned out to be a potential information-leak vector.

As I have written before, the flaw is of no consequence in a tightly buttoned-up system where no foreign code is allowed. Simply having software that monitors suspicious CPU usage would be enough to catch malware attempting side-channel attacks before it gets a chance of achieving anything.

CPU-level performance-sapping mitigation is only necessary if you want to make zero effort whatsoever to prevent it on the software side.
    Reply