
Understanding The Meltdown And Spectre Exploits: Intel, AMD, ARM, And Nvidia

We cover the latest developments in the Meltdown and Spectre vulnerabilities.

  1. Wow, that hard-earned money we spent on Intel CPUs will come at a bigger cost now. I am sure they have known about these issues for well over 20 years, but greed is the root of all evil.
  2. A lot of people are claiming that gaming won't be affected, but that confuses me. You're making a lot of calls from user space to the drivers, which should be in kernel space. We're not accessing the GPU from user mode directly, are we? Isn't this going through drivers and/or the HAL?
  3. jeremiah.moss said:
    A lot of people are claiming that gaming won't be affected, but that confuses me. You're making a lot of calls from user space to the drivers, which should be in kernel space. We're not accessing the GPU from user mode directly, are we? Isn't this going through drivers and/or the HAL?


    The problem becomes exploitable when you move in and out of memory address spaces. So if you don't already have malware running while you are gaming, nothing will happen: the contents of your user address space stay where they are, and nothing "new" gets pulled in from kernel space (drivers should really be loaded at startup, not during gameplay). Streaming is different, though: there you are forcing constant context switches, since the DX driver needs to send data to another program living in a different user space, and that implies moving memory. It will be interesting to see what happens there.

    That is how I understand this problem, so feel free to correct me.

    Cheers!
  4. Article was logical and well constructed, easy to follow, and put the information in one place. Nicely done. Good piece of journalism there.
  5. Is there any indication that the Microsoft patch doesn't inflict an unneeded slowdown on AMD hardware? I know that Linus stopped that on the Linux side, but who's to say that Microsoft didn't just apply the fix to all architectures?

    jeremiah.moss said:
    We're not accessing the GPU from user mode directly, are we?


    Yes we are. That's what Microsoft changed in the driver model between XP and Vista. There is still a kernel component to the drivers, but much of the work is done in user mode, which both adds to OS stability (early NVIDIA Vista drivers notwithstanding) and benefits performance.
  6. jeremiah.moss said:
    A lot of people are claiming that gaming won't be affected, but that confuses me. You're making a lot of calls from user space to the drivers, which should be in kernel space. We're not accessing the GPU from user mode directly, are we?

    Most of the DirectX and other APIs' runtime is user-space and that's what typical software interacts with the most. Not every DirectX call results in a trip through a system call.

    For server-style workloads that frequently involve the file system, IO and inter-process synchronization, though, most of those APIs are very thin, doing little more than populating structures and running basic sanity checks before making system calls.
  7. I do not want to play devil's advocate, but if he had known about the flaw since last June, he wouldn't have deliberately sold, for fear of being accused of abandoning ship before it sinks. After all, he has been CEO since May 2013, and the flaws are rumored to have been in most of Intel's chips for many years before that.
  8. Who bought Intel stock after the bug was announced? It was a very good time for that!
  9. Quote:
    Intel's statement on the matter specifically says that the exploits are not caused by a "bug" or a "flaw" that is unique to Intel products. Intel also noted that the exploits can "gather sensitive data from computing devices that are operating as designed."

    Exactly, it's not a bug. The backdoor has been functioning perfectly for decades. : 3
  10. Well, after the Volkswagen scandal, we have the Intel scandal. It really doesn't look good for them in the near future.
  11. Myrmidonas said:
    I do not want to play devil's advocate, but if he had known about the flaw since last June, he wouldn't have deliberately sold, for fear of being accused of abandoning ship before it sinks. After all, he has been CEO since May 2013, and the flaws are rumored to have been in most of Intel's chips for many years before that.


    Intel knew about this issue and had confirmation that external researchers were looking into the matter. If the CEO of Intel sold his shares because of this story, it means it has the potential to have a huge impact on Intel as a company.

    Anonymous said:
    Who bought Intel stock after the bug was announced? It was a very good time for that!


    Nonetheless, this uncertainty means nothing good for investors. I would not buy Intel stock right now; it might not be a dip at all, it could be a fall. Investing in AMD, as of today, makes way more sense since they are barely affected by this. And if you look at the roadmap, AMD will be on 7nm next year.
  12. Mark RM said:
    Article was logical and well constructed, easy to follow, and put the information in one place. Nicely done. Good piece of journalism there.

    Agreed. There was so much bad info out there, but you cut through the mess and laid out the facts in a concise article. Thanks!
  13. Have any of these performance-hitting patches been released yet?
  14. I'm surprised nobody has commented on this:

    Quote:
    This new firmware disables branch prediction on AMD family 17h processor to mitigate a attack on the branch predictor that could lead to information disclosure from e.g. kernel memory (bsc#1068032 CVE-2017-5715).


    Is there some reason we don't believe that disabling branch prediction is going to plant a tremendous foot on the back of performance? I mean, the whole POINT of branch prediction IS increased performance, right? I think there are probably a LOT of little tidbits like this that are likely important, but are not getting the kind of publicity that some of these other stunts and statements are getting due to simply being overlooked.

    I mean, great, you removed the problem. Maybe. But you completely removed branch prediction from ALL ZEN processors, so how's that not going to seriously affect performance? Or am I reading this wrong?
  15. darkbreeze said:
    Is there some reason we don't believe that disabling branch prediction is going to plant a tremendous foot on the back of performance?

    Since branch prediction is required to boost performance in loops, linked lists, parsing, tree manipulation and countless everyday algorithms, I'd expect outright disabling branch prediction to have a severe impact on performance if it does exactly what it sounds like it does.

    Without branch prediction, those 96+ slots in the reorder queue aren't going to see much use and could be cut down to 32 or so or the thread count per core increased to four to help fill it.

    I'll add this to my bucket of reasons not to upgrade my PC this year.
  16. Yeah, this is getting worse and worse all the time, and all people are really HEARING is the damage-control spiel coming out in full force while they quietly slip the reality right past everybody. Later they can say, no, we told you, but you weren't paying attention. Almost makes it worth just dealing with having the vulnerability and forgetting about patching anything. Keep sensitive data on unconnected devices or something.
  17. bennie101 said:
    Wow, that hard-earned money we spent on Intel CPUs will come at a bigger cost now. I am sure they have known about these issues for well over 20 years, but greed is the root of all evil.


    Well, the OS patch at least blocks access to the vulnerable part, at the cost of performance. My PC cost $2k, and I ain't upgrading my whole damn processor and motherboard until Intel compensates me for it.

    The OS patch only solves 50-70% of this vulnerability issue; the rest is in the architecture itself, which requires a hardware upgrade to fix.
  18. Upgrade it to what? There's nothing you can upgrade it TO that doesn't have the same exact problem. If they have to start from scratch to totally redesign these architectures, we're probably looking at another 1-2 years before anybody has a suitable release that resolves the vulnerability. And if they can't figure out some way to still use branch prediction without there being other exploits, new CPUs might be at Ivy bridge performance levels again.
  19. darkbreeze said:
    Upgrade it to what? There's nothing you can upgrade it TO that doesn't have the same exact problem. If they have to start from scratch to totally redesign these architectures, we're probably looking at another 1-2 years before anybody has a suitable release that resolves the vulnerability. And if they can't figure out some way to still use branch prediction without there being other exploits, new CPUs might be at Ivy bridge performance levels again.


    These are architectural flaws, so for now a software update is the only good fix, at the cost of some performance; however, if you want to completely eliminate the vulnerability, then upgrading to a newer CPU that isn't affected by these two vulnerabilities is the only option some cybersecurity experts recommend. The software patch will probably block access to the vulnerable part, but the vulnerability itself still exists in our past and present processors.

    " Some experts say that to completely get rid of the risks created by the flaws, the affected processors need to be replaced entirely. But that's not realistically going to happen anytime soon.

    There aren't any processors available at the moment that can replace the vulnerable ones and still provide the same kind of functionality.

    Experts say that it will take years to bring to market new chips that can perform the same tasks both safely and effectively. " http://money.cnn.com/2018/01/04/technology/business/apple-macs-ios-spectre-meltdown/index.html

    I also believe some antivirus software can detect this type of malicious attack and activity as well.
  20. I didn't need an explanation about the exploits, I FULLY understand the exploits, at least as much as the next guy.
  21. darkbreeze said:
    Upgrade it to what? There's nothing you can upgrade it TO that doesn't have the same exact problem. If they have to start from scratch to totally redesign these architectures, we're probably looking at another 1-2 years before anybody has a suitable release that resolves the vulnerability.

    Since Spectre is fundamentally a timing exploit, attempting to prevent it at the hardware level means having to find a way to prevent processes running on the same CPU from snooping task-dependent performance variations.

    A possibly simple and effective fix for timing exploits might be to configure a cache line eviction threshold to preempt threads triggering anomalously frequent evictions. Can't do timing analysis if you get preempted faster than you can thrash the caches to monitor their performance. Another candidate might be to set a threshold on how often a process can read the high performance timers - can't do an effective timing attack if you don't have a reference for how much time has gone by, when you got preempted and when your process resumed.
  22. talonsoulwhisper said:
    Okay, when Meltdown came around it was Intel CPUs after 771. Okay fine I don't care
    When Spectre came around, okay all CPUs that's Intel, AMD, and ARM. Again still don't care.
    Where in the actual <mod edit> did Nvidia Cards come from? Lul


    Nvidia doesn't only make GPUs; they also make some mobile CPUs based on the ARM architecture, and those are what's affected by the vulnerability. These CPUs go into tablets, Nvidia devices like the Shield, Shield K1 Tablet and Shield TV, and also the Nintendo Switch.
  23. Supernova1138 said:
    Nvidia doesn't only make GPUs; they also make some mobile CPUs based on the ARM architecture, and those are what's affected by the vulnerability. These CPUs go into tablets, Nvidia devices like the Shield, Shield K1 Tablet and Shield TV, and also the Nintendo Switch.

    And Tesla Model-S/X cars until Tesla decided to switch to Intel in late 2017.
  24. Don't have a cow, people. You have been living with this sh*t for decades, and if you were secretly hacked, are you in jail right now for the crap you have on your computer? The world is not coming to an end, so chill... What needed to happen has happened; and that said, the future is bright and the sun will shine again!
  25. Wow, Intel is saying that they've known about this since 2010? This is nuts.
  26. InvalidError said:
    Since Spectre is fundamentally a timing exploit, attempting to prevent it at the hardware level means having to find a way to prevent processes running on the same CPU from snooping task-dependent performance variations.

    A possibly simple and effective fix for timing exploits might be to configure a cache line eviction threshold to preempt threads triggering anomalously frequent evictions. Can't do timing analysis if you get preempted faster than you can thrash the caches to monitor their performance. Another candidate might be to set a threshold on how often a process can read the high performance timers - can't do an effective timing attack if you don't have a reference for how much time has gone by, when you got preempted and when your process resumed.


    Granting that you are not involved in this process and may not have any actual knowledge of what it would take to physically implement this in an actual processor (or maybe you do, I don't know that; I suspect you might, but I don't know that either), as a rather educated guess, what would YOU say the timeframe for implementing those changes into a processor would be?

    Do you think that's something that can be done through microcode changes or does this need to happen at the hardware level?

    Is this theoretically something that could be implemented into existing or in-progress designs, or does this require a complete redesign from the ground up?

    Realistically, would making THOSE changes also have a hit on branch prediction and performance, or would it largely not affect the overall cycle?
  27. berezini said:
    Don't have a cow, people. You have been living with this sh*t for decades, and if you were secretly hacked, are you in jail right now for the crap you have on your computer? The world is not coming to an end, so chill... What needed to happen has happened; and that said, the future is bright and the sun will shine again!


    So you figure susceptible versions of branch prediction have been implemented in consumer processors for "many decades"? LOL.

    These vulnerabilities only affect CPUs that have been around for about 10 years, maybe slightly more, definitely not "decades". Honestly, I don't think you have too much awareness of where the sun even rises from on this subject. Maybe you do, I don't know, but it doesn't show through your comments.
  28. Do we need Intel security patch as well?
  29. darkbreeze said:
    berezini said:
    Don't have a cow, people. You have been living with this sh*t for decades, and if you were secretly hacked, are you in jail right now for the crap you have on your computer? The world is not coming to an end, so chill... What needed to happen has happened; and that said, the future is bright and the sun will shine again!

    So you figure susceptible versions of branch prediction have been implemented in consumer processors for "many decades"? LOL.

    These vulnerabilities only affect CPUs that have been around for about 10 years, maybe slightly more, definitely not "decades". Honestly, I don't think you have too much awareness of where the sun even rises from on this subject. Maybe you do, I don't know, but it doesn't show through your comments.

    This issue can potentially affect Intel CPUs all the way back to 1995, 23 years ago. So "decades" may very well be accurate.
    http://www.tomshardware.com/news/intel-meltdown-spectre-cpu-patches,36225.html
  30. darkbreeze said:
    Realistically, would making THOSE changes also have a hit on branch prediction and performance, or would it largely not affect the overall cycle?

    The changes I thought about (assuming CPUs don't already have the necessary facilities) can't do anything about missing KPTI checks in speculative execution instruction flow, they can only deal with timing-related side-channel issues. If there was a possible microcode fix for KPTI and similar checks, Intel would have done it before OS vendors had to push out patches.

    On the side-channel side of things, trying to design an architecture that is intrinsically side-channel-safe will almost certainly come with considerable performance compromises. If people freak out enough over this, we'll probably end up with CPUs having some slow security-hardened cores for handling sensitive data and high-performance cores for everything else.
  31. TJ Hooker said:
    all the way back to 1995, 23 years ago


    Maybe, but I haven't seen that stated anywhere except the Tom's article. I may have simply overlooked it, but most of the reference material I've read so far hasn't firmly stated that CPUs that old are among the indicated models. Have you seen a firm declaration in any of the actual testing saying it HAS been verified on those older architectures? Most of that old stuff doesn't even support the majority of instructions used now.
  32. darkbreeze said:
    TJ Hooker said:
    all the way back to 1995, 23 years ago


    Maybe, but I haven't seen that stated anywhere except the Tom's article. I may have simply overlooked it, but most of the reference material I've read so far hasn't firmly stated that CPUs that old are among the indicated models. Have you seen a firm declaration in any of the actual testing saying it HAS been verified on those older architectures? Most of that old stuff doesn't even support the majority of instructions used now.


    I think they're saying that everything since the original Pentium is affected by Spectre, because the original Pentium was the first Intel chip to include branch prediction. Now, I'm pretty sure nobody has broken out their vintage hardware from the mid-to-late 1990s to test this, so there is probably a more recent cutoff point, but nobody knows for sure, so they seem to just assume any CPU with branch prediction is affected.
  33. IMO

    Based on what I just read on other popular tech sites, Intel and its shareholders are in very deep trouble over this issue. They just released a lot of their new-generation processors, which strategically compete with AMD's latest offerings, and if they don't manage to mitigate or fix this vulnerability, a lot of their profits, their R&D and marketing budgets, and their credit with business partners will go down the drain big time, along with their stock.

    Linus Torvalds has also expressed disappointment with how Intel is addressing the issue at the moment, and a lot of Intel PR people are spreading "false words of comfort" about it. Other popular tech sites have expressed the same disappointment with Intel's PR, which reads as though Intel has no plans to fix the vulnerability (including variants 2 and 3) and as though it doesn't affect their latest processors much, even though some 90 percent of all their processors are vulnerable, including old dual-core ones.

    I hope Tom's Hardware could make an article on how to properly benchmark Intel processors, so an owner (of a server, gaming rig or laptop) can see whether theirs is already affected by the issue, since Tom's has lots of Intel benchmark data to compare against.
  34. Um, ok.
  35. Rock_n_Rolla said:
    Based on what I just read on other popular tech sites, Intel and its shareholders are in very deep trouble over this issue

    Maybe, maybe not. Most of these exploits require local access for exploitation and it isn't Intel's job to secure local access, that's the OS, services and applications' job to keep foreign code out of the system. The CPU is nothing more than the last line of defense after multiple other protection layers have already failed.

    If the unexpected success of timing-based attacks is anything to go by, no system using multi-core CPUs to run multiple concurrent tasks can be trusted any more than its least trustworthy process.
  36. OMG, and my friends made fun of me for being an AMD guy. Currently on an FX-9590.
  37. No new BIOS update for my Z97 Maximus VII Hero -.-

    I hope my motherboard will get one.
  38. Ernst01 said:
    OMG, and my friends made fun of me for being an AMD guy. Currently on an FX-9590.


    Buying an AMD Ryzen CPU makes sense. Buying an FX-9590 makes heat. That is all.
  39. darkbreeze said:
    Ernst01 said:
    OMG, and my friends made fun of me for being an AMD guy. Currently on an FX-9590.


    Buying an AMD Ryzen CPU makes sense. Buying an FX-9590 makes heat. That is all.


    But... "Security hole in AMD CPUs' hidden secure processor code revealed ahead of patches." So Ryzen CPUs are also compromised, or am I missing something?
  40. My comment had nothing to do with the security issues, and everything to do with the comment about the FX-9590. Yes, nearly ALL processors are compromised.
  41. Ernst01 said:
    OMG, and my friends made fun of me for being an AMD guy. Currently on an FX-9590.


    No, they made fun of you for buying one of the worst AMD CPUs ever.
  42. Ernst01 said:
    OMG, and my friends made fun of me for being an AMD guy. Currently on an FX-9590.


    The 9590 is one of the worst CPUs ever made. It's all about the Ryzen 5 and 7 series now.
  43. g-unit1111 said:
    The 9590 is one of the worst CPUs ever made. It's all about the Ryzen 5 and 7 series now.

    Intel's Prescott (the last generation of the NetBurst architecture, which miserably under-delivered on performance gains while consuming significantly more power) might beg to differ. AMD doesn't have a monopoly on poor design choices, but it does deserve the shame of roughly duplicating Intel's failure.

    Pursuing deeper pipelines primarily for the sake of achieving higher clock frequencies failed both companies spectacularly.
  44. Myrmidonas said:
    darkbreeze said:
    Ernst01 said:
    OMG, and my friends made fun of me for being an AMD guy. Currently on an FX-9590.


    Buying an AMD Ryzen CPU makes sense. Buying an FX-9590 makes heat. That is all.


    But....Security hole in AMD CPUs' hidden secure processor code revealed ahead of patches. So Ryzen CPUs are also compromised, or am I missing something?


    Did you read the article? You need physical access to the motherboard to exploit it.
  45. Rogue Leader said:
    Myrmidonas said:
    darkbreeze said:
    Ernst01 said:
    OMG, and my friends made fun of me for being an AMD guy. Currently on an FX-9590.


    Buying an AMD Ryzen CPU makes sense. Buying an FX-9590 makes heat. That is all.


    But....Security hole in AMD CPUs' hidden secure processor code revealed ahead of patches. So Ryzen CPUs are also compromised, or am I missing something?


    Did you read the article? You need physical access to the motherboard to exploit it.


    I see your point, but I would not pay money for something that I know is flawed.
  46. Myrmidonas said:
    I see your point, but I would not pay money for something that I know is flawed.

    If you are going to consider side-channel exploits that require access to the machine to exploit as fundamentally flawed, then I have bad news for you: side-channel exploits will in all likelihood remain possible on all CPUs that haven't been specifically designed to be hardened against side-channel attacks. Hardening CPUs against side-channel attacks comes with severe performance compromises to make the execution rate and power usage as uniform as possible so data cannot be inferred from side-channel measurements, which is why the most secure elements in modern platforms are delegated to purpose-built micro-controllers and hard-wired logic.
  47. I am not willing to sacrifice the kind of performance noted for my Windows 7 laptop running a Sandy Bridge i7 CPU. That is stupid, especially given that there really is NO threat. Now that so many systems are going to be updated, there is little reason for any scumbags to try to exploit these vulnerabilities, IMO. From my perspective, the cure is far worse than the disease, especially on older hardware / OS combinations. It just is not worth it. So, I believe Microsoft should make a way to have these patches be OPTIONAL and AVOIDABLE and UNINSTALLABLE. This is crap!
  48. thuck777 said:
    I am not willing to sacrifice the kind of performance noted for my Windows 7 laptop running a Sandy Bridge i7 CPU. That is stupid, especially given that there really is NO threat. Now that so many systems are going to be updated, there is little reason for any scumbags to try to exploit these vulnerabilities, IMO. From my perspective, the cure is far worse than the disease, especially on older hardware / OS combinations. It just is not worth it. So, I believe Microsoft should make a way to have these patches be OPTIONAL and AVOIDABLE and UNINSTALLABLE. This is crap!


    If you're running Windows 7 you can just choose to not install the security update and don't bother updating your BIOS, assuming your laptop manufacturer even bothers updating such an old product. Windows 10 is the OS that has issues with mandatory updates and you have to do more legwork to disable them if you choose to do so.
  49. Well, since the Windows patch has very little effect on performance, even IN the workloads most affected, it seems that if you simply avoid updating the firmware beyond any versions that were available prior to Nov 1, 2018, you can avoid 95% or more of the performance hit. The patch without the firmware makes very little difference, if any, on the majority of systems. I think that's what I'm going to do: since the consensus is that unless you allow exploitable malware to reside on your system there is zero risk of a side-channel attack, and I have no plans to allow that, it makes little sense to intentionally cripple my system by installing the firmware at all.

    I'd avoid the Windows 10 patch as well if I could, but obviously there is no way to avoid installing it AND continue to be able to install other updates. It's all or nothing in that regard, so I'll allow the update but avoid the firmware. It's pretty doubtful at this point that there will be any meaningful BIOS updates for my Z170 system unrelated to this patch anyhow, and any future ones will incorporate it, so for all intents and purposes this system is done getting firmware upgrades for the remainder of its lifetime.