Why can't CPUs get data directly from Hard drives?

September 4, 2010 5:59:59 AM

So what I learned is that the CPU has to get data like this:

HDD->RAM->CPU

And that the CPU has cache memory to store some data, same with the RAM and HDD.

The CPU's cache memory is really fast but small. RAM is fast, but not as fast as the CPU's cache, and it is bigger. And the HDD is the slowest of them all, but the biggest.

Correct me if I'm wrong >.<!

So why can't the CPU get data straight from the HDD?

September 4, 2010 6:12:39 AM

There is no direct physical data connection between the CPU and the HDDs.

The CPU connects to the motherboard through the front side bus (or Hyper Transport or QPI, depending on the motherboard).
The Northbridge chip handles I/O duties between the CPU and RAM.
The Southbridge chip handles I/O duties with HDDs.
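
To put that in software terms - just a rough sketch, and "data.bin" is a made-up file name - a program never touches the drive itself. It asks the OS for the data, the data lands in a buffer in RAM, and only then does the CPU actually work on it:

/* Minimal sketch of the HDD -> RAM -> CPU path as seen from software.
 * "data.bin" is a made-up file name used only for illustration. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("data.bin", "rb");        /* ask the OS to fetch data from the drive */
    if (!f) { perror("fopen"); return 1; }

    unsigned char buf[4096];                  /* this buffer lives in RAM */
    size_t n = fread(buf, 1, sizeof buf, f);  /* drive -> (controller) -> RAM */

    unsigned long sum = 0;
    for (size_t i = 0; i < n; i++)            /* only now does the CPU touch the data, */
        sum += buf[i];                        /* and it reads it from RAM via its caches */

    printf("read %zu bytes, checksum %lu\n", n, sum);
    fclose(f);
    return 0;
}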
September 4, 2010 6:20:36 AM

So if the CPU wants to get something it doesn't have, why does it go to RAM first and then to the HDD?
Why couldn't it just go to the HDD, since it's connected to the motherboard :o ?
September 4, 2010 6:25:28 AM

You were the one that said it had to go through RAM. I didn't.
But there is no direct connection between the CPU and the HDD.
Also, RAM does not have any data cache 'like the CPU and HDD'.
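
One way to see that in practice (a POSIX-only sketch, and "./example.txt" is just a placeholder path): even mmap(), which makes a file look directly addressable by the program, doesn't give the CPU a path to the drive. Touching a mapped page makes the OS pull that page into RAM first, and the CPU only ever reads the RAM copy.

/* Sketch (POSIX): mapping a file makes it *look* directly addressable,
 * but every access still goes through RAM pages the OS fills from the drive.
 * "./example.txt" is a placeholder path used only for illustration. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
    int fd = open("./example.txt", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return 1; }

    /* No data has been read from the drive yet - this only sets up the mapping. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    /* The first touch of a page triggers a page fault: the OS has the drive
     * copy that page into RAM, and then the CPU reads the RAM copy. */
    printf("first byte: %c\n", p[0]);

    munmap(p, st.st_size);
    close(fd);
    return 0;
}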
September 4, 2010 6:32:31 AM

Ah, my mistake. So what does RAM have then?
September 4, 2010 6:39:38 AM

Data from the HDDs.
September 4, 2010 6:50:30 AM

:) 
September 4, 2010 6:55:42 AM

I'm wondering how I'm going to explain DMA.
September 4, 2010 7:00:55 AM

One thing I can add is that getting data from RAM or the CPU cache is much faster than getting it from the HDD.
September 4, 2010 7:07:56 AM

I was going to say close, but it's worse than you think, seeing as the HDD data has to go through the SB and then the NB.

As I understand it, the CPU needs X from the HDD. A request is sent to the drive through all the buses. The data then makes the return trip from the drive to the SB, then the NB, and then gets loaded into RAM. (I don't know enough about DMA to understand how this changes things.) The NB has just been folded into the CPU itself. If you look at the link WR2 provided, the CPU now handles all memory and GPU duties (or, in the case of AMD, soon will). In time, the SB might get folded into the CPU as well.
September 4, 2010 7:30:50 AM

4745454b said:
The NB has just been folded into the CPU itself.
It might be more accurate to say that the Northbridge Memory Controller was just moved onto the CPU die.


It's still going to perform much the same way as before - just using QPI (Intel) and Hyper Transport (AMD) instead of FSB.
September 4, 2010 7:43:23 AM

In their most current chip form, all NB duties are on the CPU.
September 4, 2010 8:25:17 AM

hell_storm2004 said:
One thing I can add is that getting data from RAM or the CPU cache is much faster than getting it from the HDD.


Bingo. The HDD is the slowest part in any modern computer, hands down.
September 4, 2010 9:43:12 AM

Data read from an HDD is slow, as mentioned, so the CPU would have to sit idle every so often if it tried to get data from the HDD directly. And since the CPU only has a small cache, it isn't possible to load the data into cache first - pointless. So the data is stored in RAM and then goes to the CPU. Since RAM is faster than the HDD, it can keep the CPU relatively busier, hence this route is always chosen :) 
September 4, 2010 9:59:20 AM

As mentioned by others, HDDs are slow. Obviously the data still needs to be read from the drive at some point, whether directly by the CPU or in order to be loaded into RAM - you can't avoid this. Where the difference is most noticeable is when the CPU needs that data more than once. The CPU caches can only hold a small amount of it, so the rest has to be kept in RAM or read from the HDD again. It is much better to (possibly) spend a little more time reading the data from the HDD initially and let subsequent reads of the same data come from RAM, than to have a slightly faster initial read but have the HDD tearing itself apart reading the same data over and over while the CPU sips coffee in Starbucks waiting for something to do.
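
You can see the "read it once, then serve it from RAM" effect with a rough sketch like this (POSIX; "bigfile.bin" is a placeholder, the numbers depend entirely on the OS page cache and the drive, and the first read is only "cold" if the file isn't already cached): the first read has to come off the drive, while the repeat read of the same data is normally served straight from RAM.

#define _POSIX_C_SOURCE 200112L
/* Rough sketch (POSIX): the first read comes from the drive, the repeat read
 * is normally served from the OS page cache in RAM. "bigfile.bin" is a
 * placeholder; actual timings depend on the OS, the cache state, and the drive. */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

static double read_once(const char *path, char *buf, size_t len)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); exit(1); }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    ssize_t n = read(fd, buf, len);   /* may return fewer bytes; fine for a sketch */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);

    if (n < 0) { perror("read"); exit(1); }
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    enum { LEN = 64 * 1024 * 1024 };                  /* 64 MiB */
    char *buf = malloc(LEN);
    if (!buf) return 1;

    double cold = read_once("bigfile.bin", buf, LEN); /* likely off the drive */
    double warm = read_once("bigfile.bin", buf, LEN); /* likely from RAM (page cache) */

    printf("cold read: %.3f s, warm read: %.3f s\n", cold, warm);
    free(buf);
    return 0;
}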
September 4, 2010 4:30:05 PM

Before DMA (direct memory access) became the norm, the CPU did directly access the hard drive. This was horribly inefficient because the CPU sat there asking the HDD "are you ready yet?" over and over while the HDD slowly read the data. With DMA, the CPU tells the HDD to read this much and put it in RAM. The CPU is then free to do any other task that doesn't depend on that data. When the HDD has read the requested data, it sends the CPU an interrupt, and the CPU resumes doing whatever needs to be done with the data. This has a huge impact on systems that run more than one process (any semi-modern OS, i.e. anything newer than DOS). If you want to see the performance difference, set your hard drives to polled I/O in Device Manager. You will cry and switch back to DMA.
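
Here's a toy model of that difference in control flow. Nothing in it touches real hardware - the "drive" is just a struct with a countdown, and the flag stands in for a real interrupt - but it shows why polling wastes the CPU while DMA leaves it free to run other work:

/* Toy model of polled I/O vs DMA. Nothing here touches real hardware:
 * the "drive" is a struct with a countdown and the done flag stands in
 * for a completion interrupt. */
#include <stdio.h>

struct fake_disk {
    int busy_ticks;      /* how long the "drive" takes to fetch the data */
    const char *sector;  /* the data it eventually hands back */
};

static void do_other_work(void) { /* other processes would run here */ }

/* Polled I/O: the CPU spins asking "are you ready yet?" and then copies
 * the data itself, doing no useful work the whole time. */
static void polled_read(struct fake_disk *d, char *dst, size_t len)
{
    while (d->busy_ticks > 0)
        d->busy_ticks--;                  /* wasted CPU cycles */
    snprintf(dst, len, "%s", d->sector);  /* the CPU copies the data into RAM */
}

/* DMA-style: the CPU describes the transfer and does useful work meanwhile;
 * the data lands in RAM without it, and a flag ("interrupt") says it's done. */
static void dma_read(struct fake_disk *d, char *dst, size_t len, int *done)
{
    *done = 0;
    while (d->busy_ticks > 0) {
        d->busy_ticks--;
        do_other_work();                  /* the CPU runs other tasks instead of polling */
    }
    snprintf(dst, len, "%s", d->sector);  /* stands in for the DMA engine writing RAM */
    *done = 1;                            /* stands in for the completion interrupt */
}

int main(void)
{
    char buf[32];
    int done;

    struct fake_disk d1 = { 1000, "data via polled I/O" };
    polled_read(&d1, buf, sizeof buf);
    printf("%s\n", buf);

    struct fake_disk d2 = { 1000, "data via DMA" };
    dma_read(&d2, buf, sizeof buf, &done);
    printf("%s (done=%d)\n", buf, done);
    return 0;
}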
September 5, 2010 7:16:53 AM

And then after explaining DMA you'd have to talk about Prefetch/Superfetch and the role the OS plays in moving data and programs into RAM.
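
The closest simple code-level analogue to that kind of OS prefetching is an application hinting the kernel to pull a file into RAM ahead of time (a POSIX sketch; "app.dat" is a placeholder name, and the kernel is free to ignore the hint):

#define _POSIX_C_SOURCE 200112L
/* Sketch (POSIX): ask the OS to start pulling a file into the page cache
 * (RAM) before we actually need it - roughly what Prefetch/Superfetch do
 * on the OS's own initiative. "app.dat" is a placeholder file name. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int fd = open("app.dat", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* Hint: we'll want this whole file soon - please read it into RAM
     * in the background while the CPU gets on with other work. */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_WILLNEED);
    if (err != 0)
        fprintf(stderr, "posix_fadvise: error %d\n", err);

    /* ... later reads of app.dat are likely to be served from RAM ... */

    close(fd);
    return 0;
}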
September 5, 2010 8:20:25 AM

I think that is beyond the scope he is looking for, but if he wants that type of detail I'm sure we can help him out.