I was reviewing the technical details of some motherboards and Nvidia graphics cards, and a couple of questions struck me.
(1)
In my experience with system memory, the CPU has always been much faster than RAM (at least if one can simply compare the clock speed of the CPU against that of the memory), yet a GPU's core clock is much lower than that of its GDDR memory ("GRAM"). Is there a reason they are built this way? In my simplified thinking, the closer the speed of the CPU/GPU to that of its memory, the better: less waiting involved.
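To make the comparison concrete: raw clock speed alone is misleading, because what the processor actually waits on is bandwidth (and latency), which also depends on the data rate per pin and the bus width. A rough back-of-the-envelope sketch, using illustrative figures for DDR4-2400 in dual channel and GDDR5 at 7 GT/s on a 256-bit bus (approximations, not exact specs for any particular product):

```python
# Rough peak-bandwidth comparison: system DDR4 vs. graphics GDDR5.
# All figures below are illustrative approximations, not exact specs.

def peak_bandwidth_gbs(transfers_per_sec: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = transfer rate x bus width in bytes."""
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

# DDR4-2400 system memory, dual channel (2 x 64-bit bus)
ddr4 = peak_bandwidth_gbs(2.4e9, 128)

# GDDR5 at 7 GT/s on a 256-bit bus (typical of a mid/high-end card)
gddr5 = peak_bandwidth_gbs(7.0e9, 256)

print(f"DDR4 dual-channel: {ddr4:.1f} GB/s")   # ~38.4 GB/s
print(f"GDDR5 256-bit:     {gddr5:.1f} GB/s")  # ~224.0 GB/s
```

So even though neither chip's internal clock is dramatically different in order of magnitude, the graphics memory delivers several times the bandwidth, which is what the GPU's many cores need.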
(2)
Why isn't GDDR memory ("GRAM") used as system memory too? The GPU computes as well, though of course it has far more cores. I don't think cost is the reason: people buy monitors that cost over $3,000, and run RAID arrays of enterprise SSDs (an Intel 750 1.2 TB, for example).
(3)
Lastly, is it sensible to compare the IOPS of SSDs with CPU/GPU/memory speeds?
Thanks.