GT/s compared to MHz

December 7, 2011 7:52:56 AM

Been a long time since I've been hardware savvy, so please forgive my ignorance.

Historically, the higher the bus speed, the better and faster your computer performed, e.g. an 800MHz FSB is twice as fast as a 400MHz one.

Now we don't see the bus advertised in MHz but rather in gigatransfers per second (GT/s).

My question is this: how much faster are these gigatransfer ratings? Taking the same example, how does 800MHz compare with 2.5GT/s?


December 7, 2011 8:18:34 AM

That's because bus speed alone has come to be seen as a fairly useless measure of performance these days. Bus width comes into play a lot more: we need to move not only data fast, but a lot of data fast, hence the larger memory buffers on graphics cards and wider buses these days. For example, a card with 1GB of memory will run a lot faster with a 256-bit bus and a slightly slower FSB or memory clock than it would with a massive memory clock but only a 64-bit bus.

With regards to your question about GT/s vs MHz, it's fairly simple. MHz refers to the physical clock speed of the GPU, CPU, FSB, memory clock or whatever other processor or controller you're talking about, while GT/s is a calculated amount of data based on the bus width and speed of whatever you're wanting to measure.

This is a more accurate and useful system to use, as you can easily be misled by a fast memory clock but then let down by the actual capacity the memory has to handle the work thrown its way (*wink wink* anti-aliasing and high-resolution gaming).

DDR also plays a role here, but is a little more complicated.

I'm not completely trained in this though, so some of this may be a little off, but I'm pretty sure I've got the basics covered here.
December 7, 2011 8:55:07 AM

Ahhhh,
Ok

Yes, I got the whole bus and width idea; hence mainboards and CPUs are now running at 64 bits instead of the 32 of the last 10 or so years.

Looking at Intel processors on the Intel site to try and catch up a bit: selecting several CPUs for comparison, you'll find 800MHz, 1333MHz or suchlike describing a few of the lower-end CPUs, and the new modern ones with a GT/s rating. The GT/s rating is indeed far superior, as it gives us a true reading of throughput. Here is a link to a CPU comparison to give a better idea of what my poor brain is attempting to figure out: http://ark.intel.com/compare/52585,61275,48750,27502,33...

Also, under bus type it's gone from FSB to DMI, and the really tricked-out boards go to something called QPI. A little help?

A lot of your info was good knowledge and helpful, Toxxyc, thanks much. However, I still don't have an answer to my question. I do have a diploma in CST and know my way around a bit, but hardware has advanced so much since I last bought anything. Well, that could also apply to anyone who hasn't bought anything in the last three months.

BTW, Hello to the site.

Cheers.
December 7, 2011 12:01:56 PM

This is much like DDR (double data rate) memory. 1333"MHz" memory really runs at only 667MHz, but it delivers data on both the rising and falling edges of the clock cycle, so it should technically be called 1333MT/s RAM.

The old MHz figure used for the FSB didn't really tell the truth either: the bus clock was 4x slower than the MHz they showed, because the bus was quad-pumped. The QPI bus transfers 2 times per clock cycle like DDR; Intel calls this double-pumped.

QPI transfers 4 bytes per transfer (counting both directions). The first-gen i7 9xx series had a QPI of either 4.8GT/s (2.4GHz) or 6.4GT/s (3.2GHz), thus giving a bandwidth of 19.2 or 25.6GB/s.
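As a quick sanity check on those numbers (a minimal sketch, assuming the 4-bytes-per-transfer figure from the post, which counts both directions):

```python
# QPI bandwidth = transfer rate (GT/s) x bytes moved per transfer.
# The 4 bytes/transfer figure is taken from the post above.
def qpi_bandwidth_gb_s(gt_per_s, bytes_per_transfer=4):
    return gt_per_s * bytes_per_transfer

print(qpi_bandwidth_gb_s(4.8))  # 19.2 GB/s
print(qpi_bandwidth_gb_s(6.4))  # 25.6 GB/s
```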

DMI is not a new bus. DMI was on the first-gen i7 platform connecting the northbridge to the southbridge at 10Gb/s. The new LGA 1155 platform removed the northbridge, so now all that is left is the DMI link connecting the processor to the southbridge.

Edit: I should also add that with Sandy Bridge, DMI 2.0 has a connection that can transfer at 20Gb/s (2.5GB/s). Each direction only gets half of that bandwidth, which would be 10Gb/s or 1.25GB/s.

The high-end LGA 2011 platform that just released doesn't even have QPI anymore. I believe QPI is only going to be used in server platforms from now on, as PCIe bandwidth was the only reason the first-gen i7 needed that much bandwidth; PCIe has now been integrated onto the CPU die for a direct connection instead.

Bus width can change the amount of data a bus can deliver even if it stays at the same speed.
December 7, 2011 12:20:46 PM

Man... I'm really off right now. If you don't understand that babbling, ask again, and I will try to answer your questions a little later.
December 8, 2011 9:12:59 AM

Haserath said:
Bus width can change the amount of data a bus can deliver even if it stays at the same speed



This much, yes I know very well and I understand. ;) 

Southbridge, northbridge? Sorry, no idea :o 

Still some good information, but unfortunately it still doesn't answer the question, although it might if I understood all of what you're talking about. My previous computer training was all in software and focused very little on hardware.

If you can remember the movie this line comes from, you'll get what I'm after: "Ok, explain this to me like I'm a two-year-old."

Other than knowing that doubling the bus width at the same clock greatly enhances performance, I'm not sure what's happening hardware-wise, or what does what.
December 8, 2011 12:51:05 PM

sickle44 said:
If you can remember the movie this line comes from, you'll get what I'm after: "Ok, explain this to me like I'm a two-year-old."

Alright, you asked for it.

This is for grown ups, so run off to bed. :pt1cable: 

-----
Ok, in all seriousness: you cannot compare the different buses in hertz or transfers. You need to figure out their bandwidth in bytes.

FSB @ 1600MHz = 12.8GB/s

QPI @ 6.4GT/s = 25.6GB/s

DMI @ 2.5GT/s = 1.25GB/s (effectively 1GB/s due to overhead)
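Those three figures fall out of the same width-times-rate arithmetic (a sketch; the bytes-per-transfer widths are assumptions pulled from this thread: 8 bytes for the FSB, 4 for QPI, and half a byte of effective data width for DMI's 4 PCIe lanes):

```python
# Bandwidth = transfers per second x bytes moved per transfer.
# Widths are assumptions taken from the figures in this thread.
buses = {
    "FSB @ 1600 MT/s": (1.6, 8),   # 1.6 GT/s on an 8-byte bus
    "QPI @ 6.4 GT/s": (6.4, 4),    # 4 bytes/transfer, both directions
    "DMI @ 2.5 GT/s": (2.5, 0.5),  # 4 PCIe lanes ~ half a byte/transfer
}
for name, (gt_s, width_bytes) in buses.items():
    print(f"{name} -> {gt_s * width_bytes} GB/s")
```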

December 8, 2011 7:37:02 PM

Haserath said:
Alright, you asked for it.

This is for grown ups, so run off to bed. :pt1cable: 

-----
Ok, in all seriousness: you cannot compare the different buses in hertz or transfers. You need to figure out their bandwidth in bytes.

FSB @ 1600MHz = 12.8GB/s

QPI @ 6.4GT/s = 25.6GB/s

DMI @ 2.5GT/s = 1.25GB/s (effectively 1GB/s due to overhead)


I saw that last bit and stopped reading...

._.
December 8, 2011 10:49:49 PM

majorgibly said:
I saw that last bit and stopped reading...

._.

You stopped reading, because that was the end of my post? :p 

I will explain why DMI has so much less bandwidth.

Intel has separated the controllers on the CPU for the peripherals over time.

FSB - Had to handle memory bandwidth, PCIe bandwidth (graphics cards and such), and southbridge bandwidth (hard drives, Ethernet, etc.).

QPI - Had to handle much more PCIe bandwidth plus southbridge bandwidth. By this point Intel had separated the memory controller out to have its own bandwidth.

DMI - Only connects to the southbridge. Intel now has PCIe bandwidth and memory bandwidth handled by separate controllers on the CPU.

DMI is based on PCIe (which runs at 2.5GT/s) and has 4 lanes of PCI Express, which gives it its 1GB/s data bandwidth (250MB/s per lane). DMI 2.0 uses PCIe 2.0 (5GT/s), which doubles that to an effective 2GB/s for data.
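The lane arithmetic in that last paragraph can be sketched as follows (assuming 250MB/s per PCIe 1.x lane, doubled for 2.0, as stated above):

```python
# DMI data bandwidth = number of PCIe lanes x per-lane data rate.
# 250 MB/s/lane for PCIe 1.x, 500 MB/s/lane for PCIe 2.0 (from the post above).
def dmi_bandwidth_mb_s(lanes=4, per_lane_mb_s=250):
    return lanes * per_lane_mb_s

print(dmi_bandwidth_mb_s())                   # DMI 1.0: 1000 MB/s = 1 GB/s
print(dmi_bandwidth_mb_s(per_lane_mb_s=500))  # DMI 2.0: 2000 MB/s = 2 GB/s
```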
December 9, 2011 3:50:02 AM

Haserath said:
DMI - Only connects to the southbridge. Intel now has PCIe bandwidth and memory bandwidth handled by separate controllers on the CPU.

GOLD. Cleared up some of my questions as well, thanks.
March 16, 2012 5:15:00 PM

Toxxyc said:
GT/s is a calculated amount of data based on the bus width and speed of whatever you're wanting to measure.


This is incorrect: it doesn't tell you the amount of data, it tells you the number of transfers per second.

T/s (number of transfers per second) = (clock speed) * (transfers per clock cycle)
For example: 4 GT/s = 1 GHz * 4 transfers per cycle

On Wikipedia, there is a table showing the number of transfers per cycle for each family of CPUs.

The calculated amount of data is the "bit rate," usually expressed in bit/s or bps. If you know the transfer rate (GT/s), the bus size, and the overhead due to encoding, you can calculate the effective bit rate:

Bus Size: 8 Bytes
Clock Speed: 100 MHz
Transfers per clock cycle: 4
Encoding: 8b/10b
Gross Bit Rate = 8 B * 100 MHz (cycles/second) * 4/cycle = 3200 MB/s
Effective Bit Rate = (gross bit rate) * (encoding) = 3200 MB/s * 8b/10b = 2560 MB/s
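That worked example can be written out as code (a sketch of the formula above; the 8b/10b encoding overhead is passed in as a fraction):

```python
# Effective rate = bus width x clock x transfers/cycle x encoding efficiency.
def effective_rate_mb_s(bus_bytes, clock_mhz, transfers_per_cycle,
                        enc_payload, enc_raw):
    gross_mb_s = bus_bytes * clock_mhz * transfers_per_cycle
    return gross_mb_s * enc_payload / enc_raw

# The example above: 8-byte bus, 100 MHz, quad-pumped, 8b/10b encoding.
print(effective_rate_mb_s(8, 100, 4, 8, 10))  # 2560.0
```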

The real-life examples below show how PCI Express bit rates are calculated. Note that I'm showing multiple PCI Express link widths (x1, x4, x16) to illustrate how they affect the calculation:

PCIe v1.0 x1:
Bus Size: 1 bit = 1/8 Byte
Baud: 2.5 GT/s = 2500 MT/s
Encoding: 8b/10b
Bit Rate: 250 MB/s = (1/8) * 2500 * (8/10)

PCIe v2.0 x4:
Bus Size: 4 bit = 1/2 Byte
Baud: 5 GT/s
Encoding: 8b/10b
Bit Rate: 2 GB/s = (1/2) * 5 * (8/10)

PCIe v3.0 x16:
Bus Size: 16 bit = 2 Byte
Baud: 8 GT/s = 8000 MT/s
Encoding: 128b/130b
Bit Rate: ~16 GB/s = 15,753 MB/s = 2 * 8000 * (128/130)
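All three PCIe figures fall out of the same formula (a sketch; the lane width in bytes is taken as lanes/8, matching the bus sizes listed above):

```python
# Per-direction PCIe data rate = (lanes/8 bytes) x MT/s x encoding efficiency.
def pcie_mb_s(lanes, mt_per_s, enc_payload, enc_raw):
    return (lanes / 8) * mt_per_s * enc_payload / enc_raw

print(pcie_mb_s(1, 2500, 8, 10))      # v1.0 x1:  250.0 MB/s
print(pcie_mb_s(4, 5000, 8, 10))      # v2.0 x4:  2000.0 MB/s
print(pcie_mb_s(16, 8000, 128, 130))  # v3.0 x16: ~15753.8 MB/s
```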

You may sometimes see PCIe bit rates listed that are 2x the values shown in these calculations, but those are "aggregate" numbers, meaning they double the number to reflect simultaneous reading and writing.