MHz vs GHz?

Last response: in Graphics & Displays
December 13, 2012 11:32:21 AM

I noticed that some cards like the Radeon 7870 run at about 1 GHz, while some 7950s, and even higher cards, run at about 800 MHz. So which would be better?


December 13, 2012 11:36:15 AM

The 7950 is more powerful than the 7870. Comparing clock speeds is only useful when comparing 2 cards of the same model.
December 13, 2012 11:53:29 AM

Oh? I would have thought the clock speeds would be higher O.o
December 13, 2012 11:55:54 AM

It's not just about the clock speeds themselves, but rather what is actually running at those clock speeds; it's like comparing a V8 engine to a smaller inline-4...
December 13, 2012 11:57:50 AM

The 7950 has more memory, more shaders, and more of everything else. It can do more with every clock cycle than a 7870 can. The only time clock speeds matter is when comparing two cards of the same model. A 7870 at 1050/1250 MHz is more powerful than a 7870 at 1000/1200 MHz. Make sense?
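As a rough illustration (a toy model, not a real benchmark), here's the "more per clock cycle" idea in code, using the published shader counts for these two cards: 1280 stream processors on the 7870 and 1792 on the 7950. It deliberately ignores memory bandwidth, ROPs, drivers, and everything else, so treat it as a sketch only:

```python
# Toy throughput model: shaders x clock (GHz) ~ relative raw compute.
# This ignores memory bandwidth, ROPs, drivers, etc. -- a sketch only.
def relative_throughput(shaders, clock_ghz):
    return shaders * clock_ghz

hd7870 = relative_throughput(1280, 1.0)   # 1280 shaders at 1 GHz
hd7950 = relative_throughput(1792, 0.8)   # 1792 shaders at 800 MHz

print(hd7870, hd7950)  # 1280.0 1433.6 -- the 7950 wins despite the lower clock
```

Even with a 200 MHz clock deficit, the wider card comes out ahead, which is exactly why clock speed alone tells you very little across different models.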
December 13, 2012 12:02:05 PM

A big +1 to what has been said. It's nice to understand this stuff (just for the sake of knowledge) but don't use it to make purchasing decisions. That's what benchmarking is for (testing how fast the cards are in different situations and plotting the results on a chart for comparison). Take a look:

http://www.tomshardware.co.uk/review/Components,1/Graph...

The 3rd and 4th articles will show you what I mean. Although frames/second is a flawed performance metric, it gives a rough idea of how cards perform relative to each other. It's certainly better than examining clock speeds, ROPs, memory bus widths etc :-) It's what's referred to as 'real-world performance', meaning that it's the end result you actually see and enjoy.
December 13, 2012 12:48:43 PM

Kari said:
it's not just about the clock speeds themselves but rather what actually is running at those clockspeeds, like comparing v8 engine to a smaller inline 4...


Good car analogy :D 

The reason clock speed alone doesn't indicate speed for graphics cards is that a GPU is like a processor with hundreds of cores that all work in parallel, doing the same task on different pieces of data (each 'core' does the computation for one pixel of a scene, for example, with all cores running the same computation on different pixels).

You can make that system faster by adding more cores (the number of compute units), by making each core go faster (the core clock), by speeding up how quickly they can access data (the memory clock speed), or by increasing the amount of data they access at one time (the memory bus width). Any of these can bottleneck a card's design, and it's hard to predict which one will, which is why game benchmarking is a better indicator of performance (though keep in mind you can't mix and match performance tests from multiple sources; you want to see each card tested in exactly the same scenario).
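To make the "same task on different data" idea concrete, here's a minimal sketch. Plain sequential Python is standing in for what a GPU does across thousands of shaders simultaneously in hardware, and the brighten operation is just an invented stand-in for any per-pixel computation:

```python
# Each GPU "core" applies the same operation to a different pixel.
# A real GPU runs this across thousands of shaders at once; this
# sequential loop only illustrates the data-parallel pattern.
def brighten(pixel, amount=40):
    return min(pixel + amount, 255)   # the same computation for every pixel

pixels = [10, 100, 200, 250]          # one grayscale value per pixel
result = [brighten(p) for p in pixels]
print(result)  # [50, 140, 240, 255]
```

More shaders means more pixels processed per clock tick; a faster clock means more ticks per second. Either one raises throughput, which is why neither number alone tells the whole story.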

Usually, within a given generation of cards (such as the 7### series for AMD), the number is an indicator of where that card ranks: a 7950 will be faster than a 7870 (unless they start doing something really screwy with marketing). The difficult part is knowing when the extra $40 for the next step up is worth it, and which brand has the best offering in your price range (AMD vs Nvidia).

Edit: Also, to answer a much simpler question you may have been trying to ask... 1 GHz = 1000 MHz.
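In code terms, that last point is just a factor of 1000:

```python
# 1 GHz = 1000 MHz, so "800 MHz vs 1 GHz" is really 0.8 GHz vs 1.0 GHz.
def mhz_to_ghz(mhz):
    return mhz / 1000.0

print(mhz_to_ghz(1000))  # 1.0  (the 7870's core clock)
print(mhz_to_ghz(800))   # 0.8  (the 7950's core clock)
```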