What's better: 32x4 or 64x2?

nottheking

Distinguished
Jan 5, 2006
1,456
0
19,310
Ok, a question for you all: what's better on a graphics card, a 32x4 = 128-bit mem interface or a 64x2 = 128-bit mem interface?
Were you actually expecting an answer that quick on the Forumz? I would've personally thought you might've known better than to expect that much. (no offense intended)

To be honest, I don't think it makes any difference in performance how many memory controllers are needed to get to 128 bits of width. However, I could be wrong; in that event, the only possible effect might be that more controllers are less efficient, so I'd actually recommend the 64x2 (two 64-bit controllers) setup.

Is this a question actually regarding graphics cards you're considering purchasing/using, or is it simply a theoretical question? The one thing I'd REALLY point out is that any given GPU design will have only one size of memory controller, so if you have two GPUs, one with 32-bit controllers and the other with 64-bit controllers, they'd be of two separate designs, such as, say, the G72 (found in the GeForce 7300 cards) and the G73 (found in the GeForce 7600). Both, I believe, can have up to a 128-bit memory interface, but as in most cases, those using 64-bit controllers are designed in other areas to perform better.
 
I wish I had the time to look again (not there yet, still putting gear away [waiting for the dryer]), but IIRC the theoretical answer depends on the GPU's design and how it uses its bit width in conjunction with its texture size. An illustration of this was done in ATi's ring-bus description, which involved 64-bit memory channels for the X1800/1900 and 32-bit for both the X1300 and X1600, IIRC.

That would be the place to look, IMO, because there was something like an essay on the benefits and why the move to this was a good idea. Right now I can't remember all of it, but there are rare benefits IIRC; I just can't remember all the factors involved.
 

cleeve

Illustrious
It can't be identical; there are too many factors.

What other factors?

Clockspeed and memory bus. There, you have it. Those are your two factors. If they're identical, then performance is identical.

Other than that, maybe latency if you want to get picky, but that's a video card BIOS issue.

Of course, that's assuming all memory is DDR. But that's stretching for another 'factor'... :p
 

dvdpiddy

Splendid
Feb 3, 2006
4,764
0
22,780
Ok man, what I meant was: say you got a graphics card from any company with 12 raster ops, 12 fragment pipes, 12 vertex shaders, and 12 texture units; both cards have a 500 MHz core, 500 MHz 1.3ns GDDR3 mem, and a 128-bit bus, but one card is 32x4 and the other is 64x2. Which card would be better for mem access, and which would be better for squeezing more fps out?
 

cleeve

Illustrious
Assuming all else is equal, like in your example, the memory chip configuration won't make a lick of difference.

As long as both of the cards have a 128-bit bus, THE MEMORY CHIP CONFIGURATION IS NOT A FACTOR.
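A quick back-of-the-envelope check makes cleeve's point concrete. This is my own sketch, not anything from either card's spec sheet; the function name and numbers are just illustrative. Peak bandwidth depends only on total bus width and effective clock, so the channel split drops out of the math entirely:

```python
# Toy sketch: peak memory bandwidth from bus width and clock.
# The split into channels (32x4 vs 64x2) cancels out of the arithmetic.

def peak_bandwidth_gbs(channel_bits, num_channels, mem_clock_mhz, transfers_per_clock=2):
    """Peak theoretical bandwidth in GB/s (transfers_per_clock=2 for DDR)."""
    total_bytes = channel_bits * num_channels / 8            # bus width in bytes
    return total_bytes * mem_clock_mhz * transfers_per_clock / 1000

# Both hypothetical cards from the example above: 500 MHz GDDR3, 128-bit bus
print(peak_bandwidth_gbs(32, 4, 500))   # 32x4 -> 16.0 GB/s
print(peak_bandwidth_gbs(64, 2, 500))   # 64x2 -> 16.0 GB/s
```

Same total width, same clock, same peak number either way.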
 

cleeve

Illustrious
Where are you getting these figures from?

'Single and dual-channel' usually refers to system RAM on a motherboard, not video card RAM.

You mentioned that you're talking about the 7600 GT. A video card is not like a motherboard: it doesn't use single or dual channels. It simply has a memory bus... and that bus is 64 bits, or 128 bits, or whatever. It's not like a dual-channel motherboard, which can run single- or dual-channel depending on the memory configuration you use.

DDR, DDR2, and DDR3 at the same latencies and clockspeeds will perform identically. The advantage of DDR2 and DDR3 is that they clock higher.

SDR will perform half as fast as DDR at the same clockspeeds and latencies.
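A tiny sketch of that last point (my own illustration; the clock value is arbitrary): SDR transfers data once per clock cycle, DDR on both clock edges, so at equal clocks DDR moves twice the data.

```python
# Sketch: data rate in MT/s = clock x transfers per clock cycle.
def data_rate_mts(clock_mhz, transfers_per_clock):
    return clock_mhz * transfers_per_clock

sdr = data_rate_mts(500, 1)   # SDR: one transfer per cycle
ddr = data_rate_mts(500, 2)   # DDR: transfers on both clock edges
print(sdr, ddr)               # 500 1000 -> DDR doubles SDR at the same clock
```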
 
Ok, a question for you all: what's better on a graphics card, a 32x4 = 128-bit mem interface or a 64x2 = 128-bit mem interface?

Ya' know it'd be really be easier if people followed my directions instead of continuing down the same path.... :twisted: :tongue: :twisted:

http://www.beyond3d.com/reviews/ati/r520/index.php?p=05

Beyond just the addition of the arbitration logic, ATI are claiming a 4x efficiency in random access of the memory by virtue of the fact there are now 8 memory banks per DRAM on the R520 memory controller, as opposed to 4 banks per DRAM on R420, and also 8x32-bit memory channels rather than 4x64-bit channels. To get the maximum efficiency out of the memory bus, the memory channels should ideally carry enough data to max out the width of the channel and the burst length of the DRAM module - the wider the memory channel, the less likely this is to occur, so breaking down the memory channel into even smaller widths can increase the effective bandwidth utilisation. Up until now, all 256-bit memory busses have utilised 4-way crossbars, breaking the channels down to 4x64-bit busses; it's likely that previous designs were not able to go down any further due to the trace density issues mentioned before - the ring bus mechanism now allows this to occur and also reduces the trace density, increasing the clock speeds. Note: the 8x32-bit channels is why we see an odd memory layout on the R520 boards, with one memory chip at angles to the top of the chip; normally 64-bit busses would need to have 32-bit chips paired together.


Although the X1600 uses half the number of channels, not half the size.
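The efficiency argument in that quote can be sketched numerically. This is my own toy model, not ATI's actual controller behaviour: assume each random access has to occupy a whole channel-width x burst-length transfer, so any bytes fetched beyond the request are wasted, and narrower channels waste fewer of them on small requests.

```python
import math

# Toy model of channel utilisation for a single random access.
# Assumption (mine, for illustration): the minimum transfer granule is
# channel width x burst length, and bytes beyond the request are wasted.

def utilisation(request_bytes, channel_bits, burst_length=4):
    granule = channel_bits // 8 * burst_length      # min bytes per transfer
    transfers = math.ceil(request_bytes / granule)
    return request_bytes / (transfers * granule)

# A 40-byte random fetch:
print(utilisation(40, 32))   # 32-bit channel: 3 x 16-byte granules -> ~0.83
print(utilisation(40, 64))   # 64-bit channel: 2 x 32-byte granules -> 0.625
```

For requests that line up with the larger granule the two come out even; it's the odd-sized random accesses where the narrower channels pull ahead, which matches the "effective bandwidth utilisation" point in the article.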
 

cleeve

Illustrious
I stand corrected!

Still, I'd be interested in seeing some performance numbers comparing the two.

In theory, the increase in efficiency only occurs when texture memory only partially fills a larger crossbar...
 
In theory, the increase in efficiency only occurs when texture memory only partially fills a larger crossbar...

Yeah I know, I just happened to remember it from the R520 launch when all of us were, WTF is a Ring-Bus and HTF is it gonna matter?

It's all theory, and the difference is likely very minor; I doubt we'd know what's due to 8x32 vs 4x64 versus the ability to directly map to any memory on the grid, better memory compression, more dynamic load balancing, and HyperZ improvements. But all we're talking about is theory, just like shader length support, etc. So it's really a combination of things.

IMO it's like the texture size support each card has; for the VPUs, does 2048x2048 matter compared to 4096x4096 vs 8192x8192 vs 16K? Sure, theoretically, but in practice, how often are we talking about such a difference outside of professional apps?

Anywhoo, never woulda thunk it if I hadn't seen it myself.