Can a Gigabyte GA-G31M-ES2C motherboard support DDR3 Graphics cards?

February 14, 2010 2:01:21 PM

Hi guys, I need answers quick on:
Can a Gigabyte GA-G31M-ES2C motherboard support DDR3 graphics cards?
I can't seem to tell or find answers.
Thanx guys, ciao
February 14, 2010 4:47:23 PM

Yes. The only concern is your power supply specs. It should have a 6-pin PCIe connector and the appropriate amps on the 12v rail; for high-end cards, you'll need about 40 amps minimum, which may be the combined total across one to four 12v rails.
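As a quick sanity check, here's a minimal sketch of that arithmetic (the rail ratings and card minimum below are hypothetical examples; read the real figures off the PSU label and the card's spec sheet):

```python
# Rough 12 V capacity check for a PSU. Rail ratings below are
# hypothetical examples; read the real ones off your PSU's label.
RAIL_AMPS = [18.0, 18.0]      # e.g. two 12 V rails at 18 A each
CARD_MIN_AMPS = 26.0          # e.g. a 9800GT-class card's recommended minimum

combined_amps = sum(RAIL_AMPS)
combined_watts = combined_amps * 12.0   # P = V * I

# Note: multi-rail PSUs often cap the combined 12 V wattage below the
# straight sum of the rails, so also check the label's combined figure.
print(f"Combined 12 V: {combined_amps:.0f} A (~{combined_watts:.0f} W)")
print("Looks sufficient" if combined_amps >= CARD_MIN_AMPS else "Too weak")
```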
February 14, 2010 6:31:47 PM

o1die said:
Yes. The only concern is your power supply specs. It should have a 6-pin PCIe connector and the appropriate amps on the 12v rail; for high-end cards, you'll need about 40 amps minimum, which may be the combined total across one to four 12v rails.


:hello: Thank you very much :wahoo: I really appreciate it... I have a Huntkey 550W Greenhouse power supply that should be able to power my incoming 9800GT 1GB DDR3 graphics card :bounce: 
Please tell me how you can tell if it can support DDR3 cards?
Thanx again :D 
bilbat
February 14, 2010 8:52:26 PM

The graphics card's memory is just that: local to the GPU itself. It can be DDR3, GDDR3, or GDDR5, and it's also 'wide': while the CPU accesses main RAM in 'words' 32 or 64 bits across, a GPU typically has an access width between 128 and 256 bits. That local memory is accessed only by the actual graphics cores (usually massively parallel: many, many cores operating simultaneously).

A second form of memory access also takes place, through the PCIe bus, to the motherboard's main memory (which can be either DDR2 or DDR3, depending on the MOBO), using an area 'mapped' for 'shared access'; that's how the main CPU 'talks' to the GPU(s). So the card's onboard memory type never has to match the motherboard's RAM type.

The 'shared' area is not unique to graphics cards; many I/O devices use 'shared' memory to communicate with the system, it's just that the 'real estate' consumed by the GPU is usually the biggest chunk. It's also the reason that 32-bit operating systems, which can only address (and therefore access) four gigabytes (2 to the thirty-second power is 4,294,967,296), mostly 'show' quite a bit less than four gig usable (usually around 3.2G): those memory-mapped 'chunks' for device I/O are subtracted from the total...
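As a back-of-the-envelope illustration of that last point, here's a sketch of the arithmetic (the reservation sizes are made-up examples; real figures vary by board and installed devices):

```python
# Why a 32-bit OS "sees" less than 4 GiB: sketch with made-up
# reservation sizes (real values vary by chipset and devices).
ADDRESS_SPACE = 2**32                     # 4,294,967,296 bytes = 4 GiB

# Hypothetical memory-mapped I/O reservations just below 4 GiB:
mmio_reservations = {
    "graphics aperture": 512 * 2**20,     # e.g. 512 MiB for the GPU
    "other device I/O":  256 * 2**20,     # chipset, PCI devices, etc.
}

usable = ADDRESS_SPACE - sum(mmio_reservations.values())
print(f"Addressable: {ADDRESS_SPACE / 2**30:.2f} GiB")
print(f"Usable RAM:  {usable / 2**30:.2f} GiB")   # ~3.25 GiB here
```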
February 15, 2010 2:41:40 AM

bilbat said:
The graphics card's memory is just that: local to the GPU itself. It can be DDR3, GDDR3, or GDDR5, and it's also 'wide': while the CPU accesses main RAM in 'words' 32 or 64 bits across…


Thanx man :sol: 
Are you saying that any board can support any graphics card? :??: 
bilbat
February 15, 2010 2:59:22 PM

Well, that's a bit broad, but basically, yes. The main requirements are simply physical: most modern graphics cards have a physical PCIe x16 'card edge', so they need a physical x16 MOBO slot, but almost all will run with no degradation of performance in a x8 electrical slot. (The difference: 'physical' denotes how many 'fingers' or contacts are present in the slot, and thus how long the slot is; 'electrical' denotes how many of those contacts are actually 'apportioned' from the northbridge's, southbridge's, or CPU's collection of PCIe 'lanes'.) For ATIs, you're good in a x8 electrical slot for anything up to the 57xx series; for a 58xx, or a dual-GPU card, you might start pushing the capacity of the slot and really want a x16. For nVidias, I'm pretty sure x8 is OK up to the GTX 280; for a 285, or a twin-GPU 295 (or a Quadro, if you've got a small fortune to spend), again, x16 might be desirable...
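To put rough numbers on the x8 vs. x16 point, here's a quick bandwidth sketch (the per-lane rates are the standard per-direction figures for the PCIe generations of that era):

```python
# Raw PCIe slot bandwidth by electrical width (sketch; standard
# per-lane, per-direction rates for PCIe 1.x and 2.0).
PER_LANE_MB_S = {"PCIe 1.x": 250, "PCIe 2.0": 500}

for gen, rate in PER_LANE_MB_S.items():
    for lanes in (8, 16):
        print(f"{gen} x{lanes}: ~{rate * lanes / 1000:.1f} GB/s per direction")
```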

The second issue tying vidcard to MOBO is whether you intend a twin-vidcard setup for acceleration of a single monitor; if all you intend a couple of cards for is driving several displays, the MOBO doesn't matter. The two manufacturers have differing methods of acceleration: for ATIs, it's CrossFire; for nVidias, it's SLI. Up until the 1156/1366 platform, almost all motherboards would do one or the other, but not both: if you wanted to do SLI, you typically needed an AMD or nVidia chipset on board; if Xfire, you wanted an Intel. Now, with the new platforms, either is available!
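For the chipset-vs-scheme point above, here's a small lookup sketch (the mapping is a representative early-2010 example, not an exhaustive or guaranteed list):

```python
# Which multi-GPU scheme a board generally supports, by chipset.
# Sketch only: a representative early-2010 mapping, not exhaustive.
MULTI_GPU = {
    "Intel P35/P45/X48": {"CrossFire"},
    "nVidia nForce 750i/780i": {"SLI"},
    "Intel P55/X58 (1156/1366)": {"CrossFire", "SLI"},
}

def supports(chipset: str, scheme: str) -> bool:
    """True if boards on this chipset generally support the scheme."""
    return scheme in MULTI_GPU.get(chipset, set())

print(supports("Intel P35/P45/X48", "SLI"))           # False
print(supports("Intel P55/X58 (1156/1366)", "SLI"))   # True
```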
February 15, 2010 4:51:36 PM

Awesome, thanx man!