MSI Calls Out Gigabyte for "Not True PCIe 3.0"

September 8, 2011 9:13:35 PM

WOW, what a big slap in the face of Gigabyte
Score
33
Anonymous
September 8, 2011 9:13:51 PM

Ouch!
Score
14
September 8, 2011 9:20:27 PM

oh shizz just got real?
Score
24
September 8, 2011 9:21:53 PM

damn, I was all set to buy a Gigabyte board. Now I may have to rethink my plans...
Score
16
September 8, 2011 9:23:03 PM

damnnnnnnn giga got raped
Score
27
September 8, 2011 9:29:59 PM

never seen this type of pwnage before
Score
28
September 8, 2011 9:31:14 PM

Either way, I've never owned a Gigabyte product; I've always felt they're third-tier products at a second-tier price. I've been running an Asus board for years and it hasn't caused me problems yet!
Score
-17
September 8, 2011 9:31:36 PM

BAHAHA, cheers MSI. =D
Score
7
September 8, 2011 9:38:09 PM

ZOINKS!!!
Score
-8
September 8, 2011 9:44:49 PM

wow....this coming from a company with a laughable track record...

So, any actual proof of MSI's claims....or is all the "proof" supplied by MSI? Also, did anyone else notice that the pics show different motherboards? The first comparison shows the Gigabyte GA-Z68X-UD7-B3, while the 5th pic shows the Gigabyte GA-P67A-UD4-B3....and on Gigabyte's website, there's no claim of PCIe 3.0 compatibility. In fact, for the GA-P67A-UD4-B3, Gigabyte actually states that all PCI Express slots conform to Gen2 specs....surprisingly enough, the same notice is posted on the spec page for the GA-Z68X-UD7-B3...

So, exactly where is Gigabyte claiming these 2 boards support PCIe 3.0 when they list them as being PCIe Gen2 on the spec pages for both products? From what I can find, it seems MSI is simply grasping at straws and using fraud as a marketing tool. Gotta love the purely false claim on the last page of their marketing scheme..."Only MSI Has True PCI Express GEN3"....so, I guess the PCIe 3.0 controllers on newer ASRock boards are just a figment of my imagination....

So....where is this software that MSI used to "test" these slots and determine that Gigabyte is using PCIe Gen1 slots on the GA-P67A-UD4-B3... I'd like to publicly bitch slap MSI by proving the results are false.
Score
16
September 8, 2011 9:45:36 PM

+1 to MSI for the giant "FAIL" stamp on the bottom of the last slide.

If you're gonna throw down with a competitor you gotta go all out, and that's about as far as you can go in a professional PowerPoint presentation.

Next up, all members of Congress must wear a FAIL stamp on their forehead, imposed by the American People, until they balance the national budget.
Score
25
Anonymous
September 8, 2011 9:46:28 PM

So painful :( 
Score
-4
September 8, 2011 9:59:45 PM

otacon72 said:
Sounds like a Gigabyte employee...lmao.

Actually it sounds like someone who has had every MSI board they've owned fail (like me!). Except they are a little more verbose about it.

I consider Gigabyte to be a solid choice for motherboards when you're on a budget ($50 mobos that don't suck!). If I have >$100 to spend on a mobo, then I'll look at Asus. They're better, but with that comes extra cost.
Neither brand has failed on me, save for a 5-year-old Asus that I thoroughly abused and overclocked. Frankly, I'm surprised it lasted that long.
Score
12
September 8, 2011 10:05:49 PM

ikefu said:
+1 to MSI for the giant "FAIL" stamp on the bottom of the last slide.

If you're gonna throw down with a competitor you gotta go all out, and that's about as far as you can go in a professional PowerPoint presentation.

Next up, all members of Congress must wear a FAIL stamp on their forehead, imposed by the American People, until they balance the national budget.


Got some benchmarks to show that they're wrong?
Score
-4
September 8, 2011 10:06:33 PM

misquoted, sorry ikefu, meant to quote sykozis
Score
-8
September 8, 2011 10:10:13 PM

LoL Corporate trolling
Score
9
September 8, 2011 10:10:56 PM

And... as far as we know, PCIe 3 is still just an Intel gimmick that won't amount to sh!t in the real world... I guess we'll find out sooner or later. Until all the pieces of the PCIe 3 puzzle (CPU + board + GPU) come together, it's wasted e-ink.
Score
9
Anonymous
September 8, 2011 10:12:53 PM

Pwned, PowerPoint style... like..... we need more whistle-blowers to sound out all of the monkeys, not just in tech but globally.
Score
-6
September 8, 2011 10:17:48 PM

Gigabyte just got Gigabytch-slapped
Score
8
September 8, 2011 10:18:43 PM

What bothers me the most is that they show Gigabyte's boards stepping back to PCIe 1 when switching to dual x8 slot config. I took a look at my Asus board, and how the video cards connect, and they're connecting at PCIe 2 speeds. Stepping all the way back from PCIe 2 x16 to PCIe 1 x8 is a huge drop in bandwidth, and a lot of wasted resources.
Score
2
September 8, 2011 10:22:39 PM

clonazepam said:
And... as far as we know, PCIe 3 is still just an Intel gimmick that won't amount to sh!t in the real world... I guess we'll find out sooner or later. Until all the pieces of the PCIe 3 puzzle (CPU + board + GPU) come together, it's wasted e-ink.


What would be most useful would be a bridge chip that takes the PCIe 3 x16 interface from the CPU and bridges it to 4 PCIe 2 x8 or 2 PCIe 2 x16 slots, for CrossFireX or 3-way SLI capability without a major loss of performance (rough numbers sketched below).
Score
5
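For context on the bandwidth math behind the two comments above, here is a minimal back-of-the-envelope sketch (not from any poster; the per-lane figures are approximate effective data rates for each PCIe generation after line encoding). It shows why dropping from Gen2 x16 to Gen1 x8 is roughly a 4x cut, and why a hypothetical Gen3-to-Gen2 bridge could feed two Gen2 x16 slots from one Gen3 x16 uplink without starving them.

```python
# Back-of-the-envelope PCIe bandwidth math.
# Effective GB/s per lane, one direction, after line encoding:
# Gen1/Gen2 use 8b/10b, Gen3 uses 128b/130b.
GB_PER_LANE = {1: 0.25, 2: 0.5, 3: 0.985}

def slot_bw(gen, lanes):
    """Approximate one-direction bandwidth of a PCIe slot in GB/s."""
    return GB_PER_LANE[gen] * lanes

# The drop described a couple of comments up: Gen2 x16 down to Gen1 x8.
print(slot_bw(2, 16))      # ~8.0 GB/s
print(slot_bw(1, 8))       # ~2.0 GB/s -> a 4x cut in bandwidth

# The hypothetical bridge idea: one Gen3 x16 uplink vs. two Gen2 x16 slots.
print(slot_bw(3, 16))      # ~15.8 GB/s upstream
print(2 * slot_bw(2, 16))  # ~16.0 GB/s downstream -> roughly a wash
```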
September 8, 2011 10:23:36 PM

I've been reading Wikipedia, and it said that the Ivy Bridge and Panther Point chipsets will have native Intel PCIe 3.0 support, so there you have it! These new Intel chipsets will not come out for approximately another 6 to 7 months.
Score
3
September 8, 2011 10:28:56 PM

Also, Z77 motherboards will be part of the Panther Point chipset family and will be paired with Ivy Bridge processors. As I understand the write-up on Wikipedia, you will also be able to install an Ivy Bridge CPU into a Cougar Point motherboard, e.g. Z68, P8P67, etc.
Score
1
September 8, 2011 10:33:48 PM

To tell the truth, I was thinking of getting a G1.Sniper, but it costs around $500, a bit above or below that figure.
Score
0
September 8, 2011 10:35:08 PM

That is going to leave a mark!
Score
0
September 8, 2011 10:35:41 PM

I've always liked Gigabyte boards for builds, but good point, MSI, and the twin-fan MSI GTX 560 Ti I have works great.
I'll have to look hard at MSI on my next build, if I ever do one again.
Score
2
September 8, 2011 10:40:26 PM

otacon72 said:
Sounds like a Gigabyte employee...lmao.

No, it sounds like an informed and objective individual that actually looked at facts and the context of a marketing scheme between two (in this case) competitors. You don't seriously think that Gigabyte would falsify such basic data as technical specifications, do you? And you don't seriously believe that MSI constructed an objective campaign to discredit one of their most viable competitors, do you?
Score
0
September 8, 2011 10:40:46 PM

Smile on my face after a long day. :) 
Score
-2
September 8, 2011 10:49:08 PM

dgingeri said:
What would be most useful would be a bridge chip that takes the PCIe 3 x16 interface from the CPU and bridges it to 4 PCIe 2 x8 or 2 PCIe 2 x16 slots, for CrossFireX or 3-way SLI capability without a major loss of performance.


It's confusing. I'd rather have a specific board for a specific setup. Once you try to make one be everything for everyone, you end up paying a premium for features you won't use.
Score
-2
September 8, 2011 10:57:41 PM

sykozis said:
wow....this coming from a company with a laughable track record...

So, any actual proof of MSI's claims....or is all the "proof" supplied by MSI? Also, did anyone else notice that the pics show different motherboards? The first comparison shows the Gigabyte GA-Z68X-UD7-B3, while the 5th pic shows the Gigabyte GA-P67A-UD4-B3....and on Gigabyte's website, there's no claim of PCIe 3.0 compatibility. In fact, for the GA-P67A-UD4-B3, Gigabyte actually states that all PCI Express slots conform to Gen2 specs....surprisingly enough, the same notice is posted on the spec page for the GA-Z68X-UD7-B3...

So, exactly where is Gigabyte claiming these 2 boards support PCIe 3.0 when they list them as being PCIe Gen2 on the spec pages for both products? From what I can find, it seems MSI is simply grasping at straws and using fraud as a marketing tool. Gotta love the purely false claim on the last page of their marketing scheme..."Only MSI Has True PCI Express GEN3"....so, I guess the PCIe 3.0 controllers on newer ASRock boards are just a figment of my imagination....

So....where is this software that MSI used to "test" these slots and determine that Gigabyte is using PCIe Gen1 slots on the GA-P67A-UD4-B3... I'd like to publicly bitch slap MSI by proving the results are false.

Before you even post a comment, maybe you should do some research for yourself.

Before you even think that I'm an MSI fan, I'm not. The last time I bought an MSI motherboard was over six years ago. It had problems running in SLI from Day 1. The micro-stuttering was just horrible.

MSI is disputing GIGABYTE's claim of PCIe 3.0 Ready Motherboards in GIGABYTE's news release from August 8, 2011. Read it for yourself here:

http://www.gigabyte.com/press-center/news-page.aspx?nid...
Score
5
September 8, 2011 11:22:11 PM

Someone at Asus is getting yelled at for missing this, lol.
Score
8
September 8, 2011 11:31:32 PM

I knew Gigabyte wasn't the full quid, and now we've got proof!
Score
3
Anonymous
September 8, 2011 11:34:20 PM

Gigabyte supports ONLY 16 GB/s, ahahahah, how ancient is that... seriously though, do we even utilize 1/4 of that?
Score
-7
September 9, 2011 12:08:41 AM

Honestly it just looks like whoever typed up the Gigabyte news thingy made an error. I browsed through a few of the boards and none of them actually *say* that they're gen3 compatible. The spec pages specifically state:
Quote:
All PCI Express slots conform to PCI Express 2.0 standard.

And on none of the pages other than the Sniper 2 could I find the little logo and associated description in the features tab. It's possible that they quickly took them all off, but I somehow doubt that.
Score
2
September 9, 2011 12:16:34 AM

Gigabyte is behind the times; they used to be my favorite. They haven't even moved on to a UEFI BIOS yet. ASRock is my new favorite.
Score
2
September 9, 2011 1:24:21 AM

Well, I just had a look at the Gigabyte website, and besides the G1.Sniper 2, none of them are listed as being PCIe 3.0 ready.
Score
2
September 9, 2011 1:30:47 AM

Is this a joke between companies? Anyway, true or false, this is funny.
Score
-3
September 9, 2011 1:38:09 AM

For better or worse, consoles are here to stay... If the next-gen consoles don't need this kind of bandwidth, will anyone?
Score
-5
September 9, 2011 1:57:39 AM

Never went MSI before. I had Biostar before, but every time I flashed the BIOS it failed on me, and every time they expected me to pay for shipping. Besides my one Abit board, Gigabyte and ASUS have NEVER failed me once, and I've bought roughly 10 boards from them combined.
Score
1
September 9, 2011 2:43:26 AM

I was gonna buy MSI for my next computer anyway, thanks to Overclock Genie. I'm HOPING my computer holds out until Spring 2012 for this. I do want PCIe 3.0 components in it as well. Hopefully, I'm giving manufacturers enough time to screw up the first run and do a better job on the second.
Score
-3
September 9, 2011 2:44:08 AM

internetlad said:
Either way, I've never owned a Gigabyte product; I've always felt they're third-tier products at a second-tier price. I've been running an Asus board for years and it hasn't caused me problems yet!


FYI, the first (and, until now, last) ASUS board I tried was based on the 845D chipset, the P4-XP-X series. It had a particularly annoying issue: if you connected an IDE device and powered up the computer, it wouldn't detect any IDE devices. If you disconnected some of the devices and reconnected them, the board recognized them. And I was one of the lucky ones out of the many who tried ASUS that year and had various problems. This caused ASUS to be out of my country's market for quite some time; they only returned recently. On the other hand, I've used three Gigabyte boards with no issues. The 955X Royal is 6 years old and still going strong.

Don't get me wrong, I respect ASUS and their revolutionary board designs. I was thinking of trying my luck with them again. But it'll take a lot to believe they're as reliable as Gigabyte.
Score
-3
September 9, 2011 3:24:59 AM

ZOINKS!!!
Score
-3
September 9, 2011 3:28:19 AM

So the fastest graphics cards use up a whole 8-9 of the 16 lanes available to them. That means the fastest cards will only need about 4 lanes (once the cards themselves support PCIe 3). It really won't make a hill of beans' difference for most people, but it will make SLI and CrossFire much easier to manage for quad+ graphics configurations (quick math below). Not to mention supporting more lanes will be cool... but seemingly unnecessary in the immediate future.
Score
1
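To put rough numbers on the multi-GPU point above, here is a small hedged sketch. The x16 / x8-x8 / x8-x4-x4 / x4-x4-x4-x4 splits are a typical Sandy Bridge-era layout assumed for illustration (actual boards vary), and the per-lane rates are approximate effective values:

```python
# Typical ways 16 CPU lanes get split among GPUs (assumed layout; boards vary).
SPLITS = {1: [16], 2: [8, 8], 3: [8, 4, 4], 4: [4, 4, 4, 4]}
LANE_GBPS = {2: 0.5, 3: 0.985}  # ~GB/s per lane, one direction, after encoding

def per_card_bandwidth(num_gpus, gen):
    """Approximate GB/s each card gets for a given GPU count and PCIe gen."""
    return [round(lanes * LANE_GBPS[gen], 2) for lanes in SPLITS[num_gpus]]

for n in (2, 4):
    print(n, "GPUs @ Gen2:", per_card_bandwidth(n, 2))
    print(n, "GPUs @ Gen3:", per_card_bandwidth(n, 3))
# A Gen3 x4 slice (~3.9 GB/s) is nearly a full Gen2 x8 (4.0 GB/s), which is why
# quad-GPU configurations stand to gain the most from the new generation.
```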
September 9, 2011 3:31:05 AM

I'll get my popcorn ready!
Score
-1
September 9, 2011 4:07:13 AM


> It really won't make a hill of beans difference for most people ....


I disagree, for a few fundamental reasons.

The PCI-E 3.0 spec published last November is very clear about increasing the clock rate of each PCI-E 3.0 lane to 8 GHz, and replacing the aging 8b/10b encoding with a 128b/130b "jumbo frame" at the bus level.

This means that each PCI-E 3.0 lane will support a bandwidth of 1.0 Gigabyte per second in each direction -- hence, a MAX HEADROOM of 32 GBps through an x16 Gen3 edge connector (i.e. 16 GBps in each direction).

Where I anticipate this making a bigger impact is on high-performance storage subsystems that begin with an x16 Gen3 edge connector, and "fan out" to a multitude of very fast SSDs e.g. with multi-lane connectors like the current SFF-8087 and/or variations on that theme now being considered by the PCI-E SIG.

The fastest 6G SSDs are already bumping against the ceiling of the current SATA-III standard of 600 MB/second (6 GHz / 10).

The Gen3 standard adds only 2 framing bits for every 128 bits (16 bytes) of data, in effect removing almost all of the ~20% transmission overhead that 8b/10b encoding imposed on the PCI-E bus.

So, look at the approaching horizon and look forward to larger and larger SSDs with a standard interface speed of 8 GHz, instead of 6 GHz.

aka SATA-IV perhaps?

Hopefully by that time, there will be an option in both the SATA and SAS standards to extend the 128b/130b "jumbo frame" into the cable transmission protocol, and thus also into add-on RAID controller cards.

This could be easily implemented by storage manufacturers with a simple jumper, as is now the case with Western Digital HDDs that require a jumper to override the factory default and downgrade the interface speed to 150 MBps.

At the other end of the data cables, hopefully a simple BIOS setting will enable 128b/130b "jumbo frames" over SATA and SAS transmission cables, and/or an on-board jumper will suffice as an interim measure.

What these 2 changes to the storage ecosystem accomplish, in effect, is a logical / topological extension of the PCI-E 3.0 bus standard outwards to all storage subsystems, on an "as needed" basis, with perhaps at most an engineering limit to the length of such data transmission cables -- not unlike the differences among CAT-5, CAT-5e and CAT-6 Ethernet cables.


I hope this helps.


MRFS
Score
4
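MRFS's throughput figures above can be sanity-checked with a few lines of arithmetic. A minimal sketch, assuming the published transfer rates (5 GT/s for Gen2, 8 GT/s for Gen3, 6 GT/s for SATA-III) and the encodings discussed in the post (8b/10b vs. 128b/130b); the "SATA-IV" line is purely hypothetical, echoing the post's speculation:

```python
# Effective payload bandwidth of a serial link after line encoding:
#   GB/s = (GT/s) * (payload_bits / encoded_bits) / 8 bits per byte
def effective_gb_per_s(gt_per_s, payload_bits, encoded_bits):
    return gt_per_s * (payload_bits / encoded_bits) / 8

gen2_lane = effective_gb_per_s(5.0, 8, 10)      # 0.5    GB/s per lane
gen3_lane = effective_gb_per_s(8.0, 128, 130)   # ~0.985 GB/s per lane ("1.0 GB/s")
print(gen2_lane, gen3_lane)

print(16 * gen3_lane)  # ~15.75 GB/s each direction through an x16 Gen3 edge connector

# SATA-III today: 6 GT/s with 8b/10b -> the 600 MB/s ceiling mentioned in the post.
print(effective_gb_per_s(6.0, 8, 10) * 1000)     # 600 MB/s
# A purely hypothetical 8 GT/s, 128b/130b "SATA-IV" link:
print(effective_gb_per_s(8.0, 128, 130) * 1000)  # ~985 MB/s
```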
September 9, 2011 4:10:54 AM

sykozis said:
wow....this coming from a company with a laughable track record...

So, any actual proof of MSI's claims....or is all the "proof" supplied by MSI? Also, did anyone else notice that the pics show different motherboards? The first comparison shows the Gigabyte GA-Z68X-UD7-B3, while the 5th pic shows the Gigabyte GA-P67A-UD4-B3....and on Gigabyte's website, there's no claim of PCIe 3.0 compatibility. In fact, for the GA-P67A-UD4-B3, Gigabyte actually states that all PCI Express slots conform to Gen2 specs....surprisingly enough, the same notice is posted on the spec page for the GA-Z68X-UD7-B3...

So, exactly where is Gigabyte claiming these 2 boards support PCIe 3.0 when they list them as being PCIe Gen2 on the spec pages for both products? From what I can find, it seems MSI is simply grasping at straws and using fraud as a marketing tool. Gotta love the purely false claim on the last page of their marketing scheme..."Only MSI Has True PCI Express GEN3"....so, I guess the PCIe 3.0 controllers on newer ASRock boards are just a figment of my imagination....

So....where is this software that MSI used to "test" these slots and determine that Gigabyte is using PCIe Gen1 slots on the GA-P67A-UD4-B3... I'd like to publicly bitch slap MSI by proving the results are false.


READ IT BEFORE MAKING ANY CRITICISM, GUYS!
Score
1
September 9, 2011 4:42:15 AM

MRFS said:
> It really won't make a hill of beans difference for most people ....

I disagree, for a few fundamental reasons.

The PCI-E 3.0 spec published last November is very clear about increasing the clock rate of each PCI-E 3.0 lane to 8 GHz, and replacing the aging 8b/10b encoding with a 128b/130b "jumbo frame" at the bus level.

This means that each PCI-E 3.0 lane will support a bandwidth of 1.0 Gigabyte per second in each direction -- hence, a MAX HEADROOM of 32 GBps through an x16 Gen3 edge connector (i.e. 16 GBps in each direction).

Where I anticipate this making a bigger impact is on high-performance storage subsystems that begin with an x16 Gen3 edge connector, and "fan out" to a multitude of very fast SSDs e.g. with multi-lane connectors like the current SFF-8087 and/or variations on that theme now being considered by the PCI-E SIG.

The fastest 6G SSDs are already bumping against the ceiling of the current SATA-III standard of 600 MB/second (6 GHz / 10).

The Gen3 standard adds only 2 framing bits for every 128 bits (16 bytes) of data, in effect removing almost all of the ~20% transmission overhead that 8b/10b encoding imposed on the PCI-E bus.

So, look at the approaching horizon and look forward to larger and larger SSDs with a standard interface speed of 8 GHz, instead of 6 GHz.

aka SATA-IV perhaps?

Hopefully by that time, there will be an option in both the SATA and SAS standards to extend the 128b/130b "jumbo frame" into the cable transmission protocol, and thus also into add-on RAID controller cards.

This could be easily implemented by storage manufacturers with a simple jumper, as is now the case with Western Digital HDDs that require a jumper to override the factory default and downgrade the interface speed to 150 MBps.

At the other end of the data cables, hopefully a simple BIOS setting will enable 128b/130b "jumbo frames" over SATA and SAS transmission cables, and/or an on-board jumper will suffice as an interim measure.

What these 2 changes to the storage ecosystem accomplish, in effect, is a logical / topological extension of the PCI-E 3.0 bus standard outwards to all storage subsystems, on an "as needed" basis, with perhaps at most an engineering limit to the length of such data transmission cables -- not unlike the differences among CAT-5, CAT-5e and CAT-6 Ethernet cables.

I hope this helps.

MRFS


Yeah, sure, but that's completely different from the intended or implied usage from the motherboard manufacturers. They're hyping it all for gaming and graphics cards. We know SSDs don't really enhance gaming to a "must have" level. The storage options all sound great for other arenas, but I think it's out of context for this article and this discussion / comments.

I very much respect your contribution(s) to the community, so don't get me wrong. I enjoyed reading it. /peace
Score
1