P67 Motherboard Roundup: Nine $150-200 Boards
Tags:
- Performance
- Motherboards
- Round-Up
- Product
Last response: in Reviews comments
Improved per-clock performance and higher achievable frequencies are sure to put Intel's latest K-series CPUs at the top of many builders' wish lists, but they'll still need a new socket to put them in. We test nine enthusiast-oriented LGA 1155 motherboards.
Tamz_msc
January 10, 2011 4:25:03 AM
reprotected
January 10, 2011 4:27:55 AM
rantsky
January 10, 2011 4:29:16 AM
Tamz_msc
January 10, 2011 4:30:17 AM
rmse17
January 10, 2011 4:31:08 AM
Thanks for the prompt review of the boards! I would like to see any differences in quality of audio and networking components. For example, what chipsets are used for Audio in each board, how that affects sound quality. Same thing for network, which chipset is used for networking, and bandwidth benchmarks. If you guys make part 2 to the review, it would be nice to see those features, as I think that would be one more way these boards would differentiate themselves.
Score
3
VVV850
January 10, 2011 4:50:11 AM
flabbergasted
January 10, 2011 5:25:08 AM
VVV850
January 10, 2011 5:25:14 AM
stasdm
January 10, 2011 5:47:24 AM
I don't see any board worth spending money on.
1. SLI "support". I don't understand why the end user has to pay for a mythical SLI "certification" (all of Intel's latest chips support SLI by definition) and for an SLI bridge bundled with the board (at least 75% of end users will never need one). The bridge should come with NVIDIA cards, the same as with AMD ones. Also, in an x8/x8 PCIe configuration nearly all NVIDIA cards (except low-end ones) lose at least 12% of their performance; with top cards that is about $100 spent for nothing (AMD cards would not see the difference). So if those boards are sold as SLI-"certified", they should, in the worst case, be equipped with NVIDIA's NF200 chip (though I would not recommend buying boards with this PCIe 1.1 bridge). Since even NVIDIA GF110 cards really need less than 1 GB/s of bandwidth (all other NVIDIA and AMD cards less than 0.8 GB/s), and secondary cards in SLI/CrossFire use no more than a quarter of that, an ordinary PCIe 2.0 switch (costing less than the money thrown away on x8/x8 SLI) could nicely support three graphics-only x16 slots and a fully functional x8 slot, and still provide enough bandwidth for one PCIe 2.0 x4 slot (or four x1 slots/devices).
2. I don't understand the author's euphoria over the mass use of Marvell "SATA 6G" chips. A PCIe x1 chip cannot be "SATA 6G" by definition, as it would never be able to provide more than about 470 MB/s (far from the standard 600 MB/s), so I'd suggest labeling them 3G+ or 6G-. As shown in the section above, there is enough bandwidth for a real 6G solution (a PCIe x8 LSI SAS 2008 or x4 LSI SAS 2004). Yes, it would be a bit more expensive, but I see no reason for palliative solutions on $200+ motherboards.
Score
-4
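stasdm's "3G+ or 6G-" point rests on simple link arithmetic: a SATA 6Gb/s controller hanging off a single PCIe 2.0 lane cannot feed the port at full rate. A minimal sketch of that math, assuming the nominal line rates and 8b/10b coding (protocol overhead pushes real-world numbers somewhat lower, hence figures like 470 MB/s):

```python
# Back-of-the-envelope check (assumed link parameters, not vendor specs):
# PCIe 2.0 runs at 5 GT/s per lane and SATA 6Gb/s at 6 GT/s, both with
# 8b/10b encoding (10 line bits carry 8 payload bits).

def payload_mb_per_s(gt_per_s: float) -> float:
    """Usable payload bandwidth, MB/s per direction, for an 8b/10b serial link."""
    return gt_per_s * 1e9 * 8 / 10 / 8 / 1e6  # line bits -> payload bits -> bytes -> MB

pcie2_x1 = payload_mb_per_s(5.0)  # single PCIe 2.0 lane
sata_6g = payload_mb_per_s(6.0)   # one SATA 6Gb/s port

print(f"PCIe 2.0 x1: {pcie2_x1:.0f} MB/s")  # 500 MB/s
print(f"SATA 6Gb/s:  {sata_6g:.0f} MB/s")   # 600 MB/s
print("x1 uplink bottlenecks the port:", pcie2_x1 < sata_6g)
```

So even before protocol overhead, a one-lane controller tops out below what a single 6Gb/s device can theoretically deliver.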
Anonymous
January 10, 2011 6:37:35 AM
stasdm
January 10, 2011 7:12:16 AM
To rmse17
I don't think they use anything better than the native southbridge or a Realtek controller. Adding a better chip would add some cost to the board. And anyway, these boards are not positioned as "hardcore gamer" boards - more like low-to-mid range (though with proper design, LGA 1155 boards could be excellent gaming boards).
Score
0
Vatharian
January 10, 2011 7:48:42 AM
@Lutfij - they're over the price limit for this article.
@stasdm - SLI is a trick NVIDIA pulls to make money from every mobo sold with SLI support. Mythical or not, Intel's PCHs DO NOT support SLI by default - they do not support SLI at all. It works purely by means of NVIDIA's driver and a BIOS-embedded string. What makes it possible is that PCI Express has enough bandwidth to sustain two cards - which wasn't always the case with the lower bandwidth of PCIe 1.0 on previous-gen chipsets. As for SATA 6G: most of the crowd won't use even half of the SATA ports at all, and if they do, they'll probably treat them as just additional SATA ports with no regard to their speed. Enthusiasts will attach their SSDs to PCIe in extreme cases, or at least to Intel's own controller, which handles SATA inside the PCH without using PCIe. External controllers are out of scope for 99% of the crowd, mind you. For example: show me a PCIe-based 4x SATA (or SAS, for availability's sake) controller with RAID 5 support below $300. Any? Don't think so, save one crappy LSI. It's the budget side, man. If you want top-of-the-line, get a server board for storage and a second, gaming or performance, rig - but that's not what this article is about.
@Author - Thank you for a great comparison. Too bad it ended by counting what does not work on the boards. It seems that at the moment Intel and ASUS have the most mature and reliable products. However, I'd still wait for second-gen P67 boards (in Q2?) before upgrading. Still wondering what to do with my 1366 rig.
Score
0
stasdm
January 10, 2011 8:30:58 AM
@Vatharian
1. Since SLI is a software-only solution (the BIOS string is just a trick), it is supported by default on Intel (and on AMD too).
2. Even PCIe 1.1 bandwidth is more than enough to support four-way SLI/CrossFire. They use a few administrative features from 2.0 now, but that's all. The difference between AMD and NVIDIA is that for at least two generations already, AMD has used the standard PCIe protocol, but at a quarter of the standard speed. NVIDIA used an even slower speed on pre-GF110 chips (that's why they decided not to release 512-core GF100 cards - they would not have been faster than the "abridged" version), with a non-standard "graphics PCIe" protocol (basically PCIe, but without parity control, using the parity bits for data, without distributed clock support, and with some other "speed-up" tricks). On an x8 bus their cards have to return to the standard PCIe protocol and automatically lose the "no parity" part of the bandwidth.
Score
0
stasdm
January 10, 2011 8:36:37 AM
belardo
January 10, 2011 9:11:33 AM
Good article... But these are still 1st-gen boards, and $150-200 for feature sets that are the same as an AMD-chipset board's is not impressive. Obviously the new CPUs are usually faster than AMD's, of course, which helps relegate AMD to low-end to mid-range computer systems.
Still not impressed with Intel locking down the flexibility of their boards & CPUs. But that's Intel for you. Sandy Bridge would be great for my video encoding... but it most likely won't work for me... blah blah.
Score
-1
feeddagoat
January 10, 2011 9:36:07 AM
aaron88_7
January 10, 2011 9:50:28 AM
The Deluxe version of that Asus board comes with a USB 3.0 drive bay, but I was a little confused as to why one would want that if their case already has front-facing USB 3.0 ports like the one I got does.
I'll have to look at the connections again, but can you use that same cable to plug into the board on the inside, or do those case USB 3.0 ports have to be connected to the rear of the board? Personally I think the drive bay with just two USB ports looks kind of lame, and I'd much, much rather use the ports on my case... even if that means running a cable out the back of the case (nobody looks at the back anyway).
Score
0
Vatharian
January 10, 2011 10:03:40 AM
@stasdm - Dual-core Atoms are strangely rare. To be honest, their performance is an insult considering the price you pay for them. For a compact PC in an ITX case, sure, but SFF is pricey - comparable, I'd say, to a far faster normal microATX build. Also a very simple, common situation: I want to use a budget mobo, a decent CPU (not the slowest one), and I'd like to have a TV tuner card and some SB Audigy2 lying around (or an ASUS Xonar), still better than the dreaded Realtek. Maybe some used PCIe graphics, so my daughter can play The Sims 3 without a problem. So there is a problem, because I do not know of any non-SFF Atom board. It's either PCIe or PCI, and only a single slot. If you want a typing machine, go buy a used PC for $40 or even less.
Score
0
Please remember that, due to the need for two days of testing per motherboard while using the same CPU each time to ensure consistent overclocking results, this comparison was limited to one product per manufacturer.
Each manufacturer was given the opportunity to choose the motherboard model itself. Christmas and New Year's Day testing were particularly fun
Score
0
malphas
January 10, 2011 10:11:22 AM
Vatharian
January 10, 2011 10:13:19 AM
sudeshc
January 10, 2011 10:15:49 AM
malphas: When you do these comparison tables, can't you have them open in a new window or something so we can read them in one set of rows? Having the table split into three parts kind of defeats the benefit of having a table in the first place.
Vatharian: I support that!
Right now it's a limitation of the CMS, but it may be possible to do this as an image of the chart. I'll ask around to see if there are any other options.
Score
0
stasdm
January 10, 2011 10:18:53 AM
stasdm
January 10, 2011 10:27:24 AM
stasdm: @Crashman Is it a standard pin-header port or an ASUS/ASRock proprietary solution?
Crashman: First introduced by ASRock in response to a request by Tom's Hardware, using an Intel design according to ASRock's engineers.
ASRock, Asus, ECS, Gigabyte, and MSI all have the same USB 3.0 front-panel connector. You should contact your case manufacturer to see if a cable-end adapter is available so you can use it; otherwise the internal ports are wasted.
Score
0
shinnjon
January 10, 2011 10:40:47 AM
aaron88_7: The Deluxe version of that Asus board comes with a USB 3.0 drive bay, but I was a little confused as to why one would want that if their case already has front-facing USB 3.0 ports like the one I got does. I'll have to look at the connections again, but can you use that same cable to plug into the board on the inside, or do those case USB 3.0 ports have to be connected to the rear of the board? Personally I think the drive bay with just two USB ports looks kind of lame, and I'd much rather use the ports on my case... even if that means running a cable out the back of the case (nobody looks at the back anyway).
The cables for USB 3.0 are different from USB 2.0 ones... not so many cases have them. Do you really have a case with USB 3.0 ports?
Score
0
shinnjon: The cables for USB 3.0 are different from USB 2.0 ones... not so many cases have them. Do you really have a case with USB 3.0 ports?
Several high-end cases were introduced that have USB 3.0 extension cables for I/O-panel access using pass-through holes to the rear. Most of those cases were introduced after ASRock introduced the Intel connector design, so I think it was a little short-sighted of companies not to consider this when introducing their rear-panel-only products.
Score
0
hixbot
January 10, 2011 11:44:43 AM
stasdm
January 10, 2011 11:48:14 AM
hixbot
January 10, 2011 11:56:04 AM
Well, for 2-way on these boards we're forced to x8, same as P55, but my thinking is that now that the lanes are 2.0 the issue should be moot.
Another issue is using USB 3.0 and SATA 6G (which use PCIe lanes) combined with 2-way graphics. The issue is worth exploring; I'm hoping no performance is compromised on P67.
Score
0
straatkat
January 10, 2011 12:24:40 PM
Anonymous
January 10, 2011 1:47:43 PM
z06psi
January 10, 2011 2:05:56 PM
hixbot
January 10, 2011 2:07:25 PM
cadder
January 10, 2011 2:07:41 PM
stasdm
January 10, 2011 2:24:48 PM
@hixbot
1. It's the third generation of Intel processors with PCIe 2.0.
2. Read my earlier post on NVIDIA's "graphics PCIe".
3. USB 3.0 and SATA 6G - there is nothing to explore: graphics needs only 1 GB/s, and modern PCIe 2.0 switches are at least no worse than the outdated NF100 (aka NF200).
@straatkat
With UEFI, boot time will depend greatly on threading efficiency, so the first UEFI versions will generally boot longer (as the code is much bigger).
@ALcALoIDe
Same as on the original 1156 boards - nothing has really changed here.
@z06psi
Who would buy LGA 2011 boards/processors then?
I'm afraid such de-optimized use is Intel policy, or we would have seen optimized solutions on Nehalem boards by now. And here I'd give a good ass-kicking to those who stand in the way.
Score
0
stasdm
January 10, 2011 2:29:42 PM
elbert
January 10, 2011 3:46:19 PM
Most of the boards here only support 16GB. Currently, $160 can max out these motherboards' memory; a low memory ceiling doesn't sound enthusiast-grade to me. I wish this review had included the ASUS P8P67 EVO, because its max memory is 32GB and it's only $199.99.
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
Score
0
Problem with the ASRock and Asus board review portions: the last slot is not meant for triple SLI. It's there for PCIe x4 and x8 accessory cards. What I want to know is how these boards work with RAID cards in that slot. Do I have to forgo using the other PCIe slots to use that last slot in x4 mode? More importantly, do I have to disable the add-on SATA 6G and USB 3.0 controllers to use that slot in x4 mode?
I currently have two GTX 470 video cards, a PCI sound card, and a PCIe x4 SATA RAID controller (8-port 3ware 9650). Can I use either of these boards? This is an important question.
You guys just answer it with smart remarks about using a third video card, which obviously wasn't intended.
Score
1
hixbot
January 10, 2011 4:47:29 PM
stasdm, I was under the impression that lane bandwidth had doubled on P67 vs. P55.
It's all right here: http://www.tomshardware.com/reviews/sandy-bridge-core-i7-2600k-core-i5-2500k,2833-8.html
P55 lanes are 2.5 GT/s, P67's are 5 GT/s.
Score
1
agnickolov
January 10, 2011 4:51:48 PM
iLLz
January 10, 2011 5:40:38 PM
"stasdm, i was under the impression that lane bandwidth has doubled on P67 vs P55."
Agreed. The PCIe lanes are now a full 500 MB/s (1 GB/s bidirectional), whereas with the older X58 and P55 chipsets they were only 250 MB/s (500 MB/s bidirectional).
With this doubling of bandwidth, an x8/x8 configuration should yield the same bandwidth as a previous x16/x16 (X58 or P55), right?
Even though the previous-gen chipsets say they are PCIe 2.0, they were limited to half the bandwidth; now with P67 they are fully PCIe 2.0.
Score
0
iLLz: "With this doubling of bandwidth an x8/x8 configuration should yield the same bandwidth as a previous x16/x16 (x58 or P55), right?"
Not quite. The X58 northbridge has 36 lanes of PCIe 2.0, which can be split as much as needed, while the LGA 1156 chips have 16 lanes of PCIe 2.0, intended for video cards, that can be split into two x8 slots. In both cases, the southbridge has 4 lanes of PCIe 1.1 for expansion cards.
With the P67 chipset, the CPU has 16 lanes of PCIe 2.0 that can only be split into two x8 slots or left as a single x16 slot. However, the chipset has 8 lanes of PCIe 2.0 that can be used for expansion cards. That's the part where the bandwidth doubled, not the video card bandwidth. So a video card would not see any difference, but a PCIe 2.0 x4 RAID controller would.
Score
0
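Crashman's lane breakdown can be tabulated with a single per-lane figure. A sketch, assuming every PCIe 2.0 lane carries 500 MB/s per direction and using the lane counts stated in the reply above:

```python
PCIE2_LANE_MB_S = 500  # per lane, per direction, after 8b/10b overhead

platforms = {
    "X58 northbridge (splittable)": 36,  # e.g. x16/x16 + x4
    "LGA 1156/1155 CPU lanes":      16,  # one x16, or an x8/x8 split
    "P67 PCH expansion lanes":       8,  # full-rate PCIe 2.0
}
for name, lanes in platforms.items():
    gb_s = lanes * PCIE2_LANE_MB_S / 1000
    print(f"{name}: {lanes} lanes = {gb_s:.0f} GB/s per direction")

# An x8/x8 split uses the same 16 CPU lanes as a single x16, so total graphics
# bandwidth is unchanged; the doubling applies to the chipset's expansion lanes.
```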
rusbee
January 10, 2011 6:36:34 PM
Thanks for the review.
A major point missing from the board info is the power phases. While most manufacturers have switched to digital, some boards have stayed with analog (the Gigabyte GA-P67A-UD7 for one; I am not sure about the rest of their line-up). Second is the number of power phases and how it affects the lifetime of the boards. While the ASRock Extreme4 has 8+2, the Extreme6 has 16+2. The Asus P8P67 Pro and Evo have 12+2; the Deluxe version has 16+2. How much does it matter (does one need fewer power phases with digital ones)?
Another point I am curious about is the quality of the components used. MSI and ASRock use polymer caps, whereas Asus seems to have cheaped out a little here (their lower-end H67 boards still use polymer caps, but not the P67s). Is this going to significantly increase the probability of the board failing a few years down the road when the warranty is over? Perhaps this is the reason the VRMs on ASRock and MSI run so much cooler compared to Asus despite using fewer phases?
Currently I am looking at two boards in particular: the ASRock Extreme6 vs. the Asus P8P67 Deluxe. Quality-wise, ASRock seems to use better components overall while being cheaper at the same time, but it does not use the Intel network controller that Asus does for one of its network controllers. I wonder how this affects ping times for online gaming.
As pointed out in other comments, I also want to know how and when the PCIe slots get saturated with two high-end graphics cards and a third device on PCIe x4. I have two 6950s and a RevoDrive 120GB, which is going to use its software RAID. Can any of the boards (even the ASRock Extreme4, Extreme6, or the Asus P8P67 Deluxe, which all have the PLX chip) handle and balance so much bandwidth?
Score
0
aaron88_7
January 10, 2011 6:50:53 PM
Score
0
