foolishone

Distinguished
Dec 23, 2008
24
0
18,520
I've got a basic framework with some components I've been researching. The majority of these parts are off Newegg at the moment to allow for easy searching and combined shipping. This still requires quite a bit of research, but I plan to get the 920 up to 3.5GHz and hopefully around 3.8GHz. I was hoping to find some 3x4GB DDR3 sets rated up to 2000MHz, but it appears that there aren't decent 3x4GB sets or 3x2GB 2000MHz sets. If some are released around the time the GTX 295 launches, those will be the models of choice. If these higher-frequency DIMMs are released, the motherboard will probably have to change to an ASUS Rampage II Extreme to handle the frequencies more easily. I'm looking for thoughts on the current motherboard selections and cooling options. I'm planning on having two GTX 295s in there pending January benchmarks. I know the load wattage hasn't been released for these, but I'm curious about power supply ratings. I've had it estimated at 1275W, but I wasn't sure about the accuracy of the calculator. Either way, suggestions are appreciated.

Intel Core i7 920 Nehalem 2.66GHz LGA 1366 130W Quad-Core Processor Model BX80601920 - Retail

OCZ Platinum 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1866 (PC3 15000) Triple Channel Kit Desktop Memory Model OCZ3P1866C9LV6GK - Retail
OCZ Platinum 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1866 (PC3 15000) Triple Channel Kit Desktop Memory Model OCZ3P1866C9LV6GK - Retail

Option A: GIGABYTE GA-EX58-EXTREME LGA 1366 Intel X58 ATX Intel Motherboard - Retail
Option B: ASUS P6T Deluxe LGA 1366 Intel X58 ATX Intel Motherboard - Retail

Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s Hard Drive (bare drive) - OEM
Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s Hard Drive (bare drive) - OEM
Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s Hard Drive (bare drive) - OEM
Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s Hard Drive (bare drive) - OEM
G.SKILL FM-25S2S-128GB 2.5" 128GB SATA II Internal Solid state disk (SSD) - Retail
G.SKILL FM-25S2S-128GB 2.5" 128GB SATA II Internal Solid state disk (SSD) - Retail

Option A: PC5-1326SL
w/ RP-1000SL
Option B: LIAN LI PC-V1200Aplus II Silver Aluminum ATX Mid Tower Computer Case - Retail
w/ Ultra-120 eXtreme 1366 RT
Option C: LIAN LI PC-V1200Bplus II Black Aluminum ATX Mid Tower Computer Case - Retail
w/ Ultra-120 eXtreme 1366 RT


 

Akebono 98

Distinguished
Dec 5, 2008
560
0
18,980
Thoughts:

Asus' website indicates a new P6T Deluxe V2 mobo. If you're waiting for GTX 295, then consider waiting for that mobo too. It will most likely be better for overclocking.

In choosing mobo, consider if you want to eventually run tri-SLI. If so, then look closely at the PCIe x16 expansion slots and how they are laid out. The P6T Deluxe can't run tri-SLI with dual-slot cooler cards.

Also look closely at the Asus LGA1366 "WS" series boards. They may have some features that you'd find useful (like tri-SLI and more I/O connectors).

At the 1200W level, the only PSU that I would use would be the PC Power & Cooling Turbo-Cool 1200.
 

foolishone

Distinguished
Dec 23, 2008
24
0
18,520
Thanks for your response. I tried to find some information on the P6T Deluxe V2, but it seems that ASUS is trying to keep it under wraps and continue dishing out V1's. Now that the GTX 295 was released, I have a better grasp on how everything is going to pan out.

CPU: i7 920 overclocked 2-3% below what is safely possible with an Ultra-120 eXtreme 1366 RT or another cooler, per recommendations

Memory: Still nothing in the way of 3x4GB kits to start off with 12GB via 3 DIMMs. Hopefully progress will be made that leaves room for expansion here, but...
Currently : 1-2 sets of OCZ Platinum 6GB (3 x 2GB) 240-Pin DDR3 SDRAM DDR3 1866 (PC3 15000) Triple Channel Kit Desktop Memory Model OCZ3P1866C9LV6GK - Retail
Anyone know of 3x2GB kits capable of 2000MHz?

Motherboard: Since I have decided upon a GTX 295 and they are incapable of tri-SLI (which would be overkill), that feature isn't necessarily required. Still looking at the GIGABYTE GA-EX58-EXTREME and ASUS P6T Deluxe V1/2, unless there are any newcomers I haven't read reviews on yet. I'd like some good RAID chips onboard because of the storage section. Important... which ones are capable of the best simultaneous RAM/CPU overclocking?

Storage:

Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s Hard Drive (bare drive) - OEM
Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s Hard Drive (bare drive) - OEM
Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s Hard Drive (bare drive) - OEM
Seagate Barracuda 7200.11 ST31500341AS 1.5TB 7200 RPM SATA 3.0Gb/s Hard Drive (bare drive) - OEM
^RAID 0
G.SKILL FM-25S2S-128GB 2.5" 128GB SATA II Internal Solid state disk (SSD) - Retail
G.SKILL FM-25S2S-128GB 2.5" 128GB SATA II Internal Solid state disk (SSD) - Retail
^RAID 0

Might drop a Barracuda and add another SSD if I can find any benchmarks for RAID 0 arrays of SSDs. This is a foreign area for me, so I'm not sure how much performance would be gained given how impressive these drives are already.

Graphics: EVGA 017-P3-1292-AR GeForce GTX 295 Plus
Decided to go with the EVGA currently due to the trade-up program (what will be above it in 90 days???) and the lack of any shootouts. I'll give this a bit of time for the different brands to impress me. 1-2 of these...

Power: Nothing will do except a Tier 1 source. I would like the headroom for 2 GTX 295's even though I will only purchase one initially. Needs to have the connections for 2x GTX 295's also, obviously.
SILVERSTONE OP1000-E 1000W <- $170
SILVERSTONE ST1200 1200W < - $270
PC Power & Cooling T12W 1200W <- $410, sheesh

Anyone have some impressive cases I can examine?
The LIAN LI PC-V1200 PlusII is beautiful, but I have read about some cooling issues with having the motherboard inverted... :/
Any info on this?

Thanks in advance for any comments :) :)

BTW, I wouldn't mind adding 3D gaming capabilities to this for the hell of it
 

falconhr92

Distinguished
Dec 20, 2008
130
0
18,680
Damn, that will cost an arm and a leg.
How much have you worked it out to?

If you want good cooling, the Cooler Master HAF 932 is GREAT. http://www.newegg.com/Product/Product.aspx?Item=N82E16811119160
 

foolishone

Distinguished
Dec 23, 2008
24
0
18,520
I have an Excel spreadsheet set up with all the components and various options built into the price, and it comes in anywhere from $2400 to $3100 without a PSU or the 3D extras at the moment. Considering that I spent near $1k on 2GB of RDRAM 1066 the last time I built a desktop, it generally doesn't seem that bad in my book.
"
Thanks for the case link. I'm about to go through some reviews for it. Any chance they have a similar design with more 3.5" bays? I have four of those plus at least two 2.5" SATA drives at the moment and need somewhere to plant them. I don't mind spending more on any specific area if it makes the build process easier or impresses more people :)
 

xthekidx

Splendid
Dec 24, 2008
3,871
1
22,790
Do you really need 6TB of storage space? I can't imagine ever filling that. As you increase the number of drives in RAID 0, your chances of failure increase dramatically. Since you have 2 SSDs in RAID 0 for your OS and your games (providing you plenty of speed for load times), I would say you probably don't need more than 2 of those 1.5TB drives for RAID 0. If you do want all 4 of them, consider RAID 1 or 5 so you won't lose all your data if a drive fails.
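If it helps to put rough numbers on that, here's a quick Python sketch of why striping more drives hurts--the 3% per-drive annual failure rate is just a made-up illustrative figure, not a real spec:

```python
# Rough illustration: chance a RAID 0 array loses data in a year.
# RAID 0 has no redundancy, so ANY single drive failure kills the whole array.
# The 3% per-drive annual failure rate is a hypothetical placeholder.

def raid0_annual_failure(num_drives, per_drive_afr=0.03):
    """P(at least one of n independent drives fails) = 1 - (1 - p)^n."""
    return 1 - (1 - per_drive_afr) ** num_drives

for n in (1, 2, 4):
    print(f"{n} drive(s) in RAID 0: {raid0_annual_failure(n):.1%} chance of losing the array per year")
# 1 drive(s):  3.0%
# 2 drive(s):  5.9%
# 4 drive(s): 11.5%
```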

Also, Seagate drives tend to have trouble. Look at WD drives.
 

foolishone

Distinguished
Dec 23, 2008
24
0
18,520
I see your point in relation to the total capacity. I hadn't seen any numbers for the failure rate, even though I knew it would be inherently weak due to any single failure resulting in complete failure. Going to change this to 3x bulk drives in a RAID 5. The performance of RAID 0 isn't really required there anyway, since it will be storage with limited access. RAID 5 gives me a best-of-both-worlds sort of deal... I'll have to read up on the comparable WD drives. I've owned quite a few WD drives in my time and would prefer them, but they haven't released a 1.5TB single drive and I already own a couple of these 1.5TB Seagates in another PC. What would you recommend over them? And would you recommend a third SSD to turn the other RAID 0 into a RAID 5?
 

apollux

Distinguished
Nov 23, 2008
19
0
18,510
If you are considering the GA-EX58-Extreme then you should consider the UD5 as well. I was torn between those three boards (P6T Deluxe, UD5 and the Extreme) and went with the UD5 in the end.

When it comes to UD5 vs. Extreme, the Extreme is causing more headaches for its owners and is somewhat crippled by the NB cooler placement... not to mention that people who have actually used it for water cooling report that the barbs have a tendency to leak.
 

xthekidx

Splendid
Dec 24, 2008
3,871
1
22,790
Well, if you already own some Seagate drives then you can go with them. Also, I'm not an expert on this, but I don't think all your drives need to be the same size, so you could buy 1TB drives from WD and run RAID 5 with them (I know it doesn't matter for RAID 0, but RAID 5 might be different, I'm not sure). Don't bother with RAID 5 for your SSDs. SSDs are generally very safe, and 250GB is plenty for your games and OS.
 

Akebono 98

Distinguished
Dec 5, 2008
560
0
18,980
Let me chime in with a few thoughts.

Memory: Not sure why you are so intent on getting DDR3-2000. The i7 has a memory multiplier of 8, so working backwards 2000/8=250 base clock. With its CPU multiplier being 20, 20x250=5 GHz. You sure you can do that?

If not, then Corsair Dominator DDR3-1866 in 3x2GB form is the way to go. That still gives you a non-OC limit on the RAM of 1866/8=233 bclk, then 233x20=4.665GHz CPU speed. Plenty fast if you get there, IMHO.
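If you want to play with the numbers yourself, here's the same arithmetic as a quick Python sketch (it just mirrors the calculation above with the 20x CPU multiplier and the 8x memory ratio; a rough model, not a BIOS guide):

```python
# Rough model: on the i7 920, both CPU and RAM speeds derive from the base
# clock (bclk), so a target RAM speed implies a CPU speed at a given ratio.
CPU_MULTIPLIER = 20   # i7 920, locked upward
MEM_MULTIPLIER = 8    # the 8x memory ratio discussed above

def implied_clocks(ddr3_speed):
    """Given a DDR3 data rate (MT/s), return (bclk in MHz, CPU speed in GHz)."""
    bclk = ddr3_speed / MEM_MULTIPLIER
    return bclk, bclk * CPU_MULTIPLIER / 1000

for speed in (1600, 1866, 2000):
    bclk, cpu = implied_clocks(speed)
    print(f"DDR3-{speed}: bclk = {bclk:.0f} MHz -> CPU = {cpu:.2f} GHz")
# DDR3-1600: bclk = 200 MHz -> CPU = 4.00 GHz
# DDR3-1866: bclk = 233 MHz -> CPU = 4.67 GHz
# DDR3-2000: bclk = 250 MHz -> CPU = 5.00 GHz
```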

Have a look at this article: (link) where Tom's Hardware shows minimal gain going from dual channel to tri-channel DDR3. If you believe that, then go with Corsair Dominator DDR3-2000 in 2x2GB configuration, or perhaps 2 sets of 2x2 for 8GB in total.

I'm not sure how this works on the new i7 because the memory controller has been moved to the CPU, but with the old northbridge, using all 4 slots would overstress the NB memory controller, so if you wanted the highest OC you would only run 2 slots of RAM. If the same applies here, then you might consider 3x1GB if the highest OC is your goal.

You need to balance high speed vs. the need for actual physical memory if you are working with large files.

If you are still intent on getting DDR3-2000 in 3x2, then wait for G.Skill Perfect Storm: article link. Not sure when they will arrive, though.

In the meantime, if you want a close substitute, have a look at these Mushkin DDR3-1600 modules in 3x2 form. They have specified timings of 7-8-7-20, so if you relax them a bit, you should probably get a pretty high overclock out of them, knowing Mushkin.

Storage & RAID: Balancing reliability, sophistication and performance, I would recommend the following setup for your hard drives: 2 Velociraptor 300GB in RAID 0 as your system drive, and then 3 Samsung 1TB Spinpoint F1 drives in RAID 5 for storage.

I'm not absolutely sure I would go with SSD drives and would thus prefer the tried and true Velociraptor. And at your level, don't take a risk with Seagate 7200.11 right now because they are having serious problems.

If you want to be really fancy, then consider this: get 6 WD Caviar Black 1TB drives and put them in Matrix RAID. RAID 0 a small partition for OS and apps, then RAID 5 the rest for storage.

To be practical, you really should get a hardware RAID controller so that you can migrate your RAID array to your next computer.

The other RAID issue to consider is to put your storage drives into an external enclosure--that is a slightly better way to secure your data. E.g. 4 drives in an external RAID 5 setup.

Motherboard: If you want to OC high with 2 GTX 295 cards, then the only board I would go with is the Asus P6T Deluxe V2. Asus has a better power design than Gigabyte and this mobo also has one feature that the others don't: there is a gap between the two video cards so it has superior cooling and airflow. All the others squish them together, especially the tri-SLI boards. Since you will be running 2 GTX 295s, cooling will be very important. Even the Rampage II Extreme, with its even better power design, puts the two video cards together. Look at the expansion slots carefully in your case.

The only other mobo that I would consider in your situation is the Asus P6T6 WS Revolution. AFAIK, it is the only X58 mobo that can run tri-SLI with 3 lanes at x16, meaning it will allow for 3 GTX 295s. May not work now, but it will sometime in the future. Its other interesting feature is the on-board SAS drive support. E.g. you can get 2 Seagate Cheetahs (15,000 rpm) at 300GB each and put them in RAID 0 for your system drive.

If you do decide on the P6T Deluxe, then for the money you will be spending don't risk anything other than the V2.

Case: I quite like the HAF myself, as I think cooling is paramount for this type of rig. If you want more hard drive bays, then the Coolermaster Cosmos 1000 has 6 of them internally.

Two super high-end cases that you should look into are the Zalman Z-Machine LQ1000 (with integrated liquid cooling) and the others in the Z-Machine series (there is a silver model), and the ABS Canyon 695 (although this one might present you with some installation problems). If I were to pull out all the stops, then I would go for Zalman here.

Heatsink: On the air cooling side, definitely Thermalright Ultra 120 Extreme 1366RT and its included 1600 rpm fan FTW. Don't forget the Thermalright LGA 1366 Bolt-Thru Kit to go with it.

Hope this helps you in your planning.
 

apollux

Distinguished
Nov 23, 2008
19
0
18,510


The Asus P6T6 WS, with its 16/16/16 PCI Express slots, has been demonstrated to be both slower and more expensive than the other X58 solutions out there. This is an interesting read and video on the subject: http://enthusiast.hardocp.com/article.html?art=MTYwNiwxLCxoZW50aHVzaWFzdA== I guess they called it "workstation" instead of "gamestation" for a reason.
 

foolishone

Distinguished
Dec 23, 2008
24
0
18,520
First of all, thanks for all of your comments. To start this off:
Akebono 98 wrote:
Memory: Not sure why you are so intent on getting DDR3-2000. The i7 has a memory multiplier of 8, so working backwards 2000/8=250 base clock. With its CPU multiplier being 20, 20x250=5 GHz. You sure you can do that?
If not, then Corsair Dominator DDR3-1866 in 3x2GB form is the way to go. That still gives you a non-OC limit on the RAM of 1866/8=233 bclk, then 233x20=4.665GHz CPU speed. Plenty fast if you get there, IMHO.

Thanks for showing me my errors here. It appears I was under the assumption that the memory multiplier was unlocked in the 920. I must have gotten confused by some 965 reference at some point. Ideally, the highest-frequency memory I should purchase would be DDR3-1600. This allows for a 4GHz CPU clock using the 8x setting without ever really worrying about the 6x setting. It is my understanding that the dual vs. tri-channel issues were only on early beta BIOS versions and have since been squashed. Please correct me if I am misinformed on this point.

I looked into those Mushkin modules and they ended up being one of the two choices I narrowed it down to... If you can explain why a set of those is worth $94 more than an OCZ Platinum 6GB (3 x 2GB) DDR3-1600 @ 7-7-7-24, then I will purchase those over these. 12GB of DDR3-1600 for $322 seems quite nice.

Akebono 98 wrote:
Storage & RAID: Balancing reliability, sophistication and performance, I would recommend the following setup for your hard drives: 2 Velociraptor 300GB in RAID 0 as your system drive, and then 3 Samsung 1TB Spinpoint F1 drives in RAID 5 for storage.
I'm not absolutely sure I would go with SSD drives and would thus prefer the tried and true Velociraptor. And at your level, don't take a risk with Seagate 7200.11 right now because they are having serious problems.
If you want to be really fancy, then consider this: get 6 WD Caviar Black 1TB drives and put them in Matrix RAID. RAID 0 a small partition for OS and apps, then RAID 5 the rest for storage.
To be practical, you really should get a hardware RAID controller so that you can migrate your RAID array to your next computer.
The other RAID issue to consider is to put your storage drives into an external enclosure--that is a slightly better way to secure your data. E.g. 4 drives in an external RAID 5 setup.

While there is no question that those Velociraptor drives are incredibly impressive, what do I gain with that pair over a pair of the G.Skill SSDs ($500 instead of the $400 Velociraptors) other than capacity? That is already being fulfilled by the other storage drives. Considering how many awards those drives have won, I might be in error here as well, but I'll let the reviews decide. I'm starting to see a trend with the 1.5TB Seagate, as both you and xthekidx have suggested against it. It's hard to argue with the price point for that capacity and the fact that I already have some in another system, but it seems as though those purchases might have been made too quickly. It seems I need to read some reviews on WD vs. Samsung, since each of you has supported one of them.

As for the RAID controller, I don't see it as necessary, since this build will remain unchanged except for possible video card additions/upgrades and a 32nm CPU upgrade to 6 (12) cores. By the time the data would be moved to a new system, I'm sure an array of 4TB drives or maybe a holographic drive will take all of it swiftly.

I hadn't thought about doing an external RAID solution due to the performance hit when interfacing with it, but even an impressive controller won't be capable of exceeding the eSATA specification, so it's worth investigating.

Akebono 98 wrote:
The only other mobo that I would consider in your situation is the Asus P6T6 WS Revolution. AFAIK, it is the only X58 mobo that can run tri-SLI with 3 lanes at x16, meaning it will allow for 3 GTX 295s. May not work now, but it will sometime in the future. Its other interesting feature is the on-board SAS drive support. E.g. you can get 2 Seagate Cheetahs (15,000 rpm) at 300GB each and put them in RAID 0 for your system drive.
If you do decide on the P6T Deluxe, then for the money you will be spending don't risk anything other than the V2.

The first point from this is that the current GTX 295s are incapable of 3-way SLI. While I would have a 90-day trade-up program with the EVGA card, only time will tell if anything better is released. Having this limitation on the first card applies it to the next card, even if that one is capable of 3-way SLI. This, coupled with the lower benchmarks, made me initially shy away from the P6T6 WS. Would the Cheetahs be worth the ~$80+ premium each over the SSDs?

Akebono 98 wrote:
Motherboard: If you want to OC high with 2 GTX 295 cards, then the only board I would go with is the Asus P6T Deluxe V2. Asus has a better power design than Gigabyte and this mobo also has one feature that the others don't: there is a gap between the two video cards so it has superior cooling and airflow. All the others squish them together, especially the tri-SLI boards. Since you will be running 2 GTX 295s, cooling will be very important. Even the Rampage II Extreme, with its even better power design, puts the two video cards together. Look at the expansion slots carefully in your case.

Apollux wrote:
If you are considering the GA-EX58-Extreme then you should consider the UD5 as well. I was torn between those 3 MBs (P6T Deluxe, UD5 and the Extreme) and when with the UD5 in the end.
When it comes to UD5 vs. Extreme, the Extreme is causing more headaches to it's owners and is somehow crippled by the NB cooler placement... not to mention that people who has actually used it for water cooling report that the barbs has a tendency to leak.

Thank you for this intriguing information on the P6T Deluxe V2 layout. I've been coming up relatively short on information in relation to the motherboard and it's been in the back of my mind. Thanks for pointing out the UD5 Apollux, it was probably the board colors that took it out of my initial list, :). It has replaced the GA-EX58-Extreme on my list and it appears that I need to get my eyes on some Asus P6T Deluxe V2 reviews to finally decide this one.

For the case... Since it now appears that I will only need 5 internal 3.5" bays max, I'll keep the HAF on the back burner. Are there any non-full tower designs that you guys would recommend that have a bit more room internally, or at least have good capabilities for cable management?
 

Akebono 98

Distinguished
Dec 5, 2008
560
0
18,980
@Apollux: Thanks for your critique of the P6T6 WS Revolution mobo. That is a very interesting review, but I'd have to disagree with it in the context of this thread. It is somewhat misleading in that the reviewer seems to emphasize the 4 fps difference on a 15" monitor (how is that "CPU limited"???). I doubt that anyone looking at tri-SLI of any sort would use less than a 30", so the only figure that has any relevance to me is the equivalent frame rate of the two mobos at 30" on maximum AA settings.

This is revealing because, don't forget, extreme video setups only matter at the highest resolutions and the highest AA/AF settings. This singular stat indicates that the Revolution board is not lacking in any way at all. It's also a matter of driver optimization (I doubt that nVidia even has a 15" monitor within a mile of its office).

The other significant flaw in the review is that they used GTX 280 cards--you don't need an x16 slot for those, because my point about 3 lanes at x16 only applies when you have a video card that can fully utilize x16. Even now, with GTX 280s in tri-SLI, the second and third cards are not maxed out. There is no way that you need more than x16/x8/x8 for GTX 280s.

I know that the GTX 295 cannot be tri-SLI'd. However, we are looking at an uber-build here and that is analyzed differently. If you look at the first paragraph of my post, I have recommended P6T Deluxe V2 as the first choice.

The Revolution mobo is only an alternative suggestion, to be considered as a very long term investment and the OP has already stated that the mobo will not be changed until a new system is purchased. This also matters because mobo based RAID I/O is currently under consideration and that is tied to the mobo itself.

Let's gaze into the crystal ball:
1. nVidia comes out with the GTX 395 / 495 that is designed for tri-SLI/6 GPUs.
2. A new application comes out that requires it.
3. Drivers are optimized for this whole setup and scaling improves dramatically.
4. Parallel computing becomes the norm.
5. A certain market segment needs this level of performance.

Depending on when that happens, I'd be glad I spent $70 extra for the Revolution in that case. Like I said, this is a long-term investment. There is no need for this mobo now for gaming and it is designed for industry--hence its "WS" moniker.
 

Akebono 98

Distinguished
Dec 5, 2008
560
0
18,980
More points:

Memory Configuration: There is some confusion on the dual vs. tri-channel memory point. As others have pointed out elsewhere, the new memory controller is designed for tri-channel, so just use it going forward. I only raised the point for consideration because I thought that you had some reason for wanting higher straight speed (liquid nitrogen OC, maybe?). Since you know that you're in need of a large quantity of physical RAM for your applications, then I'd go with the tri-channel 6x2GB configuration.

Mushkin vs. OCZ: I would have gone with Mushkin for overclocking above 1600 speed and contacted Mushkin Greg for the subtimings. Unfortunately, he resigned at year end so that is no longer possible. That alone brings Mushkin down a couple of notches, IMO. :fou:

So you're right to go with OCZ Platinum for the best 12GB value and tightest timings. However, make sure you're completely satisfied on the RAM compatibility first--OCZ can be problematic sometimes.

Velociraptor vs. SSD: Reason for the Velociraptor pair? In one word--reliability. For gaming, it doesn't matter. In business, it does.

Note that the G.Skill SSD that you're looking at is also MLC, if you care about that issue at all (I'm not going to wade into that debate).

The capacity difference between G.Skill SSD and Velociraptor generally won't matter, when used as a system drive--probably won't need more than 128GB there anyways.

Seagate: I hate to knock Seagate as they are a fine HDD company. However, there are problems with their 7200.11 series of SATA drives and I read somewhere that it was a design flaw. However, the 1.5TB capacity can't be beat, and that is their greatest advantage. Chalk it up to the risk of being on the cutting edge.

WD vs. Samsung: Caviar Black works a little better as a system drive and the Spinpoint F1 has better throughput, for use as a data drive.

P6T6 WS Revolution: As mentioned in my previous post, just a long-term consideration. Think about your platform timeframe here.

Cheetahs: have a look at this article and decide for yourself: article link. I doubt that you can beat 2 Cheetah 15K.6 drives in RAID 0 for pure speed.

Motherboards: Given the above comments on SAS drives, too bad they're dropping the SAS controller on the P6T Deluxe V2--the original design would have been a little too perfect if you ask me...

The Gigabyte EX58-UD4P mobo is also comparable, just with a few fewer features (like dual LAN) if you don't need them. I'm with you on those Gigabyte colors, though--where do they come from???

Cases: Silverstones are excellent cases in general, and look closely at the TJ-09 and its special video card cooling fan design. However, 2 GTX 295 cards might be a tight fit, depth-wise.

I like the CM-690, but with your components, it might be a little too "lightweight".

Just make sure that with whatever case you get, the PSU fits, because those higher-wattage models are a bit longer.

Also, if you get a "lesser" case, then add extra fans to deal with the heat.
 

apollux

Distinguished
Nov 23, 2008
19
0
18,510
@ Akebono 98: Honestly, I don't want to antagonize you, but there are a few points that need to be made about the Nvidia chip on the P6T6 WS Revolution:

1. Until Intel comes out with a new chipset, we--and the WS Revolution--are stuck with the X58. The X58 only has 32 PCI Express lanes. That's why a native X58 board can do dual SLI with x16 for each card and can only do 16+8+8=32 for triple-card setups. The Nvidia chip does its "true 3-way SLI" by taking 16 lanes from the X58 and multiplexing each twice, so the cards see 16 lanes from the X58 + 32 lanes from the Nvidia chip (there's a rough lane-budget sketch after these points). I guess we can all agree so far.

Here is the catch: when it comes to sending data back from the 3 cards to the CPU, you only have 32 lanes, so the previously multiplexed 32 lanes on the Nvidia chip are "de-multiplexed" back into 16 lanes going toward the X58... and so any bandwidth you gained is now lost, plus you wasted time doing the multiplexing/de-multiplexing thing.

In other words: if card #1 is hooked to 16 lanes from the X58 while cards #2 and #3 are hooked to the NF200 chip, and card #1 needs to confer with cards #2 and #3, there is a lot of wasted effort in the multiplexing process, while you are still physically limited to the bandwidth of 32 lanes, minus the time the Nvidia chip took to do its magic. See where the reputed latency on NF200 boards comes from?

2. Notwithstanding what I said in the previous point, when the cards are working in SLI most of the data sharing between them happens through the SLI bridge on top, somewhat working around the multiplexing/de-multiplexing issue. But native X58 boards also use the same SLI bridge... in other words, this doesn't give any additional leverage to the NF200 boards, it just makes them somewhat less inefficient.

3. Sure enough, nowadays there isn't a single-, dual- or triple-card setup that can actually take advantage of the bandwidth of all 32 of the X58's PCI Express lanes. And sure enough, someday in the future such cards will exist, but by then it is more than likely that the X58 and the NF200 will both be obsolete. IMO, buying today's technology hoping to leverage it only in the distant future is setting yourself up for disappointment.
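And here's the rough lane-budget sketch I promised in point 1. It's a simplified model of the argument, not a datasheet--in particular, the 16-lane NF200 uplink is my assumption about how these boards wire it:

```python
# Simplified lane accounting for the NF200 argument above. The NF200 can fan
# out more lanes to the cards, but everything still funnels back through the
# lanes it borrows from the X58, so upstream bandwidth doesn't actually grow.
PCIE2_GB_PER_LANE = 0.5   # ~500 MB/s per lane, per direction (PCIe 2.0)

native_x58  = {"card1": 16, "card2": 8,  "card3": 8}    # 16+8+8 = 32 lanes
nf200_board = {"card1": 16, "card2": 16, "card3": 16}   # what the cards see
NF200_UPLINK_LANES = 16                                  # assumed NF200 <-> X58 link

behind_nf200 = nf200_board["card2"] + nf200_board["card3"]
upstream = native_x58["card1"] + NF200_UPLINK_LANES

print("Native X58 board, total upstream lanes:", sum(native_x58.values()))    # 32
print("NF200 board, lanes the cards see:      ", sum(nf200_board.values()))   # 48
print(f"...but the {behind_nf200} lanes behind the NF200 share a "
      f"{NF200_UPLINK_LANES}-lane uplink, so upstream is still {upstream} lanes "
      f"(~{upstream * PCIE2_GB_PER_LANE:.0f} GB/s each way)")
```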

 

Akebono 98

Distinguished
Dec 5, 2008
560
0
18,980
@apollux: No antagonism at all--it's all in the spirit of vigorous debate, which these forums are meant to provide. I'm actually glad, because it's probably giving the OP some interesting points to consider for the uber-build.

But if there were a situation where the power of 6 GPUs could outweigh that multiplexing thing... :lol:

Thanks for keeping me on my toes!
 

Siffy

Distinguished
Jul 18, 2008
27
0
18,530
If you're considering SSDs in RAID 0 for absolute performance, I'd go with more, smaller drives on an actual controller with a dedicated processor and cache. Looking at newegg's prices you're currently considering $470 for 2 128GB drives and you might even be looking at $600 if you like the specs on the Titan series.

I'd consider a High Point 3510 for $360 and 4 OCZ Core 2 drives at $99 each, plus $20 on one of them. Note the 3520 is only $100 more for twice the ports (8). At one point you were considering a 3rd 128GB SSD for extra performance; this would likely be the better option for only $40 more than that 3rd SSD.

This guy did a good bit of testing showing the gains you'll get from ditching the onboard garbage when using 4 80GB VelociRaptors. Usually I'd never recommend this, but this is not an "average" build.

So, with that out of the way, I'd recommend picking up the High Point 3520, 4 ~32GB SSDs run in RAID 0 or 5 for the OS + apps, and 4 1TB WD Blacks or Greens run in RAID 5, netting you 2TB of total storage (a 3-disk array and 1 hot spare). If you absolutely need more than 2TB of storage, drop that hot spare into the array instead. A RAID 5 + HS is the most sane way to run a $3k rig, IMHO. That way, when the failure occurs--likely when you're not around--by the time you've even noticed a drive is dead, the computer will be telling you it has already rebuilt the array and is operating like normal again, with a nice little note asking you to replace the dead drive.
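In case the capacity math isn't obvious, here's the quick version in Python (usable space only; the hot spare contributes nothing until it replaces a dead drive):

```python
# Usable capacity for the layouts being discussed, assuming identical drives.
# A hot spare sits idle and adds no capacity until a rebuild kicks in.

def usable_tb(level, drive_tb, n_drives, hot_spares=0):
    active = n_drives - hot_spares
    if level == "raid0":
        return active * drive_tb          # pure striping, no redundancy
    if level == "raid5":
        return (active - 1) * drive_tb    # one drive's worth of space goes to parity
    raise ValueError(f"unhandled RAID level: {level}")

print(usable_tb("raid5", 1.0, 4, hot_spares=1))  # 4x1TB as 3-disk RAID 5 + spare -> 2.0 TB
print(usable_tb("raid5", 1.0, 4))                # fold the spare into the array  -> 3.0 TB
print(usable_tb("raid0", 1.5, 4))                # the original 4x1.5TB RAID 0    -> 6.0 TB
```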
 

Siffy

Distinguished
Jul 18, 2008
27
0
18,530
No, but if you read enough Wikipedia pages you can make people think you know something about anything! j/k. Wikipedia actually does have really good entries explaining what the different RAID levels mean, and they hint at performance, but you'll really need to either search for benchmarks or perform them yourself to know exactly what your hardware setup will do best. I know there have been some really good articles here on TH in the past covering RAID...
http://www.tomshardware.com/reviews/adaptec-serial-controllers,1806.html
http://www.tomshardware.com/reviews/HETEROGENEOUS-RAID-ARRAYS-WORK,1789.html
http://www.tomshardware.com/reviews/SERIAL-RAID-CONTROLLERS-AMCC,1738.html

http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS,1735.html
http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS,1662.html
http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS,1635.html

There, 75 pages or so with pretty charts should be enough to get anyone started. :) I just pasted them newest to oldest, but I'd recommend starting with the last set and working backwards.

Oh, and Newegg does have the High Point 3510 and High Point 3520 cards I mentioned earlier. I only used the link from pc-pitstop (whom I've never ordered from before) because it was one of the first Google hits for the card and they provided good, detailed specs. The HP 35xx cards (along with several others) have had quite a bit of user testing with SSDs, and here is a perfect example of the performance one could expect out of them.
 

foolishone

Distinguished
Dec 23, 2008
24
0
18,520
Here I was thinking that Akebono 98 and apollux gave me quite a bit of new information to research/consider, and then Siffy brings up some great information that puts me back to square one for storage design. The High Point performance numbers are far too impressive to ignore. Thank you Siffy for pointing out just how much performance is lost by using the onboard RAID controllers. This brings up an earlier concern, in that it somewhat relates to the tri-SLI issue of only having 32 lanes available. By any chance, would anyone know of any reviews or postings on just how much performance is lost when the second GTX 295 is reduced to x8? This is a unique design, so it's probably going to be hard to find information on SLI combined with an x8 PCIe RAID card.

I'm going to review the storage configuration I had planned and I'll post the updated build over the weekend probably. Again, thanks for all of your comments.

What would be the best solution to mounting 4 2.5" and 3 3.5" drives in a desktop without having to purchase a 5' tower?

On a side note, would it be worth waiting for those green 2TB drives to be released?
 

Siffy

Distinguished
Jul 18, 2008
27
0
18,530

They gave you a lot of great info.

Also, don't ignore that there will be a move to SATA 6Gbps, probably by the end of the year, and by then there will probably be single-SSD offerings that more than saturate a single SATA 3Gbps cable. The Vertex 2, which is rumored to be released by CeBIT/March, is an example. It's OCZ's next "big thing" for performance: putting 2 Vertex 1 drives in a single 2.5" package and building a RAID controller directly into the drive. Just saying, don't hate me if you're in the same position a year from now and wanting to buy another $400 SATA card to get better performance. :)

As for the PCIe x8 restriction, does the TH GTX 295 review that was published a few days ago touch on that at all? It would be the first place I'd check; I wouldn't have a clue. I doubt it would make much difference though: x8 is what, 4GB/s? DDR3 @ 1600 only has about 13GB/s of bandwidth, which is why I say that.
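Rough napkin math behind those two numbers, if anyone wants to check me (these are theoretical peak figures, not benchmarks):

```python
# Ballpark bandwidth comparison behind the "x8 probably doesn't matter" hunch.
PCIE2_MB_PER_LANE = 500   # PCIe 2.0: ~500 MB/s per lane, per direction

def pcie2_gbps(lanes):
    return lanes * PCIE2_MB_PER_LANE / 1000

def ddr3_gbps(data_rate, channels=1):
    return data_rate * 8 * channels / 1000   # 64-bit (8-byte) bus per channel

print(f"PCIe 2.0 x8 : {pcie2_gbps(8):.1f} GB/s")                      # ~4 GB/s
print(f"PCIe 2.0 x16: {pcie2_gbps(16):.1f} GB/s")                     # ~8 GB/s
print(f"DDR3-1600, one channel   : {ddr3_gbps(1600):.1f} GB/s")       # ~12.8 GB/s
print(f"DDR3-1600, three channels: {ddr3_gbps(1600, 3):.1f} GB/s")    # ~38.4 GB/s
```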

Something like this, but from reviews of it, both expert and user, it's apparently not a great product. The worst thing that could happen is you end up ripping the front and back off the thing, moving the fans to the front, covering it with a typical drive blank, and having to run individual power to each drive. But at least it would provide secure mounting. I don't know about too many other options besides a DIY project.

Fitting 3 3.5" drives and 1 more 5.25" shouldn't be a problem for any case.

IMO, no. Main reason: what if they're a flop? I'd hate to lose all my data because I bought the "latest and greatest". Quality seems to have been put on the back burner for a lot of things these days. Oh, and if you go for the current Green drives that have been out a while, make sure you get the 10EADS and not the 10EACS if you're picking up 1TB drives. The D is the 2nd-generation Green line.
 

Akebono 98

Distinguished
Dec 5, 2008
560
0
18,980
No need to go back to square one on storage, just have a closer look at the P6T6 WS Revolution design itself, and the PCIe standards.

If you put two dual slot graphics cards in slots 1 and 3, then you can still use slot 6 in x4 for the RAID controller. This x4 capacity is apparently independent of the other 5 slots, though I could be wrong.

Alternatively, you could use slots 1 and 5 for graphics (Asus recommends this for nVidia SLI) and then use slot 4 in x8 for the RAID controller, as this doesn't affect you because you are not using 3x16.

Remember that when using SLI, the second and third cards are not maxed out, because of how scaling works. Therefore, they don't need the full bandwidth of their slot.

I believe there is room to increase the PCIe frequency in the BIOS from the default 100 MHz to 115 MHz, in order to increase overall bandwidth, without blowing things up. You might want to check into this.

An x16 slot is spec'd for 8GB/s. Given that the fastest burst speed in that review from Siffy is 530 MB/s, that's still only about one x1 lane's worth of capacity, so I'd guess that the difference would be negligible with so much available headroom and bandwidth.

If you go with multiple SSDs in RAID0, then you're probably looking at something in the neighborhood of 0.5GB/s to 1GB/s reads and writes, with bursts of 2GB/s and up. You may not be using burst speeds when maxing out the graphics (e.g. when playing Crysis on ultra high). So I still think it's OK to share the lanes here, given the nature of real world use.

There's also no space to run tri-SLI with a RAID controller anyway, so there's no issue of eating into the 3x16 configuration.

The real beauty of the P6T6 is that all of the slots are PCIe full length, leaving it up to you to determine your most optimal configuration. Look at p. 2-19 of the P6T6 WS Revolution manual for commentary on how to set up the expansion slots.

A hardware RAID controller allows you to migrate the array and offload the processing to the controller card, thus it avoids eating into CPU function. The slowdown of on-board RAID is because the CPU is doing the work (in addition to its primary job). That, and because the on-board RAID chip is not as good as that of a dedicated controller with hardware acceleration.
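To give a concrete sense of the work being offloaded: RAID 5 parity is just an XOR across the blocks in a stripe, which the host CPU has to grind through with on-board/software RAID. A toy Python sketch (deliberately simplified, ignoring real chunking and parity rotation):

```python
# Toy RAID 5 parity: parity is the XOR of the data blocks in a stripe, and a
# lost block can be rebuilt by XOR-ing the surviving blocks with the parity.
# A dedicated controller does this in hardware; on-board RAID leaves it to the CPU.

def xor_blocks(*blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data blocks in one stripe
parity = xor_blocks(d1, d2, d3)          # written alongside the data

# Simulate losing the drive holding d2, then rebuilding it from what's left:
rebuilt = xor_blocks(d1, d3, parity)
assert rebuilt == d2
print("rebuilt block:", rebuilt)         # b'BBBB'
```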

Drive Mounting: Velociraptors have heatsink frames that make them effectively 3.5" drives.

The Antec 1200 has 12 drive bays--the 9 lower ones with fans in front of them for cooling. Here you would use mounting brackets to mount the HDDs or SSDs in the larger 5.25" bays. This would be ideal, as it leaves space in-between each drive for better airflow & cooling.

The Cooler Master CM stacker has 11 drive bays--use their 4-in-3 device module, which is especially designed for your situation: link.

You'll need to look into drive mounting brackets to finalize your build.

The Caviar Green 2TB is a perfect drive for redundant storage RAID arrays (especially that hot spare), except for one factor--reliability, which is unknown at this point. Witness what happened with Seagate 7200.11 drives. Though it also depends on how mission critical your data is, I'm with Siffy on this one.

Looking forward to seeing your updated build!