
Breaking Records With SSDs: 16 Intel X25-Es Do 2.2 GB/s

July 30, 2009 6:11:40 AM

how fast does it boot windows?
Score
4
July 30, 2009 6:15:48 AM

Can Tom's give this away like the SBM! I have no idea why I would need this, though. :) 
Score
6
July 30, 2009 6:20:36 AM

how fast does it open solitaire?
Score
12
July 30, 2009 6:24:18 AM

Porn delivered in .1 seconds or your (insert something witty) back...
Score
-8
July 30, 2009 6:25:56 AM

can we have some benchmarks that aren't just I/O performance? How about boot times and/or program load times?
Score
13
July 30, 2009 6:36:14 AM

You should always include a retail price tag for these articles. If it's in there someplace, I missed it.
Score
4
July 30, 2009 6:58:29 AM

Any non-Windows-based benchmarks, in case there is some sort of throughput limit, etc.?

Windows does some funky things to HDD transfers - buffering things through RAM and all sorts to find extra performance - wouldn't surprise me if that 2 GB/s limit had something to do with software accessing the RAM through the layers and the Windows subsystem, etc.
Score
3
July 30, 2009 6:59:01 AM

I am pretty sure the new Intel SSDs still don't have a good write speed compared to the Indilinx-controlled SSDs.
Score
-7
July 30, 2009 6:59:40 AM

xyz001: how fast does it boot windows?


Half of the startup time on the Windows side (i.e., not including BIOS time) is the PnP initialization and network loading/waiting, etc. - check the HDD read light on high-end systems.
Score
1
July 30, 2009 7:00:28 AM

falchard: I am pretty sure the new Intel SSDs still don't have a good write speed compared to the Indilinx-controlled SSDs.


Every other spec Intel owns hands down, like random writes, etc., which makes them the far better drive.
Score
1
July 30, 2009 7:25:50 AM

dirtmountain: You should always include a retail price tag for these articles. If it's in there someplace, I missed it.


Dirt,
You're looking at close to $14k worth of drives/controllers :) 
Score
5
July 30, 2009 7:31:56 AM

Too bad my money tree couldn't buy me even one X25-E.

And yeah where are the application load times?
Score
3
July 30, 2009 7:57:25 AM

When/if I ever have enough people paying me for space on my server, I know what to do.

We've come a long way from "Loading..." screens in Half Life 2 every five minutes or less.
Score
0
July 30, 2009 8:01:05 AM

Gonna say it as well: please benchmark application load times; Photoshop with different file sizes and of course level load times in Crysis :) 
Score
5
July 30, 2009 8:32:33 AM

I wish they also had real-world results/benches. I'm not that familiar with synthetic benchmarks.
Score
4
July 30, 2009 8:35:43 AM

You will not be able to get faster speeds than that using two 8x PCIe slots. Even though the theoretical bandwidth is 2 GB/s, I have only ever been able to get around 1.15 GB/s, which is pretty close to what you are seeing. I would be interested to see what happens if you use 3 RAID controllers :), although I can't remember how many total physical lanes are available on the X58 chipset.
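
Rough math on the PCIe side, for anyone curious (assuming PCIe 1.x lanes at roughly 250 MB/s of usable bandwidth each - the exact slot generation is my assumption):

```python
# Back-of-the-envelope PCIe ceiling vs. the measured array throughput.
LANES_PER_SLOT = 8
MB_PER_LANE = 250                       # approx. usable rate per PCIe 1.x lane
slot_bw = LANES_PER_SLOT * MB_PER_LANE  # ~2000 MB/s per x8 slot
two_slots = 2 * slot_bw                 # ~4000 MB/s across both RAID cards
measured = 2200                         # MB/s reported in the article
print(f"per-slot ceiling ~{slot_bw} MB/s, two slots ~{two_slots} MB/s, "
      f"measured {measured} MB/s ({measured / two_slots:.0%} of theoretical)")
```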
Score
3
July 30, 2009 8:36:31 AM

Dear Tom,

another great article! Logically, the CPU power should be the bottleneck, therefore you should try loading up the same config on a dedicated dual- or multi-CPU server motherboard with Windows Server 2008 R2 RC 64-bit as the OS, for more simultaneous CPU operations. That might bump up your figures beyond 2.3 GB/s. And then finally, this is a bit "breaking the frontiers", but hey, isn't this what you guys are known for by now... you should grab that new workstation board from Asus (forget the exact name) that's filled only with PCIe slots (about 5 or more, I think) and try adding 2 more Adaptec cards with 4 SSDs each. This would eliminate the possible bottleneck of limited CPU operations per RAID controller.
Score
-4
July 30, 2009 8:38:16 AM

"Bottlenecks can most likely be found in CPU performance as well as farther down the platform in the storage controllers."
That's over-simplified, if not pure B$. Any modern CPU has more than enough bandwidth. There are a lot of other limiting factors, such as local buses, memory, and last but not least, the OS (crappy vi$hta DRM-O$).
As both arrays (the more heterogeneous one from Samsung and this one) are hitting a very similar peak transfer rate (5% doesn't really count) despite very different HW setups, the most probable explanation lies in the OS as the limiting factor (the single common denominator).
As for the retarded comments asking about windblow$ boot times, or some application, or crappy game level loading times:
A large RAID array is nothing for desktops, with their inherently weak task and I/O parallelization, but for servers with high IOPS and a lot of clients.
Score
-8
July 30, 2009 8:53:27 AM

I wonder what performance Linux's ext4 file system would get out of that array... Since, after all, Windows (any version) is sorely lagging behind *NIX systems on I/O throughput.
Score
-2
July 30, 2009 8:55:06 AM

From what I've seen, those are perfectly valid questions, because we ARE reading because we're curious. By the way, most comments on Tom's aren't retarded (flaming/fanboys = retard post). Anyway, I think most of us were thinking the same thing, since most of us won't ever buy something like that. Windows boot time = around 2 min for my PC.
ultimate array = ?
Score
3
July 30, 2009 9:35:23 AM

Interesting.
The $14,000, not quite so... (Why the hell are RAID cards more valuable than everything I own?)
How about trying with a single RAID card (either 16x or 8x PCIe 2.0) on a completely extreme system (4.5 GHz i7 [multi-socket if possible, if using the server variant], 12 GB DDR3-2000 CAS 9, etc.) to see how much you could get through 16 of these babies.

=D


What I want to see is two systems.

System 1:
128 MB of RAM (or the lowest you can possibly use), setting half of the SSD storage space as virtual memory, and test performance on a multitude of applications.

System 2:
A system running 12 GB+ of DDR3 at 1600 CAS 6 (or whatever awesomeness you can come up with) and set most of it as a ramdisk on an i7 system. Then check performance using that ramdisk in place of an HDD/SSD.
Score
-1
July 30, 2009 9:35:45 AM

Err, did you even read the article? That video is the whole reason this article was born...
Score
-1
July 30, 2009 9:52:16 AM

It would also be interesting to see how many drives per controller are enough to reach the same performance, which would give an idea of the bus or controller bottleneck.
Score
1
July 30, 2009 11:05:02 AM

I'd like to see some benchmarks for RAID systems as well as professional graphics cards with GIS (Geographic Information Systems) software. I also noticed that LSI now owns 3ware, and LSI's new 6 Gb/s controllers are rated at 2.1 GB/s reads and 1.6 GB/s writes, once they're released.
Score
-1
July 30, 2009 11:54:18 AM

xyz001: how fast does it boot windows?

It doesn't
Score
0
July 30, 2009 12:16:27 PM

Sneaking in a little hard drive review? Didn't think I'd notice. LOL. Bring on the hard drive reviews. Can't beat 'em... join 'em. LOL
Score
-1
July 30, 2009 1:15:25 PM

Didn't they get something close to 1 GB/s just a couple of weeks ago with 12 Samsung drives for about $1k? So in order to get a little better than twice the performance they had to increase the cost by 14X?
Score
-2
July 30, 2009 2:15:20 PM

OK, I'm dying to know what the bottleneck is. My guess: the processing speed of the RAID controller. One thing you could do is test the scaling of drives to see where you hit the wall on a single RAID controller (RAID 0 - 2 drives, 4 drives, 6 drives, 7 drives, 8 drives). I'm sure you can find the point of diminishing (or no) return. I want to see a more in-depth investigation into this monster.
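
Something like this is what I'd want to see tabulated; the throughput numbers below are placeholders, not anything actually measured:

```python
# Hypothetical scaling summary: swap the placeholder values for real benchmark output.
measurements = {        # drives in RAID 0 -> sequential read in MB/s (placeholders)
    2: 500,
    4: 980,
    6: 1400,
    8: 1600,
}
single_drive = 250      # assumed per-drive sequential read, MB/s

for drives, throughput in sorted(measurements.items()):
    ideal = drives * single_drive
    print(f"{drives} drives: {throughput} MB/s "
          f"({throughput / ideal:.0%} of the ideal {ideal} MB/s)")
# The drive count where the percentage starts dropping sharply marks the
# controller or bus wall.
```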
Score
1
July 30, 2009 2:23:14 PM

For practical purposes, do you think you could run tests on different numbers of flash drives? The primary concern being: when does the number of drives stop scaling well? After all, you're looking at 16x the number of drives but most of your performance is in the range of 3x to 5x faster. But what if just buying a second drive could provide a performance boost of nearly 2x?
Score
0
Anonymous
July 30, 2009 2:27:37 PM

Could you please do the same tests in Windows 7? I'm curious to know about the optimizations MS and Intel do together in that system regarding SSD management. Is the W7 RC capable of handling better I/O numbers?

From Brazil, Cassius.play>.Felipe
Score
-1
Anonymous
July 30, 2009 2:27:42 PM

Definitely try this again when you get your hands on some PCIe Gen2 RAID cards like LSI recently announced. I think the Asus board mentioned earlier is the Z8NA-D6 for dual Nehalem i7 in ATX?
Score
-1
July 30, 2009 2:38:13 PM

2200 to 2300 MB/s / 16 drives = around 140 MB/s per drive. I'm guessing you didn't need SLC to do that? Pretty sure you have a bottleneck further up. An HDD scaling test would help.
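
Quick check of that, using the ~250 MB/s sequential read figure commonly quoted for the X25-E (my number, not the article's):

```python
array_throughput = 2200   # MB/s, low end of the reported range
drives = 16
rated_read = 250          # MB/s, assumed X25-E sequential read spec

per_drive = array_throughput / drives
print(f"~{per_drive:.0f} MB/s per drive vs ~{rated_read} MB/s rated "
      f"({per_drive / rated_read:.0%} of a single drive's ceiling)")
```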
Score
1
July 30, 2009 2:41:38 PM

Good article!

But I find a few things strange.
I've read about SSDs (mainly on the OCZ forum, about how to tweak OCZ SSDs).
Most of the time you have to fix settings like the partition start offset, stripe block size, and NTFS cluster size.
And SSD benchmarks are mostly done with a piece of software called ATTO.

This article totally omits those settings.

I feel like you could OPTIMIZE the results with a few tweaks.
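
For reference, the alignment arithmetic those tweak guides revolve around boils down to something like this (the 64 KB stripe and 4 KB cluster below are assumptions, not the article's configuration):

```python
# Check that the partition start offset lines up with the RAID stripe size,
# and that the stripe size is a whole number of NTFS clusters.
partition_offset_bytes = 1_048_576   # e.g. a 1 MiB partition offset (assumed)
stripe_size_bytes = 64 * 1024        # 64 KiB RAID stripe (assumed)
ntfs_cluster_bytes = 4 * 1024        # 4 KiB NTFS cluster (the default)

print("partition aligned to stripe:",
      partition_offset_bytes % stripe_size_bytes == 0)
print("stripe is a whole number of clusters:",
      stripe_size_bytes % ntfs_cluster_bytes == 0)
# Misaligned partitions make small writes straddle stripe/page boundaries,
# which costs the SSD extra read-modify-write work.
```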
Score
-2
July 30, 2009 3:04:37 PM

16 Intel X25-E 64 GB SSDs costing $750 each = $12,000

2 Adaptec 5805 RAID controllers costing $450 each = $900

...that's a total of $12,900!
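
Or, as a quick sanity check in code (same prices as quoted above):

```python
drives = 16 * 750        # sixteen X25-E 64 GB drives at $750 each
controllers = 2 * 450    # two Adaptec 5805 RAID cards at $450 each
print(f"${drives + controllers:,} total")   # $12,900
```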
Score
-2
Anonymous
July 30, 2009 3:05:28 PM

How about overclocking the CPU and increasing the PCI bandwidth by a few MHz?

I'd also like to see how 8 to 10 drives in RAID perform, seeing that 16 drives don't seem to give a full 16x improvement over a single drive.

Also, what's the price difference between the 16 Intel drives and the 24 Samsung SSDs?

Perhaps this kind of test would be better performed on Xeon-powered machines? I could be wrong, but I think they have better throughput in some areas; they must be more expensive for a reason!
Score
-1
Anonymous
July 30, 2009 3:08:45 PM

On second thought, could it be a bottleneck in the RAID cards?
Score
0
July 30, 2009 3:11:28 PM

All this is great but the question remains...

Will it load Crysis?
Score
-3
July 30, 2009 3:33:16 PM

Does anyone there know about enterprise-level RAID? You can combine 2 or more RAID controllers at the hardware level, instead of using the built-in Windows BS software RAID to combine the two arrays. That way you can split across 3 or more cards and get more bandwidth from your PCIe bus.
Score
0
July 30, 2009 3:42:28 PM

Hmmmmmm, never mind... maybe SLC ftw..... need to spend some more time looking at the SSD charts
Score
-1
July 30, 2009 3:53:31 PM

Interesting but of little practical value.
Score
0
July 30, 2009 3:54:55 PM

Actually, maybe your SSD charts need to be updated; going by customer reviews, it seems 16 small, cheap MLC drives might still max out those controllers.
Score
-1
July 30, 2009 4:04:24 PM

Wow! Wicked performance... I wouldn't mind having one myself. Intel is a beast right now, no questions asked.
Score
-2
July 30, 2009 4:08:51 PM

As no one has asked yet, I would ask if it will run Crysis at very high settings, but with an MX440 in a PCI slot I think I'll pass...
Score
-4
July 30, 2009 4:15:27 PM

profundido: Dear Tom, another great article! Logically, the CPU power should be the bottleneck, therefore you should try loading up the same config on a dedicated dual- or multi-CPU server motherboard with Windows Server 2008 R2 RC 64-bit as the OS, for more simultaneous CPU operations. That might bump up your figures beyond 2.3 GB/s. And then finally, this is a bit "breaking the frontiers", but hey, isn't this what you guys are known for by now... you should grab that new workstation board from Asus (forget the exact name) that's filled only with PCIe slots (about 5 or more, I think) and try adding 2 more Adaptec cards with 4 SSDs each. This would eliminate the possible bottleneck of limited CPU operations per RAID controller.


CPU performance bottleneck? Pure BS, and even if that were true, the L1, L2 and QPI speeds exceed 25 GB/s, etc.

File transfer speeds have nothing to do with CPU performance; it has ALL to do with the sub-systems, etc.
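
For context, the QPI figure works out roughly like this (assuming the common 6.4 GT/s link; the exact CPU in the test rig may differ):

```python
gt_per_s = 6.4            # QPI transfer rate, assumed 6.4 GT/s
bytes_per_transfer = 2    # 16-bit data payload per direction
per_direction = gt_per_s * bytes_per_transfer   # 12.8 GB/s
bidirectional = 2 * per_direction               # 25.6 GB/s
print(f"{per_direction} GB/s per direction, {bidirectional} GB/s total, "
      f"vs ~2.2 GB/s measured from the array")
```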
Score
2
July 30, 2009 4:18:02 PM

raptor550: Does anyone there know about enterprise-level RAID? You can combine 2 or more RAID controllers at the hardware level, instead of using the built-in Windows BS software RAID to combine the two arrays. That way you can split across 3 or more cards and get more bandwidth from your PCIe bus.


Software RAID 0 and RAID 1 are simple and have little overhead or performance penalty with this type of array (it may not be as quick as hardware RAID 0/1, but close enough); on the other hand, RAID 5 will perform poorly at best as software RAID.
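
Rough illustration of why software RAID 5 eats CPU while RAID 0/1 barely does: every stripe write means recomputing parity (toy sketch only; real implementations do this in the kernel with SIMD/XOR engines):

```python
# XOR parity across a RAID 5 stripe. RAID 0/1 just splits or mirrors data,
# while RAID 5 has to read, XOR, and rewrite parity on every stripe update.
def raid5_parity(data_chunks: list[bytes]) -> bytes:
    parity = bytearray(len(data_chunks[0]))
    for chunk in data_chunks:
        for i, byte in enumerate(chunk):
            parity[i] ^= byte
    return bytes(parity)

stripe = [b"\x01" * 4096, b"\x02" * 4096, b"\x03" * 4096]  # three 4 KiB data chunks
print(raid5_parity(stripe)[:4])  # parity chunk that would go to the fourth drive
```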
Score
-1
July 30, 2009 4:18:46 PM

My head Asplode.
Score
-2