Intel's G3 X25-M SSD Specs Revealed

wortwortwort

Distinguished
Jun 1, 2010
291
0
18,810
That looks pretty sweet. The sequential write performance is still a bit slower than other drives, but it's nowhere near as bad as before. The random read speeds look nice.

I hope they can get the 80GB version under $200...
 

adampower

Distinguished
Apr 20, 2010
452
0
18,860
Why did the power consumption increase? I'd almost rather have the G2 in the laptop; I don't need huge write performance there.

I'm a little disappointed. I was hoping for a bigger IOPS increase and read/write at least equal to SandForce.
 

sub mesa

Distinguished
adampower: yes, the same reason 32nm CPUs are many times cheaper than 65nm CPUs; you get many more cores from the same 300mm wafer. Smaller chips mean more chips on the same wafer.

Power consumption rose from 65mW to 75mW; not that big a deal, is it? By the way, those values only apply when using DIPM (Device-Initiated Power Management), and as far as I know only certain laptops have DIPM support. For normal desktops the power consumption would still be half a watt, spent maintaining the SATA link all the time. DIPM basically switches the SATA link off when not in use.

Since Intel appears to be at the bottom of the high-performance SSD pack for 2011, it could be that Intel becomes the cheap SSD on the block, providing reliable and decent performance but no longer being the preferred high-performance choice. The SF-2000 looks much nicer.

One underappreciated feature of the G3, however, is that it can write safely. This is very important for some systems, because SSDs can corrupt themselves when they lose power: all the remaps stored in DRAM disappear without ever being written to NAND. A supercapacitor can provide power when the main power goes down and protect the mapping tables in DRAM by flushing them to NAND before the supercapacitor drains.
 
The chips themselves aren't cheaper, but since the memory cell size has gone down from 32nm to 25nm, each chip will hold more cells. That means more data per chip, and as a result the net cost per gigabyte goes down.
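To put rough numbers on that, here's a back-of-the-envelope sketch. The die capacity, wafer cost, and dies-per-wafer figures below are made-up assumptions for illustration, not Intel/Micron's actual numbers:

```python
# Back-of-the-envelope: why a 32nm -> 25nm shrink cuts cost per GB.
# All figures below are illustrative assumptions, not real Intel numbers.

scale = (32 / 25) ** 2              # ~1.64x as many cells in the same die area

die_gb_32nm = 4                     # assumed capacity of a 32nm die
die_gb_25nm = die_gb_32nm * scale   # same die size, denser cells

wafer_cost = 3000                   # assumed cost of one processed wafer ($)
dies_per_wafer = 200                # assumed, roughly constant across nodes

print(f"32nm: ${wafer_cost / (dies_per_wafer * die_gb_32nm):.2f}/GB")
print(f"25nm: ${wafer_cost / (dies_per_wafer * die_gb_25nm):.2f}/GB")
# Cost per GB drops to ~1/1.64, i.e. ~61% of the 32nm figure, before yield effects.
```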

The reason the power consumption has risen is that there are more transistors on each chip, and every one of them consumes a bit of juice while switching.
 
Looks like Intel has some stiff competition for the performance crown: http://www.anandtech.com/show/3971/sandforce-announces-nextgen-ssd-controller-sf2000-capable-of-500mbs-and-60k-iops

One underappreciated feature of the G3, however, is that it can write safely. This is very important for some systems, because SSDs can corrupt themselves when they lose power: all the remaps stored in DRAM disappear without ever being written to NAND. A supercapacitor can provide power when the main power goes down and protect the mapping tables in DRAM by flushing them to NAND before the supercapacitor drains.
Won't the SF-2xxx have this too? And currently, IIRC, the SF-1500 has it.

Btw, how long a charge can that supercap hold? Just enough to flush the buffer, or a pretty good safety margin?
 

sub mesa

Distinguished
Well, long enough to write the mapping table in DRAM to NAND; that could be just a fraction of a second, but I assume the supercap does have a decent margin, yes.

The problem with unsafe writes, as on the Intel X25-M G2, is that when the controller remaps writes (i.e. writes to another location than the OS thinks), the remap is stored in the DRAM mapping table. That table is backed up to NAND to survive power cycles, but if power goes down abruptly, the backed-up mapping table is older than the actual state of the NAND. In that case you have corruption, and it makes the G2 SSDs unsuitable as a 'log' device, such as for ZFS or databases.
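Here is a toy sketch of that failure mode. Everything in it (class names, snapshot interval, write counts) is invented for illustration; this is not how any real controller firmware works:

```python
# Toy model of the failure mode described above: the controller's
# logical->physical remapping table lives in DRAM and is only
# periodically snapshotted to NAND. Everything here is illustrative.

class ToyFTL:
    def __init__(self, has_supercap):
        self.has_supercap = has_supercap
        self.dram_map = {}    # live remaps (volatile)
        self.nand_map = {}    # last snapshot (survives power loss)
        self.next_page = 0

    def write(self, block):
        # Every write is remapped to a fresh physical page (wear leveling).
        self.dram_map[block] = self.next_page
        self.next_page += 1

    def snapshot(self):
        self.nand_map = dict(self.dram_map)   # periodic backup to NAND

    def power_fail(self):
        if self.has_supercap:
            self.snapshot()    # supercap buys just enough time to flush
        self.dram_map.clear()  # DRAM contents vanish either way

    def lost_remaps(self):
        # After reboot, the drive reloads nand_map; every remap newer
        # than the snapshot is gone, and those blocks read stale data.
        return self.next_page - len(self.nand_map)

for cap in (False, True):
    ftl = ToyFTL(has_supercap=cap)
    for block in range(100):
        if block % 50 == 0:
            ftl.snapshot()
        ftl.write(block)
    ftl.power_fail()
    print(f"supercap={cap}: {ftl.lost_remaps()} remaps lost at power fail")
```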

So the G3 would make an excellent SLOG device under ZFS. But 170MB/s sequential write is somewhat disappointing, and 3Gbps SATA is clearly limiting the potential of this controller. Modern SSDs using ONFI 2.0 have 166MB/s × 10 channels = 1.66GB/s of internal bandwidth. So I'm looking forward to newer PCI-express SSDs with the SandForce 2xxx controller; something like a newer Revodrive.
 
The other factor with safe writes is that they may enable the controller to use a more efficient write and/or wear-leveling algorithm. Without safe writes, the controller has to sequence its write operations so that it can reconstruct the state of its mapping tables even in the face of a power failure. Safe writes relax that restriction, as long as the controller has enough time after the power failure to complete the outstanding writes.
 
So the G3 would make an excellent SLOG device under ZFS. But 170MB/s sequential write is somewhat disappointing, and 3Gbps SATA is clearly limiting the potential of this controller. Modern SSDs using ONFI 2.0 have 166MB/s × 10 channels = 1.66GB/s of internal bandwidth. So I'm looking forward to newer PCI-express SSDs with the SandForce 2xxx controller; something like a newer Revodrive.
So you run Solaris/BSD, eh? What's wrong with good ol' ext3/4? :p

Anyway, yeah, if SandForce is really able to push that much with the SF-2xxx, then SATA III is already history.

There was an open-box Revo (120GB) for $150 on sale at Microcenter a few days ago... so tempted to buy it, but alas, I don't have the money!

What exactly is stopping manufacturers from producing higher-end SSDs using PCIe? OCZ seems to be the only one pushing PCIe SSDs. I'll gladly pay $10-20 extra for a PCIe SSD over a SATA-based SSD. I guess this would be a problem for people who have only one or two PCIe x16 slots; for this to really take off, motherboard manufacturers will need to add PCIe x4 or x8 slots along with the needed chipset support.
 
Are there any issues with booting from a PCIe-attached drive? Or do the current implementations emulate a conventional controller so that the BIOS will recognize them as bootable devices?
 

sub mesa

Distinguished

What's good about it? A filesystem that doesn't use checksums to protect the integrity of your files is closer to FAT than to ZFS.

Ext3 is way old and shouldn't be used; Ext4 is new and should only be used on workstations where performance is important and data security is not.

ZFS is more or less a filesystem aimed at data security; anything you store on ZFS you intend to keep, and you want protection against corruption.

Basically, ZFS is one of the biggest revolutions in filesystem design. Gone are your unsafe FAT/NTFS/Ext3/Ext4/XFS/JFS/HFS/ReiserFS; now you have CHECKSUMS to protect your data, and any filesystem without them belongs to a different generation, just like a car is of a different generation than a horse and carriage.
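To illustrate the checksum idea in a few lines, here is a concept sketch in Python. It is not anything ZFS actually runs; ZFS does this per block inside the filesystem, with fletcher or SHA-256 checksums and automatic repair from redundant copies:

```python
# Concept sketch of end-to-end checksumming as ZFS does it: store a
# checksum with every block, verify on every read, and refuse to return
# silently corrupted data. Illustration only; not ZFS code.

import hashlib

class ChecksummedStore:
    def __init__(self):
        self.blocks = {}  # block_id -> (data, checksum)

    def write(self, block_id, data):
        self.blocks[block_id] = (data, hashlib.sha256(data).digest())

    def read(self, block_id):
        data, stored = self.blocks[block_id]
        if hashlib.sha256(data).digest() != stored:
            # ZFS would try a redundant copy (mirror/RAID-Z) here and
            # rewrite the bad block; without redundancy it reports the error.
            raise IOError(f"checksum mismatch on block {block_id}")
        return data

store = ChecksummedStore()
store.write(1, b"important data")
data, cksum = store.blocks[1]
store.blocks[1] = (b"importent data", cksum)  # simulate silent bit rot
store.read(1)  # raises IOError instead of returning corrupt data
```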

If you would like to know more about ZFS, you can check the ZFS fileserver thread on HardOCP forums over here.

What exactly is stopping manufacturers from producing higher-end SSDs using PCIe? OCZ seems to be the only one pushing PCIe SSDs. I'll gladly pay $10-20 extra for a PCIe SSD over a SATA-based SSD. I guess this would be a problem for people who have only one or two PCIe x16 slots; for this to really take off, motherboard manufacturers will need to add PCIe x4 or x8 slots along with the needed chipset support.
There are no third-party native PCI-express-to-NAND controllers available, so if you want a PCIe SSD you have to use 'weird' components. There will be no TRIM support in Windows; performance could be low due to less read-ahead on Windows; low-level optimizations may be needed. FakeRAID controllers also have terrible drivers. So there simply isn't a good controller chip for PCIe SSDs yet.


The Revodrive uses a Silicon Image FakeRAID controller, and those have Windows-only drivers to make it bootable.

Specifically, it uses a Silicon Image SiI-3124 SATA/300-to-PCI-X controller, and a PCI-X-to-PCIe bridge chip makes it possible to use the PCI-express interface instead of PCI-X.

Under Windows, you have to use the proprietary, sucky Silicon Image drivers, with weak performance, no TRIM, and bad stability.

Under Linux/BSD, you can use this SSD as a normal SATA controller with two SSDs connected, i.e. you see one SSD for each SandForce controller. The Silicon Image chip is just a normal SATA controller for anything outside of Windows; we aren't using their sucky drivers but the BSD drivers instead. The advantage? We get TRIM capability on both SSDs behind the Revodrive, and we can use advanced software RAID to RAID0 both SandForce controllers on the Revodrive, with better speeds than the sucky Windows FakeRAID drivers will give you.

So for Windows, I wouldn't look at the Revodrive, because you can only use it with the crappy Silicon Image drivers. But under other OSes this SSD can be a formidable choice, since it gives you TRIM, up to 1000MB/s of bandwidth, and excellent SandForce NAND controllers.

But I'm kind of waiting for a newer Revodrive with SF-2000 and a SATA 6Gbps controller chip, potentially from Silicon Image. There's no reason why you can't exceed 2GB/s with a cheap but efficient NAND design.
 
Solution

elivance

Distinguished
Jul 9, 2010
84
0
18,630
I don't quite get it. Does this mean all of the "next-gen" drives are going to be SATA II? If that's the case, I'm buying a Crucial C300 right now. I was unhappy to hear the next-gen drives were delayed again, but I thought it would be worth the wait because I expected to see much faster speeds. These look like more of the same. Still 250 MB/s read? Why so crappy?
 

sub mesa

Distinguished
elivance: sequential speeds are not the main reason SSDs are so much faster than HDDs; in the best circumstances they are only rated at about twice the speed of HDDs.

Random I/O, measured in IOPS, shows a much bigger difference between SSDs and HDDs. SSDs have high IOPS not only due to low latencies, but also due to having multiple channels for parallel I/O, while your HDD can only do one thing at a time.

While sequential reads have reached SATA/300 levels, IOPS have not. To give you an idea of the numbers:

Random read, HDD: 0.5 MB/s
Random write, HDD: 1.0 MB/s

Random read, Intel G2: 200 MB/s
Random write, Intel G2: 70 MB/s

Random read, Intel G3: 250 MB/s (?)
Random write, Intel G3: 160 MB/s (rated 40,000 IOPS)

So while sequential reads may have reached the limits of SATA/300, random writes in particular are not there yet.
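For reference, the MB/s and IOPS figures are two views of the same thing, assuming the usual 4KiB random transfer size (the HDD IOPS figure below is an assumed typical value, not a measurement):

```python
# Convert IOPS to throughput, assuming 4KiB random transfers (the usual
# benchmark size). The 40,000 IOPS rating works out to the ~160MB/s
# random-write figure above; a ~100 IOPS HDD manages well under 1MB/s.

def iops_to_mbps(iops, io_size_bytes=4096):
    return iops * io_size_bytes / 1e6   # bytes/s -> MB/s (decimal)

print(iops_to_mbps(40_000))   # 163.84 MB/s - Intel G3 random write
print(iops_to_mbps(100))      # 0.41 MB/s  - a typical HDD
```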

I'm not sure, but there was some discussion about SATA 6Gbps support being integrated already but shipped disabled for now; though perhaps that is just wishful thinking. But even if you could read sequentially at 500MB/s, you have no other storage capable of writing that fast, so it wouldn't be all that useful.
 

elivance

Distinguished
Jul 9, 2010
84
0
18,630


But the C300 also has:

READ: 50,000 IOPS
WRITE: 30,000 IOPS

Will this new drive even be any better? It seems to me it has worse sequential read speeds, the same read IOPS, and slightly better write speeds, which aren't really important.
 
IOPS is not as useful a measure for SSDs in desktop computers as it is for servers that have to respond to requests from many clients in parallel. For desktop computers the single most important measure of performance is access time, and that's typically best shown by benchmarks of small random I/Os (i.e., 4K random I/O).

sub mesa is right to point out that SSDs are only 2-3 times faster than a hard drive in terms of sequential I/O, but they're on the order of 100X faster in terms of access time. That's by far the most important difference.

For a typical OS drive, an SSD that can transfer data faster than 300MByte/sec won't perform as well as one that transfers at 200MByte/sec if the latter has an access time that's 10 to 20% faster.
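A crude model of why, with hypothetical drive specs chosen only to match the 10-20% figure above:

```python
# Crude model: total time for N small random reads is dominated by
# access time, not throughput. The two drives below are hypothetical.

def workload_seconds(n_ios, io_bytes, access_ms, mb_per_s):
    seek_time = n_ios * access_ms / 1000
    transfer_time = n_ios * io_bytes / (mb_per_s * 1e6)
    return seek_time + transfer_time

# 10,000 random 4KiB reads, roughly an OS-boot-like access pattern:
print(workload_seconds(10_000, 4096, access_ms=0.10, mb_per_s=300))  # ~1.14 s
print(workload_seconds(10_000, 4096, access_ms=0.08, mb_per_s=200))  # ~1.00 s
# The 200MB/s drive with the 20% faster access time finishes first.
```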
 

elivance

Distinguished
Jul 9, 2010
84
0
18,630


Those numbers I posted were 4K access figures. (It seems like everywhere I've read IOPS specs, that's what they use.)

My question is: is there anything to indicate this drive will perform any better than the C300? Are any "next-gen" drives going to be significantly better? (i.e. enough that it's worth the >4-month wait versus buying now).
 
ZFS is more or less a filesystem aimed at data security; anything you store on ZFS you intend to keep, and you want protection against corruption.
Ahh... that makes sense. Yeah, I use ext4 on my Fedora setups. I have been playing with OpenSolaris Live and so far it's been OK. What kind of a performance hit (if any) are we talking about between ext4 and ZFS?

Under Linux/BSD, you can use this SSD as a normal SATA controller with two SSDs connected, i.e. you see one SSD for each SandForce controller. The Silicon Image chip is just a normal SATA controller for anything outside of Windows; we aren't using their sucky drivers but the BSD drivers instead. The advantage? We get TRIM capability on both SSDs behind the Revodrive, and we can use advanced software RAID to RAID0 both SandForce controllers on the Revodrive, with better speeds than the sucky Windows FakeRAID drivers will give you.
Do you know anyone who's run the Revo on Linux? I may be able to get a Revo soon, as the local Microcenter seems to have quite a few Revos go on open-box sale from time to time (I'm guessing because people couldn't get them to work). And yes, I'd probably be using Fedora (most likely)/openSUSE/OpenSolaris/Gentoo (highly unlikely) if I get a Revo. It would be used as a dev setup to test new builds, etc.

Off topic: do you think OpenSolaris development will slow down since Sun got bought out by Oracle?

Are any "next-gen" drives going to be significantly better? (ie. enough that its worth the >4 month wait versus buying now).
I'd say yes; not mainly from the performance perspective, but rather from price per GB and price vs. performance. Going 25nm should allow the G3 to be cheaper than the current X25-M G2. I think you may be able to get a 160GB G3 for about $250-270.

IMO, Intel seems to be aiming for price rather than performance with the G3.
 

elivance

Distinguished
Jul 9, 2010
84
0
18,630
Well, my concern is much more about performance than price. I'd rather get a 128GB drive that is faster than a 300GB drive at the same price. I wonder if other companies will follow suit and focus on price rather than performance?

I am thinking I will just buy a C300 now if that is the fastest drive we are going to get for a while.