Santimun

Distinguished
Hello everyone!
I've been a fan of Tom's Hardware for many years but never thought much about joining the forums until now.

So with that aside, I have a question that hopefully some of you will be able to educate me on.
I recently purchased a 1TB hdd (WD Caviar Green) and while I was waiting for it to be shipped I read an interesting article on hdd partitioning (http://partition.radified.com/).

Basically, what the link explained was that you want to partition large drives and put your most frequently used data on the first partition, since it's located on the outer half of the disk, as opposed to the second partition, which would be on the inner portion of the disk. At least that's what I came to understand.

So, with this info I have my hdds partitioned as follows:

On the new 1TB HDD, the first partition was set to 8GB, which I use for the Windows paging (swap) file.
Following that are 3 more partitions, each barely over 300GB: the first for multimedia files like movies and music, the second for storage (downloads), and the third for whatever else.

I also have an additional 300GB HDD partitioned into two halves: the first half for games, the second half for non-priority programs such as web browsers, media players, DVD/CD burners, etc.

My OS resides on a separate drive: a 75GB Raptor partitioned into two halves, the first being where the OS is installed and the second holding an "image" of the first partition for backup.

So my question to you all is: am I doing this intelligently, or am I going about this whole partitioning setup all wrong? If so, any recommendations?

Thank you for reading this long first post :hello:
 
sub mesa

Distinguished
While your intentions are good, I think you're making things more complicated for yourself. The swap file on the first partition? What if the swap file just exists but is never consulted, as on any modern system? Then what you did was just a waste; you could even delete the pagefile and continue without one. Swapping = for old systems with too little RAM. Virtual memory design = something other than swapping. Swapping is actually switching to the pagefile because there is a shortage of RAM. You shouldn't ever be in that position, because once you begin to swap, any performance a mechanical disk can offer is so slow your computer will feel like a pre-Pentium age PC.

It's true, though, that the first parts of the HDD are faster. However, by separating data with partitions, it may be argued that you can also lower performance, because the head is forced to seek a lot between the beginning of the disk and the middle/end; the partitions force the drive to seek that far, while with just one big partition the seeks would tend to be shorter.
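
If you want to see the outer-vs-inner difference on your own drive, here's a rough sketch of how you could measure it. It assumes Python on a Unix-like system and a raw device path like /dev/sdb (just a placeholder - point it at a drive that isn't mounted, and you'll need root to read it):

Code:
import os
import time

DEV = "/dev/sdb"            # placeholder: an idle, unmounted drive
CHUNK = 1024 * 1024         # read in 1 MiB chunks
TOTAL = 256 * 1024 * 1024   # read 256 MiB per zone

def throughput_at(offset):
    """Sequentially read TOTAL bytes starting at `offset`, return MB/s."""
    fd = os.open(DEV, os.O_RDONLY)
    try:
        os.lseek(fd, offset, os.SEEK_SET)
        start = time.perf_counter()
        done = 0
        while done < TOTAL:
            data = os.read(fd, CHUNK)
            if not data:
                break
            done += len(data)
        return done / (time.perf_counter() - start) / 1e6
    finally:
        os.close(fd)

fd = os.open(DEV, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)   # seeking to the end gives the device size
os.close(fd)

# Note: repeat runs may be served from the OS cache; drop caches or reboot first.
print("outer zone (start of disk): %.1f MB/s" % throughput_at(0))
print("inner zone (end of disk):   %.1f MB/s" % throughput_at(size - TOTAL))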

Generally do what makes sense, just don't overdo it. :)
And remember, any HDD is slow compared to an SSD in terms of IOps performance and latencies. HDDs are like the tape recorders used in the past to store data: ancient mechanical technology.
 

Santimun

Distinguished
Really? I figured the best way to pep up Windows XP would be to install it on the Raptor HDD, on the partition that resides on the outer area of the platter, and to have the page file located on a separate, partitioned HDD, also on the outermost part of the platter.

This doesn't = an increase in OS performance?
 

sub mesa

Distinguished
The performance of the page file is irrelevant if it's not being read from or written to. Do you agree? Do you also agree modern systems have no RAM shortage, and thus the pagefile won't be used? The only real use for the page file these days is locking up your computer: if a program has a bug and is using up more and more memory as fast as it can, it will soon consume paged memory. This will make your system totally unresponsive, and it could have been prevented by disabling the page file.

Paging = for old systems. No modern system should EVER have to page anything. It's from a time when systems didn't have enough RAM and RAM was very expensive.
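
If you want to know whether your pagefile is actually being touched, you can just ask the OS. A minimal sketch, assuming Python with the psutil package installed (psutil reports the Windows pagefile under "swap"):

Code:
import psutil

# Physical RAM and pagefile ("swap") usage as reported by the OS
ram = psutil.virtual_memory()
swap = psutil.swap_memory()

print("RAM used:      %.1f%% of %.1f GB" % (ram.percent, ram.total / 1e9))
print("Pagefile used: %.1f%% of %.1f GB" % (swap.percent, swap.total / 1e9))

if swap.percent < 1.0:
    print("The pagefile is basically idle - where it lives won't matter.")

If that second number stays near zero during normal use, the fast partition you dedicated to it is wasted.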

So the pagefile dies here. What's left is separating the operating system from data. Generally I recommend a setup where the operating system is on the first partition (fastest) and all user-generated files are on a different partition. If you ever want to re-install, this also eases the procedure, as you don't have to delete your personal files; they'll just be on the D: partition while Windows is on C:.

If you want a fast PC with fast I/O, an SSD is your only option. If an SSD is not an option due to price/capacity, HDDs have to be used, unfortunately. The VelociRaptor is a reasonably fast HDD; the older Raptors are less fast in realistic benchmarks, because the benefit of the newer drives' increased data density outweighs the advantage in seek times the 10k Raptors have. So it's either the new VelociRaptor, or generic 7200/5400rpm disks.

Personally, I would try to have an SSD as the system drive, and focus on large 5400rpm disks for data storage. That solves most anyone's performance problems.
 

Santimun

Distinguished
Wow, thanks for the great info! I agree with everything you said, I just wasn't sure if my paging file was being used or not.

I used to think the same thing (disable the paging file) until I brought it up to some IT guy at my last job and he was very stern when he said that I should ALWAYS have my paging file enabled. So I just took his word for it based on his experience.

I think I will look into SSDs and find one for Win7 when it comes out.

Thanks for clearing this up for me!
 

sub mesa

Distinguished
Well, the paging file is a little more complex than I described. But generally people can't separate two things: virtual memory design and paging. Virtual memory has to do with presenting more memory than is actually present as RAM. The page file is used as a 'backup': if the space is actually needed, it can be provided by using the pagefile as if it were RAM, which is why it should be bigger than the RAM. However, this design is short-sighted, and IMO swapping should be completely disabled. If a program uses more than 4GB of memory, it has malfunctioned and should be KILLED (which is what happens if you really run out of memory). Of course, this can be somewhat different for special server systems with a very large active data set; they might still require swapping, but that is very specific to the purpose.

In general, no consumer system should ever be swapping. Swapping is bad; it kills all performance. If your computer is swapping, it's downgraded to pre-Pentium PC speeds. Disabling the page file works quite well as long as you have enough RAM; 4GB or more should do. Note that most applications cannot use more than 2GB of memory because they are 32-bit. Only 64-bit applications, or 32-bit ones linked with the /LARGEADDRESSAWARE flag, can use more than 2GB of memory; that applies to 32-bit applications on 64-bit Windows as well. So generally there are no games or consumer apps which use more than 2GB of RAM by themselves.
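
If you're curious whether a particular program was built large-address-aware, you can check its PE header yourself. A minimal sketch with just the Python standard library (the path is hypothetical - point it at any .exe):

Code:
import struct

EXE = r"C:\Games\example\game.exe"   # hypothetical path
IMAGE_FILE_LARGE_ADDRESS_AWARE = 0x0020

with open(EXE, "rb") as f:
    f.seek(0x3C)                                  # e_lfanew: offset of the PE signature
    pe_offset = struct.unpack("<I", f.read(4))[0]
    f.seek(pe_offset)
    assert f.read(4) == b"PE\0\0", "not a PE executable"
    f.seek(pe_offset + 22)                        # Characteristics field of the COFF header
    characteristics = struct.unpack("<H", f.read(2))[0]

if characteristics & IMAGE_FILE_LARGE_ADDRESS_AWARE:
    print("large-address-aware: can use more than 2GB of address space")
else:
    print("not large-address-aware: capped at 2GB")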

If you want to buy a good SSD, I recommend the Intel X25-M 80GB G2. Its price is only a little above that of the Vertex, while providing performance and longevity benefits over the Vertex. The Intel drive is clearly superior in almost everything except sequential write, and will be a good system disk for many years to come.

So a good SSD might outlast your other PC equipment; it should be useful for at least 10 years.
 

Santimun

Distinguished
Hmm, some quick research shows that I can connect an SSD either via standard SATA or IDE, but also on a PCI-E bus?

That's interesting, and it looks like the advantages of connecting it via PCI-E are higher throughput and lower latency compared to SSDs connected via traditional hard-disk-style interfaces.

Now, the model you recommended to me connects via SATA, so does that mean you personally feel that connection is the better choice over the PCI-E bus?
 

sub mesa

Distinguished
PCIe is the supreme choice, but it's mostly found on server-oriented SSD products with extremely high performance, up to a million IOps where the fastest hard drive can only do about 200 IOps. So PCIe is superior, as it provides lower latency and higher throughput, and multiple lanes allow parallel I/O, which SSDs are very good at - unlike HDDs, which process everything in serial order.
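
You can even see that parallelism over plain SATA: an SSD keeps scaling as you throw more outstanding requests at it, while an HDD barely moves. A rough sketch of the idea, assuming Python on a Unix-like system (the device path and sizes are placeholders, and the OS cache will flatter repeat runs):

Code:
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

DEV = "/dev/sdb"            # placeholder: a raw device or any large file
SPAN = 8 * 1024 ** 3        # confine reads to the first 8 GiB
BLOCK = 4096
REQUESTS = 2000

fd = os.open(DEV, os.O_RDONLY)

def one_read(_):
    # pread lets many threads issue independent reads on the same descriptor
    offset = random.randrange(0, SPAN // BLOCK) * BLOCK
    os.pread(fd, BLOCK, offset)

for depth in (1, 4, 16, 32):
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=depth) as pool:
        list(pool.map(one_read, range(REQUESTS)))
    iops = REQUESTS / (time.perf_counter() - start)
    print("queue depth %2d: ~%6.0f IOps" % (depth, iops))

os.close(fd)

On an HDD the numbers barely change with depth; on a decent SSD they climb steeply.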

But because of their intended customers (the server industry) these products are expensive; the Fusion-io ioDrive, for example, is beyond $1000, which puts it outside of consumer pricing. The Intel SSDs are also targeted towards high-end workstation users.

Aside from the interface, the most important thing is the storage medium itself. Hard disks have high latencies whereas the SSD has very low latencies. Currently there's a very big performance gap between CPU+memory and hard drive storage. The SSD can make that gap much smaller, with the result that the entire machine runs better and delivers a superior performance experience.

I do hope and think PCIe SSDs will become more popular, but probably a new standardised command set to replace the ATA standard is required to push this in the right direction. As SSD storage is different from HDD storage, the language in which we communicate with the storage device was suited to HDDs but less so to SSDs.

So even though the SSD market is young, it's already too exciting to leave for the future. You can benefit from SSD performance now, along with all the benefits in reliability, noise, resistance to temperature and vibration, and power consumption.
 

Santimun

Distinguished
Okay that makes sense and thank you for your patience with my lack of knowledge in this area. But I would appreciate your thoughts on this:

A quick look at TigerDirect shows that I can purchase a 64GB PCI-E SSD for $169:
http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=4505218&csid=ITD&body=MAIN#detailspecs

While, surprisingly, a 64GB SATA II SSD goes for only a little more:
http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=4665161&CatId=4215

Am I looking at the wrong type of PCI-E SSD? I see that it mentions "for Eee PCs", but isn't that irrelevant since the bus is PCI-E?
 

sub mesa

Distinguished
Thanks for the link, I haven't seen products with a PCIe interface priced like that yet. They look small enough to be cheap - though the NAND flash shortage still keeps prices high.

However, if you want to separate a good SSD from a bad SSD, all you have to do is look at what controller is being used. Intel made their own controller by looking closely at what their competitors were doing wrong, and found a way to overcome the problem of high latencies on small writes with MLC memory. SLC memory doesn't have this problem, so making an SLC controller is much easier.

To get good performance out of MLC memory, the controller needs to be smart and have its own DRAM cache chip, like normal HDDs have as well. Typically, there are two kinds of SSDs:

- those with light controllers and no DRAM chip (low performance)
- those with smart controllers and with a DRAM chip (high performance)

Currently, there are three popular modern MLC controller chips out there: Intel's, which is used on their own products; Indilinx's, which you can find on many good SSDs like the OCZ Vertex; and the Samsung controller, which is OK and used by many brands.

The Intel controller is superior to the other two in virtually every aspect. Given the marginal additional cost of the Intel SSD, it is the way to go for consumers at the moment. Be sure to update the firmware if the factory-shipped firmware is too old, since Intel has made some good progress in performance by tuning the firmware.

The PCIe product you mentioned falls into the first category; this can easily be seen from the sequential read+write speeds, as without using multiple channels (internal "RAID0") an SSD won't be that fast. The cool thing about flash storage is that the chips can all work independently, each on a different I/O request. Advanced controllers like the Intel one feature multiple channels (10, in Intel's case) to use the flash memory chips in parallel. This boosts performance significantly and is a must-have. So I wouldn't go for that light SSD, though I would like to see such products with a real controller and DRAM onboard; that would make them more expensive, though.
 

sub mesa

Distinguished
If you don't know the difference between MLC flash and SLC flash: this refers to the type of NAND memory being used. SLC has the benefit of longer life (more write cycles) and doesn't have certain performance problems the MLC memory has.

MLC is cheaper because it can store double or triple the data: each cell stores multiple bits, which adds latency and some performance problems. But the advantage is the significantly lower cost per GB, since MLC offers two or three times the storage of a comparable SLC chip.
 

Santimun

Distinguished
Very interesting! Problem is, the specs for these SSDs don't really say what controller they have, which is kinda annoying. My only other option would be to google the model # and look for the product manual, I guess.

It also seems that some people think SSDs are not cost-effective compared to a RAID setup with HDDs, so all in all I'm still somewhat confused as to which would be the better investment, but I'm definitely a lot more informed than before.

My other concern is that there are a lot of warnings about how defrag and Windows prefetching don't seem to play well with SSDs, so I'll need to do a bit more research.

Hopefully by then, SSDs with smart controllers and DRAM will be easier to distinguish from the low-performance models AND be more affordable!
 

sub mesa

Distinguished
It shouldn't be too difficult to see what kind of controller is serving a drive if you perform some specific benchmarks, such as these:

[Attached chart 19507.png: read latency benchmark]


Read latency is the result of the latency (speed) of the flash memory chips plus additional latency caused by the controller. The Samsung controller is clearly slower and hampers performance; it may also dislike random I/O and be optimized for sequential I/O, because the company knows they can only sell their products on MB/s arguments since their IOps performance sucks.

[Attached chart 19508.png: random write benchmark]


Oh yes - this is the ultimate torture test for SSDs and HDDs alike. HDDs don't care much whether you're reading or writing to the surface medium, but SSDs are different: if the controller doesn't do something smart, small random writes require many read-erase-program cycles, which take a very long time.

We can clearly see the Samsung controller doesn't do anything smart here; it's OK for reading, but since this is a random write test it totally fails. The VelociRaptor scores twice what the Samsung does, though any HDD will be weak under heavy random I/O.
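
If you want to reproduce a rough version of this test at home, something like the sketch below will do. It's Python standard library only; test.bin and the sizes are placeholders, and without O_DIRECT the OS cache will make the read numbers look better than the disk really is:

Code:
import os
import random
import time

PATH = "test.bin"            # placeholder: put it on the drive you want to test
SIZE = 1024 * 1024 * 1024    # 1 GiB test file
BLOCK = 4096
SAMPLES = 2000

# Fill the test file once so reads actually hit allocated data
if not os.path.exists(PATH) or os.path.getsize(PATH) < SIZE:
    chunk = os.urandom(1024 * 1024)
    with open(PATH, "wb") as f:
        for _ in range(SIZE // len(chunk)):
            f.write(chunk)

def random_io(write):
    """Average latency in milliseconds for random 4K reads or writes."""
    fd = os.open(PATH, os.O_RDWR)
    buf = os.urandom(BLOCK)
    start = time.perf_counter()
    for _ in range(SAMPLES):
        offset = random.randrange(0, SIZE // BLOCK) * BLOCK
        os.lseek(fd, offset, os.SEEK_SET)
        if write:
            os.write(fd, buf)
            os.fsync(fd)     # force the write to the medium, not just the cache
        else:
            os.read(fd, BLOCK)
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed / SAMPLES * 1000

print("random 4K read : %.2f ms average" % random_io(write=False))
print("random 4K write: %.2f ms average" % random_io(write=True))

A drive with a light controller will show its true colours in the write number.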

And yet... this is where Intel has a marketing problem:

[Attached chart 19505.png: sequential transfer rate benchmark]


Intel could easily raise the sequential speeds to the likes of 150MB/s+, and I guess a next generation will, to give them a selling point. Fact is, many people look at this chart and conclude the Intel SSD is not superior and that the SSD with the Samsung controller looks very good here - yet it is far from the best SSD.

What makes the Intel controllers so special is that they do wear leveling very well while keeping both read and write latencies very low. The drive doesn't have any significant weakness aside from maybe the mediocre sequential write speed, but that is fast enough for almost all intents and purposes. What really matters is IOps.
 