
Using an SSD under Linux

August 3, 2012 9:47:22 PM

Hey Guys,

I am building a new desktop, and since I'm going to use it mostly for software development (and MySQL work), I was wondering whether I made the right choice with this HDD as far as performance goes, since it is a 5900 RPM drive.

I was hoping that if I got this 120 GB SSD, it would improve my performance.

(not a bad deal, either)

I also have 16 GB of RAM in the system.

Please let me know whether I would see a noticeable difference if I did get the SSD for my OS (Ubuntu/Fedora).


If you want to suggest any other SSD, that works too

Thank you.


August 4, 2012 10:47:11 PM

I don't think the 5900 RPM HDD will make much of a performance difference versus a 7200 RPM version, in particular in combination with an SSD, as long as you split the usage between the SSD and HDD in a way that lets you utilize the speed of the SSD.

The SSD gives a tremendous performance boost when loading applications. If you have enough spare RAM, you could easily use part of it as a RAM disk for compilation work or any other temporary data storage.
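A minimal sketch of that RAM-disk idea as an /etc/fstab entry (the mount point and the 8G size are assumptions; pick what fits your workload and free RAM):

```
# /etc/fstab — RAM-backed scratch space for builds and temp data
tmpfs  /mnt/ramdisk  tmpfs  rw,size=8G,nosuid,nodev  0  0
```

After `mkdir -p /mnt/ramdisk` and `mount /mnt/ramdisk`, anything written there lives in RAM and disappears on reboot, so use it only for data you can afford to lose.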

I wouldn't use the SSD for data storage with lots of writes, as today's SSDs give you only about 3000 writes per memory cell. If you do plan to use the SSD for heavy write access, you should leave plenty of SSD space spare.

At the price of your SSD, you could also consider a second SSD later on if you see the first one filling up and you do have many writes. Reading doesn't have an impact on SSD lifetime, so a read-only file system would last a very long time.
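To put the 3000-cycle figure in perspective, here is a back-of-envelope endurance estimate. The daily write volume is an assumed number, and this ignores write amplification, so treat it as an illustration only:

```shell
capacity_gb=120      # drive capacity
pe_cycles=3000       # write (program/erase) cycles per cell, as cited above
writes_gb_day=20     # assumed average daily write volume

total_writes_gb=$(( capacity_gb * pe_cycles ))   # total writable data before wear-out
days=$(( total_writes_gb / writes_gb_day ))      # lifetime at the assumed write rate

echo "~$(( total_writes_gb / 1024 )) TB of total writes, ~$(( days / 365 )) years at ${writes_gb_day} GB/day"
```

Even with these rough numbers, ordinary desktop write loads are unlikely to wear out the drive; it's sustained heavy writes (databases, logs, swap) that eat into the budget.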

I assume you are going to run Linux, so pay attention to the partition scheme. I'm using a 120 GB SSD with Linux Mint on it and have the swap space on the HDD. In my case I have 32 GB of memory, which gives me plenty of room for putting tmpfs and other temporary file systems in RAM (for example /var/log, though you may want that on a permanent file system so you can check log files after a reboot).

If you're on Linux, I can also recommend using LVM2 (logical volume management).

Disk space is cheap these days, and if you really need more performance from a HDD you can add another one and run them in RAID for a performance improvement, or use LVM in striped mode for a similar gain, or even combine LVM and RAID.
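A sketch of the striped-LVM idea (the device names /dev/sdb1 and /dev/sdc1, the volume sizes, and the stripe settings are all assumptions; every command needs root and will destroy existing data on those partitions):

```shell
pvcreate /dev/sdb1 /dev/sdc1           # mark both HDD partitions as physical volumes
vgcreate vg_data /dev/sdb1 /dev/sdc1   # pool them into one volume group
lvcreate -n lv_data -L 500G -i 2 -I 64 vg_data  # -i 2 stripes across both PVs, RAID-0-like
mkfs.ext4 /dev/vg_data/lv_data
```

Like RAID 0, striping doubles the failure surface: losing either disk loses the volume, so keep backups of anything that lives there.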

I have 5 or 6 HDDs in my old PC, from WD Black 250 GB 7200 RPM drives to WD20EARX drives holding 2 TB on 5600 (or 5900?) RPM platters. In this case the "slower" disks aren't any slower than the faster (7200 RPM) ones, because of the higher data density of the larger disks.

I would refrain from getting the latest 3 TB disks though, having read plenty of user reports about their troubles.

As for MySQL and database usage, I'm not an expert, but I believe a RAID setup would improve data write and read speed, if a single drive is not sufficient on your system.

Make sure you adjust the /etc/fstab file for your SSD drive/partitions: enable TRIM support on ext4 partitions (using the "discard" mount option) and add "noatime" to reduce the writes to your SSD.
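For example, an fstab line for an ext4 root on the SSD might look like this (the UUID is a placeholder for your actual partition):

```
# /etc/fstab — ext4 root partition on the SSD
UUID=<your-ssd-uuid>  /  ext4  defaults,noatime,discard  0  1
```

"noatime" stops the kernel from writing an access timestamp on every file read, and "discard" tells ext4 to issue TRIM commands as blocks are freed.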

Good luck!
August 4, 2012 11:39:06 PM

Thank you for your reply, powerhouse32.
Yeah, I was reading this article on how to set up Linux using a combination of an SSD and a mechanical HDD.

Yeah, I will probably put /tmp, /etc, and swap on my HDD, as the temp files are written to frequently. I also plan on putting /home on the HDD and leaving the rest on my SSD.

In about two weeks I might add another 16 gigs to the memory, so I should be fine, I suppose.

As far as MySQL is concerned, it does involve a lot of writes, so I might have to move that to the HDD. So yeah, a RAID would be good.
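For reference, moving the MySQL data directory onto the HDD usually amounts to stopping the server, moving the directory, and updating the config; a sketch, where the mount point /mnt/hdd, the config path, and the Ubuntu-style service name are assumptions:

```shell
sudo service mysql stop
sudo mv /var/lib/mysql /mnt/hdd/mysql
# then point datadir at the new location in /etc/mysql/my.cnf
# (the config path varies by distro):
#   [mysqld]
#   datadir = /mnt/hdd/mysql
sudo service mysql start
```

On distros with AppArmor or SELinux, the new path also needs to be allowed in the relevant policy, or the server will refuse to start.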
August 4, 2012 11:41:11 PM

One more thing: for now I just want to run the system with a 120 GB SSD, a 2 TB HDD, and 32 gigs of memory.

If need be, can I add the RAID setup later on?

Or is it wise to have it from the very beginning?
August 5, 2012 5:07:52 PM

Insomniac0des said:
...Yeah, I will probably put /tmp, /etc, and swap on my HDD, as the temp files are written to frequently. I also plan on putting /home on the HDD and leaving the rest on my SSD.

In about two weeks I might add another 16 gigs to the memory, so I should be fine, I suppose.

As far as MySQL is concerned, it does involve a lot of writes, so I might have to move that to the HDD. So yeah, a RAID would be good.

Why put /etc on the HDD? I would keep / and /boot on the SSD, and /home wherever you prefer (I have it on the SSD, too).

I'm not sure you can add another 16 GB of RAM to an existing memory kit; they are usually matched. It might work, or it might not.
August 5, 2012 11:45:00 PM

I read somewhere that it's better not to have /etc and /tmp on the SSD, but to have / and /boot on the SSD, with /home on the HDD. One article also suggested moving the browser cache files to the HDD, as the cache gets a lot of writes.
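The usual trick for relocating a cache is to move the directory and leave a symlink behind. Here is a self-contained demonstration using temporary directories standing in for the real paths (in practice you would use something like ~/.cache and an HDD mount point such as /mnt/hdd, which are assumptions here):

```shell
ssd_home=$(mktemp -d)    # stands in for the SSD-backed home directory
hdd=$(mktemp -d)         # stands in for the HDD mount point

mkdir "$ssd_home/.cache"
echo "cached data" > "$ssd_home/.cache/entry"

mv "$ssd_home/.cache" "$hdd/cache"       # move the cache onto the HDD
ln -s "$hdd/cache" "$ssd_home/.cache"    # leave a symlink behind on the SSD

cat "$ssd_home/.cache/entry"             # still readable through the old path
```

Do the move while the browser is closed, since an open browser may hold the cache files locked or recreate the directory.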

Arch Linux has an entire page of information on SSD optimizations.

I don't think adding memory is a problem; it should actually work as long as the CAS latency, the timings, and the speed are the same.

It helps if you get the exact same model of memory sticks as the ones currently in your system.

Here's a discussion link, in case you're interested.

August 7, 2012 11:02:44 PM

Regarding the memory upgrade, it depends on what system/chipset you have and of course on the memory itself. I've got no experience with Sandybridge (e.g. the i5 or i7 2xxx and 38xx series), but with Sandybridge-E (the i7 3930K, for example) on the X79 chipset you could be facing some surprises. Memory kits for those systems (and even the regular Sandybridge systems) are matched, that is, the DIMMs are selected together to meet the performance requirements. Even that can't guarantee a working memory kit; it also depends on the motherboard and the motherboard BIOS revision.

I've had good experience with Kingston memory, in my case a 32 GB quad-channel DDR3-1600 kit. The quad-channel kits are only relevant to X79 boards, where the memory controller is on the i7 39xx CPU.

Mixing DIMMs from different vendors is certainly not advised, but even two 16 GB kits from the same vendor may or may not work together. And the more memory you have, the bigger the chance of memory errors, unless you invest in a server platform, but that will cost you.

Unfortunately it's not only about CAS latency and timings, but also about voltages such as Vcore, VCCSA, and CPU VTT, and secondary timings such as DRAM REF Cycle Time (tRFC), plus three or four dozen other adjustments one can tweak. Some memory kits play better with certain motherboards than others. In the ideal case, all you have to do is enable XMP and it works, but that is the ideal case. Some settings can also damage the CPU in the long run, for instance high VCCSA and Vcore voltages. Lots of memory can also put a high load on your motherboard's voltage regulators, and you may have to consider that as well (extra cooling, etc.).
Motherboard manufacturers are also quick to advertise support for lots of memory, but in practice their board design or BIOS software may not be up to the task. Running 8 GB or 16 GB is one thing; running 32 GB or even 64 GB is a totally different story.
As I said before, I had a hard time getting 32 GB to work. I'm now running the Kingston 8x4 GB memory kit, which worked out of the box. But please check specifically for your motherboard and CPU, as things may be totally different there.
August 8, 2012 9:57:36 PM

I just read the thread you pointed to; sorry I missed it before. I also re-read your original post and understood that the SSD was optional, but I gather you are planning to get it.

Installing 16 GB should be OK in most cases, depending on the motherboard, BIOS, and RAM, but going above that could be trickier. In any case, I would stick with a complete kit (e.g. a 16 GB kit or a 32 GB kit) and not try to match two (or more) kits. It may work, but there is no guarantee.