I am prepping a new box for gaming and work. I deal with a lot of VMs (up to 8 at a time) and need a RAID solution.
I am currently running a pair of Raptors in a stripe and was wondering how that compares to a 6-drive, 7200 RPM array (I would prefer RAID 1+0 as it puts less wear and tear on the disks than RAID 5), and how both stack up against a single 10k Raptor and a pair of 10k Raptors in a stripe.
Two big sticking points: first, the number of IO operations, since with 8 VMs running there can be a sick amount of IO going on with nothing really happening. Second, as I am a broke SOB I can't really pop for a nice RAID controller, so this is strictly an onboard solution (soft RAID, in other words).
Anyone with 6 7200 RPM drives and a pair of 10k Raptors want to do some benchmarking? Big focus on IO and latency, since with soft RAID data has to go through the CPU and back, so IO operations should have a more significant hit on the system than via a controller card...
If the performance hit is too bad then I might be able to win the argument and get a decent raid controller.
Please help, I need hard numbers to win this.
(On a side note, dear TH crew: please do a Virtualization special on running 4-8 concurrent VMs with various configs to find the most optimal setup for hosting VMs; compare Opterons and Xeons along with desktop processors and see if the server CPUs afford better performance in VM hosting.)
I do enterprise test environment management along with metric analysis and deployment simulation.
The 8 VMs:
1 Squid proxy server running a form of embedded Linux that gets migrated to an SSD (cuthulu1)
1 Red Hat VM called ClarkKent, a reports server + Cacti monitor (ckent)
2 clustered MySQL hosting servers (mysql1 & mysql2)
1 statistics processing server running R (cratchet, a reference to Bob Cratchit from "A Christmas Carol"... a number cruncher)
1 Windows 2003 Server running Active Directory (bigdaddy)
1 Windows XP (liljohn)
1 Ubuntu (mandella1)
I test interactions, so the load isn't very high, but I use this framework for unit testing prior to migrating the VM snapshots to an integration hardware environment.
(For those familiar with system development cycles: I migrate a revised snapshot into the integration environment hardware rather than doing a fresh deployment, which is reserved for the acceptance environment.)
A common script I run from the XP VM "leeches" a whole crapload of test data from around the internet (Google News etc.) to get performance data into Squid and Cacti. Then at hourly intervals Clark runs simulated proxy reports and behavioral analysis on those logs, looking for unusual behavior from the XP test machine (which we infect randomly with viruses as needed to test the behavioral models; sometimes the internet does it for us). The data and reports are stored in RRDs and the MySQL servers, one of which we periodically kill to test failover etc. The 2003 server is daddy to the network management infrastructure and AD. The Ubuntu environment is where I do my coding etc...
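For anyone curious what the "leech" step looks like, here's a rough sketch of the idea: pull pages through the Squid proxy so its access logs fill up with realistic traffic for Clark/Cacti to chew on later. The proxy address, port, and seed URLs are all made up for illustration; the real script is site-specific.

```python
# Sketch of the "leech" script: fetch seed URLs through the Squid
# proxy so Squid's access logs accumulate realistic traffic.
# Proxy host/port and URL list are hypothetical.
import urllib.request

PROXY = "http://cuthulu1:3128"  # assumed Squid host:port
SEED_URLS = [
    "http://news.google.com/",
    "http://www.example.com/",
]

# Route all HTTP requests through the proxy.
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY})
)

def leech(urls):
    """Fetch each URL through the proxy; return bytes pulled per URL."""
    sizes = {}
    for url in urls:
        try:
            with opener.open(url, timeout=10) as resp:
                sizes[url] = len(resp.read())
        except OSError:
            sizes[url] = 0  # dead link or proxy hiccup; Squid logs it anyway
    return sizes
```

Loop that over a big enough URL list on a timer and the proxy logs fill up with exactly the kind of data the hourly reports need.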
Then when I am happy with all the setups (XP just gets rolled back each time), I just hit the realtime migrate scripts to move them to the integration environment, where we test dynamic load balancing on raw iron.
Based on seasonal activity of the servers, I dynamically shift VMs from the idle boxes to the THUST boxes just before they are statistically supposed to start getting heavy traffic. Mainframers have it easy, adjusting MIP allocations, but distributed servers aren't so lucky. Just a proof of concept I do in my spare time. I haven't started digging into KVM's capabilities yet (using VMware stuff for this atm) but will be switching everything over to KVM shortly (hence part of the rebuild). When I am not doing this work I just play a lot of Team Fortress 2 and WoW.
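The seasonal-shift logic boils down to something like the sketch below: look up the expected load for the coming window from historical stats and queue a migration before the traffic actually arrives. Host names, the threshold, and the load numbers are all invented for illustration; the actual moves happen via the VMware migrate scripts.

```python
# Toy sketch of seasonal VM shifting: if a VM's forecast load crosses
# a threshold, plan a move from its idle box to a hot box ahead of
# time. Hosts and threshold are hypothetical placeholders.
HEAVY_TRAFFIC_THRESHOLD = 500  # expected requests/min that justifies a hot box

def pick_host(vm, expected_load, idle_host="idle1", hot_host="hot1"):
    """Return the host this VM should run on for the coming window."""
    if expected_load.get(vm, 0) >= HEAVY_TRAFFIC_THRESHOLD:
        return hot_host
    return idle_host

def plan_migrations(placement, expected_load):
    """Compare current placement to the desired one; emit (vm, src, dst) moves."""
    moves = []
    for vm, current in placement.items():
        target = pick_host(vm, expected_load)
        if target != current:
            moves.append((vm, current, target))
    return moves
```

So a forecast of 900 req/min for ckent while it sits on an idle box would produce one planned move onto the hot box, and nothing gets touched otherwise.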
It works surprisingly well, but my current processors do not have any hardware VM support (older AMDs without the VM extensions; I've heard VM support is "better" on Intel).