Virtualization disk usage

David86_1608
Jan 26, 2016
TL;DR - How do I reduce my disk usage from 100% when running VMs on Windows 10 Hyper-V?

I just set up a RAID 10 with four 1TB WD Red 2.5" drives. I was hoping a RAID 10 would be safer than a RAID 0 or 5, but faster than a RAID 1 (which I don't think you can do with four drives anyway). I am running Windows 10 off of two Kingston 120GB SSDs in a RAID 1. My problem is that when I start two or more VMs at the same time on the WD RAID, my disk usage goes from 1-5% to 99-100% and just sits there, while my disk latency climbs and climbs until interacting with my VMs becomes completely unfeasible.
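
For what it's worth, this is roughly how I've been watching the array while the VMs spin up (just standard Perfmon counters pulled with Get-Counter; the disk instance names will be different on another box):

# Watch latency and queue depth on the physical disks while the VMs start.
# "(*)" grabs every disk; narrow it to the RAID 10 volume's instance as needed.
Get-Counter -Counter @(
    '\PhysicalDisk(*)\Avg. Disk sec/Read',
    '\PhysicalDisk(*)\Avg. Disk sec/Write',
    '\PhysicalDisk(*)\Current Disk Queue Length'
) -SampleInterval 2 -MaxSamples 30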

My question is, is my problem that:
1) I have my WD drives in a RAID 10, and it's too slow
2) I do not have enough drives to spread the load over
3) Some other problem like drive type, interface, etc.?

I ran into this before when I was trying to run VMs on 8.1, but that was on a second disk in my ROG laptop (so a bit of a different situation). I was hoping to overcome the problem by creating a faster array out of multiple disks instead of using just one disk, but maybe I didn't raise my ceiling high enough?
 

David86_1608
Jan 26, 2016


Each VM is running Server 2012 R2 Standard with 4GB of memory and 2 virtual processors. That should be plenty for three systems that have nothing but an OS, one of them a DC. It pounds the disk the hardest during startup, but even when it drops below 98%, it doesn't drop by much.
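
Right now I start them all at once from Hyper-V Manager; since startup is when the disk gets pounded hardest, I may switch to starting them one at a time with something like the sketch below (the VM names are placeholders and the delay is a guess):

# Start the lab VMs one at a time so their boot I/O doesn't all land on the RAID 10 at once.
# 'DC01','SRV01','SRV02' stand in for the real VM names.
foreach ($vm in 'DC01','SRV01','SRV02') {
    Start-VM -Name $vm
    Start-Sleep -Seconds 120   # rough guess; long enough for the previous guest to settle
}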
 

David86_1608
Jan 26, 2016


I'll answer the first question because it is easier: because I do not have a 2016 beta, and the Hyper-V in Windows 10 is a later version than the one you get with 2012 R2. Plus, I don't want JUST a virtualization server. I primarily wanted something that could push something like SCII or WoW at a beautiful FPS, and since my video card is a $200 card and not a $2,500 card, I can't virtualize any games if I roll out Server 2012 as the base instead of virtualizing it. Hyper-V is just there so I can spin up enough machines to work on my MCSA/MCSE. I'm just concerned that if spinning up a DC and three other server VMs is going to bludgeon my data RAID, it is not going to get any better once those other three have an actual service running on them (like WSUS or a Forefront server).

The specs of the system are:
Mobo - ASUS Sabertooth 990FX R2.0
CPU - AMD FX-8350 8-core 4GHz
RAM - 16GB of Corsair DDR3 1600 & 8GB of DDR3 1333 (the second set is just because I had it lying around, and the difference between 1333 and 1600 doesn't seem drastic enough for me to notice)
OS is on a RAID 1 of the two Kingston 120GB SSDs, and the DATA volume is on the four WD Red 1TB drives in a RAID 10 (because I have had a RAID 5 fail and lose my data something like 5 times in the last year)

Since all 8 SATA ports on the mobo are already used up by the drives I have plus a DVD reader (and I don't have an additional $175-$1,000 to spend on a PCIe RAID card), I can't really raise my ceiling much higher. I am just trying to figure out whether there is more I can do without adding components, like changing my current configuration.
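
One configuration-only thing I'm considering is the per-VHD IOPS cap that Hyper-V's Storage QoS exposes, assuming it behaves the same on the Windows 10 client Hyper-V as it does on 2012 R2. The VM name, controller location, and the cap value below are just placeholders:

# Cap how hard a single VM can hit the data array (IOPS are normalized to 8KB).
# This targets the first disk on the VM's first SCSI controller; a Gen1 VM's boot
# disk may be on IDE instead, so adjust the controller type/location to match.
Set-VMHardDiskDrive -VMName 'DC01' -ControllerType SCSI -ControllerNumber 0 `
    -ControllerLocation 0 -MaximumIOPS 500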

The VMs stay in a saved state when I switch from studying to gaming (I'm not dumb enough to try to run both at the same time), so that isn't as big an issue. I just don't want the interaction latency to go through the ceiling to the point where there is an unmanageable delay when trying to do anything in the VMs.
 
The disks shouldn't be that busy. Do they perform well when being accessed by Windows 10 itself? My system is different (Intel i7-3770, 32 GB of RAM) and my VMs are on non-RAID disks (backups and two hosts meet my requirements), but my drives aren't that busy with several VMs that do more than yours. Have you tested a VM on a single 7200 RPM drive?
 

David86_1608
Jan 26, 2016


When my VMs aren't running, all of my disks sit at <5% utilization.
When they start, the drive they are running on goes through the ceiling. I would expect that, but once they are up and running it should calm down a bit. That is where my concern is.
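
To figure out which guest is keeping the array pinned after boot, I'm planning to turn on Hyper-V resource metering and read back the per-VM disk numbers; if I'm reading the docs right, the report includes aggregated disk IOPS/latency/bytes fields, something like:

# Turn on resource metering for every VM, let them run for a while, then read the report.
Get-VM | Enable-VMResourceMetering
# ...after the VMs have been running normally for a bit, the Aggregated* disk fields
# should show which VM is the heavy hitter:
Get-VM | Measure-VM | Format-List VMName, Aggregated*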

I haven't tested on a single 7200 RPM drive, because the last time I ran Hyper-V VMs on a single 7200, back on 8.1, the disk was hammered so hard that NOTHING on it was usable. The interface latency was about 10-15 seconds: hit Start, and 10-15 seconds later the Start screen would show up. That is what I was trying to avoid this time around.
 
I can't determine what you're doing wrong, but I never ran into that issue with Hyper-V, VMware ESX or VMware Workstation. Does a newly created VM cause the issue? Does the high disk I/O occur in the VMs as well, or only on the host? If it occurs in the VMs, what is causing it?
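
One quick way to narrow that down: run the same disk counters inside one of the guests while the host shows 100%. A rough sketch (assumes the guest's OS disk is C:):

# Run inside a guest: if the guest's own C: shows low activity while the host's
# RAID 10 sits at 100%, the load is coming from the host/VHDX layer, not the guest workload.
Get-Counter -Counter @(
    '\LogicalDisk(C:)\Disk Bytes/sec',
    '\LogicalDisk(C:)\Avg. Disk sec/Transfer'
) -SampleInterval 2 -MaxSamples 15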