Planning for Virtualization using Hyper-V and Hyper-V Replicas with Win2012 R2 Guests
Tags:
- Hyper-V
- Hyper-V Replica
- Servers
- Virtualization
- Backup
- Business Computing
crackerstastic
March 31, 2014 9:10:03 PM
Hello all!
I am new to virtualization. I have used the VMware player briefly for personal enrichment, but I am now to the point that I may need to use it for more than that.
At my work, we have a single Win2003 Standard server. The network was already in place when I took it over. With end of life for Win2003 approaching rapidly, I am trying to convince my boss that we need to upgrade to newer server software, and to supplement my verbal argument I am preparing a proposal for him on the subject.
Now I have a particular setup I would like to go with. I want to get a dual-server solution in place so I can split the network services up and provide fault tolerance. Most of my real-life server experience is with Win2003. I've dabbled with Win2008 in a college class and I've toyed a bit with Win2012 in VMware, but that is the extent of it. Scouring the internet for knowledge, Hyper-V and Hyper-V Replicas seem to be the way to go. I am aiming for Win2012 R2 Standard as the guest OSes for the VMs.
I need to provide the following with my servers: Active Directory, DNS, DHCP, File Server, Print Server. These are the essential services. I would like to add: DFS namespace (mainly to unify the various shared folders under logical namespaces) and server-based storage for client backups.
So here's what I am after with my two physical hosts (both hosts running Hyper-V server, standalone package):
- I would like the first host to have a VM that is the Domain Controller, DNS, and DHCP server.
- I would like the second host to have a VM that is the print server, file server, and DFS namespace. I would prefer that this host also houses the storage location for the backups.
- Scheduled client backups should come in two flavors: full system images scheduled to happen 2-4 times monthly, and user profile folders (documents, pictures, etc.) 2-3 times per week. I'm not too concerned about a retention policy; new backups will overwrite the existing ones. Retention might go as far as shadow copies on some shared folders, but that is it.
- Each server should host a second VM that is a Hyper-V replica of the opposite server's contents. (i.e. Server 1 has replica of Server 2's primary VM and vice versa.)
So on the subject of client storage, we have 13 individual PCs and about 800GB - 1TB of initial data to back up. Some of the users here do not have much in terms of profile information. Literally, some individuals just have a vacation spreadsheet and a few Word documents that need to be backed up. I think I have the most data at about 180GB. In terms of future expansion, I am estimating that about 5 years from now we will have brought on another 750GB - 1TB of additional data. With that measure I am referring to system images and user profile folders. I'm high-balling that figure also; I feel that I am overestimating it.
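For what it's worth, here's the back-of-envelope math I'm using, as a quick Python sketch (the 25% safety margin and the assumption of roughly linear growth are my own, not measured figures):

```python
def projected_storage_gb(initial_gb, growth_gb_per_5yr, years, safety_factor=1.25):
    """Project total backup storage needed, assuming roughly linear growth
    plus a safety margin (the margin is an assumption, not a measurement)."""
    growth = growth_gb_per_5yr * (years / 5)
    return (initial_gb + growth) * safety_factor

# Worst-case figures from this post: 1 TB initial, another 1 TB over 5 years
need = projected_storage_gb(1000, 1000, years=5)
print(f"~{need:.0f} GB needed")  # ~2500 GB, comfortably inside a 3 TB mirror
```

Even at the high end of my estimates, that lands well under the 3TB RAID 1 array I was planning.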
I have a couple primary questions about this setup:
1. Is this sort of VM setup workable for my needs? I understand that the Replicas are not failovers.
2. What kind of storage considerations should I be concerned with? It is my impression that the VMs are stored in flat files. Is there anything utterly wrong with getting a pair of 3TB drives for each server and making a RAID 1 array of them? I was thinking I could slice off a small partition for the Hyper-V server and the remaining becomes a partition for the .vhd files.
If my understanding of the VM storage is correct, each VM and replica is stored in a flat file, so I should have both my primary VMs and a copy of each on the opposite physical server. If a physical server dies, I should have a working version of it as the replica on the remaining server. Is this correct? Furthermore, if both these servers are replicated on a RAID 1 array, then I have redundancy for both VMs already in case a single drive fails, correct?
Some other questions, secondary considerations:
1. Would it be a bad idea to install the Active Directory role on the primary VMs on both servers? I don't see the implications of doing so off the top of my head, but nonetheless I would like some input.
2. Is the addition of DFS Namespaces overkill? I have several folders on my current file server. I would like to put certain folders in a specific namespace (e.g. Complaint Logs, Installation Packages, Driver Packages, etc.). At this point the role seems aesthetic. If looks are all it can add, then I don't really care for it. Is there anything really special that DFS can bring to file server organization?
3. For the price point, I was considering an ASUS 990FX Sabertooth R2.0 motherboard with an AMD FX-8320 CPU and 16GB of RAM. I don't think I will have issues with virtualization there, but if anyone can see a problem with that, please let me know.
I should probably note that our servers are not high usage. Only a few individuals are making use of the file server. Print services are likely the most utilized role that the users are concerned with. I am also trying to keep things cost effective (hence the decision for AMD over Intel components), which to my boss means the cost must be low and the performance must be effective. (In spirit he wears a big unicorn shirt when preaching about "having your cake and eating it too" situations like this one....) The current (single) server is a dual-processor, dual-core setup with 4 GB of RAM, 32-bit software, and a single 160GB RAID 1 array. It is literally like 10 years old and is *knocks on wood* moving along just fine. I would imagine that even at one server, an eight-core desktop CPU, 64-bit platform, 16GB of RAM, and 3TB RAID 1 would be comparable. However, I only get to buy this once, so I need to make it count, and I need to make it work for at least 5 years or more.
Thanks to any and all who can provide some input. Forgive me if my questions seem like rookie questions due to the subject. I haven't had to deal with server technology for years, so I am a bit rusty....and I've never worked with Hyper-V before. Searching the internet on the subjects has only half-answered some of my questions.
Again, many thanks in advance. My appreciation goes out to you....
Alec Mowat
March 31, 2014 9:24:37 PM
Before you build this massive redundant server network, how big is this client?
This is a tiny network, you do not need this much redundancy.
I recommend two servers.
1 host server that has a VM for file shares and printer shares, and 1 VM for your DC, DNS, and everything else. Your managed router can run DHCP no problem with only 13 desktops.
The second server can just run Hyper-V and manage only backup software, like AppAssure or StorageCraft. If any server goes down, you can just launch the backup VM from the backup server.
Everything else is extremely high cost for such a small network.
1. No, it's fine. There's not enough network traffic to cause any issues; just verify replication is working. It will just create more work for no reason.
2. For 13 workstations? Absolutely. It's just creating more work and maintenance for you.
3. Do NOT build a server on workstation equipment. Go out and buy a SERVER.
http://www8.hp.com/ca/en/products/proliant-servers/#!view=grid&page=1
None of that other information applies to a Sabertooth motherboard. It doesn't have the power to compare to a SERVER.
crackerstastic
April 1, 2014 7:07:39 AM
Thanks for the tips Alec. I'd still like some additional feedback though.
For starters, I have also been leaning toward a Xeon E3-1220 v3 3.1GHz quad-core coupled with a Supermicro X10SLL-F motherboard, along with the 16GB of RAM. The only thing I don't like about it is that it is more expensive, and I don't see our server being extremely taxed. However, redundancy is important, so if I need server guts....so be it. (I was also looking at desktop equipment since on my home rig I used to run Folding@Home 24/7 and only experienced crashes with it when I was abusing my overclock settings....)
Based on your recommended configuration, it sounds like you are suggesting one server play the role of services (in the form of VMs) and the second is simply backups with additional software? Are there any concerns with the Hyper-V Replica route? I am still looking to keep the costs down, and since Hyper-V is free and the replica feature is built-in, I'd rather use a free solution if it is reliable. Can you also comment on my intended drive setup? Both servers would have 2 x 3TB HDD in RAID 1, a partition for Hyper-V Server, and the remainder for the VM and replica VM.
Thanks again!
Alec Mowat
April 1, 2014 8:12:11 AM
I recommend server equipment 100%
I am very much against building your Frankenstein box for server redundancy.
I would go specifically with HP or Dell and take advantage of the remote access cards (that will save a lot of onsite time if the server goes offline).
Take advantage of the 1-day replacement warranty with onsite tech.
I also would avoid making your network overly complicated, especially for a small client. It sounds like you are more interested in running an experiment for yourself than actually building the network the client needs. If you put all this extra time into building this overly redundant, replicated network, you'll spend a lot more time troubleshooting when things go wrong.
A LOT more time.
2Be_or_Not2Be
April 1, 2014 8:38:09 AM
crackerstastic said:
Thanks for the tips Alec. I'd still like some additional feedback though.For starters, I have also been leaning toward a Xeon E3-1220V3 3.1ghz quad coupled with a Supermicro X10SLL-F mobo. Along with the 16GB. The only thing I don't like about it is it is more expensive and I don't see our server being extremely taxed. However redundancy is important, so if I need server guts....so be it. (I was also looking at desktop equipment since on my home rig I used to run Folding@Home 24/7 and only experienced crashes with it when I max abusing my over clock settings....)
Based on your recommended configuration, it sounds like you are suggesting one server play the role of services (in the form of VMs) and the second is simply backups with additional software? Is there any concerns with the Hyper-V replica route? I am still looking to keep the costs down and since Hyper-V is free and the replica feature is built-in, I'd rather use a free solution if it is reliable. Can you also comment on my intended drive setup? Both servers would have 2 x 3TB HDD in RAID 1, partition for Hyper-V Server, remaining for VM and replica VM.
Thanks again!
In hearing more of the details here, I would recommend going with the business server from Dell or HP, primarily because it is a "business" server. If something dies on it, they will be onsite to replace it - not you. You're also not being directly blamed for "computer hardware" that you put together. Also, you can get as many years of warranty as you think you will need.
Secondarily, a Xeon E3 v3 or an E5 v2 processor is definitely the way to go. With 8-16GB of RAM, it could easily handle 13 clients for AD, DNS, DHCP, and file/print services. Getting the server with 4-8TB of storage in a RAID 5/6 array from the beginning will make sure you have storage for at least a couple of years. SAS drives (10k or 15k) will probably be the most cost-efficient, although you could probably use SATA 7,200rpm drives if you have at least 4-6 of them (since you indicated super-fast storage isn't a huge need).
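As a rough sketch of how those usable capacities work out (parity-overhead arithmetic only; real arrays lose a bit more to formatting and hot spares):

```python
def usable_tb(num_disks, disk_tb, level):
    """Approximate usable capacity for the RAID levels discussed here.
    Ignores filesystem/formatting overhead -- treat as an estimate."""
    if level == 1:
        return disk_tb * num_disks / 2    # mirrored pairs: half the raw space
    if level == 5:
        return disk_tb * (num_disks - 1)  # one disk's worth of parity
    if level == 6:
        return disk_tb * (num_disks - 2)  # two disks' worth of parity
    raise ValueError("unsupported RAID level")

print(usable_tb(4, 2, 5))  # 4 x 2TB in RAID 5 -> 6 TB usable
print(usable_tb(6, 1, 6))  # 6 x 1TB in RAID 6 -> 4 TB usable
```

That's why 4-6 cheaper SATA drives in RAID 5/6 can hit the 4-8TB target without exotic hardware.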
As others have mentioned, don't go overboard with setting up complicated processes. Above all else, make sure you have a good backup system in place along with a good UPS (replace any UPS that is 3-5+ years old). I would ignore the DFS part, and just keep it simple with basic file shares and a good system of folder hierarchy.
You could create the whole server as a VM and then keep a backup/replica VM on another system if you want; let your budget determine that. You could even run a replica VM on the same host & then backup that replica to another machine.
Above all else, have a good backup & test it at least once, if not periodically. People losing data can definitely be a reason for termination of employment.
crackerstastic
April 1, 2014 10:45:35 AM
2Be_or_Not2Be said:
In hearing more of the details here, I would recommend going with the business server from Dell or HP, primarily because it is a "business" server. If something dies on it, they will be onsite to replace it - not you. You're also not being directly blamed for "computer hardware" that you put together. Also, you can get as many years of warranty as you think you will need.
Secondarily, a Xeon E3 v3 or a E5 v2 processor is definitely the way to go. With 8-16GB of RAM, it could easily handle 13 clients for AD, DNS, DHCP, and file/print services. Getting the server with 4-8TB of storage in a RAID 5/6 array from the beginning will make sure you have storage for at least a couple of years. SAS drives (10k or 15k) will probably be the most cost-efficient, although you could probably use SATA 7,200rpm drives if you have at least 4-6 of them (since you indicated super-fast storage isn't a huge need).
As others have mentioned, don't go overboard with setting up complicated processes. Above all else, make sure you have a good backup system in place along with a good UPS (replace any UPS that 3-5yrs+ old). I would ignore the DFS part, and just keep it simple with basic file shares & a good system of folder hierarchy.
You could create the whole server as a VM and then keep a backup/replica VM on another system if you want; let your budget determine that. You could even run a replica VM on the same host & then backup that replica to another machine.
Above all else, have a good backup & test it at least once, if not periodically. People losing data can definitely be a reason for termination of employment.
Regardless of whether I have a pieced-together set of hardware or a ready-made "business" server - if something goes down, I get pestered. My boss isn't ignorant of the fact that hardware unfortunately does not last forever and sometimes things break. He is also going to rationalize it as he pays me to service the computers and is not going to want to fork over the cash for service plans and things like that. (Even if it is to our benefit.) He would much rather pay the one-time fee for a few extra replacement components to put on the shelf in case something goes awry. Furthermore, we will also look at the fact that most hardware comes with some kind of warranty, so why buy more? (Again, even if it is to our benefit. He is looking at the price, not the potential....) In any event, I should be able to get equal (or better) bang for less buck building it myself and using server-grade equipment. A primary goal here is to keep costs down. I am trying to keep it to $5-6k as it is, and that is going to be a tough sell. From an affordability position we simply cannot spend more than that.
On the storage side of things, 4TB is on the high end and 8TB is overkill. The estimated 800GB - 1TB I will have initially is factoring in a starting set of system images plus user files. A lot of users here have been saving up files for almost 10 years and I can still measure the amounts in megabytes. There are only a couple users with a large backup footprint, but even in those cases we are only talking like 10GB. I am the exception to this; I have around 180GB worth of data I would like backed up to the server. But much of it is due to ISOs of software we use, miscellaneous installers, full copies of our website, entire pulls off of a camera, etc. One day when I get the ambition I will probably go through those files and delete the stuff I no longer need. The point is that I have a good idea already of the rate at which our storage is going to grow. So whereas 4 drives in RAID 5 is something I am not against, a RAID 1 mirror would also seem to be sufficient. And sufficient is all I need. (However, I will include the RAID 5 setup when I discuss this project with my boss.)
For the redundancy portion, information about the VMs and replicas is what I am specifically looking for pointers on. I know that it seems as if I am trying to over-complicate this, but I really am not, or at least I don't believe I am. So far in this discussion I have been convinced that I need to stick with server-grade hardware and to disregard using DFS. However, the server that we have now I did have to rebuild a couple years ago (the original IT guy went off the radar and the mess was thrown in my lap) and the end result involved installing a fresh server installation, a fresh RAID 1 array, and joining all the users back to the domain. It was much grief, much stress, and I had to come in on a Saturday and Sunday to get it all done. The server has been going great since then, all by itself, and all users are happy. But someday it will fail because it is old, and that is what computer components do when they get old. This box is like 140 years old in computer years...
I'm trying to take the Hyper-V/Replica route home because it seems to be an easy failover option. I need to get redundancy built-in to the network. My understanding is that if server 1 experiences a fault, I can walk over to server 2 and turn on the replica of server 1 to get functionality back. Then I can fix the busted server 1 with minimal disruption. Perhaps I simply do not correctly understand the capabilities of a replica VM. Is this failover scenario possible?
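To make sure I'm describing the same thing everyone else is, here's the sequence I have in mind, sketched in Python just to spell out the commands (the cmdlet names Start-VMFailover, Start-VM, and Complete-VMFailover are my reading of the Hyper-V PowerShell module docs, so please correct me if they're wrong, and 'DC01' is a made-up VM name):

```python
def unplanned_failover_commands(vm_name):
    """Build the PowerShell commands I believe perform an unplanned failover
    to a replica VM. Cmdlet names are assumptions from the Hyper-V module
    documentation -- verify before running on a production host."""
    return [
        f"Start-VMFailover -VMName '{vm_name}'",     # promote the replica on the surviving host
        f"Start-VM -Name '{vm_name}'",               # boot the now-active VM
        f"Complete-VMFailover -VMName '{vm_name}'",  # commit the failover once it checks out
    ]

for cmd in unplanned_failover_commands("DC01"):
    print(cmd)
```

If that's roughly right, walking over to server 2 and running those three commands is the "minimal disruption" recovery I'm hoping for.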
Alec, as far as this project serving as an experiment, you are partially correct. It has been several years since I took any server-related courses, and Win2012 wasn't even released at that point. That said, there are new things that have been added to the software I want to try out to see if they fit our business. In another decade I'm sure I'll have to upgrade our servers again, so as time goes on I will always have to experiment. But I dare not take my "experiment" to the production side of things until I have what I want. I hope I am around when the day comes that we do need fancy business servers with multi-site replication across state lines. It would be foolish of me to not toy around and learn these advanced features before that day comes. As far as the client I am building this for, the client is the company I work for. I will be managing this on my own after it is deployed, so it is in my best interest to find a sweet spot between redundancy, failover, and complexity.
Thanks again to both of you for your thoughts on the subject. I appreciate your patience.
Alec Mowat
April 1, 2014 4:41:16 PM
I think you are missing a huge comparison between a Dell server, with an integrated management card, and a home-built PC.
You cannot manage a Frankenserver. You will never know if a hard drive is failing on an AMD onboard RAID controller. You will not be able to use two power supplies, and you will not know if one fails.
The Dell server will notify you via System logs when there is a Hardware error, or when you need to update the Firmware or drivers. Dell will replace the parts onsite.
You cannot run a server on Frankenstein hardware. IT companies will NOT provide service to servers that are customer-built or installed on workstations.
The onboard RAID controller on a Sabertooth motherboard does not have the reliability or performance to be seriously considered as a viable option. You ~WILL~ get BSODs from the RAID controller on a motherboard. You will absolutely need to purchase a $400 - $5000 RAID card if you expect any sort of reliability.
If anything goes wrong on a customer build, you are seriously risking all of your data just trying to troubleshoot a dead stick of RAM. You are digging yourself into a massive hole.
You don't even really need two servers for this setup. You can easily get away with 16 GB of RAM and an 8-thread (4-core Hyper-Threaded Xeon) Dell server running Server 2012 R2 with Hyper-V.
Let the router perform DHCP and, if it can, DNS as well.
Set up the RAID on the server with all of the disks. Break it into partitions. Use a small partition for the host OS install. Use one partition for data on the file share server. The other can host the remaining VHDs for Hyper-V.
I assume you are not hosting a website or Exchange? You will need your Website in a DMZ if you are, on a separate network.
The second server doesn't even need to be a server. You can basically get away with a QNAP and just do image-level backups stored on the QNAP. A 4-bay QNAP with 4 x 4TB disks in RAID 5 should hold a week's worth of data.
Remember that, if you only have 800 GB of data, you need enough space to store as many full backups as you plan to keep. So that space triples rather quickly.
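Rough arithmetic on that point, as a sketch (assuming each retained full image is about the size of the source data; the compression figure is an assumption, since real savings depend entirely on the backup tool):

```python
def full_backup_space_gb(data_gb, retained_fulls, compression=1.0):
    """Space needed to retain N full backups. Pass compression < 1.0 if the
    backup software compresses or dedupes (ratio is an assumption)."""
    return data_gb * retained_fulls * compression

print(full_backup_space_gb(800, 3))       # 800 GB of data, 3 retained fulls -> 2400 GB
print(full_backup_space_gb(800, 3, 0.6))  # same, assuming ~40% compression
```

Three retained fulls of 800 GB is already 2.4 TB before any incrementals, which is why the backup target fills faster than the source data suggests.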
Anyway, my main points on this issue....
You are better off buying 1 good server instead of attempting to create a Hyper-V cluster.
I would predict both servers in the cluster will fail and/or give major problems before one good quality server would fail. They will also not give the performance you need. It will actually make your network slower as they try to replicate with slow RAID controllers.
If you are just doing quick snapshot backups at night, you can easily get away with software installed on each VM just saving to an onsite NAS instead of having a whole server dedicated to backups. This will save you a fortune.
I would honestly toss your whole idea out the window and just buy one good "Server" (capital S). You are creating a convoluted IT nightmare that you're going to spend hours of late nights trying to maintain.
You cannot manage a Frankenserver. You will never know if a Harddrive is failing on an AMD Onboard RAID controller. You will not be able to use two power supplies, and you will not know if it fails.
The Dell server will notify you via System logs when there is a Hardware error, or when you need to update the Firmware or drivers. Dell will replace the parts onsite.
You cannot run a server on Frankenstien hardware. IT companies will NOT provide service to servers that are customer built, or installed on Workstations.
The onboard RAID controller on a Sabertooth Motherboard does not have the reliable or performance to be seriously considered as a viable option. You ~WILL~ get BSOD's from the RAID controller on a Motherboard. You will absolutely need to purchase a $400 - $5000 Raid Card if you expect any sort of reliability.
If anything goes wrong on a customer build, you are seriously risking all of your data just trying to troubleshoot a dead stick of RAM. You are digging yourself into a massive hole.
You don't even really need two servers for this setup. You can easily get away with 16 GB of RAM and an 8 thread (4core Hyperthreaded Xeon) Dell server running Server 2012 R2 with Hyper-V.
Let the Router performance DHCP and if it can, DNS as well.
Set up the RAID on the server with all of the disks and break it into partitions. Use a small partition for the host OS install, one partition for data on the file share server, and the other to host the remaining VHDs for Hyper-V.
I assume you are not hosting a website or Exchange? If you are, you will need your website in a DMZ on a separate network.
The second server doesn't even need to be a server. You can basically get away with a QNAP and just do image-level backups stored on it. A 4-bay QNAP with 4 x 4 TB disks in RAID 5 should hold a week's worth of data.
Remember that even if you only have 800 GB of data, you need enough space to store as many full backups as you plan to keep, so that space triples rather quickly.
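That multiplication is easy to sanity-check. A minimal sketch of the math, using the 800 GB figure above; the retention count and growth headroom are illustrative assumptions, not recommendations:

```python
# Rough backup-capacity math for sizing a backup NAS.
# The retention count and headroom factor are illustrative assumptions.

def required_backup_space_gb(data_gb, full_copies, headroom=1.0):
    """Space needed to keep `full_copies` full backups of `data_gb`,
    padded by an optional headroom factor for data growth."""
    return data_gb * full_copies * headroom

# 800 GB of data with three retained fulls already needs 2.4 TB...
print(required_backup_space_gb(800, 3))        # 2400.0
# ...and 25% growth headroom pushes it to 3 TB.
print(required_backup_space_gb(800, 3, 1.25))  # 3000.0
```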
There is a TON of great information and some very nice input throughout this thread, and I'm glad to see several people helping to pitch in on this subject.
I'm currently working on researching and testing a similar configuration for a client of mine. They have one main office in the same town as me, with several smaller branch offices in other neighboring towns. They need a domain controller, file storage, and a basic application server for a couple of utilities that run on the server (but not Terminal Services). So nothing really heavy-load, only about 30 computers in their main office.
First things first: as has been hammered in above, don't risk a substandard custom-built server that relies 100% on your ability to ensure everything is 100% compatible, 100% supported, and 100% performant. HP, Dell, Lenovo, and all these other companies spend millions and have entire teams of experts that do that day in and day out. Let them do that, and it will eliminate a lot of the additional work and responsibility you would have in ensuring these business-critical server systems are running correctly. Buying a pre-built system from Dell or HP may cost you a little bit more, but not quite as much as you might think. Do some asking around through vendors or other providers for pricing and you might be surprised.
Now, I don't think your idea of using replication between two servers is a bad idea. I just think that it is MUCH more important to ensure you have a good backup system in place ABOVE the need for replication. Yes, replication is a nice cheap alternative to a complex and expensive failover cluster environment, but it is still not a replacement for a backup system. I think, though, that you understand this, so that's not really an issue.
So, that being said, here is what I am so far leaning towards in my own research on this similar project:
I have two basic servers that will be running the network load. Each server will host the needed VMs (two on each server) with replication enabled between them. This office's storage needs are very minimal, less than what you are looking at. But you have to ensure you have enough capacity on BOTH servers for BOTH workloads if needed, because it's going to be keeping duplicates on both machines. This is the downside, because of course with greater storage capacity needs come greater costs. I don't think replication is going to use up a whole bunch of your network traffic if you aren't making huge amounts of changes to your data between replications. However, I would suggest ensuring you have completely separate network channels for it, such as dedicated Ethernet ports (or a vEthernet NIC within a Hyper-V team), so that your replication transfers won't affect the rest of your network.
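To put a rough number on that replication traffic, here is a back-of-the-envelope sketch. The 20 GB/day change rate is a made-up example, and the math assumes changes are spread evenly across the day, which real workloads won't be; it only shows the order of magnitude, not a measured figure:

```python
# Estimate average link usage for VM replication, assuming only changed
# data is sent each interval. All input numbers are hypothetical.

def replication_mbit_per_s(daily_change_gb, interval_minutes=5):
    """Average megabits/second needed to ship `daily_change_gb` of
    changed data in `interval_minutes` batches over 24 hours."""
    intervals_per_day = 24 * 60 / interval_minutes
    gb_per_interval = daily_change_gb / intervals_per_day
    seconds_per_interval = interval_minutes * 60
    return gb_per_interval * 8 * 1024 / seconds_per_interval

# e.g. 20 GB of changed data per day, replicated every 5 minutes:
print(round(replication_mbit_per_s(20), 2))  # 1.9
```

Even at a pessimistic change rate, the average load is small; the reason for a dedicated NIC is the bursts, not the average.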
The servers I am looking at are just DL360e Gen8 servers with a quad-core processor and 16 GB of RAM, plenty for what we need. However, I am upgrading to the P420 RAID controller with cache. Call me paranoid, but I've learned from horrible past experiences that basic software RAID is often a no-no; it doesn't offer the performance or reliability of a hardware RAID controller. I would also recommend putting your host OS on a separate RAID 1 array of drives, then your VM data on at least one other set of drives. The more you break it up, the better throughput you might have. For example: two 300 GB SAS drives in RAID 1 for your host OS, two 300 GB SAS drives in RAID 1 for the OS VM VHDX files, and another two 2 TB drives in RAID 1 for the VM VHDX data files. This may be overkill, and there are a ton of options here, but I've seen other shops do this sort of thing to spread out and focus where you need performance and where you need capacity.
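For what it's worth, the usable space of that three-mirror layout is easy to tally; a RAID 1 pair's usable capacity is one member drive. The array names here are just labels for this sketch:

```python
# Usable vs raw capacity for the mirrored-pair layout described above.
# Array names are illustrative labels, not controller configuration.
arrays = {
    "host-os":      {"drives": 2, "drive_gb": 300},   # RAID 1
    "vm-os-vhdx":   {"drives": 2, "drive_gb": 300},   # RAID 1
    "vm-data-vhdx": {"drives": 2, "drive_gb": 2000},  # RAID 1
}

raw_gb = sum(a["drives"] * a["drive_gb"] for a in arrays.values())
usable_gb = sum(a["drive_gb"] for a in arrays.values())  # mirror = one drive

print(raw_gb, usable_gb)  # 5200 2600
```

So the layout trades half the raw capacity for redundancy, which is the expected cost of mirroring everything.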
Now, the backup system. Everything gets backed up to a separate NAS device. There are lots of ways to do this, either through network shares or iSCSI targets, etc. I'm still playing with this, but I am generally setting up a drive partition or iSCSI hard drive for each of the physical servers to run the included Windows Server Backup utility, automatically doing daily backups to their own partition or space on the NAS. You also want to be sure you have copies of the raw files that your users share and all critical data. Basically, the concept is that if all of your servers and network and everything go kaput, you still want a hard drive or something that you can take to any computer, no matter what the hardware, plug in, and have access to your individual files and folders. Again, there are many ways to do this, but I like Uranium Backup, set to back up daily to network shares on the NAS, creating a new backup for each day for a week before overwriting. This gives you a full week's worth of backups to go back to if needed.

The next step is off-site backups. Utilize external hard drives to make complete duplicates of your data (and ideally all the VM VHDX data as well) and take those off site. We recommend doing this once a week, rotating between two hard drives, so in the end we actually have two weeks' worth of backup data.
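The rotation described above can be sketched as two simple mapping rules: a per-weekday slot on the NAS that gets overwritten after seven days, and an off-site drive chosen by alternating weeks. This is purely illustrative; the slot and drive names are assumptions, not Uranium Backup syntax:

```python
# Sketch of the rotation scheme: one NAS slot per weekday (overwritten
# every 7 days) plus two off-site drives swapped on alternating weeks.
import datetime

def nas_slot(d):
    """Daily NAS backup folder for date d (reused every 7 days)."""
    return f"daily-{d.strftime('%a').lower()}"

def offsite_drive(d):
    """Which of the two rotating off-site drives covers d's ISO week."""
    return "drive-A" if d.isocalendar()[1] % 2 == 0 else "drive-B"

d = datetime.date(2014, 4, 2)   # a Wednesday in ISO week 14
print(nas_slot(d))              # daily-wed
print(offsite_drive(d))         # drive-A (week 14 is even)
```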
crackerstastic
April 2, 2014 6:25:48 AM
Thanks for the answers. I've been reading up on the hardware subjects, and the consensus on the internet (including individuals from here) is that I should stick with server-grade components. I am now eyeballing a Xeon E3-1220 v3 and a Supermicro mobo. I will find the 16 GB of RAM I need somewhere, and I will make sure that I get the same part number that the manufacturer has certified. These are all rated as 'S'erver-grade components.
Now on the hardware side of things, just like buying a desktop, you can eliminate costs of hardware (while usually getting more crunch power) if you spec the system out yourself. This is exactly what all the IT companies like Dell, HP, etc. do - they pick a set of components, they build it (Frankenstein??), then they slap their name on it and jack the price. I know that they offer some nice support packages like next day onsite replacement, but these also cost extra. It would be nice knowing that 5 or 10 different times that a component goes out I can call Dell and they will come fix it. That might make me feel warm and fuzzy on service, but the more and more I use that, the less I will think of them as a company because the more their components fail, the more I think they are selling me cheap hardware. It really comes down to experience dealing with manufacturer's equipment. For example, back in my days of building desktops I found certain lines of MSI motherboards I didn't care for, too many RMA's. So I stayed away from them. The same will apply to server components. If I do my homework and select quality components, then it shouldn't really matter if I run with a Dell/HP/Lenovo or my own custom build. ANY server I select needs to do its job.

I have actually been working with a couple Microsoft certified partners. One of them did recommend a $1400 Dell server that we would have had to upgrade to get it to 16GB from 4GB and I would have had to add the extra 3 TB of storage I wanted to it. We were estimating that the price was going to jump up to at least $2000 or more, and he was "giving me a deal". Then I get to buy a second server....then I get to buy my Windows licenses and CALs. Things were getting really costly, really quick. For the markup I would have to pay I can still get quality server components and a spare of things like a power supply, motherboard, hard drive, etc.
If something breaks, I can fix it SAME day, then send the busted one in for warranty replacement and when it comes in I put it on the shelf. My philosophy on it is that just about whatever any IT company can do for me, I can also do for myself (and learning to get things done yourself increases your value as an employee and a professional). Regardless of who fixes what....my boss still wants answers from me. If a server I built breaks, I get blamed because I built the server. If a server from Dell breaks, I get blamed for picking Dell (and paying more for it). I WILL get blamed for any failure no matter what. The main questions become: How much will it cost to fix and how often is it going to break? As I said, knowing that a dozen different repairs are covered is nice, but it *may* also be indicative of poor quality components after a handful of failures. I'm not trying to knock on Dell or anyone else, but I've made it 10 years without them and don't see a big reason to start. (Of course hardware configurations have been subjective for quite some time. Nonetheless though, good pointers from everyone on the topic that I've seen so far.)
Now back on track.
For my backup solution, the implementation is very important, and I will test it to make sure that things work properly. The main idea behind replication is that I get a copy of my servers (obviously this doesn't count as a backup of the backup data), so that if one server goes down, the replica VM gets activated on the remaining server and availability of the network services is restored. According to another MS certified partner, this is the way to go for high availability/redundancy.

For the actual backups, the most important portion of the data is going to be the individual users' profile data: documents, pictures, Outlook stores, etc. This I want to schedule to happen daily. System images are more for a restoration standpoint; I don't need daily system images. The images will serve the purpose of not having to install a base OS, hundreds of updates, apps, settings, etc. Granted, in the many years I have been doing this I have only had to do an operation like that a handful of times, but it is tedious and time consuming. The profile backups I planned on scheduling from the client side with the Windows Backup and Restore tool. The backups will get sent to a designated share for that user. Overnight, the server will copy those shares to another target, likely some external 1 TB storage. I can do full backups of those folders at the end of the week and incremental backups leading up to the next weekly full backup. Since I will only capture profile folders with these backups, 1 TB on the external will go a long way. I'm not real concerned with retention on the system images; they are more for me and not the users. Users care about their data only. Either way, I may get another larger external drive to store the backups for system images as well. For retention while things are running smoothly, I can enable VSS on the shares so users can have file history; this will only be for profile documents and nothing else.
In the end, my planned backup will look like this:
- Client system images back up to the server (maybe twice monthly)
- Client profile documents back up to the server (full backup, daily)
- Server backs up system images to external storage (again, twice monthly)
- Server backs up user shares to external storage (full backup once per week, incremental in between)
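The four bullets above can be sketched as one scheduling rule per day. The specific trigger days here (Sunday fulls, images on the 1st and 15th) are my own assumptions for illustration; the plan only says daily, weekly, and twice monthly:

```python
# Sketch of the planned backup schedule. Trigger days are assumed
# (Sunday fulls, images on the 1st and 15th); job names are made up.
import datetime

def jobs_for(d):
    jobs = ["profile-backup-daily"]                 # client profiles, every day
    if d.weekday() == 6:                            # Sunday: weekly full to external
        jobs.append("user-shares-full-to-external")
    else:
        jobs.append("user-shares-incremental-to-external")
    if d.day in (1, 15):                            # twice-monthly system images
        jobs.append("client-image-to-server")
        jobs.append("image-copy-to-external")
    return jobs

# A Tuesday that falls on the 15th runs incrementals plus both image jobs:
print(jobs_for(datetime.date(2014, 4, 15)))
```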
Can anyone see any glaring issues with that setup?
For storage requirements, I am thinking about a pair of 200-300 GB drives in RAID 1 for the host OS, then a pair of 3 TB drives in RAID 1 for the VM files. I've never used software RAID because of the horror stories; however, I've always seemed to have success using onboard hardware RAID controllers. Is there anything wrong with that, or am I just lucky?
For VM replication purposes, I had already planned on getting a separate set of NICs dedicated to replication if that sort of thing is possible.
Lots of good feedback and discussion here. Even though there is disagreement on some aspects, this really is still a lot of good information.
Alec Mowat
April 2, 2014 8:14:41 AM
Quote:
Thanks for the answers. I've been reading up the hardware subjects and the consensus on the internet (including individuals from here) is that I stick with server grade components. I am now eyeballing an Xeon E1220V3 and a Supermicro mobo. I will find the 16GB RAM I need somewhere, and I will make sure that I get the same part number as the manufacturer has certified. These are all rated as 'S'erver grade components.
I can't stress enough how much better a Dell branded system will be in comparison to something you build yourself.
Dell doesn't just piece components together; they build and test a specific set of components, put a 5-year warranty on it, and send their own onsite techs. The motherboards are proprietary to Dell.
And most importantly, they develop some serious monitoring software that communicates at a very low level with the hardware and provides real-time monitoring alerts.
iDRAC: https://www.youtube.com/watch?v=lS2uQtgSfnk
If you need to RMA your motherboard and wait two weeks for it, you need to go buy a new board in the meantime.
If you need a Dell motherboard replaced, they will send a tech onsite, do the work for you, and have you back up and running in a matter of two hours. Dell onsite techs are sent within 4 hours of the service ticket; they have a same-day or next-day warranty replacement turnaround time.
You have to remember, these are SERVERs, not workstations. Dell support for servers is based in Austin, Texas. There is a huge quality leap between a Dell server and a Dell workstation; they are two totally different products. This is like comparing a GM car to a GM pickup truck. It's two totally different standards.
You need to stop using your workstation and gaming knowledge to work on servers. It's a much different environment.
From experience, as an IT professional, I do not support someone building their own server. It is very amateurish and laughable. I've honestly only seen one or two in the dozens of companies I have worked with, and I know one of them failed within a year and was replaced by a branded box.
That's just the way it is.
You are taking a really small, simple network and making it overly complicated, and piling on more and more problems for later. I seriously suggest avoiding all of this, all the extra work, and all the major problems it will bring down the road.
Just keep it clean and simple. Stop trying to be creative, stop trying to apply workstation knowledge to a server, take a huge step backwards, and start from the beginning. You are being too creative and ambitious.
The money you spend on a Dell now is money you are going to save down the road in a major way.
m
1
l
jeff-j
April 2, 2014 9:58:33 AM
I have to agree with Alec: Dell support for servers is hands down the best, and a 5-year ProSupport warranty is only about $200. That covers everything for 5 years, from CPUs to power supplies to RAID cards, you name it, even hard drives depending on the failure. Plus, you speak to someone in America, not India.
Also, if you go Dell, get an iDRAC; they are well worth it. I can call up Dell, read them what it says on the iDRAC, and the next day I have a tech to replace the part. With the iDRAC I can also monitor just about any piece of hardware on the server and be alerted if any warnings happen.
I would recommend you just take a look at a Dell T320, configure it, and see what the cost is; there is no harm in that. I bet you could put together a server for around $5000 that will have enough power for years.
I have multiple clients on Dell servers running either VMware or Hyper-V with between 25-50 users, and those servers run without a problem. The server for one client does DHCP, DC, DNS, file share, user shares, and print server. What you are trying to achieve just seems like overkill for the number of users that there are. And going the build-your-own route for servers just never works; about 80% of the servers I replace with Dell are home-built ones that just cost too much to maintain or repair.
Also look at taking your existing server that is still running and using it to run/store your backups; that will help you save some money and not have to do the replication.
You have been given a lot of great advice from some very knowledgeable people. I would say listen to them. You can still experiment with what you like, but in the end you have to do what is best for your client, and think long term: even if you don't stay with the company that long, the server will be there and someone will have to take it over. It would be nice not to have them walk into a mess like you were handed.
crackerstastic
April 2, 2014 11:12:58 AM
*sigh* Alec, I get it already, you are against me building a SERVER. You are starting to sound like some brute-force salesman for Dell. Evidently all of the Intel and Supermicro SERVER motherboards and the Xeon and Opteron SERVER processors are a joke! The market for SERVER components is a joke, and only the Dells, HPs, and Lenovos of the IT community got it right when it came to a SERVER! Some of your analogies are preposterous. Although you have only counted a couple of custom servers, I'll still wager that there are plenty of custom servers out there chugging away without issues. Workstation and server environments do share some concepts; servers simply expand a lot on them and add some more substance. (Much like a GM pickup truck expanded on a GM car and added the extra features desired of a pickup.) Since you have some experience with around a dozen companies: what is the ratio of custom server hardware failures to Dell onsite service appointments for hardware failures? So far you've only cited one custom server failing. How many Dell servers had failures, on average?
Now, although I felt this was irrelevant to the topic originally, let me give you some background and an idea of how things are going to work on my end. Years ago we had two servers on site; the guy who set all of the stuff up, though, was some outsourced guy who doesn't respond to support calls. Well, one day a server died....for good. This is when I was brought into the picture (I worked in a different part of the company then). On the remaining server I installed the network services that were lost, and everything was golden again. I said we needed to replace the dead server (an old HP ProLiant) and set up redundancy (and more), but I got shot down because it was too expensive. To my boss, since the things he uses were up and running again, everything was fine. He also had no idea whether I was any good at IT (I was still in college at that point). Words like "redundancy" and "fault tolerance" were simply technical jargon to him. To him, I was asking to spend money we did not have. However, he is the boss, so what he says goes. I got a lot of stuff in writing about what kinds of fixes get approved and what don't. This way, when crap hits the fan, my salary doesn't get any splashed on it. When the second server took a dive and we were really hurting, then he listened a little better - he let me put a RAID 1 array in the server......and that was it.
I need to get our server upgraded, since we are running Win2003 and EOL for that OS is only days away. I need to craft a written proposal to submit to my boss (who comes from a business background, not an IT background) regarding our server. In that proposal I need to provide options; if I do not, I will be sent back to the drawing board and told that I haven't looked around enough. I have looked at a pair of Dell servers, which are going into the proposal, but for options I need to come up with a custom build as well. When I am done with the proposal, he and I will discuss it further after he has had a chance to review it on his own. We will discuss the pros and cons of custom hardware versus a known label like Dell or HP, and we will also talk about the price of each. This subject will be key to him because of his business background. If the custom package turns up $1,000 less in hardware, then the business-oriented side of his brain is going to tell me to build them. If I am told to take my hardware config and shove it where the sun doesn't shine, I'll be buying a pair of servers from Dell or HP instead. Either way, if I enjoy being able to satiate my addiction to paychecks, I am going to build or buy based on the instructions I'm given.
The bottom line is that in the end my boss is going to tell me which way to proceed - I do not have the final say. Personally, I'm fine with either approach; I'm not going to get picky over a custom rig versus a branded server. My shortcoming in this scenario is that I need more in-depth knowledge of server-grade equipment and the newer features of Win2012 and virtualization.
Now, from the sound of your previous posts, I believe you are intelligent on the subject and have field experience in this area. That is great, because knowledgeable folks such as yourself are exactly who I am looking for input from. If I thought some IT wannabes could answer my question, I wouldn't have come to a forum dedicated to business computing. I will admit that in my experience I've been stuck in the desktop environment. I've built a few dozen desktops for personal and professional environments. The PCs I've built have an extremely low failure rate (knocks on wood) because I do my homework and I don't buy bottom-of-the-barrel components. I find it ludicrous to think that a well-thought-out, well-specced server build, with compatibility verified and so on, is the bane of the server environment.
Hopefully that is enough for you to see why I must spec out a custom build; it is part of the process and the proposal. In your previous post you quoted the CPU/mobo combo I was looking at. I was really hoping you would comment on it instead of delivering another pro-Dell lecture. I would prefer it if you could shed some light on what a decent hardware mix looks like for a server at the scale I need. Dell might not be comparable in terms of motherboards because theirs are proprietary, but I suspect that is partially a business decision designed to create a recurring revenue model.
As always, I appreciate the feedback.
Alec Mowat
April 2, 2014 3:58:02 PM
It's very difficult to advise on a Hyper-V cluster setup if you have no real network redundancy.
Do you have multiple internet connections?
Multiple NICs?
Multiple PSUs?
Multiple power sources?
An APC for each server?
Offsite backups?
Enough storage?
A gigabit (1 Gb) switch?
A RAID card fast enough to not bottleneck your system?
All of these things will do more for your network than a cluster will.
Two copies of a 2012 edition that supports clustering?
Did you evaluate the cost of this OS?
http://www.microsoft.com/en-us/server-cloud/products/wi...
There are plenty of simple, easy solutions to your problem. I feel like you are purposely making a complicated project for yourself, and it's going to burn you.
1 server, with warranty. 1 NAS, for backups.
Done.
crackerstastic
April 2, 2014 4:35:57 PM
I am not trying to build a cluster (i.e. an automatic failover cluster), just a replica of a server for manual failover. I have no intention of using the clustering features in Win2012. My understanding is that Hyper-V can make a replica of a VM; this replica more or less lies "dormant" until you wake it up. So in the production environment, each physical server has only one VM running and providing services; the second VM is the replica of whatever the opposite server is doing. The only time I would actually have two VMs running on one host is if there is a problem with a physical server (or during maintenance and such). That is what I have come to believe, anyway. Is that something a Hyper-V Replica can be used for?
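For what it's worth, that is essentially how Hyper-V Replica works: the replica VM sits powered off on the second host and receives change logs from the primary until you fail over manually. A rough sketch of enabling it from PowerShell follows; the host names (HOST1/HOST2), VM name, and storage path are placeholders, and your authentication, certificate, and firewall settings may well differ:

```powershell
# On the host that will RECEIVE the replica (HOST2): allow inbound replication.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation 'D:\Replica'

# On the primary host (HOST1): enable replication for the VM, then seed
# the initial copy over the network.
Enable-VMReplication -VMName 'DC01' `
    -ReplicaServerName 'HOST2' -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName 'DC01'
```

After the initial replication finishes, the replica VM exists on HOST2 in a stopped state and only runs when you deliberately fail over. One caveat worth researching separately: replicating a domain controller has its own best practices.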
As for your questions
Multiple Internet Connection - No, but there is no site to site transfer of data.
Multiple NIC - There will be, I was planning on dual NICs.
Multiple PSU - Won't be installed, but I plan on keeping one on site.
Multiple Power Sources - No backup generator, if the utility company drops the ball it is lights out.
APC - I have one for sure.
Offsite backups - That is being planned, it will be done via external storage and kept to user profile folders. Not a huge data footprint there.
1 GB switch - No.
RAID card - No. Any recommendations?
I have gotten 3 different quotes on two copies of Win2012 R2 Standard and the needed CALs, so I am aware of the pricing on that.
Alec Mowat
April 2, 2014 5:26:30 PM
crackerstastic said:
Multiple Internet Connection - No, but there is no site to site transfer of data.
Multiple NIC - There will be, I was planning on dual NICs.
Multiple PSU - Won't be installed, but I plan on keeping one on site.
Multiple Power Sources - No backup generator, if the utility company drops the ball it is lights out.
APC - I have one for sure.
Offsite backups - That is being planned, it will be done via external storage and kept to user profile folders. Not a huge data footprint there.
1 GB switch - No.
RAID card - No. Any recommendations?
It's simple to set up; just point it to the second location:
http://blogs.technet.com/b/yungchou/archive/2013/01/10/...
I can't really recommend a RAID controller; I don't purchase them separately from the hardware.
But you can't keep a week's worth of backups for file-level restore or mounting; a replica is only good for immediate failover. And if your server runs two live PSUs, two live NICs, and a RAID 5, there's less room for failure (and less troubleshooting, if you can monitor those components without tearing the system apart and swapping parts).
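When the time comes, the manual ("planned") failover the OP described is a short cmdlet sequence. This is a sketch using the Hyper-V module cmdlets; the VM name 'FS01' is a placeholder:

```powershell
# On the primary host: shut the VM down cleanly, then send any
# remaining changes to the replica.
Stop-VM -Name 'FS01'
Start-VMFailover -VMName 'FS01' -Prepare

# On the replica host: complete the failover and bring the replica online.
Start-VMFailover -VMName 'FS01'
Start-VM -Name 'FS01'

# Reverse the replication direction so the old primary
# becomes the new replica once it is repaired.
Set-VMReplication -VMName 'FS01' -Reverse
```

The `-Prepare` step is what makes this a zero-data-loss planned failover rather than the unplanned variety you would use after a dead host.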
I think one of the reasons the "go pro server" line is getting hammered so hard here is because it is industry best practice, and that is why it is being stressed. For years I did custom builds myself, and for desktops and workstations I still prefer custom, but in comparing server systems I've found that the HP offerings I worked with were just as good quality (better, in most cases) and had much better options and features. On top of that, I knew that HP had completely tested their SmartArray controller in that server, or fully verified that a given registered RAM was the right stuff for that server.
NOW, that being said, I know EXACTLY where you are coming from about having to get the cost down. Every customer that I work with has about half the budget for server and network infrastructure that they actually should be investing. And yes, in those cases we have also looked at custom solutions. However I still looked primarily at Supermicro barebones systems which, for the most part, were all built and tested for compatibility with a set group of hardware while still offering lower price-point and optional additions outside of just a specific brand. Besides the two HP ProLiant DL360e G8 servers that I have for myself to work with I also still have two Supermicro servers that are about four years old now still chugging along.
So, yes, while it is industry standard to look at the big name brands (because they have the names, the support, and the testing, software, and features behind them), it's not impossible to have a quality server system if you go a different route; just be aware that the support falls on your shoulders. And there, again, I can sympathize with your thinking. When we roll out servers for customers, no matter what the server is, I am the support behind it. The only difference is that if I have an issue, I have one company to call for help.
Let's get back to your needs, though! I would definitely recommend keeping some spare hardware on hand if you can squeeze it into the budget. Having a replacement power supply right there instead of waiting for one to ship can save you days of downtime. And if you can work out replication to another available server, that reduces the impact of downtime even further! Not much has been said yet about RAID controllers. Don't use the onboard controller: it isn't a true hardware solution, it's software RAID that just happens to run from the motherboard firmware rather than from the OS. A hardware RAID controller will offer much greater performance, reliability, and additional options that can be very nice to have. For example, the HP P410 RAID controller I've used in many servers can configure multiple individual logical volumes, and there is a software utility you can install to manage all of your arrays, check their status, and rebuild or verify data, all from within Windows, without having to do it from a basic firmware interface during boot. This means your server, in many situations, can continue to operate while you are checking the status of your arrays.
I've used HP P-series Smart Array controllers in many HP and even non-HP servers with good luck. They last a while, they perform well, and if you buy spare bulk stock or even refurbished units, the P410 series is dirt cheap! The newer generation is the P420, which is more expensive but also higher performance; unfortunately I haven't had as much luck with the P420 in non-HP servers. Beyond that, I have also used LSI and Adaptec RAID controllers, and both were good quality for the price. I'd recommend looking for something with internal mini-SAS connectors (two mini-SAS ports connect via fan-out cables, or to mini-SAS connections on the drive backplane, for up to eight hard drives), and get something with onboard cache memory. It doesn't have to be outrageous; 512 MB of cache can give a nice boost in throughput when you need it.
Just to give you an example. We set up an HP ProLiant ML310e G8 server for a customer recently and, just to test, first set up WS2012 R2 in a RAID 1 array with two 2 TB SATA 7k hard drives just using the B120i RAID controller. For SATA disks these still did pretty good, HDTune registering an average of 140 MB/s throughput, with a max of 160 MB/s and minimum of 110 MB/s. Not bad, really. However, we then put in a P420 RAID controller with 1 GB of cache, created a new RAID 1 array with another identical install of WS2012 R2. This time HDTune read an average of 180 MB/s, with maximum of just over 200 MB/s and minimum never going below 140 MB/s. That's a nice boost in performance for SATA disks just with some help from RAID controller and cache.
Go with as many NICs as you can! If you've got a lot of data to replicate, you might even consider a directly connected 10 GbE link, though I doubt that will be necessary in your situation. Buying refurbished hardware for a business server is risky, but when you can buy an Intel quad-port gigabit PCI-Express NIC on eBay for around $100, you can always buy a second to keep on hand! You can build multiple NIC teams within Server 2012 R2 for improved throughput where needed and keep the network from becoming a bottleneck on your server capacity.
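The teaming mentioned above is built into Server 2012 R2 (LBFO) and takes one cmdlet; a minimal sketch, assuming the physical adapters happen to be named 'NIC1' and 'NIC2' (check yours with Get-NetAdapter first):

```powershell
# Team two physical adapters. SwitchIndependent needs no special switch
# configuration, and the Dynamic load-balancing algorithm is new in 2012 R2.
New-NetLbfoTeam -Name 'Team1' -TeamMembers 'NIC1','NIC2' `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic
```

A team like this survives a single NIC or cable failure and can spread traffic across both links, which matters for replication traffic.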
The last thing I will say right now is BE CAREFUL with Windows Backup and Recovery!!!! This is just my personal experience, but I do NOT trust it as a backup solution for most situations. Windows Backup and Recovery makes an entire snapshot of your system and data, which can be nice to have and easy to keep hold of, but half of the time when I have had to use it, the recovery fails: Windows won't recognize the backup and just won't recover. It is also hardware-dependent. If you back up a user's entire computer and that system gets corrupted, burns up, or teleports to another dimension, you have to recover to the same type of hardware for it to succeed. A different chipset or processor manufacturer, or a different hard drive configuration, and you probably won't be able to recover. Additionally, at least in my use of it, Windows Backup and Recovery requires you to recover the entire system to get at the files backed up within it. In other words, if someone saves a single document on their desktop and the next day miraculously finds a way to destroy just that file, you'd have to recover EVERYTHING to get that one file back. Perhaps they have already made changes or created new files since then, and now those aren't available.
Perhaps this wasn't your actual intention, but I've seen a lot of people use this sort of thing for backing up actual data files (documents, spreadsheets, photos, etc.), and I definitely advise against it. Find a utility that copies the files in their original form to another destination instead of compressing everything into one proprietary archive. That way all of the files can simply be copied to ANY computer, anywhere, and accessed without recovery procedures or specific hardware or software requirements. Again, I've found Uranium Backup to be incredibly nice at this, and the best part is that it's free!
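Windows also ships a tool that does exactly this kind of plain file copy: robocopy. A sketch of mirroring a user's profile folder to a server share, run from PowerShell; the computer name, user, and share paths here are all hypothetical:

```powershell
# Mirror a user's Documents folder to the backup share. NOTE: /MIR also
# DELETES destination files that no longer exist at the source, so point it
# at a dedicated per-user backup folder. /Z = restartable copies over the
# network; /R and /W = retry count and wait between retries; /LOG+ appends
# a run log for auditing.
robocopy '\\PC01\C$\Users\jsmith\Documents' '\\FS01\Backups\jsmith\Documents' `
    /MIR /Z /R:2 /W:5 "/LOG+:\\FS01\Backups\logs\jsmith.log"
```

Because the destination holds ordinary files, a user's single destroyed document can be copied back by hand with no recovery procedure at all, which is exactly the property argued for above.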
2Be_or_Not2Be
April 3, 2014 11:52:00 AM
I'll add my final say to this also. Most business stakeholders will appreciate getting a machine with a whole "company" behind it. They understand that they will pay a bit more to have the machine warrantied for 4-5 years; they accept it because the company will replace *any* hardware component that might fail, and they get an SLA that even specifies the response time.
You keep talking about the business owner being very cheap while still having the final say on matters. If the business owner is that cheap, then you should think hard about how attached you want to be to the "server" you're attempting to build. If you're going to be directly responsible for it, then you might as well buy an identical second machine to the one you're going to build; that's the only way to know you have the parts on hand for replacement. At that point you've paid more than you would have for a Dell/HP server, from companies that keep warehouses of all of their system components. It really isn't worth it, and you don't want to be a scapegoat for a business owner who doesn't value his computing resources enough.
However, you keep ignoring professionals here with years (even decades) of experience. A lot here have both IT and "business" experience; they've seen what can happen when someone tries to go their own route. It's not impossible, but you should also have more experience with servers if you're going to go that route. So far, you have expressed that you don't have that experience.
So if you don't have that experience, take everyone's advice & go with the server from HP/Dell. When you get more server experience yourself, you might choose to do more yourself. You might also realize that sometimes, for specific types of business stakeholders, you go with a solution that takes away direct blame from yourself. After all, with an owner that cheap, you likely will be looking for a different job soon, if you're not already doing so.
Above all else, you might also gain the wisdom that comes from experience to realize that you don't want to add any complications to your life that aren't necessary.
crackerstastic
April 3, 2014 1:53:56 PM
Thank you, 2Be_or_Not2Be. I appreciate the input on the subject. In hindsight of where this thread went, I should have come out from the beginning and mentioned that quotes for branded servers were already in the works and that I was specifically after custom hardware information. My two goals from the beginning were to get some advice on a VM/replica setup and to kick around ideas for hardware. The whole point is to show my boss a well-rounded proposal with several options. I plan on going over the pros and cons of both branded and custom servers. We will end up talking about having company backing versus solo backing, tech support versus manufacturer warranty, and all the stuff in between. I didn't intend to ignore anyone; it is just that I already had professional companies looking into branded server solutions. One of them has already returned a quote on a couple of Dell PE T310 servers (which will need things added to them). What I wanted was some objective information on custom server hardware instead of dismissive remarks. I was hoping for a "Well, although I advise against that big time, here are some tips that may help you out..." and instead what I got was more of a "Huge mistake, there is no hope for you... didn't you hear me before..." attitude. The fact that I knew there were several professionals here was some of the motivation for even bringing the subject up; I just didn't realize how passionately people would be against it. On the flip side, it also sounds like some folks here work with multiple companies and do not want to shoulder the burden of a service agreement themselves. On that angle, I can't agree more with pushing a name brand; it is well within your best interest, and in that environment I would do the same. I'm a single IT guy working for a single company, already supporting over a dozen PCs on site. To me, a couple of servers to manage on top of that is nothing.
As for my boss - yes, he is cheap - but he is also black and white about it. I have seen him go from "What the hell did we spend $79.82 for?" one moment to "Well, if $5,000 is what we have to spend to do business, then that is what we have to spend" the next. He's a weird guy when it comes to money; sometimes I think he does it on purpose just to be unpredictable. It is really going to come down to how much money he thinks a particular setup is worth. Hell, for all I know he might hop on TigerDirect, see some Lenovo server for $1,000 with Windows Server 2012 Essentials, and say that is enough for us.
As for my job, I'm not worried about losing it, nor am I looking for a new one. I have been here for almost 15 years now and I know the guy personally as well as professionally. I do so much more around here besides IT that I would pretty much have to get myself fired for being a terrible employee. It is going to take a lot more than a busted server (branded or built) for me to lose my job. I'm not too worried about direct responsibility falling on me for a failure anyway. He gets the final say because he likes to be the decision maker on these kinds of things, technology especially. It is my job to come up with courses of action and his job to choose which one we take. To put it in perspective: he could tell me to build two custom servers, and three years from now a motherboard in one burns up. He is going to care more about whether the other server can carry the load while the dead one is repaired. If he's in a mood he may pester me about what happened (actually, I'm positive he will) and I'll have to stand firm on "It's hardware... it doesn't last forever." Of course, if he sees a Dell tech on site he will do the exact same thing, and I will also say "It's hardware... it doesn't last forever." I can assure everyone, though, that I am not getting myself into any hole that I can't climb out of with a custom build.
On that note, I think I have quite a bit of information. I thank everyone for your input. I am going to have a fun time simplifying all of this and putting it in my proposal. I appreciate the comments, the feedback, suggestions, and even the light sparring.
Again, my thanks to everyone!
jeff-j
April 4, 2014 5:05:47 AM
Crackerstastic, maybe it would have been better to frame your question along these lines: "I am looking to replace my current server with Hyper-V and replication. I have to come up with multiple proposals for my boss to consider. One proposal is a pre-configured Dell server [list the specs]; the other will be a custom-built box."
Here are the specs for the custom built box.......
Any thoughts?
crackerstastic
April 4, 2014 5:42:31 AM
Thanks Jeff. As I mentioned above, in hindsight that is how I should have approached it. In my defense, though, I didn't realize there would be such negativity towards the idea; there was a lot of biased hammering of the same point here. If we had a larger IT operation going on (site-to-site transfers, 30+ users, etc.), I would likely have leaned towards pre-built servers exclusively. Even then, the primary motivation would have been the service warranty, not that I couldn't piece quality components together myself.
Either way, the conversation is done and I have everything I need. As I've said, thanks to all for your input!
RackMountProcom
April 15, 2014 11:32:21 AM
In this case, it's not necessary to buy an expensive pre-built system. All you need is a Xeon E3-1220 v3 3.1 GHz quad-core coupled with a Supermicro X10SLL-F (the -F stands for IPMI remote management), 8 GB of RAM, a pair of 3 TB enterprise HDDs, and a cheap $100 rackmount case. This build costs no more than $850, and all components come with a 3-year manufacturer's warranty (5 years for Seagate, WD & HGST HDDs), while a Dell system with the same components will probably cost $1,500.
An LSI RAID card for parity management is recommended if you want to add more drives in the future and use RAID 5/6; for RAID 0/1/10, the onboard RAID is fine.
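To put those RAID levels in perspective when sizing drives, here is a minimal Python sketch (my own illustration, not from anyone's post) of how much usable capacity each common level gives you from a set of equal-sized drives:

```python
def usable_tb(level: str, drives: int, size_tb: float) -> float:
    """Usable capacity in TB for n equal-sized drives at a given RAID level."""
    if level == "0":        # striping: all raw space, no redundancy
        return drives * size_tb
    if level in ("1", "10"):  # mirroring: half the raw space
        if drives % 2 or (level == "10" and drives < 4):
            raise ValueError("RAID 1/10 needs an even drive count (>= 4 for 10)")
        return drives * size_tb / 2
    if level == "5":        # one drive's worth of capacity lost to parity
        if drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (drives - 1) * size_tb
    if level == "6":        # two drives' worth lost to parity
        if drives < 4:
            raise ValueError("RAID 6 needs at least 4 drives")
        return (drives - 2) * size_tb
    raise ValueError(f"unsupported RAID level: {level}")

# Example: the pair of 3 TB drives above, mirrored (RAID 1)
print(usable_tb("1", 2, 3.0))   # 3.0 TB usable
```

So the quoted pair of 3 TB drives mirrored gives 3 TB usable, while four such drives in RAID 5 would give 9 TB with single-drive fault tolerance.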