
Need Advice for Server Upgrade

May 6, 2014 2:55:20 PM

The University Library where I work is going to be upgrading our catalog server (which is mission critical) in the next fiscal year (8/1/14 - 7/31/15). We currently have a Dell PowerEdge 2800, which runs only our Library Management System, plus a separate FOG server. While the Dell could be replaced by a newer model, the Dean of Libraries is prepared to spend $8,000-$10,000 to purchase the best system possible. Currently no decisions have been made, so I am trying to gather information on what we need and what is possible within our budget prior to discussing technology issues.

Here are our needs:

1. A new server (obviously)

2. A backup solution. We currently have a DAT 72 tape drive which backs up most of the data, including program files, on the server; the drive itself was recently replaced. Every week a tape is taken to a safe in another building for disaster preparedness. However, in the event of a catastrophic hardware failure, our only strategy is to buy a new server, install Windows, and then restore from backup. Nothing like that has ever been attempted because there is only one physical box, which makes me nervous. On top of all that, local storage consists of six 73 GB hard disks in a RAID 5 array, which I believe has already been rebuilt once (or something along those lines; this was before I arrived), and there is only one spare disk. We are also limited in how much can be backed up unless we want to pay ~$3000 for an LTO tape drive.

This is my thinking:

Instead of simply replacing one box with another, I would like to virtualize the catalog server as well as possibly the FOG server. Right now my preferred setup would be several rackmounted servers with an attached NAS unit for backups (I'm not sure about the configuration, but the Inverted Pyramid of Doom is on my mind as something to avoid). We would still need some form of off-site backup, which would have to come in the form of begging the campus IT folks to help us, since the library has only one location. I have been looking into using XenServer as a hypervisor.

The big advantages, from my point of view, are flexibility and scalability. Currently the only way for us to implement any sort of server, no matter how lightweight, is to repurpose an old computer; thus our FOG server is running on an eight-year-old Gateway PC. I'd like to give us some room for growth.

My main concern is that XenServer may be more than we need. We really could get away with just a tower server, but I don't like limiting ourselves. There is also added complexity to worry about, although XenServer seems fairly easy to use.

Does anybody have any advice about which approach (or another) would work for us, given our budget? At this time I am primarily considering strategy, although I am open to hearing advice about which vendors are preferable.


May 6, 2014 3:15:58 PM

Here is what I would do.

1) Go with Server 2012 R2 as a hypervisor. You can use the free version, but it's tricky to work with because you don't have a GUI. Also, sconfig is your friend for initial setup.
2) You need some form of VM backup utility. There are plenty that allow you to restore backups as a VM. Something like Veeam would be good.
3) Physical hardware: There are two ways of doing this. Either get a single server and hope nothing major fails, or get two lower-end servers. If it were me, I would spend a max of 4K and purchase two physical servers. You need a Server 2012 R2 license and Veeam licenses, but the remaining 2K should get you most of the way there.

Another option would be to go towards VMware, but that adds an extra license cost.

Once you have your host or hosts up and running, I would P2V your FOG server to the host first, then the production server, over a weekend. Basically, you run software that creates a VMDK or some other type of virtual disk image from the physical server and transfers it to the new server. You then make a new virtual server, point it at the disk image, and there you go.
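For a Hyper-V target, here is a minimal sketch of that process using the free Sysinternals Disk2vhd tool; the VM name, memory size, switch name, and paths are placeholders, not a tested recipe:

    # On the old physical box: capture every volume into a single VHDX image
    disk2vhd * D:\CatalogServer.vhdx

    # On the new Hyper-V host (PowerShell): create a VM pointed at the copied image.
    # Generation 1 is the safe choice for an OS converted from physical hardware.
    New-VM -Name "CatalogServer" -MemoryStartupBytes 8GB -Generation 1 `
        -VHDPath "D:\VMs\CatalogServer.vhdx" -SwitchName "LAN"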

Once you are done with that, repurpose the old server as a "backup" server, move it to a neighboring building connected to the same LAN you are on, and move your backups to that server.

That's what I would do. This is not a small project so make sure you plan it out and test first so you don't end up goofing anything up. I strongly recommend multiple hosts so that you have the ability to grow the environment if needed and so that you can migrate to the other host in case one of them starts doing something funny.

Don't do anything too crazy with hardware and don't try to be too fancy. Set up something you will be able to support.
May 6, 2014 5:03:01 PM

mjmacka said:
Here is what I would do.

1) Go with Server 2012 R2 as a hypervisor. You can use the free version, but it's tricky to work with because you don't have a GUI. Also, sconfig is your friend for initial setup.
2) You need some form of VM backup utility. There are plenty that allow you to restore backups as a VM. Something like Veeam would be good.


What advantages does Windows Server 2012 have over Xen (other than being part of a familiar ecosystem)? What about VMware? Our budget does not really allow for a lot of continuing maintenance fees. Case in point is our ILS vendor, SirsiDynix (maker of the software that runs our catalog; this would take quite a bit to explain). They want us to go SaaS and have our catalog server hosted by them, but the money isn't there, which brings me here. The University has a volume license agreement with Microsoft, so we can have as many Windows OS installations as we want as long as they are on campus. They also have VMware products, although those may be licensed per socket or per host. And is there a difference between using a VM backup utility and simply taking snapshots and moving them to storage?

mjmacka said:

3) Physical hardware: There are two ways of doing this. Either get a single server and hope nothing major fails, or get two lower-end servers. If it were me, I would spend a max of 4K and purchase two physical servers. You need a Server 2012 R2 license and Veeam licenses, but the remaining 2K should get you most of the way there.


Definitely. I am thinking at least two primary servers with a third as network storage. We may be able to afford a fourth and stick it in another building, thus solving our disaster preparedness issue. The rest would be needed for ancillary equipment like a rack and KVM switch. Needless to say, I am looking to supplement with grant funding if at all possible.

mjmacka said:

Once you have your host or hosts up and running, I would P2V your FOG server to the host first, then the production server, over a weekend. Basically, you run software that creates a VMDK or some other type of virtual disk image from the physical server and transfers it to the new server. You then make a new virtual server, point it at the disk image, and there you go.


There is a new version of FOG which needs to be installed from scratch anyway, and this is a perfect opportunity to do so. The production server will be handled by SirsiDynix (thankfully), although I don't know if they will let us use our own equipment. Last time we bought a server through them for a "discount" on our migration fees. So there is a lot still up in the air.

mjmacka said:

Once you are done with that, repurpose the old server as a "backup" server, move it to a neighboring building connected to the same LAN you are on, and move your backups to that server.


Interesting idea. It would be cheaper than buying a fourth server.


mjmacka said:

That's what I would do. This is not a small project so make sure you plan it out and test first so you don't end up goofing anything up. I strongly recommend multiple hosts so that you have the ability to grow the environment if needed and so that you can migrate to the other host in case one of them starts doing something funny.


I like your thinking. All the folks at Spiceworks had to say was "buy used equipment" or "rent my used equipment". This will be a big project, which is why I am starting more than a year out.

mjmacka said:

Don't do anything too crazy with hardware and don't try to be too fancy. Set up something you will be able to support.


My biggest fear. I tend to dream big ... sometimes a little too big for my own good. Thank you for your advice.
May 6, 2014 5:27:43 PM

It might help to know that our current production server runs Windows Server 2003 so it won't be a migration so much as a clean install with only the catalog software moving over.

Best solution

May 6, 2014 5:35:08 PM

Nearly all of the small server rollouts I have done have used Hyper-V virtualization. This is mostly personal preference. I've just found Hyper-V to be much easier to work with, and in the event that someone besides me had to step in and work on something, it wouldn't take months to learn how to do anything, since it is built on the familiar Windows system. It also allows me to install additional software as needed on the physical host, such as backup utilities, monitoring software, etc., which cannot be done on an ESXi system.

First off, you may need to establish the limits of your "mission critical" status, as well as what you may need to integrate with this in the future. For example, does the library also have other servers doing other things currently, such as domain services, content filtering or DNS, or some other application servers? If so, then it makes more sense in the long run to start migrating to a unified platform where all of your services run on a couple of performant servers with shared storage, instead of a bunch of independent, dissimilar hardware that has to be backed up and managed separately.

What is an acceptable level of downtime if the server goes down? Do you need to be back up and running within an hour or two? A few minutes? Seconds? The less downtime, the more expensive your solution is going to be, of course.

There are two ways I would approach this. The first is Hyper-V Replication within Server 2012, which requires at minimum two servers; the second is a full high-availability cluster, which requires at minimum two servers plus some form of shared storage.

So, option #1, Replication: With this solution, you would set up two server systems running Server 2012 (R2) with the Hyper-V role. Both servers need to be joined to your domain. On Server1 you would create a virtual machine for your catalog system and have all of the local storage running directly on a hardware RAID array on your physical server, but all storage and systems would be virtualized. There are a couple of steps you then follow to set up replication, which basically makes regular copies of your virtual machine data to a second server. Every few minutes, the data that has changed is written to the other server as well. On Server2 you would see the virtual machine for that catalog system, but it would be turned off. In the event that your primary server went down, you would have to manually go in, tell it to fail over, and start up the virtual machine on the secondary server. At most you might lose a few minutes' worth of data, from the last time that it synchronized. This means the downtime you experience may be a matter of a few minutes, or however long it takes for someone to initiate the failover. Once the virtual machine starts up on Server2, though, everything should be back up and going as before while you address the issues with your primary server.
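To give a feel for it, this is roughly what the setup looks like in PowerShell on Server 2012 R2; the server and VM names are just examples:

    # On Server2 (the replica host): allow it to receive replication traffic
    Set-VMReplicationServer -ReplicationEnabled $true `
        -AllowedAuthenticationType Kerberos `
        -ReplicationAllowedFromAnyServer $true `
        -DefaultStorageLocation "D:\Replica"

    # On Server1 (the primary host): enable replication for the catalog VM
    Enable-VMReplication -VMName "CatalogServer" -ReplicaServerName "Server2" `
        -ReplicaServerPort 80 -AuthenticationType Kerberos
    Start-VMInitialReplication -VMName "CatalogServer"

    # After Server1 fails: manually fail over and start the copy on Server2
    Start-VMFailover -VMName "CatalogServer"
    Start-VM -Name "CatalogServer"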

Likewise, you can also install another virtual machine (such as the separate FOG server) on your Server2 and have it replicate to your Server1 system. This way you are load-balancing your workload across both servers in normal daily usage, instead of running everything on a single server and leaving the "backup" doing nothing at all. This solution is also expandable, as you can install multiple virtual machines on each physical host and have them replicate to the other server. You just have to be sure that there are enough hardware resources in your servers to run the entire load in the event you do need to run everything on a single host. In this way, if you do have other services or servers in use, you could migrate everything onto a single set of replicated hardware servers for ideal performance and fault tolerance.

Option #2, Cluster: This is a much more complex and much more expensive solution, but it offers the greatest high availability. With this solution you would have two or more "nodes," which are basically the computing servers that actually run your virtual machines. However, all of the actual data for those virtual machines is stored on a separate shared storage device or SAN. There are a LOT of options for different SANs out there, but almost none of them are cheap. Some pre-built SAN units work with iSCSI, so you can leverage familiar and cost-effective Ethernet networking to make it all work. You can also make a SAN from a standard server running Server 2012 with iSCSI targeting, or another server OS with iSCSI target capabilities. There are also SAS-connected SANs which attach directly to your hardware, which offers great performance but is limited in the number of physical computers that can connect.
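As a rough sketch, turning a plain Server 2012 R2 box into an iSCSI target looks something like the following; the paths, size, and initiator names are invented for illustration (plain Server 2012 uses .vhd rather than .vhdx files):

    # Install the iSCSI Target Server role
    Install-WindowsFeature FS-iSCSITarget-Server

    # Create a virtual disk to act as the shared storage, and a target
    # that only the two cluster nodes are allowed to connect to
    New-IscsiVirtualDisk -Path "E:\iSCSI\ClusterStorage.vhdx" -SizeBytes 500GB
    New-IscsiServerTarget -TargetName "HyperVCluster" `
        -InitiatorIds "IQN:iqn.1991-05.com.microsoft:server1","IQN:iqn.1991-05.com.microsoft:server2"
    Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" -Path "E:\iSCSI\ClusterStorage.vhdx"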

The way this works is that if one physical node goes down for some reason, the cluster will detect this right away and shift all of the running virtual machines automatically over to the second node, continuing right from where it left off. Downtime is highly dependent on many factors, including the amount of data, the hardware performance, and the network infrastructure, but failover can be a matter of a second or two or upwards of a few minutes. But it is all done automatically. The downside of this solution is, just as you mentioned, the possible inverted pyramid of doom. That's why I have seen a lot of places that need this sort of failover clustering, but on the cheap, go with more of a standard server which can be replicated or duplicated to a second server, instead of highly specialized SAN units which can be more expensive and complex to duplicate.

Backup solutions, again, are a little hard to recommend, as I don't know much about the rest of your environment or needs. I personally like external hard drives because they are cheap but effective backup media. After all, you can get 3 TB or 4 TB hard drives for a couple hundred bucks, which will store a ton of backup data and don't require any special hardware to utilize. There are also a lot of ways of approaching the backup system, which is something I'm still always playing with and trying to learn more about.

Windows has a very nice Windows Server Backup utility built in to Server 2012 which backs up all the data and system state automatically to a separate drive, a separate volume on a drive, or even a network location. I've had good luck restoring from these backups, and it tends not to use a terribly large amount of space either. However, I don't rely solely on it. I always prefer to first implement backups of the crucial data in its original form. My reasoning here is that if something happens to all the hardware years down the road and you can't recover from the backup utility for whatever reason, then as long as you at least have all of the raw data in its original form, you can plug that data into ANY computer, have access to it, and move to dissimilar hardware and start again if necessary.

For this purpose I have been using Uranium Backup lately. This handy free utility can schedule backups at multiple times, such as a daily backup to an individual day's folder which is then overwritten the next week, giving you a full duplicate of all data for a full week's time. There are also weekly, monthly, custom, and everything in between. You can also set up email reports, where it will email after the backup has completed with information about when the backup ran, what files were backed up, how long it took, etc., or what errors there were if the backup didn't complete.
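As an example of how simple the built-in utility can be, a nightly job along these lines (the target drive letter is an assumption) covers all critical volumes plus the system state:

    # Run via Task Scheduler each night; backs up every volume needed for a
    # bare-metal recovery, plus system state, to the dedicated backup drive E:
    wbadmin start backup -backupTarget:E: -allCritical -systemState -quiet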
May 6, 2014 7:36:53 PM

choucove said:
Nearly all of the small server rollouts I have done have used Hyper-V virtualization. [...]


The catalog server does not simply contain the database of all the library's materials. It runs our Integrated Library System, SirsiDynix Workflows. This piece of software manages all functions of the library, including cataloging, circulation (i.e. checking out books), serials, acquisitions of new materials, running reports to gather statistics and perform various functions, and a lot more. Think of it as a POS system, back-end database, and search engine all rolled into one. It would take a long time to fully explain everything, but for us mission critical means that the circulation staff have to be able to manage checkouts and returns, patrons (and the reference librarians who search via the public interface on our website) must be able to search our holdings, and the catalogers have to be able to enter information into the system. Acceptable downtime is usually a few hours, although during peak times (like finals week) it is more like a few minutes to an hour.

Replication is the preferred solution. We don't have the budget or the expertise to manage a SAN. All the core IT functions (DNS, Active Directory, CMS, etc.) are handled by the campus IT folks. The only two servers the library has are the production machine we are replacing and an older Gateway E-4610D repurposed as an image server running FOG on Debian 7.4. I would like them both to be virtualized on one machine (or one set of machines, in this case) as opposed to totally separate devices. It makes managing them easier.

Now for the wrinkles. The library is not, nor ever will be, on the campus domain. When our campus IT office was outsourced, the library was not covered under their contract and somehow ended up off the domain. Our public computers are in a workgroup and, along with the FOG server, are on their own VLAN, segmented from the rest of the network. However, I believe the catalog server is a domain machine.

Don't worry about backup recommendations yet. Currently we use a DAT 72 tape drive to back up most of the data on the server, and once a week we take one tape to a safe across the street. We are in a region (Mississippi) prone to tornadoes, floods, and hurricanes, so that extra step is important. Still, I want to move away from tapes and am formulating a viable strategy to do so. I like the idea of using Uranium or Windows Server Backup in conjunction with an onsite backup server and a repurposed old catalog server as an emergency backup. Keep in mind that we have a pair of brand new 1 TB Western Digital RE drives, one in the current FOG server and the other still in the shrink wrap. They could conceivably be put to use in a backup server.

May 6, 2014 7:57:02 PM

For ease of management, I would highly recommend that you look into setting up a domain environment, even if this is a separate domain from the rest of your campus. You will have much greater control and flexibility over setting up public use computers and staff computers.

One of the customers for my business is our local library. It's not very big, but we are one of the main libraries for quite a large area, so we see a decent number of people coming in from out of town or traveling who need access to internet and computer systems, which has meant we had to ensure we had the right equipment to meet that need.

In this case, we set up four separate VLANs to further segment and secure the network. There is one VLAN for the public use computers, a separate one for the staff computers, another for the public wireless network, and finally another VLAN just for management/administration of computers and network devices. I don't know if this is something you can do since you are integrated with a campus-wide network infrastructure, but at the very minimum I would suggest looking into setting up a domain, if not two separate domains, for the public and private networks. The nice thing is that domain controllers usually require very few resources on a smaller network, so you can plan on implementing and running these right on the same hardware you are already planning to purchase, as discussed above.
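For what it's worth, standing up the first domain controller of a brand-new forest on Server 2012 R2 is only a couple of commands; the domain name below is purely illustrative, and the second command prompts for a DSRM recovery password:

    # Add the AD DS role, then promote this server as the first DC of a new forest
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName "library.example.edu" -InstallDns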
May 6, 2014 8:16:45 PM

Choucove, I will look into that. We would only need one, since the faculty and staff machines are still managed by the campus IT folks. It would solve a few problems (better group policy management) but possibly create more. For example, we can't have a DHCP server since it would conflict with the campus-wide one. Also, I can't figure out how one would set up a domain inside a domain. And institutional politics being what they are, this might not happen. But thank you for suggesting it; I am intrigued.
May 6, 2014 8:17:40 PM

So, since you have the EDU discount on licensing, you can go with Hyper-V or VMware ESXi. With Server 2008 R2, 2012, and 2012 R2, Hyper-V is a role that basically converts your server to a hypervisor. Management is much easier than ESXi because you are in the Windows environment, and almost everyone knows that environment.
ESXi is an option here too, but since you don't have a SAN in the environment, it's not worth the trouble, as Windows is easier to administer.

I like the idea of two hosts. Either a RAID 1 or RAID 5 setup should suffice for both hosts. I would set up a primary host and a secondary host. The primary should run production workloads and the secondary should be used as a fail-over or for testing. Then install Server 2012 R2 on the hosts, set up the Hyper-V role, set up a VM (without a disk), and P2V the old server data over to the new VM.
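Something like this gets each host from a bare Server 2012 R2 install to a working hypervisor; the adapter, switch, and VM names are placeholders:

    # Add the Hyper-V role and management tools, then reboot
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

    # Create an external virtual switch bound to the host's wired NIC
    New-VMSwitch -Name "LAN" -NetAdapterName "Ethernet" -AllowManagementOS $true

    # Create the shell VM with no disk, ready to attach the P2V'd image
    New-VM -Name "CatalogServer" -MemoryStartupBytes 8GB -NoVHD -SwitchName "LAN"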

The SaaS model for your library software might not be a bad idea. With that, you won't be responsible for hosting the server in your environment; basically, all you keep up is the network connection, and SirsiDynix is responsible for the rest. You might be able to move the application support load off to them too. It also removes the need to maintain your own backups.

A child domain, or a new library domain, is also recommended in this type of environment.
May 7, 2014 12:59:11 AM

I think you are overreaching; taking something simple and making it complex is not a good idea (but that's only my opinion).
Your goals as you describe them:
1. Consolidate server hardware/software
2. Replace the backup solution
I think with the budget you have you cannot avoid a single point of failure, so backups are more important than HA.
Now let me make some suggestions:
1. Buy one good server with enough CPU, memory, and disks, and run Hyper-V with 2 VMs, or 3 if you intend to run in a domain environment.
2. Maybe cloud services are for you? Maybe not hosted by the software manufacturer; try to get quotes from other hosts as well.
3. Buy good backup software. You did not mention how much storage you need for backups.
4. Make a NAS out of the old server, host it somewhere else on campus, and dump backups onto that server; you can also use the tape drive to take daily/weekly backups of the NAS.
5. If you want to overshoot :) look at a solution like the Dell VRTX.
6. If you run in a domain environment, make sure you take into account all the administrative overhead you will have to handle.


Good luck :bounce:
May 7, 2014 7:10:40 AM

@mjmacka SaaS is too expensive for us. That may be a blessing in disguise, since it allows us to upgrade our server infrastructure and do some other things. I like the idea of our own domain. This will require some planning. The library has its own security policies, so our own instance of Active Directory with a separate domain may be in order, but that might be more complicated than necessary.

@Cjar Agreed. The simplest solution is the best. We don't have money for cloud services, so any backups have to stay on campus somewhere. I am leaning towards putting a NAS in another building and combining that with replication and maybe an onsite backup server.
May 7, 2014 7:42:01 AM

Okay, so from the conversations we've had so far, this is how I see the environment:

1) A single Hyper-V host. The host OS is Server 2012 R2. University volume licensing should allow you to purchase a Server 2012 R2 Standard or Datacenter license for ... $300.00-$500.00. I paid $300 for a Server 2008 R2 Datacenter license at Michigan State University while working on an upgrade project. The Datacenter license should allow you to install up to 4 virtual instances of Server 2012 R2 without needing to purchase more licenses. If there isn't much of a cost difference, go with that. If not, stick with the Standard license. This should be a one-time cost.
2) The VMs on the host are the FOG server and SirsiDynix. SirsiDynix will be P2V'd and the FOG server will be on some flavor of Linux. I don't think there are any extra costs here.
3) Backup solution: We haven't really been given much info on the backup solution. Do we need a file-level backup or a VM-level backup? How important is the ability to perform a bare-metal restore? How much data is being backed up? I feel like I need more info before I can make a good suggestion.
4) Hardware: Judging by previous conversations, we want a single server. Do we want a rack-mount or a tower model? Are there any special issues we should take into account? Does SirsiDynix have any special requirements? I took a look at the Dell site, and a PowerEdge R720 with upgraded CPUs, 32 GB of RAM, RAID 5 (4 1T 7.5K drives), a dedicated RAID controller, rails, and dual power supplies comes in right around 7K. I didn't include changes to the NICs, and you may be able to survive with less HDD space, RAM, or 6-core CPUs. I am just estimating here to give you an idea of what to expect for your budget. This estimate may also be complete overkill if you were running on a PowerEdge 2800.
If you don't mind doing some of the work yourself, I have seen refurbished R710s for 40% less than new through Dell. However, you have to jump quickly and know what you want.

It would also be prudent to invest in a 5-year warranty instead of the standard 3-year; the default warranty is 3-year NBD onsite.


Just to wrap things up: we need application information (requirements), information about how much data is being backed up, and any other important information about the application/environment.
May 7, 2014 8:39:47 AM

mjmacka said:
Okay, so from the conversations we've had so far, this is how I see the environment: [...]


Why only a single host? I'm somewhat more comfortable having two with replication, but that does add complexity, since a second server is needed. Also, we would need to make sure our license agreement with SirsiDynix allows for it; they charge for having a test server, and I am not sure how a replica would be counted. If there will only be one physical server, then a tower would be fine. We have enough room for it. Racks only make sense when you start talking about multiple servers.

As far as backups go, file level is fine. Right now we only back up to that level. However, I do want to have a VM-level backup in case of a hardware failure. As with replication, we can live without it, but it gives us that extra level of safety and convenience, since the system can run even if one server is down completely.

Unfortunately, I don't have minimum specs for our ILS. The vendor handles that aspect completely, so use the current system as a baseline. At the moment we back up around 70 GB of data each day. That may increase slightly in the future. FOG is not regularly backed up, although I did take an image of it with Clonezilla; since that server receives little use, it can be backed up once a month. Currently it has 90 GB of data. There are no other applications that run on either server.

Overall, I find your solution to be a good fit and will include it in my planning. I will also probably include an alternate scenario with two servers and replication. That becomes more desirable if we add a domain controller to the mix (which is a separate but related discussion).
May 7, 2014 8:45:09 AM

mjmacka said:

1) A single Hyper-V host. The host OS is Server 2012 R2. University volume licensing should allow you to purchase a Server 2012 R2 Standard or Datacenter license for ... $300.00-$500.00. I paid $300 for a Server 2008 R2 Datacenter license at Michigan State University while working on an upgrade project. The Datacenter license should allow you to install up to 4 virtual instances of Server 2012 R2 without needing to purchase more licenses. If there isn't much of a cost difference, go with that. If not, stick with the Standard license. This should be a one-time cost.


A correction - the Datacenter Edition of Windows Server allows for unlimited virtual guests, basically limited only by server hardware. The older, discontinued Enterprise Edition (Server 2008 and 2008 R2) allowed for 4 licensed virtual guests.
May 7, 2014 8:52:08 AM

mjmacka said:

4) Hardware: Judging by previous conversations, we want a single server. Do we want a rack-mount or a tower model? Are there any special issues we should take into account? Does SirsiDynix have any special requirements? I took a look at the Dell site, and a PowerEdge R720 with upgraded CPUs, 32 GB of RAM, RAID 5 (4 1T 7.5K drives), a dedicated RAID controller, rails, and dual power supplies comes in right around 7K. I didn't include changes to the NICs, and you may be able to survive with less HDD space, RAM, or 6-core CPUs. I am just estimating here to give you an idea of what to expect for your budget. This estimate may also be complete overkill if you were running on a PowerEdge 2800.


Also, 7200 RPM SATA drives (which I think is what you meant by 4 1T 7.5K drives) are not recommended for mainline storage, especially when running several virtual guests. They would be better for nearline storage, such as the backup server, where capacity is more important than performance. For your main storage, I would recommend an array of 15K SAS drives with a SAS RAID controller (for Dell, something like the PERC H710) or even an array of SSDs (roll your own if you're price-sensitive).
May 7, 2014 9:04:12 AM

If I'm understanding the way your services work, it looks like you need to run at least one, perhaps two, database systems which will be making random-access reads and writes to small chunks of data. For this, I don't recommend RAID 5. RAID 5 not only suffers drawbacks in throughput performance but also long rebuild times when a drive fails, increasing the likelihood of losing all your information if a second drive fails during the rebuild. RAID 10 requires a minimum of four hard drives but offers up to twice the throughput of RAID 5 and is more widely recommended for database environments where you don't need a lot of capacity. (For example, four 1 TB drives yield 3 TB usable in RAID 5 but only 2 TB in RAID 10; you trade capacity for faster random writes and safer rebuilds.)

I ran across this with one of my other customers looking at a similar situation, comparing one server to two. With a single server, you must ensure you have the hardware resources to run ALL of your services plus additional room for growth. You may also consider keeping spare hardware on hand to replace defective parts instead of waiting for replacements. This could mean a replacement power supply (if you don't get redundant units) or additional hard drives. If you go with two physical servers, each server doesn't have to be quite as powerful as a single stand-alone server, because in general each is going to be running half of your workload instead of everything. This can also mean more efficient performance in some cases, as you have two full physical systems to load-balance your services across. However, it can be more expensive than just purchasing one more powerful server to put everything on. The customer I have been working with chose the cheap option, which is already showing limitations. Any sort of maintenance work that has to be done on the one server takes them completely offline. This also means any updates, changes, testing, or other maintenance has to be planned and done after hours, which can be more difficult or costly in the end. With replication, or even with just two physical servers to work with, we could have simply moved all running virtual machines with Live Migration from one host to the other, and everything would continue to operate without problem while we did what was needed on the first server.
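For example, once migration is enabled on both hosts, a shared-nothing live migration on Server 2012 R2 is a single cmdlet; the host and path names here are assumptions:

    # Move the running VM and its storage to the other host with no downtime
    # (both hosts must have VM migration enabled and trust each other)
    Move-VM -Name "CatalogServer" -DestinationHost "Server2" `
        -IncludeStorage -DestinationStoragePath "D:\VMs"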

Do you currently have a place for a rackmount cabinet or not? If you do, it can be very nice to have, as it gives you more room to expand if you need to in the future (such as adding a rackmount NAS, switches, etc.), but if not, it is one less expense. Pedestal servers will work just as well as the similar rackmount systems.
May 7, 2014 9:05:11 AM

Thanks for that correction 2Be_or_Not2Be.

In the "guess" build I choose those drives because I didn't know how important storage was. Going up from there added a significances cost to the build. Three SSD drives add at least $1000 on to the build and I didn't want to inflate the price too much. I only saw a few options for SAS drives, most of the drives were SSD. The dedicated RAID controller in the build I made was a PERC 310. The 710 is within $200.00, so that's an option.
In new servers, I tend to avoid add your own drives because Dell support can be a bit touchy if you run into issues with those drives.

In my opinion, dual hosts are a better way to go, but I don't think you have the budget for that, especially if you want a more robust host.

The R720 is a rack mount server, so we might want to stick with the T (tower) line of Dell Servers. Those also tend to be a bit less expensive...
May 7, 2014 9:45:38 AM

@choucove There is a server rack currently full of network switches. I'd prefer to buy a dedicated one. 24U racks aren't that expensive and some come with wheels. The library has a dedicated server room with more than enough space for a rack.

@mjmacka The R720 is one of Dell's higher-end units. Weirdly, it doesn't seem to want you to use hardware RAID. I priced out an R320 with most of what we need for half the price. IBM and Cisco have attractively priced options as well (HP seems expensive). Right now I'm focused on weighing the merits of various strategies, so hardware specifics are less important.
May 7, 2014 10:03:19 AM

Michael Paulmeno said:
@choucove There is a server rack currently full of network switches. I'd prefer to buy a dedicated one. 24U racks aren't that expensive and some come with wheels. The library has a dedicated server room with more than enough space for a rack.

@mjmacka The R720 is one of Dell's higher-end units. Weirdly, it doesn't seem to want you to use hardware RAID. I priced out an R320 with most of what we need for half the price. IBM and Cisco have attractively priced options as well (HP seems expensive). Right now I'm focused on weighing the merits of various strategies, so hardware specifics are less important.


How about this for an implementation strategy: buy one server, run two virtual guests on it (catalog system & FOG system), and then buy a bunch of SATA drives for the older PowerEdge 2800 to turn it into your backup server. Back up each VM to it each night. This setup should be doable within $10k.
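If you end up on Hyper-V, the nightly copy could be as simple as a scheduled script like this; the \\backup share and VM names are made up for illustration:

    # Export each guest to the repurposed PowerEdge 2800 each night
    # (on Server 2012 R2, Export-VM can run against a live VM)
    foreach ($vm in "CatalogServer", "FogServer") {
        Export-VM -Name $vm -Path "\\backup\VMBackups\$(Get-Date -Format yyyy-MM-dd)"
    }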
May 7, 2014 11:30:42 AM

2Be_or_Not2Be said:

How about this for an implementation strategy: buy one server, run two virtual guests on it (catalog system & FOG system), and then buy a bunch of SATA drives for the older PowerEdge 2800 to turn it into your backup server. Back up each VM to it each night. This setup should be doable within $10k.


That is my plan. We even have two WD RE 1 TB hard drives which were bought last year for use with our current FOG server. I just have to check to make sure the old server has SATA connections, as it currently runs 10K SCSI disks. Once finals are done, I'll pop the case and take a look.