Solved

Help me spend $20,000 on an all-round VDI/Cloud/Hyper-V cluster setup. Rate my setup.

Tags:
  • Business Computing
  • blade
  • hosting
  • Servers
  • cluster
  • VDI
  • SSD
  • Cloud Computing
May 28, 2014 1:02:05 PM

I want to spend $20,000 max on hardware to be able to start offering some VDI and cloud services to my customers. Here is my post from another forum, to give you guys an idea of what I'm planning.

I already have a (semi-successful) IT company: I have a customer base, advertising, cash flow, connections, etc.

For starters I will be using one colo location; the DCs in my country are among the most reliable in the world, and so is the infrastructure. My DC is one of the bigger ones, and I will go either with a private rack or with two separate ones shared with my clients (so I will be the only one placing hardware). There are no natural disasters or things like that, DCs going down is extremely rare, and then only at startups/smaller shops.

The $20,000 is purely for hardware. I have already decided to go with the Windows Server/Hyper-V platform, because of the new technologies that make "decent" low-budget setups possible. I cannot afford a SAN, so I will need a different storage solution that utilizes the Server 2012 R2 capabilities.

I might have been a little unrealistic regarding my requirements, and they are more goals anyway. Let me try again:

My main goal will be virtualization and delivering VDI (zero clients). Second to that, I would like to be able to deliver some cloud services (backup, hosting, sharing, remote desktop, etc.) to eventually offer an all-in-one package.

I understand that with my budget I will not immediately be able to offer all these things at maximum performance and reliability. However, I should be able to lay a solid foundation for future investments, and probably even offer VDI at a small scale?

Some things to take into account before sharing your advice:

These services will not be offered publicly. I will start with my current customer base and work up from there.

Most of my customers have minimal storage requirements, so I'm okay with making concessions on the amount of storage I can offer, as long as I can easily add more storage in the future.

Most clients have very small bandwidth (8-60 Mb download), which is why I want to build my architecture around the zero-client model. That way I can at least offer my services to every one of my clients.
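
To put rough numbers on that, here is a minimal sketch; the per-session bandwidth figures are assumptions for illustration, not measurements, and real usage varies heavily with resolution and workload:

```python
# Rough sanity check on remote-display bandwidth per zero-client session.
# The per-session figures are assumptions, not measured values.

SESSION_MBPS = {
    "office work": 0.5,    # assumed average for text/productivity sessions
    "rich graphics": 2.0,  # assumed average for media-heavy sessions
}

for downlink in (8, 20, 60):  # the 8-60 Mb range mentioned above
    for workload, mbps in SESSION_MBPS.items():
        sessions = int(downlink / mbps)
        print(f"{downlink} Mb/s downlink, {workload}: ~{sessions} concurrent sessions")
```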

The downside of this is of course that I am very dependent on latency, which is why I think local SSDs (combined with HDDs) could be my answer.
I am aware of the SSD lifespan issue, but I will only be choosing drives that have proven to sustain some heavy-duty punishment. (Check THIS out: a hardcore SSD endurance test, 600 TB so far, and not one has failed, not even the TLC-based one.)

If each of the consumer-grade SSDs in that test can sustain over half a petabyte of non-stop writes without breaking a sweat (except for the TLC maybe), and probably make it to a petabyte without dying, they will serve my cause just fine.
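
To translate that endurance figure into a lifespan estimate, a quick sketch; the daily write volume below is an assumed placeholder, so measure the real workload first:

```python
# Back-of-envelope SSD lifespan from the endurance test cited above.
# The 600 TB figure comes from that test; the daily write volume is
# an assumption for illustration only.

endurance_tb = 600      # writes survived so far in the cited test
daily_writes_gb = 200   # assumed host writes per drive per day (placeholder)

days = (endurance_tb * 1000) / daily_writes_gb
print(f"At {daily_writes_gb} GB/day, {endurance_tb} TB of writes lasts ~{days / 365:.1f} years")
```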

Now, I've made some decisions so far: the E3 does not offer enough RAM for future expansion, so I will be going with the E5 platform. Since DP motherboards are not that much more expensive than single-socket ones, I will probably go with dual socket. Very flexible in terms of expansion, and not that much more expensive compared to single socket.
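
A quick way to see why the RAM ceiling drives that choice; the per-VM allocation and host reserve in this sketch are assumed figures:

```python
# Why the RAM ceiling matters for VDI density: desktop VMs per node at
# different memory capacities. Per-VM RAM and host reserve are assumptions.

def vdi_density(node_ram_gb, vm_ram_gb=2, host_reserve_gb=8):
    """Naive VM count: usable RAM divided by the per-VM allocation."""
    return (node_ram_gb - host_reserve_gb) // vm_ram_gb

for platform, ram_gb in [("E3 (32 GB cap)", 32),
                         ("dual E5, 128 GB", 128),
                         ("dual E5, 256 GB", 256)]:
    print(f"{platform}: ~{vdi_density(ram_gb)} desktop VMs")
```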

Now the big questions for me at this point are:
Can I use the DP E5 nodes for storage as well, by adding local storage to every node and using clustering for redundancy and sharing? What are the cons versus dedicated storage servers?

Would it be better to use separate nodes just for storage? If so, why? That would cost me a lot more money, and the all-in-one option would be a lot more efficient.

If I'm going the all-in-one route, I will probably go with 10 GbE as well, and put each node in at least a 2U form factor for more drives, expansion possibilities, and ease of management.

I am aware that I will have a smaller number of nodes in total, but because each node serves so many purposes, I still think this would be the best route. Especially if you consider that eventually I will need to step up to 10 GbE anyway, and in the future I can always buy more nodes for extra redundancy.

And I can start with nodes that are only half-filled and work up from there, and if one of the nodes does go down, I could temporarily move the CPU, RAM, etc. to the remaining nodes to give them some more power until I fix the problem with the failed node.
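
A simple way to check whether the remaining nodes could absorb a failure; the node count and per-node load here are placeholder assumptions:

```python
# N-1 check: if one node fails, how much extra load lands on the survivors?
# Node count and steady-state VMs per node are placeholder assumptions.

nodes = 3          # assumed cluster size
vms_per_node = 20  # assumed steady-state load per node

survivors = nodes - 1
load_after_failure = nodes * vms_per_node / survivors
print(f"After one failure, each survivor hosts {load_after_failure:.0f} VMs "
      f"({load_after_failure / vms_per_node:.0%} of its normal load)")
```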

So, that is my plan as of now; feel free to criticize anything you would do differently. If something in my setup is impossible or impractical, please say so, and tell me what a better alternative would be.

Many thanks for the responses so far,

Regards,

Cloudbuilder


May 29, 2014 1:42:46 PM

I think the biggest problem with your plan is your funding: $20k can easily go into one fully loaded server.

If you want to start small, I would advise you to look at the Dell R720xd. It's a 2U, DP server with up to 24 2.5" hot-plug bays. If you think storage requirements will be light, you can probably save money by going with 10k SAS drives rather than SSDs. Get as much memory as you can afford.

If you're concerned about latency, your clients' Internet connections will likely have a bigger impact than the performance of your storage subsystem.

Also, you haven't addressed redundancy: if your single server goes down, where is your backup? You'll need to plan for that as well.

Best solution

May 29, 2014 8:16:51 PM

I'm a little concerned with your goals here: limit the initial budget, but provide several services, high availability, and leading-edge hardware platforms like enterprise SSDs and 10GbE infrastructure. Not even figuring in any software costs, $20k is just not enough.

SSDs: The real benefit of an SSD is of course its throughput, but in a datacenter environment you often have to consider what other bottlenecks you might be facing. For example, if you are planning to put SSDs in a server for remote VDI users, do you actually have enough concurrent user sessions on that server to warrant that throughput? And as for general file storage, a large disk array on a good controller will offer plenty of throughput and performance, but with far greater data density and lower cost, so flash doesn't make sense there.
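
To make that concrete, a sketch using rule-of-thumb figures rather than benchmarks; all numbers below are illustrative assumptions:

```python
# Does the VDI workload actually need SSD-class IOPS? Compare required IOPS
# against a 10k SAS spindle array. All figures are illustrative rules of
# thumb, not benchmarks, and the RAID write penalty is ignored.

concurrent_users = 50
iops_per_user = 15   # common steady-state rule of thumb for VDI
required_iops = concurrent_users * iops_per_user

sas_10k_iops = 140   # rough per-spindle figure for a 10k SAS drive
drives = 12
array_iops = sas_10k_iops * drives

print(f"Needed: ~{required_iops} IOPS; {drives}x 10k SAS: ~{array_iops} IOPS")
```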

I'm not an expert in creating datacenters (I deal with small businesses, usually one or two server systems in their offices), but the big thing with anything, of course, is to create your foundation first so you can grow from there. Build the basics that you need to start with; if you can get that running right and bringing in revenue, then you can afford to build out.

So, with that, I would suggest looking at what services you want to offer first. Offering everything from VDI to remote data storage/backup is hard because there are really two different goals here: VDI needs quite a bit of processing and memory resources but doesn't require a ton of storage space, while storage services require much less processing but have to be highly expandable.

Instead of having individual server nodes with onboard hard drives for storage, I'd suggest looking into a "headed" configuration, where you build a server that runs your OS and manages the arrays, and use additional hardware RAID or HBA cards connected to external JBOD or DAS disk arrays, which can hold many more hard drives than your internal server can support. For example, you can run a single HP DL380p G8 server with a single six-core processor and a decent amount of RAM, and then include a single HP P421 giving you two external SAS connectors going to an MSA D2600 supporting twelve 3.5" hard drives. As you need more storage, you add more MSA D2600 units connected to existing available SAS ports or additional controllers. The downside is that you don't have full fault tolerance of the hardware; getting that requires a minimum of three external JBODs with dual controllers and a minimum of two storage servers, which alone would take up the majority of your $20k budget, if not more.
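
Some rough usable-capacity math for that head-plus-JBOD layout; the drive size and RAID level in this sketch are assumptions, not a recommendation:

```python
# Usable capacity as MSA D2600 shelves (twelve 3.5" bays each) are chained
# off the head server. Drive size and per-shelf RAID 6 are assumptions.

drive_tb = 3         # assumed 3 TB nearline SAS drives
bays_per_shelf = 12  # MSA D2600 holds twelve 3.5" drives

for shelves in range(1, 4):
    raw_tb = shelves * bays_per_shelf * drive_tb
    usable_tb = shelves * (bays_per_shelf - 2) * drive_tb  # RAID 6: 2 parity per shelf
    print(f"{shelves} shelf(s): {raw_tb} TB raw, ~{usable_tb} TB usable")
```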

If you are going to manage the responsibility and upkeep of all the hardware yourself, then you can also consider the Supermicro route, where you build your own. There are many 4U chassis that support a large number of 3.5" hard drives to create one very large storage server. Again, though, this doesn't offer you high availability in the event that the chassis or server system goes down.