Top Tips on How to Squeeze More From Your IT Systems

jpishgar

Splendid
Overlord Emeritus
Hey all!

We've got a big feature article coming up based on the title above, and we'd like to put a few questions to the IT pros in the crowd and share some of your insights with the community.

Every business has IT systems, whether they are racks of HPC nodes or ranks of SAN boxes. You might have one server in a closet, or a whole data center humming along in the desert. One thing remains consistent: everyone out there is constantly searching for ways to wring more out of their IT dollar.

Virtualization is one way (and we cover this here on the forums in Business Computing), but that's a pretty big generalization. Surely, some approaches to virtualization prove more effective than others.

We also hear a lot of buzz about converged architectures, where IT has "piles" of compute, storage, and networking infrastructure that can be dynamically allocated to the organization as needed. What we want to know:

• What has been your experience with converged architecture?

• Has it been helpful?

• Do you need more info on the subject?


If you've got some pearls of wisdom and advice, we encourage you to share your top tips. If you find you're short on advice, tell us what you'd like to hear about in this context. This is your chance to help shape Tom's coverage, so we can in return offer you better, more actionable information.

Thanks in advance!

Yours,
William Van Winkle, Contributing Writer
Joe Pishgar, Senior Community Manager
 

williamvw

Distinguished
Let me see if I can goose this discussion a bit with some more specific questions. Please feel free to address any of the following:

1. How do you go about determining if your current infrastructure is sufficient for the business you anticipate conducting 12 to 24 months forward?

2. How do you go about planning flexibility into your infrastructure? For example, say you made a big infrastructure investment six months ago, and today you have a need to change things up – you need more storage bandwidth or a change in application types demands higher compute capabilities. Did you plan for such eventualities, and, if so, how?

3. Do you use cloud resources in order to help manage workloads? If so, can you describe how in general terms?

4. There is this idea of “silos” in data centers, that “this app runs over here” or “this segment of the business runs over there.” Converged infrastructure, which shares compute, storage, and networking resources from a centralized pool as needed, is increasingly mentioned as a fix for the inefficiencies of “siloing.” Is it? Have you witnessed these inefficiencies in your own operations? And have you tried some sort of converged infrastructure as a remedy? If so, did it work?

5. What areas of your IT infrastructure now benefit most from virtualization?

Thanks for taking a couple minutes to consider this!
 

dmacvittie

Reputable
I work with companies, helping them adopt cloud and devops. So these answers are a conglomeration of experience.

I've helped companies use VMware, OpenStack, AWS, etc. Network virtualization, storage virtualization, server virtualization, automation. Overall, server virtualization - and the network virtualization required to support it - has good adoption and helps IT adapt to changing business needs. Storage virtualization is a bit of an odd duck, with most orgs simply using FC or iSCSI with thin provisioning.

Best pearl of wisdom I've run into? Probably "If you have too much money you use VMware; if you have too much time you use OpenStack." Completely true. VMware costs a lot relative to other solutions, but it "just works" and has stellar support. OpenStack takes a ton of time to implement successfully, but costs nothing (unless you get RHOSP, which has better support).

Virtualization also helps with automation... and at the start, devops is automation. So beyond the agility to respond to business needs, it is possible to gain standardization and speed up deployments.
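
To make that concrete, here is a minimal sketch of the kind of provisioning automation meant here, using AWS via boto3 purely as an illustration (the AMI ID, instance type, and tag values are placeholders, and tagging every instance with an owner and project is just one convention):

    # Minimal provisioning sketch; AMI ID, instance type, and tags are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    resp = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder AMI
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Owner", "Value": "app-team"},       # who to ask at review time
                {"Key": "Project", "Value": "seasonal-web"},  # why the instance exists
            ],
        }],
    )

    print("Launched", resp["Instances"][0]["InstanceId"])

Because the same script produces the same result every time, you get standardization for free, and the tags make it possible to ask business owners later whether an instance is still needed.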

On the negative side, it does seem that having massive amounts of computing power to be carved up creates more requests (so-called "virtualization sprawl") - meaning more hardware - over time than if a project had to justify the expense of hardware. And those servers have to be secured and managed, meaning gains from automation are often spent on increased responsibility before they're realized.
 

ehall

Reputable
Storage and CPUs have been very cheap for several years. We keep unused servers in a rack just to have them around. If we need extra capacity for a server due to a new project or seasonal demand, we can allocate the hardware and go. Also, if a system goes down or starts acting strangely, we just clone and go. It's very cheap insurance. Getting to this point required us to bring our existing systems into order, which was expensive and difficult, but the payoff has been worth it.

 

williamvw

Distinguished
I hope I don't sound ignorant by asking this, but what makes storage virtualization so odd? I thought that FC architecture in particular was quite expensive. What's the barrier(s) to virtualizing storage?

What if you don't have enough time or money? What's the "poor man's" solution? (Or is there one?)

So what are one or two good strategies to prevent this sprawl? I was under the impression that such resources were bought as-needed, not overbought and kept on hand for if and when they're eventually needed. How are you measuring utilization of these resources and -- maybe even more importantly -- what are your thresholds for knowing when to add more?

 

williamvw

Distinguished
Any chance you might be able to quantify that? It seems bizarre to me that letting redundant hardware gather dust is ultimately the choice strategy for meeting these needs, but maybe that's true under certain conditions.

Also, I'm very curious what specifically about your preparation process was "expensive and difficult." What did you have to do?

Thanks!
 

1jf

Reputable
Awesome quote about VMware vs OpenStack.

My question is, does VMware actually have a cloud stack? In other words, is there API-based launching of instances? API-based requests and fulfillment for storage? The VMware users I have spoken to who say they have a VMware cloud are generally referring to a big bucket of virtualization, which does not yield the same benefits that cloud-type technologies do. Maybe they haven't implemented parts of VMware's solution?

Virtualization sprawl is largely taken care of by an actual cloud stack. For example, when an app can scale up by itself, requesting that new server instances be created and then destroying those instances when the load goes down, all of a sudden you don't have sprawling virtual machines all over the place.
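
For what it's worth, here is roughly what that looks like on one cloud API, using AWS Auto Scaling via boto3 purely as an illustration (the group name, launch template, and subnet ID are placeholders; other stacks expose equivalent calls):

    # Sketch of an auto-scaled pool; names and IDs are placeholders for illustration.
    import boto3

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")

    # A pool the application can grow into, instead of hand-built one-off VMs.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-pool",
        LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        VPCZoneIdentifier="subnet-0123456789abcdef0",
    )

    # Target tracking: instances are added when average CPU rises above 60%
    # and removed again when load falls, so idle VMs don't linger and sprawl.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-pool",
        PolicyName="cpu-target-60",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization"
            },
            "TargetValue": 60.0,
        },
    )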

My two cents, having done several enterprise projects with CloudStack and OpenStack. I will admit that I have not dug into VMware lately, so enlighten me. :)
 
• What has been your experience with converged architecture?

Where I currently work we have around 10-12 servers, all with this architecture, and in general it has been good. You don't have to worry about running out of space on the servers or constantly cleaning out "old" information, but that is also a negative point. Since a server can keep expanding its storage (up to what the hardware allows), you end up holding a lot of information, and not all of it is useful for your business.

• Has it been helpful?

In general it has been helpful; you know there will always be enough space on your IT servers.

• Do you need more info on the subject?

I think the information is already out there; the problem is that not everyone makes use of it. You can find almost anything on Google; it sometimes just takes more time than it should to find something that helps you answer your question or solve your problem.
 
What has been your experience with converged architecture?

One of our customers recently underwent an extensive renovation that replaced separate network/storage infrastructure with a converged 10GbE one. This comes with some pros and cons. One pro is fewer administrative man-hours to maintain and support the infrastructure. FCoE lets you maintain your high-availability VMware cluster without needing the messiness that is iSCSI. Planning is also a bit cleaner: you don't have to try to predict storage bandwidth one to two years in advance; instead you factor in total bandwidth and ensure your links are wide enough to accommodate that growth. Another pro is that it's cheaper, since you don't have to purchase two complete sets of fabric.

The downside is that converged fabric is slower than dedicated storage fabric. An FC-AL fabric runs at 16Gb and has smaller headers and no IP overhead, so there is less latency and the pipes can be bigger. The Ethernet side can also be slower because the concentrators have to handle storage data, which, depending on your organization, can be immense.
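
As a back-of-the-envelope illustration of that link-sizing approach (every figure below is a made-up assumption, not our customer's numbers):

    # Rough converged-link sizing sketch; every figure here is an assumption.
    storage_gbps = 6.0      # peak storage traffic today
    network_gbps = 2.5      # peak LAN traffic today
    growth = 1.4            # expected growth over the planning window (+40%)
    headroom = 1.25         # keep roughly 20% of the link free at projected peak

    required = (storage_gbps + network_gbps) * growth * headroom
    links_10gbe = -(-required // 10)   # ceiling division: how many 10GbE links

    print(f"Projected peak: {required:.1f} Gb/s -> {int(links_10gbe)} x 10GbE links")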

Has it been helpful?

Overall yes, the net result saved money and allowed more flexibility for future expansion.

Do you need more info on the subject?

I'll be glad to answer any more questions you have. The decision to go converged really depends on the organization and what their needs are. It offers maximum flexibility and lower administration costs but at the expense of absolute performance. There may be solutions that require that absolute performance, and in those circumstances it might be better to use some form of local high speed storage, possibly a disk array cabled directly into the devices that need it.
 

dmacvittie

Reputable
I hope I don't sound ignorant by asking this, but what makes storage virtualization so odd? I thought that FC architecture in particular was quite expensive. What's the barrier(s) to virtualizing storage?

You don't sound ignorant... There's a lot going on out there, and none of us is an expert in everything. FC virtualization has been around forever... but adoption is generally low, because it drops the ability to track where files are, and losing the virtualization head is a serious data-loss risk.

What if you don't have enough time or money? What's the "poor man's" solution? (Or is there one?)

I like Eucalyptus for internal clouds... that quote actually comes from their CEO. Most people would tell you KVM is the poor man's cloud, but it too requires work.

So what are one or two good strategies to prevent this sprawl? I was under the impression that such resources were bought as-needed, not overbought and kept on hand for if and when they're eventually needed. How are you measuring utilization of these resources and -- maybe even more importantly -- what are your thresholds for knowing when to add more?

The sprawl problem is pretty straightforward: control. Watch out for resources that are committed but whose app isn't being utilized. Periodically review (or ask business owners to review) the status of existing instances.
Most orgs I deal with over-buy. There is little benefit to agility if resource constraints stop you. Resource utilization is generally tracked via CPU, memory, and disk usage, measured as a percentage. Different companies set different thresholds, but 60% CPU, 75% memory, and 60% disk utilization are good starting numbers.

I was at a 100% Automated Meter Reading utility that saw so much data throughput that 40% disk utilization started throwing warnings... It just depends on the scenario.
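
For reference, those host-level percentages are easy to watch with a few lines of Python; here is a minimal sketch using the psutil library, with the thresholds set to the starting numbers above (tune them to your own scenario):

    # Minimal utilization check; thresholds are the starting numbers suggested above.
    import psutil

    THRESHOLDS = {"cpu": 60.0, "memory": 75.0, "disk": 60.0}  # percent

    readings = {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }

    for name, value in readings.items():
        status = "WARN" if value >= THRESHOLDS[name] else "ok"
        print(f"{name}: {value:.1f}% ({status}, threshold {THRESHOLDS[name]:.0f}%)")

In practice you would feed the same numbers into whatever monitoring system you already run; the point is that the thresholds are a policy decision, not something the tooling picks for you.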
 

williamvw

Distinguished
These answers are excellent! Thank you, everyone. This is helping immensely. So this is one side of the coin: helping to answer these up-front questions. The other side is what YOU want to know. Are there certain areas within the converged field that even experts like yourselves are left needing more information about? If so, what are they? What are your biggest pain points either within your converged infrastructure deployments or your preparations for implementing them?
 

Santanu_Saha

Reputable
Tips for All Systems

1. Don’t let your computer’s boot drive get too full:
Make sure to leave about 20% of your computer's main hard disk free for system tasks and virtual memory operations. This is crucial for maintaining system speed. If your main hard disk gets more than 80% full, it is time to go out and buy a second hard disk, or else get rid of some files. External USB and FireWire drives are more affordable than ever. Internal drives are even less expensive! While you're at it, buy an extra drive just for backing up!
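
A quick way to check where you stand against that 20% guideline (a minimal sketch using Python's standard library; point the path at whichever drive you want to inspect):

    # Check free space on a drive against the ~20%-free guideline above.
    import shutil

    path = "/"  # e.g. "C:\\" on Windows; pick the drive you want to inspect
    usage = shutil.disk_usage(path)
    free_pct = usage.free / usage.total * 100

    print(f"{path}: {free_pct:.1f}% free")
    if free_pct < 20:
        print("Below the ~20% free guideline: consider a second drive or a cleanup.")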

2. Get more RAM:
Your operating system can use up to 1GB of RAM all by itself. On a DJ computer, you'll want more than that so your power-hungry applications have all the resources they need. 2GB is a great place to start. If you'll be using lots of virtual instruments, samplers, and other sound generators, you'll want even more... think 3 or 4GB. Is there such a thing as overkill? At this point, yes, there is. While many of today's computers can accommodate 8GB or more of RAM, it's almost impossible to use up that much even in virtual-instrument- and sample-heavy projects. For the most part, the only time you'll need 8GB of RAM is in video and 3D-modeling applications.

3. Place your library on a second hard disk:
While even the 5,400 RPM drives in most laptops can handle recording eight or even 16 simultaneous tracks, you can really improve system performance by dedicating a 7,200 or 10,000 RPM drive to your recording projects. Certain files on your computer change all the time, such as your email, internet search history, and bookmarks. Other files stay more or less the same; these include music and photo libraries and large audio files. Your system will perform better and won't have to work as hard if you get a second drive to house your library. This will result in less fragmented drive space and faster loading and writing of large files. It also makes things easier when it comes to backing up your files.

4. Use the best ports on your computer for recording devices:

About FireWire: If you have a desktop or a tower, be sure to connect your audio interface to a port on the back of the computer. Generally, ports on the front of the computer, on the monitor, or on the keyboard don't perform as well as those on the back. This can help prevent noise, dropouts, and connection issues. Not all FireWire ports are created equal. If the port on your computer performs poorly, or your device has problems being recognized, consider purchasing a new FireWire card.

About USB and USB 2.0: If you use a USB hub to connect your device, use a hub that has its own AC power adapter. If necessary, upgrade to a premium USB cable. If you are in an area that has a high level of radio or electrical interference, and you experience noise or hum when using USB audio, upgrade your USB cable to a premium cable with extra shielding and a ferrite bead (a cylindrical bump on one end of the cable), which can filter out some external noise.
