Clearly, virtualization is the “hot, new technology” facing many IT organizations. Yet it also seems to be the space where the “just because you can, doesn’t mean you should” problem currently rears its ugly head most often.
As with everything in IT, it is critical that all technical decisions be put into a business context so that we understand why we choose to do what we do. We shouldn’t blindly attempt to make our decisions based on popular deployment methodologies or worse, myths.
Virtualization itself should be a default decision today for those working in the x64 computing space. Systems should be deployed without virtualization only when a clear and obvious necessity exists, such as specific hardware needs or latency-sensitive applications. Barring any such need, virtualization is free to implement, with solutions available from many vendors, and it offers many benefits both today and in future-proofing the environment.
That being said, what I often see today are companies deploying virtualization not as a best practice but as a panacea to all perceived IT problems. This it certainly is not.
Virtualization is a very important tool to have in the IT toolbox. It’s one we will reach for very often, but it does not solve every problem. It should be treated like every other tool that we possess, and be deployed only when appropriate.
I see several things recur when virtualization comes up as a topic. Many companies today are moving toward virtualization not because they have identified a business need but because it is the currently trending topic. People feel that if they don’t implement virtualization, they will somehow be left behind or miss out on some mythical functionality.
This is good insofar as it increases virtualization adoption, but bad because sound IT and business decision-making processes are being bypassed. What often happens in the wave of virtualization hype is that IT departments feel not only that they must implement virtualization, but pressured to do so in ways inappropriate for their business.
Four Factors Tied to Virtualization
There are four things that I often see tied to virtualization, often accepted as virtualization requirements – whether or not they make sense in a given business environment. These are 1) server consolidation, 2) blade servers, 3) SAN storage and 4) high availability or live failover.
Consolidation is so often vaunted as the benefit of virtualization that I think most IT departments forget there are other important reasons for implementing it.
Clearly, consolidation is a great benefit for nearly all deployments (mileage may vary, of course). It can nearly always be achieved simply through better utilization of existing resources. It is a rare company running more than a single physical server that cannot shave some amount of cost through even limited consolidation, and in larger organizations it’s not uncommon to see datacenter footprints decimated.
Even in extreme cases, though, there is no need to abandon a virtualization project just because consolidation proves to be out of the question. Such cases arise in companies with highly utilized systems and little budget for a preemptive consolidation investment.
But these shops can still virtualize “in place” systems on a one-to-one basis to gain other benefits of virtualization today. They can look to consolidate when hardware needs to be replaced tomorrow or when larger, more powerful servers become more cost effective in the future.
In short, it’s important to not rule out virtualization just because its most heralded benefit may not apply at the current time in your environment.
Blade servers are often seen as the natural choice for virtualization environments. Blades may fare better under a standard virtualization workload than they do under more traditional computational workloads, yet that claim is both highly disputable and not necessarily relevant.
Being a good scenario for blades themselves does not make blades a good choice for the business. Just because blades perform better than usual when used this way does not mean they perform better than traditional servers; it means only that they have potentially closed the gap.
Blades need to be evaluated using the same harsh criteria when virtualizing as when not. And, very often, they will continue to fail to provide the long-term business value needed to choose them over more flexible alternatives. Blades remain far from a necessity for virtualization and are often, in my opinion, a very poor choice indeed.
One of the most common misconceptions is that moving to virtualization also requires moving to shared storage such as a SAN. This mindset is an understandable reaction to the desire to achieve other benefits of virtualization that, while not strictly requiring a SAN, benefit greatly from one.