Cloud computing is rapidly gaining enterprise adoption, yet many IT professionals remain skeptical, for good reason. Issues like security and standards continue to challenge this emerging technology.
There's been at least as much healthy skepticism about cloud computing as there has been optimism and real results. And there ought to be, especially as cloud computing moves out of buzzword territory and becomes an increasingly powerful tool for extending IT resources.
To that end, here's a rundown of ten key things both creators and users of cloud computing should continue to bear in mind.
1) Security

The good news is that the very nature of the cloud may be compelling more real thought about security on every level than before. The bad news is that a poorly written application can be just as insecure in the cloud, maybe even more so.
Cloud architectures don't automatically grant security compliance for the end-user data or apps on them, and so apps written for the cloud always have to be secure on their own terms. Some of the responsibility for this does fall to cloud vendors, but the lion's share of it is still in the lap of the application designer.
2) Complacency

A cloud computing-based solution shouldn't become just another passive utility like the phone system, where the owner simply puts a tollbooth on it and charges more and more while providing less and less. In short, don't give competitors a chance to do an end run around you because you've locked yourself into what seems like the best way to use the cloud and given yourself no good exit strategy. Cloud computing is constantly evolving. Getting your solution in place simply means your process of monitoring and improving can now begin.
3) Client incomprehension
We're probably past the days when people thought clouds were just big server clusters, but that doesn't mean we're free of ignorance about the cloud moving forward. There are all too many misunderstandings about how public and private clouds (or conventional datacenters and cloud infrastructures) do and don't work together, about how easy it is to move from one kind of infrastructure to another, about how virtualization and cloud computing do and don't overlap, and so on.
A good way to combat this is to present customers with real-world examples of what's possible and why, so they can base their understanding on actual work that's been done and not just hypotheticals where they're left to fill in the blanks themselves.
4) Preventing bottom-up adoption
Cloud infrastructures, like a lot of other IT innovations, don't always happen as top-down decrees. They may happen from the bottom up, in a back room somewhere, or on an employee's own time from his own PC.
Examples of this abound: consider a New York Times staffer's experience with desktop cloud computing. Make a sandbox space within your organization for precisely this kind of experimentation, albeit with proper standards of conduct (e.g., as a safety measure, not using live data that might be proprietary). You never know how it'll pay off.
5) Ad-hoc standards as the only real standards
The biggest example of this: Amazon EC2. As convenient as it is to develop for the cloud using EC2 as one of the most common types of deployments, it's also something to be cautious of. Ad-hoc standards are a two-edged sword.
On the plus side, they bootstrap adoption: look how quickly a whole culture of cloud computing has sprung up around EC2. On the minus side, they leave that much less room for innovators to create something open, something that breaks away from the ad-hoc standards and can be adopted on its own. (Will the Kindle still be around in ten years?) Always be mindful of how the standards you're using now can be expanded or abandoned.
6) Over-utilization of capacity
Few things annoy customers more than being promised something that can't be delivered. The bad news is that in many industries, that's how things work: overbooking on airlines, for instance.
It might also become like that for cloud providers, who may be forced to sell more capacity than they can actually provide as a way to stay competitive with, well, everyone else doing the same thing. Reuven Cohen of Enomaly has speculated that Amazon EC2 might be doing this right now. With any luck they're not doing it in lieu of better testing and saner quota allotments.
Testing should always be standard practice. Robust, creative, think-out-of-the-box testing doubly so. Consider the way MySpace used 800 EC2 instances to test itself and see if it could meet anticipated demand for a new streaming music service. Their example involved using the cloud to test their native infrastructure, but there's no reason one couldn't use one cloud to generate test demand for another and determine what your real needs are. And not just once, but again and again.
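The load-generation side of that kind of test can be sketched in a few lines. Here's a minimal, hypothetical example in Python (the function name and parameters are my own; a real harness like MySpace's would distribute this across many cloud instances rather than one thread pool):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def run_load(request_fn, total_requests, concurrency):
    """Call request_fn total_requests times across a thread pool,
    returning the per-request latency (in seconds) of each call."""
    def timed_call(_):
        start = time.perf_counter()
        request_fn()
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_call, range(total_requests)))

# Hypothetical usage -- point this at a staging endpoint, never production:
# latencies = run_load(
#     lambda: urllib.request.urlopen("https://staging.example.com/").read(),
#     total_requests=1000, concurrency=50)
```

The point of returning raw per-request latencies rather than a single average is that the distribution, run "again and again" as the article suggests, is what tells you where your real capacity ceiling sits.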
7) Under-utilization of capacity
Just as over-utilization is both bad planning and bad business, so is under-utilization. In fact, having a good deal of idle capacity you're paying to support and not generating revenue from may well be worse than the opposite scenario.
This sort of thing is easier to deal with if you're the one buying the service, but what if you're the one selling it? That's another reason why metrics and robust load testing are your best friends when creating cloud services. Also consider the possibility that you're not selling enough kinds of services: is there room in your business plan for more granular, better-tiered service that might draw in a wider array of customers?
8) Network limitations
One word: IPv6. If you're deploying systems, using infrastructure or writing applications that aren't IPv6-aware now, you're building a time bomb under your chair.
IPv4's days are more numbered than ever, and tricks like NAT or freeing up previously-unallocated blocks aren't going to buy enough time to get us through the decade. Cloud computing, with its world of hosts that can appear by the thousands at once, will all but guarantee we need IPv6's address pool and technical flexibility.
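One concrete way to be IPv6-aware is to stop hard-coding IPv4 into connection code. A minimal sketch in Python (the function name is my own), using the standard `socket.getaddrinfo()` with `AF_UNSPEC` so the same code connects over IPv6 or IPv4, whichever the host actually offers:

```python
import socket

def connect_any(host, port):
    """Connect to host:port over whichever protocol works, IPv6 or
    IPv4, instead of hard-coding AF_INET into the application."""
    last_err = None
    # AF_UNSPEC asks the resolver for both A and AAAA records.
    for family, socktype, proto, _, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        sock = None
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err
            if sock is not None:
                sock.close()
    raise last_err if last_err else OSError("no addresses for %s" % host)
```

Code written this way keeps working unchanged when a cloud host suddenly resolves only to an IPv6 address.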
Think forward on every level, and encourage everyone building on top of your infrastructures to do the same thing.
9) Latency

Latency has always been an issue on the Internet; just ask your local World of Warcraft raiding guild. It's just as much of an issue in the cloud.
Performance within the cloud doesn't mean much if it takes forever for the results of that performance to show up on the client. The latency that a cloud can introduce doesn't have to be deadly; it can be beaten back with both an intelligently planned infrastructure and smartly written applications that understand where and how they're running.
Also, cloud-based apps and the capacity of cloud computing itself are only going to be ramped up, not down, in the future. That means an arms race against increases in latency is in the offing as well. Just as the desktop PC's biggest bottlenecks are more often storage and memory than CPU, the true sources of cloud latency must be identified, targeted and improved.
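Finding the true source of latency starts with measuring the tail, not the average: a 99th percentile many times the median points at queuing, storage or network hops rather than CPU. As a sketch, here's a minimal nearest-rank percentile helper (a real deployment would use a proper metrics library, but the arithmetic is this simple):

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the smallest sample such that at least
    pct percent of all samples are less than or equal to it."""
    if not samples:
        raise ValueError("no samples")
    ordered = sorted(samples)
    rank = math.ceil(pct / 100.0 * len(ordered))
    return ordered[max(rank - 1, 0)]

# Track the tail alongside the middle, e.g. for a list of latencies:
# p50, p95, p99 = (percentile(latencies, p) for p in (50, 95, 99))
```

Comparing p50 against p95 and p99 over time is what turns "the cloud feels slow" into a specific bottleneck you can target.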
10) The Next Big Thing
The cloud isn't an endpoint in tech evolution, any more than the PC or the commodity server were final destinations. Something's going to come after the cloud, and may well eclipse it or render it redundant. The point isn't to speculate about what might come next, but rather to remain vigilant to change in the abstract. As the sages say, the only certainty is uncertainty, and the only constant thing is the next big thing.