Cloud-Based Disaster Recovery Tools

Monday Mar 19th 2012 by Jeff Vance

Is protecting your data with a disaster recovery solution accessed over the cloud a good choice for your enterprise?

As with so many other enterprise-focused technologies (CRM, ERP, collaboration), the cloud is upending the disaster recovery (DR) status quo. For end users, this is good. Prices are dropping – putting DR in reach for much smaller organizations than in the past – and choices are multiplying.

However, the cloud is no panacea, and for many organizations, cloud-based DR may be the wrong, or at least an incomplete, choice.

To help you craft a cloud strategy that will match your organization’s goals, consider these five questions:

1. Which data will be of the highest priority after a disaster?

As CIOs start planning for disaster recovery, their DR plans quickly butt up against Big Data realities. Employees have more storage than ever at their fingertips – on PC hard drives, mobile devices, cheap thumb drives and online repositories like Dropbox – and the cost of storage has dropped through the floor. A GB of storage that cost nearly $10 in 2000 costs less than 10 cents today. The result: people now save everything.

In the age of Big Data, clearly you can’t back up and recover every bit of data, at least not in the initial recovery phase. It wouldn’t be practical, and it’s not necessary. For construction firm Graniterock, having their Enterprise Resource Planning (ERP) software back online as soon as possible after a disaster is a must. ERP helps them dispatch trucks, coordinate work crews at high-priority construction sites like airports and even make sure that concrete doesn’t set before it’s delivered.

The company previously had an on-premise ERP solution from Oracle, but it was too much of a burden for Graniterock’s small IT staff. Graniterock switched to a hosted ERP solution from Velocity. Part of the advantage of a cloud-based ERP system is that disaster recovery can be a feature, instead of a long, involved, resource-intensive special project.

“ERP is our priority, but we hope to have other applications rolled in eventually,” said Steve Snodgrass, CIO of Graniterock. To achieve that goal, the company recently replaced a mixed-vendor storage environment (EMC, NetApp, Data Domain and Buffalo) with a SAN from Nimble Storage.

As of now, the Nimble SAN backs up data locally, but operations manager Ken Schipper hopes to have a remote disaster recovery site online soon, which would offer true DR capabilities for Exchange and a variety of virtual machines and databases.
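The triage exercise Graniterock went through can be sketched as a simple tiering scheme: rank applications by how quickly they must be back, and restore in that order. The application names and tier numbers below are invented for illustration, not Graniterock’s actual plan:

```python
# Hypothetical recovery tiers (tier 1 = restore first).
# Names and tiers are made up for this sketch.
APPS = [
    ("Exchange", 2),
    ("ERP", 1),
    ("file shares", 3),
    ("archives", 4),
]

def recovery_order(apps):
    """Return application names sorted by recovery tier."""
    return [name for name, tier in sorted(apps, key=lambda a: a[1])]

print(recovery_order(APPS))
```

The point is less the code than the discipline: until an application has a tier, it has no place in the initial recovery phase.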

2. What does your region’s disaster profile look like?

When companies turn to cloud-based storage, regional exposure is often one of the big oversights. If you don’t sign up for recovery services that replicate to a different region, a major event, such as Katrina or Fukushima, could unravel your DR plans.

“So many people forget that for true disaster recovery, you need to get out of the facility, and out of the region. People just assume their cloud provider is doing this when typically they aren’t – unless you pay for it,” said Ginnie Stouffer, a Master Certified Business Continuity Professional with IDC Partners, a business continuity firm in King of Prussia, PA.

“Katrina was a great teacher in this regard,” she added. “So many businesses had data backed up off-site, but off-site in New Orleans. Many banks even passed audits that approved of these plans, and we saw how that worked out.”

Graniterock, for instance, is based in Watsonville, CA, about 45 minutes south of San Jose. This is, of course, an area with a high risk for earthquakes. In fact, local IT infrastructure in Watsonville and data stored with Velocity in Seattle are both in regions with high earthquake risks. Now, one earthquake probably wouldn’t knock both offline, but a series of earthquakes associated with volcanic activity could.

It’s a low-probability event, but so was Katrina. So was Fukushima.

“We’re now backing up data in real-time from Velocity’s data center in Seattle to one in Denver,” Snodgrass said. “Moving data to a lower-risk region limits our exposure significantly.”
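Stouffer’s advice to get out of the region can be turned into a crude automated screen. The sketch below uses great-circle (haversine) distance as a proxy for regional separation; coordinates are approximate, the 800 km threshold is an arbitrary assumption, and distance alone does not capture shared fault zones, flood plains or power grids:

```python
from math import radians, sin, cos, asin, sqrt

# Approximate site coordinates (lat, lon) for the cities in this story.
SITES = {
    "Watsonville": (36.91, -121.76),
    "Seattle": (47.61, -122.33),
    "Denver": (39.74, -104.99),
}

def distance_km(a: str, b: str) -> float:
    """Great-circle distance between two named sites, via the haversine formula."""
    lat1, lon1 = map(radians, SITES[a])
    lat2, lon2 = map(radians, SITES[b])
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # 6371 km = mean Earth radius

def far_enough(primary: str, dr_site: str, min_km: float = 800) -> bool:
    # Reject any DR site within min_km of the primary. A real screen would
    # also check seismic zones, hurricane paths and grid interconnects.
    return distance_km(primary, dr_site) >= min_km
```

By this crude measure Seattle passes for a Watsonville primary, which is exactly why it’s only a starting point: as the article notes, both sit in high earthquake-risk regions.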

If an event knocks out San Jose, Seattle and Denver? Well, recovering data will be the least of Snodgrass’ worries. Fending off aliens or surviving the zombie apocalypse will take precedence.

3. Have you signed up for disaster recovery or just data replication?

Many services that people consider – and vendors even sell as – disaster recovery are really just data replication services. Data replication helps, certainly, but it doesn’t give end users the ability to mirror their infrastructures. Users replicate data, but not the systems that data resides in.

Operating systems, applications and user settings are not replicated. If a disaster happens, servers and databases must be rebuilt before data can be recovered, which translates into much more downtime.
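The downtime gap between data replication and full DR comes down to which rebuild steps still remain after the disaster. A toy model, with invented step names and hour estimates chosen purely for illustration:

```python
# Hypothetical post-disaster rebuild steps and made-up time estimates (hours).
REBUILD_HOURS = {
    "provision_server": 4,
    "install_os": 2,
    "install_apps": 3,
    "restore_settings": 2,
    "restore_data": 1,
}

def recovery_time(already_replicated: set) -> int:
    """Hours of rebuild work left: steps covered by replication cost ~nothing."""
    return sum(h for step, h in REBUILD_HOURS.items()
               if step not in already_replicated)

data_only = recovery_time({"restore_data"})        # data replication service
full_dr = recovery_time(set(REBUILD_HOURS))        # mirrored infrastructure
```

With data-only replication, every step except the data restore still has to happen before the business is back online; with mirrored infrastructure, almost nothing does.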

Data replication may be just fine for your organization, but it’s important to know that it’s not a full-blown DR service. Eventually, as HTML5 and the browser-based, silo-free Nirvana we’ve all been promised comes to fruition, data replication will be disaster recovery.

Until then, make sure you’re getting the service levels you require to rebound after a disaster.

4. Are there complementary tools you need to make your plan workable?

One reason cloud-based data replication is so much more popular than cloud-based DR is that sending the amount of data necessary for true DR over the public Internet is prohibitively expensive, requiring, for instance, the addition of an expensive MPLS connection. This is a major reason that so many businesses still handle DR physically, with tape backups.

For cloud-based DR to really work for many companies, the storage and mirroring side of the service is not enough. Even if you’ve hosted your applications with a cloud provider, that doesn’t mean it’s affordable to mirror your data from a site in one region to another.

How do you move those huge chunks of data over the Internet without breaking the bank? What many organizations have found is that complementary technologies like CDNs or WAN optimization are critical to their DR planning.
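Chunk-level deduplication is one of the core tricks WAN optimizers use to shrink those transfers: only data the remote site hasn’t already seen crosses the wire. A minimal sketch, using fixed-size chunks for simplicity (real products use content-defined chunking plus compression and protocol acceleration):

```python
import hashlib

CHUNK = 4096  # fixed-size chunks; illustrative only

def replicate(data: bytes, remote_store: dict) -> int:
    """Send only chunks the remote site lacks; return bytes actually sent."""
    sent = 0
    manifest = []  # ordered digests let the remote side reassemble the file
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in remote_store:   # cache miss: ship the chunk
            remote_store[digest] = chunk
            sent += len(chunk)
        manifest.append(digest)
    return sent

store = {}
first_sync = replicate(b"x" * 100_000, store)   # cold sync: unique chunks only
second_sync = replicate(b"x" * 100_000, store)  # nothing new crosses the wire
```

On the second sync of unchanged data, nothing is sent at all; that delta-only behavior is what makes repeated cross-region replication affordable over commodity links.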

“We’ve offered WAN optimization services to our customers for years, but for many it was still too expensive,” said Jon Beck, SVP of Worldwide Channels and Alliances for OpSource, a provider of enterprise cloud and managed hosting services. Traditional WAN optimization requires the installation of hardware at each site, and often at branch offices and other remote locations, and the costs really add up.

To help drive down costs and offer data replication and disaster recovery to a broader segment of their customer base, OpSource turned to the cloud-based WAN optimization solution from Aryaka. “[At OpSource] we’re big believers in the SaaS model, and Aryaka is the only WAN optimization vendor I know of that delivers the service on the SaaS model,” Beck said. As a result, all the DR spending for OpSource itself and for its customers is on actual operating expenses, rather than on up-front equipment costs.

5. Do you have an offline plan B in place?

Even the best-laid disaster plans may not hold up in the face of a major disaster. For Graniterock, after a disaster there’s a good chance their construction trucks could be diverted to help with cleanup and rebuilding, but how would they coordinate all of that?

“If it’s a major disaster and the WAN is down in your region, you won’t even be able to get to your backed up data,” Snodgrass said.

Although Graniterock does its billing and payments online, the company keeps a stock of paper checks on hand just in case. They keep paper dispatch tickets for trucks, and they have plans in place for offline computers to do manual matching for complicated concrete delivery.

“If your people are prepared, pencil and paper will work just fine if you don’t have power, Internet and phones,” Snodgrass added.

Copyright 2017 © QuinStreet Inc. All Rights Reserved