Disaster Recovery Forecast: Cloudy with Scattered Virtual Machines

First there was the dedicated, physical server. Then came virtualisation, which let organisations mix and match workloads across the servers on their own sites. After that came cloud computing with more virtualisation (and multi-tenancy thrown in). However, organisations typically still did their virtualisation between machines in close physical proximity, even if they were using cloud services. Now the next step is to see how well virtual machines and their data can be transferred between racks of machines separated not just by a few feet, but by hundreds of miles – or at least far enough to be out of range of the next tsunami.

One enterprise that has worked on the question is the American clothing company, Columbia. The firm has data centres in different parts of the world. The tsunami that hit Japan in 2011 spared the data centre that Columbia had in Tokyo, but caused severe disruptions to the power supply. The company’s data was on tapes, which made it difficult to transfer reliably out of the Tokyo data centre, given the time needed and the frequent power outages. Since then, Columbia has been fine-tuning an approach of transferring virtual machines to a distant data centre that already contains the company’s replicated data.

The concept works: Columbia has tested it and is using it today, with continuous replication of data between sites. Now the company is looking for a more cost-efficient solution, with a goal of keeping user-perceived outages to as little as 15 or 20 seconds, despite the considerable geographical distance to its backup data centres. It’s pioneering stuff. Even so, given today’s compressed timescales between early adoption and mass market usage, this kind of long-haul VM disaster recovery may well become the norm in a few years’ time.
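
Stripped of vendor specifics, the pattern is simple enough to sketch: data is replicated continuously to the distant site, and when the primary site fails, the virtual machines are restarted there against the already-replicated copies. The Python below is a minimal, hypothetical illustration of that failover flow, not Columbia’s actual tooling; the site names, lag figures and helper functions are all assumptions made for the example.

```python
import time

# Target for the outage users are allowed to notice (illustrative figure
# taken from the 15-20 second goal described in the article).
RECOVERY_TIME_OBJECTIVE_SECONDS = 20


class Site:
    """A data centre site holding replicated storage and VM images."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.healthy = True
        self.replica_lag_seconds = 0.0  # how far this site trails the primary


def replicate(primary: Site, replica: Site) -> None:
    """Continuously ship changed data from primary to replica (simplified:
    here we just record an assumed lag for a long-haul link)."""
    replica.replica_lag_seconds = 0.5


def fail_over(primary: Site, replica: Site) -> float:
    """Restart the virtual machines at the distant site and return the
    outage a user would perceive (elapsed time plus replication lag)."""
    start = time.monotonic()
    # 1. Fence the failed primary so it cannot keep writing stale data.
    primary.healthy = False
    # 2. Promote the replicated storage at the distant site to read-write.
    # 3. Boot the replicated VM images there.
    # 4. Repoint DNS / load balancers at the new site.
    return (time.monotonic() - start) + replica.replica_lag_seconds


if __name__ == "__main__":
    primary_site, distant_site = Site("site-a"), Site("site-b")
    replicate(primary_site, distant_site)
    outage = fail_over(primary_site, distant_site)
    print(f"user-perceived outage: {outage:.1f}s "
          f"(target <= {RECOVERY_TIME_OBJECTIVE_SECONDS}s)")
```

The interesting engineering lives in steps 2 to 4 and in keeping the replication lag small over hundreds of miles; the sketch simply shows why the perceived outage is bounded by failover time plus however far the replica trails the primary.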