In disaster recovery, technology is often a neutral element – neither good nor bad in itself. Some technologies are better suited to specific needs or offer improvements over existing solutions. What determines whether an organisation benefits or suffers is how the technology is applied. When it is used unthinkingly or incorrectly, the horror stories start. Worse still, many technology-related disaster recovery failures are repeats of catastrophes that were already happening decades ago. What have we learnt since then – or what should we have learnt?
Take the case of storage area networks (SANs), which address growing needs to store, manage and make data available. The volume and complexity of the data involved mean that automation is often a major feature too. The problem comes when organisations assume that automation means there will automatically be no problems. If a SAN is accidentally connected to the wrong server, which reformats it, and that reformat is then automatically replicated to a remote backup SAN, a company can quickly grind to a halt. This automated replication risk also exists with RAID systems and disk drives in general; it was a problem even when businesses were still using floppy disks.
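The containment idea is simple: destructive operations should not be mirrored to the replica automatically. Here is a minimal Python sketch of that principle; the class and operation names (ReplicationGuard, "format" and so on) are illustrative assumptions, not any vendor's real SAN API.

```python
# Hypothetical sketch: a replication pipeline that refuses to propagate
# destructive changes to the backup target without human review.

DESTRUCTIVE_OPS = {"format", "delete_volume", "truncate"}

class ReplicationGuard:
    """Sits between a primary array and its remote replica."""

    def __init__(self):
        self.replica_log = []   # operations actually applied to the replica
        self.quarantined = []   # destructive operations held for review

    def replicate(self, op, target):
        if op in DESTRUCTIVE_OPS:
            # Don't mirror a wipe automatically; hold it instead.
            self.quarantined.append((op, target))
            return False
        self.replica_log.append((op, target))
        return True

guard = ReplicationGuard()
guard.replicate("write_block", "lun-07")   # normal I/O replicates as usual
guard.replicate("format", "lun-07")        # an accidental reformat is held

print(guard.replica_log)    # [('write_block', 'lun-07')]
print(guard.quarantined)    # [('format', 'lun-07')]
```

Real storage systems achieve the same effect with delayed replication windows or immutable snapshots, so that a mistake on the primary cannot instantly destroy the only remaining good copy.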
The answer is to combine technology with solid processes and checks, so that it is continually channelled towards business advantage rather than business disruption. Smart organisations avoid disasters like this by assuming that failure will strike somehow, somewhere, and planning in advance to head it off or contain it. Techniques include proper business impact analysis and disaster recovery testing, with a little healthy paranoia to help keep data and applications safe and sound. That way, IT staff can avoid waking up to technology and automation running out of control, which is when the real nightmare starts.
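Disaster recovery testing can itself be partly automated: a scheduled job restores a backup to a scratch location and verifies it matches the original, so a silently broken backup is caught before a real disaster. The sketch below illustrates the idea with plain file copies and checksums; the function names and paths are hypothetical, not a specific backup product's interface.

```python
# Hypothetical sketch of an automated restore test: back up a file, restore
# it to a separate scratch directory, and verify the checksum round-trips.

import hashlib
import os
import shutil
import tempfile

def checksum(path):
    """SHA-256 of a file's contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def restore_test(source, backup_dir, restore_dir):
    """Simulate backup + restore, and confirm the restored copy is intact."""
    backup = shutil.copy(source, backup_dir)      # "take a backup"
    restored = shutil.copy(backup, restore_dir)   # "restore" it elsewhere
    return checksum(source) == checksum(restored)

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "payroll.db")           # stand-in for real data
    with open(src, "wb") as f:
        f.write(b"critical business data")
    backup_dir = os.path.join(d, "backups")
    restore_dir = os.path.join(d, "restore-scratch")
    os.makedirs(backup_dir)
    os.makedirs(restore_dir)
    ok = restore_test(src, backup_dir, restore_dir)

print(ok)   # True when the restored copy matches the original
```

The point is not the copying itself but the verification step: a backup that has never been restored and checked is an assumption, not a safeguard.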