Ever since Frederick Taylor’s ideas on scientific management were shown to rest on a fundamental lack of appreciation of the human factor, businesses have been coming to terms with both the messiness and the potential of human beings in the disaster recovery process. Taylor’s precept was that workers were too stupid to understand what they were doing: by carving overall processes up into chunks and assigning one chunk to each worker, who then mindlessly repeated the same task, Taylor forecast great improvements in efficiency and productivity. Today, such an approach seems crude, even laughable – but is DR doing any better?
Disaster recovery can be defined in different ways, covering different areas. We use the term to refer to the recovery of IT systems, recognising that business continuity, of which DR is a part, covers a wider range. IT systems would have appealed greatly to Taylor. After all, there is nothing quite so stupid as a binary system founded on the crudest of concepts: just two possible states, “0” or “1”.
What human beings brought to this basic notion was the richness of combinations of those zeroes and ones, and the techniques to build (relatively) huge and complex blocks of them that could be represented meaningfully to a human audience: text in a word-processing document for an instruction manual, for instance, or numbers in a spreadsheet for forecasts. Not to mention a whole slew of software and hardware bugs.
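To make the point concrete, here is a minimal sketch in Python – an illustration of standard ASCII text encoding, nothing specific to any DR product – showing how groups of those zeroes and ones become human-readable text, and back again:

```python
# A minimal illustration: eight bits, each 0 or 1, combine to represent
# one character of text under the standard ASCII encoding.
bits = "01001000 01101001"  # two 8-bit groups

# Interpret each group as a base-2 number, then map it to its character.
text = "".join(chr(int(group, 2)) for group in bits.split())
print(text)  # -> Hi

# And the reverse: human-readable text decomposed back into zeroes and ones.
print(" ".join(format(ord(ch), "08b") for ch in text))
# -> 01001000 01101001
```

The same principle, scaled up by many orders of magnitude, is all that a word-processing document or a spreadsheet really is.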
In the end, anything constructed by human beings still requires human beings to sort it out, especially when it stops working. That means the human factor has to be part of disaster recovery, somehow, somewhere – even if it’s as basic as making sure that staff are shielded well enough from the disaster, physically and psychologically, to be able to mount that vital backup tape and get operations running again. How well do your business continuity and disaster recovery plans handle the human factor?