As the saying goes, half a loaf is better than no bread, and the same could be said of business continuity. Although you’d like an organisation that can weather any storm and survive any setback, if the choice is between sub-optimal performance and catastrophic failure, then naturally the first wins. However, catastrophic failure is just around the corner for too many operations that have become extremely “lean”, “just in time”, or heavily automated.
A number of companies depending on semiconductor supplies from Japan faced this business continuity problem during the recent earthquake there. They had fine-tuned their supply chains for maximum efficiency, but at the price of a disruption risk that then turned into reality. Similarly, as automation pervades more and more organisational operations, two main effects can be seen. Firstly, human operators lose their awareness of what’s really going on, because there’s less and less for them to do. Secondly, when things start to go wrong, they go seriously wrong much faster.
Yet neither supply chain efficiency nor automation has to be sacrificed to avoid catastrophic failure. It depends on what you design in. The Internet and its predecessor, the DARPA (Defense Advanced Research Projects Agency) network, took as their starting point that if part of the network failed or was destroyed, what was left had to remain operational, even if performance was degraded. Similarly, supply chains designed for resilience allow companies to maintain business continuity by switching suppliers in the event of partner failure.
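The supplier-switching idea above is essentially a failover pattern. A minimal sketch of how it might look in code, assuming hypothetical `Supplier` and `order_parts` names (not any real procurement API):

```python
class SupplierUnavailable(Exception):
    """Raised when a supplier cannot fulfil an order."""

class Supplier:
    # Illustrative stand-in for a real supplier integration.
    def __init__(self, name, available=True):
        self.name = name
        self.available = available

    def place_order(self, quantity):
        if not self.available:
            raise SupplierUnavailable(self.name)
        return f"{quantity} units from {self.name}"

def order_parts(quantity, suppliers):
    """Try suppliers in priority order; switch on partner failure."""
    for supplier in suppliers:
        try:
            return supplier.place_order(quantity)
        except SupplierUnavailable:
            continue  # degrade gracefully to the next source
    raise RuntimeError("no supplier could fulfil the order")
```

The key design point is that the fallback path exists before the failure happens; resilience is designed in, not improvised afterwards.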
So business continuity management needs to include building resilience into the overall organisation and improving surveillance to detect, or better still predict, failures as early as possible, leaving more time to react and a longer “time to impact”. Internet-style interoperability might be a model for increasing graceful degradation in the future.
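Predicting failures early can be as simple as watching the trend of a health metric (a delivery lead time, say) and extrapolating when it will cross a failure threshold. A rough sketch under that assumption, with a hypothetical `time_to_impact` helper:

```python
def time_to_impact(readings, threshold):
    """Estimate periods remaining before a rising metric crosses a threshold.

    Fits a least-squares line to equally spaced readings and extrapolates.
    Returns None if there are too few readings or the trend is flat/improving.
    """
    n = len(readings)
    if n < 2:
        return None
    mean_x = (n - 1) / 2
    mean_y = sum(readings) / n
    num = sum((i - mean_x) * (y - mean_y) for i, y in enumerate(readings))
    den = sum((i - mean_x) ** 2 for i in range(n))
    slope = num / den
    if slope <= 0:
        return None  # metric stable or moving away from the threshold
    return (threshold - readings[-1]) / slope
```

For instance, lead times of 10, 12, 14 and 16 days against a 26-day threshold give a trend of two days per period, so roughly five periods of warning before impact, rather than an alarm only once the threshold is breached.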