Ideally, business continuity means no discontinuity.
Interruptions are prevented or avoided, and the business keeps ticking, no matter what the circumstances.
But as savvy business people know, such perfection is rarely achievable, and even when it is, the costs can be astronomical.
Excellence may be a better goal, but does this mean that the occasional BC imperfection is acceptable – and if so, to what degree?
Business continuity managers could perhaps take a leaf out of the book of modern IT security thinking, where cast-iron protection is no longer the objective, and detection and repair, post-problem, are increasingly accepted.
The pursuits of perfection, total protection, and unbroken continuity are all subject to the law of diminishing returns.
The more resources you spend, the smaller the marginal benefit you reap. There comes a point at which the additional advantage is so small that continuing is financially absurd or simply useless. Of course, this point will likely be different for, say, an accounting firm compared with a hospital emergency ward.
Beyond that point, resources may be better allocated to the accurate detection and speedy remediation of any problems that do manage to slip through.
In IT security, unlike in business continuity, data analytics and artificial intelligence techniques such as machine learning already help to spot anomalies and abnormal system behaviour, in some cases going as far as automatically constructing step-by-step remediation plans for organisations to put things right.
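To give a flavour of what such anomaly detection involves, the following is a minimal sketch in Python. It is not any vendor's actual method: it simply learns a statistical baseline of "normal" behaviour for a system metric and flags readings that deviate sharply, the basic idea underlying far richer machine-learning approaches.

```python
# Minimal sketch of statistical anomaly detection on a system metric.
# Real security products use far more sophisticated models; this only
# illustrates the principle of learning "normal" and flagging deviations.
from statistics import mean, stdev

def detect_anomalies(history, current_readings, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the historical mean (a classic z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    return [x for x in current_readings
            if sigma and abs(x - mu) / sigma > threshold]

# Hypothetical example: server response times (ms) under normal load,
# followed by a batch of new readings containing one abnormal spike.
baseline = [101, 98, 103, 99, 102, 100, 97, 104, 100, 99]
incoming = [100, 102, 250, 98]
print(detect_anomalies(baseline, incoming))  # the 250 ms spike is flagged
```

In practice the "baseline" would be continuously re-learned, and a flagged anomaly would trigger investigation or automated remediation rather than a simple printout, but the prevention-versus-detection distinction is the same.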
Conventional protection systems still have a role to play in warding off known threats and risks: the two approaches of prevention and detection/repair complement each other for greater total protection and continuity.
While business continuity vendors have yet to catch up with IT security providers in this respect, the logic of using a similar two-pronged approach to achieve better overall continuity at lower overall cost seems attractive.