Virtualization is a business continuity answer to the vulnerabilities and foibles of physical servers. By distributing applications across multiple physical hosts, service can continue even if one host fails: another instance of the same application elsewhere picks up the slack. In principle that works well, as long as IT administrators remember they are dealing with virtual machines and manage them accordingly. War stories accumulate daily of catastrophes or near misses caused by faulty perceptions and handling of virtualization. The following pointers can help you preserve business continuity and avoid the need for disaster recovery.
The first potential problem is a backup routine that presents logical units of storage (LUNs) the same way it presents external physical disk drives, offering administrators the option to format them. A similar slip-up is deleting a virtual machine instead of one of its backup snapshots. Misconfigured reboot routines can also wreak havoc if, for example, an operating system or service is set by default to install itself automatically on any machine restarting on the same network. These errors can lead to downtime and data loss, either immediately through the wrong action or later, as a time bomb, through incorrect configuration.
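One practical defence against the delete-the-VM-instead-of-the-snapshot mistake is to make destructive operations type-aware, so a tool refuses to act unless the target really is what the administrator thinks it is. The sketch below illustrates the idea in plain Python; the object model and function names are invented for the example and do not correspond to any vendor's API.

```python
# Hypothetical safety guard: refuse a destructive operation unless the
# target is explicitly confirmed as a snapshot, not a live VM or a LUN.
# The StorageObject model here is illustrative, not a real hypervisor API.
from dataclasses import dataclass


@dataclass
class StorageObject:
    name: str
    kind: str  # "vm", "snapshot", or "lun"


class DestructiveOperationError(Exception):
    """Raised when a delete would hit something other than a snapshot."""


def delete_snapshot(obj: StorageObject) -> str:
    """Delete only if the object really is a snapshot."""
    if obj.kind != "snapshot":
        raise DestructiveOperationError(
            f"Refusing to delete {obj.name!r}: it is a {obj.kind}, not a snapshot"
        )
    return f"snapshot {obj.name} deleted"


# A guard like this turns the classic "deleted the production VM instead
# of its snapshot" slip into a hard error rather than a silent disaster.
print(delete_snapshot(StorageObject("nightly-backup-01", "snapshot")))
try:
    delete_snapshot(StorageObject("prod-db-vm", "vm"))
except DestructiveOperationError as err:
    print(err)
```

The same pattern applies to formatting: a routine that can format a LUN should be forced to verify the object's kind first, rather than relying on the administrator to notice the difference.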
Uncontrolled replication of virtual machines is also a problem. Because VMs are easy to install and clone, they can proliferate without IT management being aware. Besides lacking data backup, such machines consume resources and offer intruders entry points from which to reach databases and other assets over local network links. The solution is firm IT management that ensures virtual machines are installed only with the correct user permissions, tracked from creation to (eventual) deletion, and covered in between by a robust, error-free process for data backup.
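Tracking VMs from creation to deletion boils down to regular reconciliation: compare what is actually running against what is registered with IT management and what the backup system covers. The following sketch shows that reconciliation with invented inventory data; the lists would come from your hypervisor and backup tooling in practice.

```python
# Illustrative sketch: reconcile discovered VMs against the registered
# inventory and the backup schedule. VMs nobody registered are "rogue"
# candidates; registered VMs missing from backups are unprotected.
# All inventory data below is invented for the example.
def find_rogue_vms(discovered, registered, backed_up):
    discovered, registered, backed_up = map(set, (discovered, registered, backed_up))
    unregistered = discovered - registered          # running but untracked
    unprotected = (discovered & registered) - backed_up  # tracked but not backed up
    return sorted(unregistered), sorted(unprotected)


discovered = ["web-01", "web-02", "test-clone-7", "db-01"]  # seen on the network
registered = ["web-01", "web-02", "db-01"]                  # known to IT management
backed_up = ["web-01", "db-01"]                             # covered by backups

rogue, unprotected = find_rogue_vms(discovered, registered, backed_up)
print("Unregistered VMs:", rogue)                 # ['test-clone-7']
print("Registered but not backed up:", unprotected)  # ['web-02']
```

Run on a schedule, a check like this surfaces casual clones before they become unpatched, unbacked-up entry points.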