In Defense Of Hardware
Cloud technology is becoming synonymous with virtualization, but does it have to be? As layers upon layers of complexity are heaped upon our servers, maybe it is time to take a step back and ask ourselves, “what is the simplest, most efficient, and easiest-to-manage system we can build with what we have?” For some applications, the answer might just be bare metal.
Virtualization software is amazing. I will be the first in line to sing the praises of Xen and its ilk. However, the capabilities of virtualization come at a cost, and that cost must be carefully considered when planning a new environment or an upgrade to an old one. The ability to spin up fifteen new virtual machines at the drop of a hat is great, except that then you have fifteen new virtual machines. Each new VM needs management and consumes resources. Some of the resources a VM requires are redundant, like duplicate operating system or application files, while others, like memory overhead, are necessary evils. Adopting or continuing a virtualized environment needs a big-picture view. What are you actually supporting?
One advantage of virtualization software is hardware independence. If one piece of hardware dies, in theory your virtual machine will automatically move to another. However, the VM does not move seamlessly in an outage. The VM reboots, and depending on your application, this could be a big deal. The real issue here, though, is why we are addressing high availability at the operating system level at all. In a Windows environment this might make sense, but in an open source web environment it does not. High availability should happen at the application layer, preferably in a load balancer. Independent, redundant nodes ensure that no one node needs to know about any other. Hardware independence is nice for migrating from old hardware to new, but with a properly configured Chef or Puppet management system in place, rebuilding on new hardware with a fresh OS is far less of an issue.
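As a rough sketch of what application-layer availability looks like, here is a minimal HAProxy configuration fragment (the hostnames, addresses, and health-check path are hypothetical): two independent web nodes sit behind a round-robin frontend, and if one node fails its health check, traffic simply keeps flowing to the other, with no OS-level failover and no VM reboot involved.

```
# Minimal HAProxy sketch: redundancy lives in the load balancer,
# not the hypervisor. Addresses and /health endpoint are examples.
frontend www
    bind *:80
    default_backend webnodes

backend webnodes
    balance roundrobin
    option httpchk GET /health
    server web1 10.0.0.11:80 check
    server web2 10.0.0.12:80 check
```

Because each node is independent, replacing one (old hardware or new, virtual or bare metal) is just a matter of provisioning it with your configuration management tool and adding it to the backend.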
Hardware will always perform better than virtualization. Virtualization still runs on hardware, but it inserts more software between your application and that hardware: more instructions for the CPU to execute than on bare metal. Fully virtualized guests have an abstracted view of the CPU, RAM, and disk, which means that any access to these resources must be managed by the hypervisor, and that management itself consumes CPU cycles. On bare metal, there is nothing between your application and the full horsepower of the machine. Granted, if the only thing your machine is doing is hosting CIFS file shares or authenticating user access, that might not be a big deal. However, if you are pegging your processor compiling PHP, every ounce of performance matters… a lot.
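To make the overhead argument concrete, a toy CPU-bound workload like the one below (a plain Python loop, purely illustrative and not a rigorous hypervisor benchmark) is the kind of job you could time on a bare metal host and again inside a guest on the same box; any difference you observe is the cost of the layers in between.

```python
import time

def burn(n: int) -> int:
    """CPU-bound busy work: sum of squares, no I/O, no syscalls."""
    return sum(i * i for i in range(n))

start = time.perf_counter()
burn(2_000_000)
elapsed = time.perf_counter() - start
print(f"CPU-bound pass took {elapsed:.3f}s")
```

Run the same script in both environments and compare wall-clock times; for share hosting or light authentication duty the gap rarely matters, but for a compile farm it adds up on every run.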
Virtualization is perfect for pre-production environments: places where developers and sysadmins can test out new ideas, proofs of concept, and patches before pushing to production. However, just because you are building the next new cloud platform doesn’t mean you should abandon the best-performing solution. In fact, if you are concentrating on a single application, bare metal makes more sense than ever. The cloud is a hot term for an old idea, and one that many businesses are latching onto to sell their wares. All that is well and good, but in my book, the simplest and most powerful answer is still the best.