OpenNode - A Standards-Based Cloud Platform
Since we have been looking at FreeBSD, OpenVZ, and ProxMox, it seems only right to mention the other open source player in this market: OpenNode. OpenNode, like ProxMox, is a management layer built on top of OpenVZ containers and KVM virtual machines. Unlike ProxMox, which is built on Debian, OpenNode is built from Red Hat Enterprise Linux, much like CentOS and Scientific Linux. A good fit, since the stable OpenVZ kernel is also released for RHEL.
There are a few other interesting distinctions between ProxMox and OpenNode.
Scripting layer: ProxMox implements much of its OpenVZ automation with Perl scripts. Not a problem if you are familiar with Perl, but it can be a hurdle for those new to the language. Perl is powerful, but it is not always the easiest thing to read. OpenNode implements its scripting and automation layer using Python and libvirt.
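To get a feel for why a Python automation layer is approachable, here is a minimal sketch (not OpenNode's actual code; the helper name is invented for illustration, though the vzctl subcommands and options are real) of composing vzctl invocations in Python:

```python
def vzctl_cmd(action, ctid, *flags, **options):
    """Build an argv list for a vzctl invocation (e.g. start, stop, set).

    Bare flags (like --save) are passed positionally; valued options
    (like --ram 512M) are passed as keyword arguments.
    """
    cmd = ["vzctl", action, str(ctid)]
    for flag in flags:
        cmd.append("--" + flag)
    for key, value in options.items():
        cmd += ["--" + key, str(value)]
    return cmd

# Compose commands without executing them (no hypervisor needed):
print(vzctl_cmd("start", 101))
# ['vzctl', 'start', '101']
print(vzctl_cmd("set", 101, "save", ram="512M"))
```

A real tool would hand the resulting list to subprocess.run(), or skip the shell-out entirely and talk to libvirt's API directly, but the readability argument is the same either way.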
OpenNode exposes several management layers corresponding to the layers of the system. At the base level it provides the local hypervisor tools, vzctl and virsh, if you need them. Most users will probably be satisfied with the curses-based OpenNode TUI, a console that abstracts away the differences between KVM virtual machines and OpenVZ containers. I'm less interested in local management than I am in having a single pane of glass. For that, OpenNode provides libvirt, SaltStack, and the OpenNode Management Server (OMS), which ships as a VM appliance. The OMS looks more like VMware's vCenter approach, a dedicated management server, whereas ProxMox allows web-based management from any node in the cluster.
OpenNode supports the Open Virtualization Format (OVF), which allows containers and virtual machines to be packaged with additional metadata. The OpenNode developers are optimistic that this metadata support will enable complex deployments and additional management opportunities. I'm a fan of fine-grained management, especially when your goal is to eke out as much performance as possible from your hardware.
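For context, an OVF package describes an appliance in an XML envelope alongside its disk images. A heavily trimmed sketch (the file names and ids here are invented for illustration, not taken from an OpenNode template) looks roughly like:

```xml
<!-- Minimal illustrative OVF envelope; ids and href are invented -->
<Envelope xmlns="http://schemas.dmtf.org/ovf/envelope/1"
          xmlns:ovf="http://schemas.dmtf.org/ovf/envelope/1">
  <References>
    <File ovf:id="disk1" ovf:href="appliance.img.gz"/>
  </References>
  <VirtualSystem ovf:id="centos-ct">
    <Info>A container template packaged with deployment metadata</Info>
    <!-- Hardware requirements, product info, and custom properties
         (the "additional metadata") go in further sections here -->
  </VirtualSystem>
</Envelope>
```

It is that extra metadata, carried next to the image itself, that makes richer deployment automation possible.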
OpenNode clustering uses the standard Red Hat Cluster Suite with OpenVZ support bolted on. Depending on how you feel about RHCS, this may or may not be a benefit. Personally, I'm not a fan. My opinion is that any clustering or high availability should be done at the application layer, not the OS. But if you want it, it's there. If I remember correctly, ProxMox takes a similar approach to clustering.
Like many other open source projects, the differences between ProxMox and OpenNode largely come down to a matter of philosophy. "Which way to manage containers and virtualization is best?" From our tests, it looks like OpenNode is not quite as mature as ProxMox, so we will, for now anyway, probably stick with ProxMox for the next big project. However, I'm going to be keeping a close eye on the progress of OpenNode. If nothing else, it is good to have somewhere to turn if things do not go well with ProxMox. Competition is always good.