Diving Into OpenVZ

by Ostatic Staff - Feb. 26, 2013

A few months ago I wrote an article about the conceptual superiority of FreeBSD jails compared to full virtualization platforms like VMware or Xen. In that article I mistakenly assumed that building jails into the operating system was a philosophical difference between BSD and Linux. However, as is most often the case when someone claims something doesn’t work on Linux, the real answer is: “of course it does”.

We have recently been struggling to deploy a resource-hungry PHP web application. We run Linux in VMware for our application servers, and bare metal for the databases. For the first time we saw both our bare metal servers and the VMware hosts max out their CPUs during a particularly intense traffic spike; the charts were enough to make everyone’s jaw drop. We obviously need more hardware, and that made this a good time to rethink our architecture.

VMware, Xen, KVM, and other virtualization platforms are great for testing new applications, consolidating low-use servers, and spinning up new environments quickly. That capability comes at the cost of additional overhead, though, in both management and physical capacity. If you are dedicating significant resources to a single application, it makes sense to forgo virtualization and return to bare metal. But at times it also makes sense to keep a logical separation between application instances: identical copies of the application running simultaneously, each managed on its own. That is easy with virtualization, where each instance lives in its own virtual machine, but harder on bare metal. I knew how to solve that problem with FreeBSD, but what about Linux? Enter OpenVZ.

OpenVZ is container-based virtualization for Linux. It creates multiple secure, isolated Linux containers (also known as VEs or VPSs) on a single physical server, enabling better server utilization and ensuring that applications do not conflict. Each container performs and executes exactly like a stand-alone server; a container can be rebooted independently and has its own root access, users, IP addresses, memory, processes, files, applications, system libraries, and configuration files.
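To make that concrete, here is a minimal sketch of bringing up a container with the vzctl tool. The container ID (101), IP address, hostname, and OS template name are placeholders of my own choosing; this assumes a host already running an OpenVZ kernel with the vzctl utilities installed and a template downloaded.

    # create a container from a pre-downloaded OS template (names are examples)
    vzctl create 101 --ostemplate centos-6-x86_64 --hostname app01.example.com
    # assign an IP address and a disk quota (soft:hard limits), saving to the config
    vzctl set 101 --ipadd 10.0.0.101 --save
    vzctl set 101 --diskspace 10G:11G --save
    # boot the container, then run a command inside it from the host
    vzctl start 101
    vzctl exec 101 uname -r

Note that uname inside the container reports the host’s kernel version, because containers share the host kernel rather than booting their own; that is where the bare-metal performance comes from.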

OpenVZ, very much like FreeBSD jails, runs containers directly on the host kernel rather than under a hypervisor, providing the power of bare metal with the separation of virtualization. It also simplifies management. With full virtualization, each virtual machine is another install of Linux that needs to be fed and cared for: disk space monitoring, memory use, local users, password policies, firewalls, and all the other little things that need to be done. Using OpenVZ, we have the opportunity to rethink a lot of that. For example, one of my design goals is to have only the port the application needs open on the container IP. No SSH, no NRPE, nothing other than Apache or nginx. All of the management can be done at the host level. If access to one of the containers is needed, the user can SSH to the host and use the vzctl tool to get a shell. Likewise, Nagios monitoring could be simplified. If I’m mainly concerned with system load, disk space, and CPU, RAM, and swap use, I can monitor all of those from the host and write new checks for the containers.
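As a rough sketch of what that host-level workflow might look like, again using the hypothetical container ID 101 from above:

    # list all containers, their state, and their IP addresses
    vzlist -a
    # drop straight into a root shell inside the container, no sshd needed in it
    vzctl enter 101
    # or run one-off commands from the host, which is handy for Nagios-style checks
    vzctl exec 101 df -h /
    # per-container resource usage and limits, readable on the host
    cat /proc/user_beancounters

The failcnt column in /proc/user_beancounters is a natural thing for a host-side check to watch, since it counts how often a container has bumped into one of its resource limits.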

So far I’m very excited about the opportunity to reimagine our architecture. I’m sure that there will be many pitfalls and setbacks along the way, but conceptually I think that the OpenVZ-based system is cleaner than traditional virtualization. I would also be interested to hear whether any of our readers have experience running web applications on OpenVZ. If you have any comments, questions, or suggestions, please feel free to drop me a line in the comments below.