Patching and Docker
Keeping Linux servers up to date in a production environment hasn't gotten any easier, but the importance of the process has certainly been illustrated recently. As reported by Ars Technica, ZDNet, and thousands of other sites around the Internet, Linux has had a bit of bad press lately relating to security. It is easy to point at the sysadmins who allowed their servers to stay online for nearly two years without patches and say that they are causing harm to the Internet, but in a sysadmin's defense, they are rarely the ones deciding that patches shouldn't be applied.
There are three things that are always on a sysadmin's mind: uptime, stability, and security. Unfortunately, the first two are almost always at odds with the third. Uptime and stability often go hand in hand, but the ever-changing arena of security introduces constant change into the environment. Sysadmins are being asked to build systems that are bulletproof, highly available, and reliable... but also to have the latest cool feature or application that the developers want to take advantage of. Systems need to be spun up at the drop of a hat, and they need to be the same as systems that have been working in production for the past several years. Any changes, especially in large "enterprise" environments, are carefully scrutinized, tested, and scheduled far in advance, and if need be, the scheduled changes are scrapped in the name of continued uptime. Until it is their name in the paper, businesses rarely put security at the top of their list of priorities.
And why should they have to? Most are in business to make money, after all, and if they wisely chose Linux over proprietary Unix or Windows they are looking for stable, supported platforms. Perhaps the real answer to keeping Linux servers up to date is to upgrade the system each and every time the developers push new code to the server. In traditional server setups, this would be impossible, but with Docker and Linux containers a system for security starts to take shape.
If one of the commands used to build a Docker container is `yum update -y` or `apt-get update && apt-get upgrade -y`, that container is going to get all the freshest patches every time it is built. Assuming that a system is put in place to ship the entire application as a container, and each push to production is a new container, that container would always have the latest code. Using a system like this, the Docker containers have the latest and greatest features, as well as the latest patches, while the underlying system remains on a strict upgrade schedule, completely separate from the application container. Flexibility and stability. The impact of Docker hasn't really been felt in the Enterprise yet, but I have a feeling that it is going to be huge.
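As a rough sketch, a Dockerfile along these lines captures the idea; the base image, application name, and paths here are placeholders, not a prescription:

```dockerfile
# Hypothetical example: base image, app name, and paths are placeholders.
FROM debian:stable

# Pull the latest security patches every time the image is built,
# then clean up the package cache to keep the image small.
RUN apt-get update && apt-get upgrade -y && \
    apt-get clean && rm -rf /var/lib/apt/lists/*

# Ship the application code that triggered this build.
COPY ./myapp /opt/myapp

CMD ["/opt/myapp/run.sh"]
```

One caveat: Docker caches build layers, so a routine rebuild may silently reuse an old `apt-get upgrade` layer. Building with `docker build --no-cache` (or otherwise busting the cache) ensures the patches are actually refreshed on each push.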
Using Docker containers to keep systems up to date makes the process of patching a server nearly obsolete. Of course, the base system will need to be kept up to date as well, but since it should be inaccessible from the Internet, the attack surface is much smaller. Hopefully, this should keep the programmers happy, and let the sysadmins get a good night's sleep.