Thinking Small With Tiny Core Linux
I recently had the need to build a virtual appliance: a small Linux server that did one thing and required no interaction. And by small, I mean really small. After considering the options and searching around a bit, I found Tiny Core Linux, and when they say tiny, they mean it: the Tiny Core download is only 12MB.
Tiny Core Linux is meant to be a minimalist desktop operating system. The main download includes a window manager, a text editor, and that's about it. The desktop includes a Mac-like dock at the bottom of the screen, and in the dock is an application to download and install more applications. However, for my purposes I did not need the GUI; all I needed was a server. So I downloaded and installed the tc-install tool, launched it, and installed the OS into a very small virtual machine.
Actually, I installed it several times; the first couple of attempts failed because VMware automatically chooses SCSI as the hard drive interface, but Tiny Core only supports IDE. During the install I chose the “Core Only” text interface, and chose the “Installer Application” option so that I could install OpenSSH and any other applications I would need.
If you are familiar with command line Linux administration, you might feel a bit lost when you start looking around at Tiny Core. The developers made some interesting concessions in the name of size and, presumably, security. By default, no data is retained between reboots. So, spend a little time getting your shell environment the way you like it, spend a little more getting the server you need set up, give it a quick reboot, and all the changes you just made are gone. I started looking around at the filesystem after my first reboot, and that is when I realized that somewhere along the way things had gotten weird. Time to read the documentation.
Tiny Core is actually just two files: one for the kernel and one for the userland applications. Both files are loaded into RAM during boot, so very little actually exists on the drive. The normal mount points are all ramdisks, so everything is lost and reloaded on reboot. Applications, which Tiny Core calls extensions, are stored on the drive and mounted as read-only loopback devices in /tmp/tcloop. The binaries are then symlinked into /usr/local/bin or wherever else the system expects to find them. It is an interesting concept: every time the system boots, the applications are read fresh from the loopback mounts.
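To make the mount-then-symlink idea concrete, here is a rough simulation you can run unprivileged. In the real system the extension is a squashfs image loop-mounted read-only by Tiny Core's tooling; in this sketch a plain directory and a made-up extension called "foo" stand in for the mounted image, so everything here is illustrative rather than actual Tiny Core commands.

```shell
#!/bin/sh
# Simplified illustration of how Tiny Core exposes an extension's binaries.
# Stand-ins: a plain directory replaces the read-only loopback mount, and
# "foo" is a hypothetical extension.

TCLOOP=/tmp/tcloop-demo         # stands in for /tmp/tcloop
BIN=/tmp/demo-usr-local-bin     # stands in for /usr/local/bin

# Pretend the extension image has been loop-mounted here, exposing its
# files under the usual /usr/local hierarchy.
mkdir -p "$TCLOOP/foo/usr/local/bin" "$BIN"
printf '#!/bin/sh\necho foo\n' > "$TCLOOP/foo/usr/local/bin/foo"
chmod +x "$TCLOOP/foo/usr/local/bin/foo"

# Symlink the binary to where the rest of the system expects to find it.
ln -sf "$TCLOOP/foo/usr/local/bin/foo" "$BIN/foo"
```

Running "foo" from the symlinked location executes the binary inside the (simulated) mounted extension, which is exactly why nothing needs to be copied onto the ramdisk at boot.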
The applications had to live somewhere, so where were they? Tiny Core mounts the hard drive, /dev/sda1, at /mnt/sda1, and on it there are two directories: /boot (ah, there it is!) and /tce. Files stored on the hard drive persist between reboots, and inside the /mnt/sda1/boot/extlinux directory is a file named extlinux.cfg, where you can define your boot parameters. Three boot parameters I was interested in were “cron” to start the cron daemon, “opt=sda1”, and “home=sda1”. The latter two tell Tiny Core to store the contents of /opt and /home on the hard drive so they persist between reboots. However, there is no way to tell Tiny Core to save the /etc or /usr/local/etc/ directories, so to keep any settings between reboots you must copy the files you need into your home or opt directories. I added one more option to the boot parameters, “noautologin”, which boots Tiny Core to a login prompt instead of straight to a shell.
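Putting those parameters together, the resulting extlinux.cfg looks roughly like this. Treat the kernel and initrd file names as placeholders; they vary between Tiny Core versions, and only the APPEND options at the end are the ones discussed above.

```
DEFAULT core
LABEL core
KERNEL /boot/vmlinuz
APPEND initrd=/boot/core.gz quiet cron opt=sda1 home=sda1 noautologin
```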
The final piece of the puzzle is the /opt/bootlocal.sh file. This file is executed during the boot process and, provided you are persisting /opt between reboots, it gives you a way to copy the files you need from your home directory back into place: /etc/shadow to keep passwords, /usr/local/etc/ssh for the OpenSSH daemon, any cron files, and so on.
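A minimal bootlocal.sh sketch along those lines might look like the following. The backup directory name and the exact target paths are my own assumptions about a layout, not Tiny Core defaults, so adjust them to wherever you actually stash the files; the helper deliberately treats a missing backup as a no-op so one absent file does not abort the boot script.

```shell
#!/bin/sh
# Sketch of /opt/bootlocal.sh: restore persisted config into the ramdisk
# filesystem at boot. Paths below are assumptions, not Tiny Core defaults.

# restore <src> <dst>: copy a persisted file or directory back into place.
# A missing backup is silently skipped so the boot script keeps going.
restore() {
    [ -e "$1" ] && mkdir -p "$(dirname "$2")" && cp -rp "$1" "$2"
    return 0
}

PERSIST=/home/tc/persist   # hypothetical backup directory on the persisted /home

restore "$PERSIST/shadow"   /etc/shadow                # keep passwords
restore "$PERSIST/ssh"      /usr/local/etc/ssh         # OpenSSH host keys and config
restore "$PERSIST/crontabs" /var/spool/cron/crontabs   # cron files
```

The flip side, of course, is remembering to copy any file you change back into the persist directory before rebooting, since bootlocal.sh only restores what is already there.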
It was a fair bit of work to get my virtual appliance working the way I wanted, but it was also an interesting look at an alternative approach to building a Linux system. I now have a downloadable virtual appliance that weighs in at right around 27MB, zipped. I am also considering using this system for other servers, at least for testing. It might be interesting to see what kind of load the appliance can handle, especially running something like NGINX. If you have built this type of system, I would be interested to hear about your experience in the comments.