Even though my go-to operating system for servers is Red Hat Enterprise Linux, lately I have been working with Canonical’s Ubuntu Server 10.04 and I will admit that it has so far been a great experience. As expected of a server operating system, it is not intended for the general user base and is geared more toward the experienced Linux user, especially since there is no GUI by default. That, in my opinion, is one of its best parts. Another great thing about the OS is its simplified installation process: everything is automatically installed and, to an extent, configured should you choose to set the server up as a LAMP stack, DNS server, etc. A couple of years ago, I reviewed the older 8.10 release here and here and wasn’t impressed. Now I can see that things have changed for the better. Unfortunately, I will not be revisiting that here. But before I get any deeper into this article, I wish to share my experiences with 10.04.
I am currently using Ubuntu Server 10.04 in two ways:
- Development platform in VirtualBox - The simplified installer allows me to get a base system up and running in a VM guest. I usually decline to install anything beyond the base packages, and once I am up and running, I invoke apt-get to grab anything else I was too lazy to select from the installer. The advantage is a very lightweight system: no GUI, just the CLI and all the standard packages I need to test the device drivers and applications I develop.
- To host my much-needed services - I had an older Dell PC collecting dust in my office, so I decided to revive it and install Ubuntu Server 10.04 on it. It is currently configured to run and host Apache (serving Bugzilla), MySQL, a git repository, FTP and a bit more. After spending some time securing MySQL, Bugzilla, ssh and my iptables firewall rules, the PC has been up ever since, and I am able to function more productively; especially since I made the node accessible from outside my local network.
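As a rough sketch of the kind of post-install steps the second setup involves (package names are the ones Ubuntu 10.04 shipped; the iptables rules are illustrative assumptions, not my exact ruleset):

```shell
# Pull in the services on top of a bare-bones install (Ubuntu 10.04 package names)
sudo apt-get update
sudo apt-get install apache2 mysql-server bugzilla3 git-core vsftpd openssh-server

# Run MySQL's interactive hardening script (sets a root password,
# removes anonymous users, drops the test database, etc.)
sudo mysql_secure_installation

# Illustrative iptables rules: allow ssh, HTTP and FTP in, drop all other inbound traffic
sudo iptables -P INPUT DROP
sudo iptables -A INPUT -i lo -j ACCEPT
sudo iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT   # ssh
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT   # Apache/Bugzilla
sudo iptables -A INPUT -p tcp --dport 21 -j ACCEPT   # FTP control channel
```

Note that rules entered this way do not survive a reboot on their own; you would still need to save them (e.g. with iptables-save) and restore them at boot.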
In both scenarios, the operating system has been a pleasure to work with. I do have one complaint, though: when you get to the package selection step during installation, the interface kind of sucks. I am sure I am not the only one who feels that way, but it is what it is.
Now that I have finished sharing my personal experiences with the operating system, I will continue on to the main topic of this article: Canonical’s release cycle for their server-catered OS. I know from experience that the IT industry doesn’t like change, and that includes operating systems. The mentality is always “if it ain’t broke, why fix it.” With that in mind, I can see why Red Hat takes years to release a new official stable version of their flagship OS while continuing to support the current one with almost service-pack-like updates (via a new point release such as 5.x or through yum). Truth be told, when a facility deploys and manages an operating system, the staff usually get comfortable with that release and do not want to stray far from it until one of three conditions is met:
- It is not supported anymore and they are forced to move on.
- A new release comes out that introduces much-needed features.
- New hardware is acquired.
So…why do I mention this? I understand the concept of six-month release cycles with an LTS release every two years. Ideally, you would want to attract potential customers with every LTS release; but the in-between releases seem like a waste of time and effort for the server series ONLY. Let me explain.
A lot of the general public is still somewhat confused by the whole Long Term Support (LTS) concept and what it truly means. With Canonical, for both the desktop and server operating systems, everything in between is considered a concept- or feature-testing release in preparation for the next LTS. While this may not be the official explanation coming from Canonical, it has always been the general perception. It was in non-LTS releases that the Software Center, Upstart, Plymouth, etc. were introduced (as is evident with 8.10, 9.04 and 9.10), and now btrfs support is planned for 10.10 (also a non-LTS release), among other features.
For the desktop OS, this is all fine, but when we get to the server side of things, why is it still necessary to maintain the same release cycle? Nobody wants to deploy a non-LTS release in a production environment, especially when support for that release is not as long as for an LTS one. And if something needs to be tested, it can always be tested in the desktop non-LTS releases. This is where I feel that companies from Red Hat, Novell and Sun/Oracle to even Microsoft got it right. The server editions of their operating systems are not meant to see dramatic changes and/or additions so often. This is why it is easy for a system administrator to transition from Windows 2000 Server to Server 2003 and again to Server 2008. Don’t misunderstand me, I know from experience that a Microsoft Service Pack or upgrade can cause some damage, but generally these operating systems are focused on stability and standardizing the environment until the next major release.
If I were to make a recommendation to Canonical, it would be: “Maintain the six-month release cycle for the desktop (and netbook) releases with an LTS appearing every two years, but as for the server OS, drop the six-month cycle and adopt a two-year, LTS-only release schedule.”