
Archive for July, 2010

Opinion: Re: Canonical release cycle for Ubuntu Server

July 29th, 2010

Even though my go-to operating system for servers is Red Hat Enterprise Linux, I have lately been working with Canonical’s Ubuntu Server 10.04, and I will admit that it has so far been a great experience. As is expected of a server operating system, it is not intended for the general user base and is focused more toward the experienced Linux user, especially since there is no GUI by default. In my personal opinion, that is one of its best traits. Another great thing about the OS is its simplified installation process: everything is automatically installed and, to an extent, configured should you choose to set the server up as a LAMP stack, DNS server, etc. A couple of years ago, I reviewed an older 8.10 release here and here and wasn’t impressed. Now I can see that things have changed for the better, although that is not the topic of this article. Before I get any deeper into it, though, I wish to share my experiences with 10.04.

I am currently using Ubuntu Server 10.04 in two ways:

  1. Development platform in VirtualBox - The simplified installer allows me to get a base system up and running in a VM guest. I usually decline to install anything beyond the base packages; once I am up and running, I invoke apt-get to grab whatever else I was too lazy to select from the installer. The advantage is a very lightweight system: no GUI, just the CLI and the standard packages I need to test the device drivers and applications I develop.
  2. Host for my much-needed services - I had an older Dell PC collecting dust in my office, so I decided to revive it and install Ubuntu Server 10.04 on it. It is currently configured to host Apache (Bugzilla), MySQL, a git repository, FTP and a bit more. After I spent some time securing MySQL, Bugzilla, ssh and my iptables firewall rules, the PC has been up ever since, and I am able to function more productively, especially now that the node is accessible from outside my local network.
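To give a rough idea of the firewall side of that second setup, here is a minimal iptables sketch. The service ports are assumptions based on the services I mentioned (ssh, Apache/Bugzilla, git), not a copy of my actual rules; adjust them to your own setup.

```shell
#!/bin/sh
# Minimal default-deny iptables sketch for a small home server.
# Ports here are assumptions -- adjust for your own services.

iptables -F                                       # start from a clean slate
iptables -P INPUT DROP                            # default-deny inbound
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT

iptables -A INPUT -i lo -j ACCEPT                 # allow loopback
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT     # ssh
iptables -A INPUT -p tcp --dport 80 -j ACCEPT     # Apache (Bugzilla)
iptables -A INPUT -p tcp --dport 9418 -j ACCEPT   # git daemon
# MySQL (3306) is deliberately NOT opened; it should only listen locally.
```

On 10.04 you can persist rules across reboots with `iptables-save` and restore them from an `if-up` script, or simply use the `ufw` front end that ships with the release.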

In both scenarios, the operating system has been a pleasure to work with, although I do have one complaint: when you get to the package selection step of the installation process, the interface kind of sucks. I am sure I am not the only one who feels that way, but it is what it is.

Now that I have finished sharing my personal experiences with the operating system, I will continue on to the main topic of this article: Canonical’s release cycle for their server-oriented OS. I know from experience that the IT industry doesn’t like change, and that includes operating systems. The mentality is always “if it ain’t broke, why fix it.” With that in mind, I can see why Red Hat takes years to publish a new official stable release of their flagship OS while continuing to support the current one with almost service-pack-like updates (via a point release such as 5.x or through yum). Truth be told, when a facility deploys and manages an operating system, it usually gets comfortable with that release and does not want to stray far from it until one of three conditions is met:

  1. It is not supported anymore and they are forced to move on.
  2. A new release comes out and introduces a much-needed feature (or features).
  3. New hardware is acquired.

So…why do I mention this? I understand the concept of six-month release cycles with an LTS release every two years. Ideally, you would want to attract potential customers at every LTS release; but those in-between releases seem like a waste of time and effort for the server series ONLY. Let me explain.

A lot of the general public is still somewhat confused by the whole Long Term Support (LTS) concept and what it truly means. With Canonical, for both the desktop and server operating systems, everything in between is essentially a concept- and feature-testing release in preparation for the next LTS. While this may not be an official explanation coming from Canonical, it has always been the general perception. The non-LTS releases saw the introduction of the Software Center, Upstart, Plymouth, etc. (as is evident with 8.10, 9.04 and 9.10), and now btrfs support is planned for 10.10 (also a non-LTS release), among other features.

For the desktop OS, this is all fine, but on the server side of things, why is it necessary to maintain the same release cycle? Nobody wants to deploy a non-LTS release in a production environment, especially when the support window for that release is shorter than that of an LTS one. And if something needs to be tested, it can always be tested in the desktop non-LTS releases. This is where I feel that companies such as Red Hat, Novell, Sun/Oracle and even Microsoft got it right: the server editions of their operating systems are not meant to see dramatic changes and/or additions so often. This is why it is easy for a system administrator to transition from Windows 2000 Server to Server 2003 and again to Server 2008. Don’t misunderstand me, I know from experience that a Microsoft Service Pack or upgrade can cause some damage, but generally these operating systems focus on stability and a standardized environment until the next major release.

If I were to offer Canonical a recommendation, it would be: “Maintain the 6 month release cycle for the desktop (and netbook) editions with an LTS appearing every two years, but as for the server OS, drop the 6 month cycle and adopt an LTS release every two years.”

Categories: Linux, Ubuntu

Re: Apple. Will history repeat itself?

July 22nd, 2010

I have been thinking about this for a short while now. I have been spending some time studying the computing market at various levels and across varying technologies, most recently with a focus on the mobile computing industry. But before I dive into my personal opinions, I want to revisit a few brief events from history:

From the late ’70s to the ’80s, Apple markets personal computers with a proprietary operating system tied to its proprietary hardware. It charges high prices in exchange for a feature-rich and ever-simpler, evolving UI. During this time period, Microsoft provides its solutions as a software-only company.

Originally built on MS-DOS (with the first stable release in 1985), Windows is distributed by Microsoft for the Intel architecture. Over time, Microsoft pushed the then-radical idea of an operating system not tied to specific proprietary hardware, which enabled many hardware manufacturers to install and distribute licensed copies of it. Although not as advanced as the UI of Apple’s OS, it was just good enough to make most people more productive.

Cheaper hardware + Hardware independent software = Cheaper PCs = Microsoft’s success of the desktop market

Truth be told, you did (and still do) get what you pay for. Microsoft’s applications and series of operating systems were never exactly known for stability and security. Overall, though, their approach to business made sense for its time.

But what do we have now? The focus has shifted to mobility. A lot of applications are now provided as services over the web (i.e. the cloud), and our mobile devices give us access to those services. Over the past decade Apple has made a huge comeback and found itself a market, which has led to its recent success, although it continues to push its proprietary model on all of its products.

While other companies are competing with Apple, the most noteworthy is Google (indirectly, via its ad-based model) and specifically the Android operating system. Google has taken a more open approach to how Android is presented, but in the end, much like Microsoft with Windows, it is designed to run on varying hardware platforms. With a nice UI (maybe not always as crisp and clean as the iPhone’s) and a constantly growing Market with tens of thousands of applications to choose from, Android has proven itself a very worthy competitor. Its market-growth numbers reflect this, and Android is rapidly catching up to the power players of the smartphone industry.

My question is: Is Apple doomed to repeat its own history? Should we continue to expect Apple market share growth? Or will this plateau as more and more Android devices flood the market offering more affordable and feature rich mobile computing experiences?

Categories: BSD, Linux, Microsoft, UNIX

OpenSolaris and/or Solaris Next?

July 4th, 2010

After spending a week on the forums at opensolaris.org, it would seem that a great number of individuals are waiting for Oracle to make it official: that OpenSolaris will be no more. If this ends up being the case, it would be very upsetting, as OpenSolaris is a very stable and robust UNIX operating system. During this time I came across this blog posting, which may shed some light. It discusses the Oracle-directed Solaris Next build snv_140. Solaris Next is the development name given to Solaris 11.

Oracle initially announced publicly that it would stay involved in the OpenSolaris community and continue to support it. That was more than a few months ago, and they have been silent since. If they do decide to kill the OpenSolaris project and shift all focus to Solaris Next, then I am curious whether they will kill only the OpenSolaris binary distribution (opensolaris.com) or the binary plus the community (opensolaris.org). The community can still live on without the binary; it would instead focus on the Solaris Next builds rather than OpenSolaris.

Categories: OpenSolaris, Solaris

Are we to ever see OpenSolaris 2010.1H?

July 1st, 2010

I am still unsure about Oracle’s promises regarding the future of OpenSolaris. Phoronix has just posted an article showing that what was once supposed to be 2010.02, then 2010.03 and eventually 2010.1H was not released by the end of the first half of the year. It would be a shame if nothing came of this. OpenSolaris is such a great platform, and ZFS brought some much-needed features which helped increase my development productivity. Should we cross our fingers for a 2010.2H release?

I guess what it boils down to for Oracle is: do they see a profit in this open source project? What does OpenSolaris bring to Oracle, and how does it help them solidify their newly acquired Solaris platform? If you ask my personal opinion, I feel the advantages outweigh the disadvantages. A working example can be seen with Red Hat and the Fedora Project: the more open platform (in this case OpenSolaris) defines the bleeding-edge technologies, stabilizes them and serves as a testing ground for everything that will be ported to the official stable release (Solaris). In fact, that is how it worked under Sun Microsystems.

As the months pass by, though, OpenSolaris’ future grows dimmer. Fortunately, it is an open source project and can easily be forked into a new community-driven distribution. So even in the darkest of hours, not all is lost.

Categories: OpenSolaris, Solaris