Archive for October, 2009

Apple discontinues port of Sun’s ZFS file system.

October 27th, 2009 1 comment

On 23 October 2009, it was announced on MacOSForge that Apple had decided to discontinue all development on its port of the ZFS file system. I know I am not the only one to say this, but I am not surprised. Supposedly there were legal reasons behind the decision, but in the end, who cares? Apple is the one losing out by continuing with an outdated and still limiting file system.

Now Apple has recently been hiring file system developers to develop a next-generation file system to replace the traditional HFS+, but (as Robin Harris has previously stated) how long will it take before it becomes stable and accepted by the general public? Traditionally it takes 5+ years before a file system is considered somewhat stable and ready for production use. It wasn’t until recently that ZFS started to make its impact on the enterprise scene. My question is, whom will this next-generation file system cater to? I can only assume that it will be for the general end user utilizing Mac devices that “don’t require the weight of the ZFS features and functionality”, or so it has been said regarding Apple abandoning the ZFS project. If that is the case and it is the primary focus of the new file system, how will this impact their server market share? We already know that there is no such thing as a perfect file system that performs ideally in every arena it is thrown into. Some will excel more than others, and that depends entirely on implementation and workload.

In past posts, I have always stressed the importance of the file system and what is integrated within it. I routinely point out the numerous drawbacks and limitations of the NTFS driver. Sure, Microsoft compensates for the “lack of features” with applications, services and additional APIs to fill in the gaps; a good example is VSS (shadow copy). This can impact performance, as it pulls file system concepts out of kernel mode and into user land, consuming user-mode resources. All these features should and need to be incorporated into the file system driver; that way we can ensure stability and consistency across all functions the file system performs. Even the general layout is not ideal for traditional computing over large storage media, as the fragmented large seeks between the MFT and the file data can put a lot of stress on the magnetic device. Going back to HFS+ and sort of on the same topic (although the concept is a bit different), the same could be said about Apple’s Time Machine running as an application on top of the driver.

One thing that I hold to heart when it comes to file systems is the ability and flexibility to tune it even without taking the mounted device(s) off-line. Most modern UNIX and Linux file systems offer a lot of tunable features (built into the driver!). For instance (through the ZFS character device node) I can dynamically alter file system variables (man 1 zfs). For this example I will focus on access times. Let us say I am using an SSD and decide that it would be more cell friendly and better performing to disable file access times on the root mount.

atime=on | off
    Controls whether the access time for files is updated
    when they are read.

To view current settings and disable this feature you would type the following in the command-line terminal:

petros@opensolaris:~$ pfexec zfs get atime rpool/export/home
rpool/export/home  atime     on     default
petros@opensolaris:~$ pfexec zfs set atime=off rpool/export/home
petros@opensolaris:~$ pfexec zfs get atime rpool/export/home
rpool/export/home  atime     off    local
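
Access time is just one of many properties that can be toggled on a live dataset. As a further illustration (a sketch using the same rpool/export/home dataset as above; your pool and dataset names will differ), enabling on-the-fly compression follows exactly the same pattern:

petros@opensolaris:~$ pfexec zfs set compression=on rpool/export/home
petros@opensolaris:~$ pfexec zfs get compression rpool/export/home
rpool/export/home  compression  on     local

No remount, no downtime; only blocks written after the change are compressed.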

I just hope that Apple is prepared for the journey they are about to embark on. They obviously have file system development experience, and I have no doubts that they have the talent. Do they have the patience and time to invest?

Categories: BSD, File Systems, OpenSolaris, Solaris, UNIX Tags:

nixCraft article: Linux Tuning The VM (memory) Subsystem

October 16th, 2009 Comments off

Today I found this interesting nixCraft article on “Linux Tuning The VM (memory) Subsystem.” The author also offers some suggestions for a more efficient computing environment.
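
To give a flavor of the kind of tuning the article covers, here is a minimal sketch (the vm.swappiness knob is one common example; the value 10 is an illustrative choice, not a recommendation from the article):

```shell
# Inspect the current swappiness value: how aggressively the kernel
# swaps application memory out in favor of the page cache (0-100).
cat /proc/sys/vm/swappiness

# Lowering it tells the kernel to prefer keeping application pages
# in RAM. Requires root; uncomment to apply:
# sysctl -w vm.swappiness=10
```

To make a change survive a reboot you would persist it in /etc/sysctl.conf instead of setting it on the command line.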

Categories: Linux, System, Virtual Memory Tags:

The H Open Source article: Sun releases Solaris 10 10/09

October 12th, 2009 Comments off

According to the H Open Source website, Sun just recently announced the availability of Solaris 10 10/09.

“The latest release includes a number of bug fixes, feature updates and expanded support for new processors. In addition to several efficiency and performance improvements, Solaris 10 10/09 includes new updates for Solaris ZFS which integrates the ability to use solid-state drive (SSD) technology for data caching and high volume transactional applications. Administrators can now set usage limits, such as by individual file system, user or group.”

Read more here and here. You can download the OS release here.

Categories: Solaris, UNIX Tags:

FlexTk article: NAS Performance Comparison

October 8th, 2009 Comments off

I found an interesting article posted on FlexTk regarding NAS Performance Comparisons between Linux, Windows and OpenSolaris. The results are very interesting. Under each category, comparisons are drawn between:

  • Red Hat Enterprise Linux 5.3 (64-bit)
  • Ubuntu Server 9.04 (64-bit)
  • OpenSolaris 2009.06 (64-bit)
  • Windows Server 2003 (64-bit)
  • Windows Server 2008 (64-bit)
  • Windows Storage Server 2008 (64-bit)

I assume that each operating system is utilizing the default file system with default settings for that specific release. Red Hat and Ubuntu should be using Ext3-fs, Windows obviously uses NTFS, while OpenSolaris is built on top of ZFS. The CIFS/NFS exported share(s) in turn run on top of these default file systems. Either way, in overall average performance OpenSolaris really seemed to shine. It also did well in some of the other categories, which makes sense when you know the design of the ZFS file system.
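
If you want to confirm what “default” actually means on a given install rather than assume it, a one-liner like this (a sketch for the Linux systems in the comparison) reports the file system type backing a mount point:

```shell
# Print the file system type backing the root mount.
# df -T adds a "Type" column; awk grabs it from the data row.
df -T / | awk 'NR==2 {print $2}'
```

On OpenSolaris the equivalent check would be against the zfs list output instead.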

New Article: Data Backup

October 6th, 2009 Comments off

Earlier this month Linux+ Magazine released their 4th quarter issue (4/2009) containing my article on The Linux RAM Disk. You can find my article on Data Backup for small-to-medium sized business environments in the next issue: 1/2010.

Categories: Linux, Storage Tags:

LWN article: Log-structured file systems: There’s one in every SSD

October 2nd, 2009 Comments off

Yesterday I came across this excellent article on log-structured file systems and their implementation on SSD technologies. It is worth the read.

Categories: File Systems, Linux, Storage, UNIX Tags:

Opinion: On pramfs and RAM based Linux file systems

October 2nd, 2009 10 comments

A few days ago I received the latest issue of Linux Journal Magazine. I must admit that one of the sections I look forward to reading is diff -u, which summarizes the latest updates and discussions of the Linux kernel development community. A summary is much easier to digest than subscribing to the mailing list itself, where you are bombarded with an overwhelming volume of e-mail.

While reading I came across a Montavista-developed project called pramfs. In summary, pramfs is a non-volatile RAM based file system, similar to ramfs and tmpfs but with a few differences that distinguish it from the others and adapt it for embedded environments. Two obvious differences are that it is persistent like a traditional disk-based file system and that it does not reside in volatile DRAM. Pramfs is not new; it was originally announced back in 2004. It is designed to be a simplified file system that does not carry the weight of the journal-based file systems.

Apparently there had been some problems with merging the patch into the Linux kernel, for a number of reasons: (1) Montavista was attempting to patent some of the concepts and algorithms used in the file system (in 2004); (2) even after they dropped the idea of patenting their code, there was discussion about the redundancy of implementing yet another file system in the Linux kernel (in 2009). That is, the kernel already has two commonly used RAM file systems and a large number of other file systems, so why was there a need to write another one? Why couldn’t Montavista patch already existing code? And (3) it is not a full-featured file system, in that it does not support symbolic links.

I agree with this logic. Please do not misunderstand me. Montavista is a very respectable company that has done an excellent job of supporting embedded Linux, and I am glad to see them contribute to the kernel and in turn the community. But truth be told, tmpfs was built on top of the ramfs code. Why couldn’t pramfs follow the same course of development? The GPL makes it easy to avoid re-inventing the wheel.

The two most noteworthy goals achieved by pramfs are (1) working with NVRAM and (2) providing an interface that does not utilize the kernel’s page caching mechanism. By utilizing the O_DIRECT (direct I/O) support available in the 2.6 kernel, Montavista claims that I/O performance is increased significantly over an already high-performing interface. Pramfs also allows the user to specify regions of memory for file system usage:

mount -t pramfs -o physaddr=0x1e000000,init=0x2000000 none /mnt/pramfs

With it working in non-volatile memory, the data contents will remain intact even after an expected/unexpected power cycle.
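
Since the memory region persists, remounting on subsequent boots should omit the init= option so the existing contents are not reinitialized. A hypothetical /etc/fstab entry (the physical address is carried over from the example mount line above) might look like:

```
# Remount the existing pramfs region at boot; no init= here,
# as init would wipe and recreate the file system.
none  /mnt/pramfs  pramfs  physaddr=0x1e000000  0  0
```

This is a sketch based on the mount options shown above, not a tested configuration.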

This concept got me thinking a bit. How difficult would it be to add some of these features to ramfs? Ramfs offers some similar functionality, in that it does not use the kernel’s page cache for file I/O. Tmpfs was designed to offer that functionality along with additional file system controls and limitations. Ramfs also has a slightly similar general file system layout. Sure, a few structures and routines would need to be redefined, but that isn’t a big deal in the grand scheme of things.

I mention this in light of some of the latest headlines circulating around the internet regarding Linus Torvalds’ comments on the kernel being bloated. Does the kernel leave room for additional “bloat”, or would it be wiser to build on top of current features and functionality? I would love to read some of your opinions.

For more blog posts relating to RAM-based file systems and RAM Disk device drivers, you can find them posted here, here and here.

Categories: File Systems, Linux, Storage Tags: