I am just writing to announce the official release of my SCSI Bus Analyzer Module (scsitrace) 1.0 Beta for Linux 2.6 kernels. A patch has also been released for the Linux 2.6 kernel.
I am just writing to announce the incremental release of my DrvAdm storage administration utility, specific to the Linux 2.6 kernel. Licensed under the GPL v3 and currently at version 10.09-2, the utility provides the capabilities to:
- add/remove/rescan for devices in the Linux 2.6 SCSI Subsystem
- reset the SCSI host, target, bus or all
- retrieve a list of all SCSI devices with detailed information
- retrieve a list of all SCSI hosts with detailed information
- view device partition tables, geometry / size
- dynamically modify parameters such as queue depth and timeout
- and even send Fibre Channel Loop Initialization Primitives (LIP) to specified hosts.
More can be read here: http://wiki.petroskoutoupis.com/index.php5?title=DrvAdm
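For the curious, several of the capabilities above map directly onto the Linux 2.6 SCSI sysfs interface. The sketch below (Python, with hypothetical helper names of my own; DrvAdm's internals do not necessarily look like this) shows which sysfs nodes are involved — the paths themselves are the standard kernel interfaces:

```python
from pathlib import Path

def rescan_host_path(host: int) -> Path:
    # Writing "- - -" (channel/target/lun wildcards) to this node
    # rescans the host for new devices.
    return Path(f"/sys/class/scsi_host/host{host}/scan")

def delete_device_path(host: int, channel: int, target: int, lun: int) -> Path:
    # Writing "1" to this node removes the device from the SCSI subsystem.
    return Path(f"/sys/class/scsi_device/{host}:{channel}:{target}:{lun}/device/delete")

def queue_depth_path(host: int, channel: int, target: int, lun: int) -> Path:
    # Writing an integer to this node adjusts the device's queue depth.
    return Path(f"/sys/class/scsi_device/{host}:{channel}:{target}:{lun}/device/queue_depth")

def sysfs_write(path: Path, value: str) -> None:
    # Applying a change requires root and an actual SCSI device present.
    path.write_text(value + "\n")
```

For example, `sysfs_write(rescan_host_path(0), "- - -")` would rescan host 0, much like the add/rescan capability listed above.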
I came across this interesting article today. It would appear that LSI is joining the growing number of vendors manufacturing and distributing Flash-based SSD technologies for use in high-speed storage caching, traditionally referred to as Accelerator technologies.
While faster than the magnetic hard disk drive (HDD), I am still not convinced that Flash-based SSD technologies are the best fit for high-speed caching. I am speaking primarily of the bottlenecks inherent in the Flash storage concept; specifically, the limited cell life on top of the latencies found in the cell read/erase/write model. Sure, vendors have found ways to work around the limited cell life with wear-leveling, and around the read/erase/write performance hit with write-combining, over-provisioning, and even the newly implemented TRIM command. But will this be enough? Realistically, caching large amounts of data will not reflect the ideal environment in which products such as LSI’s accelerator card have been benchmarked with 4K transfer sizes. I would prefer to see more realistic performance data, in which transfer sizes vary from 512 bytes to as high as 4 MB. I know from experience that it is the large number of small files that will truly hurt performance on a Flash-based SSD.
For those who are unfamiliar with what I am speaking about, the traditional Flash-based SSD behaves very differently when it comes to write operations. While there is very little seek latency (sequential or random), as there are no movable components, write operations take a huge hit on the second iteration of writes over cell regions, especially with smaller I/O transfer sizes, because the flash medium typically erases/rewrites a 128K page at a time. That is, the drive will read a minimum 128K page (the number may vary by vendor) into memory, modify a minimum 4K chunk (even if the data to be written is only 512 bytes), erase the physical cells that cover that 128K page, and write the newly updated 128K page back to Flash (not necessarily in that order). Now amplify that with multiple small I/O writes that do not align well to the 4K chunks. This will significantly impact overall performance.
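To put rough numbers on the read/modify/erase/write cycle described above, here is a small sketch of my own (assuming a 128K page size; real figures vary by vendor and do not account for write-combining or other controller tricks) that estimates the write amplification of a single write:

```python
PAGE = 128 * 1024  # assumed erase page size in bytes; varies by vendor

def write_amplification(io_bytes, offset=0, page=PAGE):
    """Bytes physically rewritten divided by bytes logically written.

    Every page the write touches must be read, modified, erased,
    and rewritten in full, so small or misaligned writes cost far
    more than their logical size.
    """
    first = offset // page
    last = (offset + io_bytes - 1) // page
    pages_touched = last - first + 1
    return (pages_touched * page) / io_bytes
```

A single 512-byte write forces a full 128K page rewrite — an amplification factor of 256 — and a 4K write that happens to straddle a page boundary rewrites two full pages, which is exactly the misalignment penalty described above.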
For those of you interested, the latest issue of Linux Journal Magazine (Issue 201) has just hit the shelves at your local bookstores. This month’s issue seems to be a very good one, as it covers System Administration topics including virtualization, Storage Area Networks (SAN) with ATA over Ethernet (AoE), and even an article from yours truly on creating live backups using the Snapshot feature of LVM2, and more! I obviously had to sneak in some self-promotion.
I just came across this article late in the day yesterday: The Benefits of 16Gbps Fibre Channel. It also references this more detailed article with some interesting performance numbers: 16GFC standard doubles Fibre Channel speed. With the increased bandwidth, we are less likely to suffer from the bottlenecks brought on by the recent rise in usage of SSD technologies. Exciting stuff is coming!