Linux vs. Solaris file IO performance

Karl J. Runge runge at karlrunge.com
Thu Oct 31 23:15:53 EST 2002


I think you have a few options here to speed things up.

As far as I understand, the Linux ext2 filesystem performs disk
I/O asynchronously and so defers, among other things, the writing of
filesystem metadata (RAM permitting).  This is fast, but of course
dangerous, because there is a larger window of time during which the
filesystem is out of sync with the hard disk.  Solaris's ufs filesystem
does not, to my knowledge, perform these operations asynchronously, and
this is likely the cause of the slow performance you see when creating
or deleting many files in a short period of time.
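To see the difference concretely, here is a quick micro-benchmark of the
many-small-files workload that can be run on either system (the scratch
directory /tmp/iotest is just an example name):

```shell
#!/bin/sh
# Create and then delete many small files, timing each phase.
# /tmp/iotest is an arbitrary scratch directory for the test.
mkdir -p /tmp/iotest
time sh -c 'i=0; while [ $i -lt 1000 ]; do : > /tmp/iotest/f$i; i=$((i+1)); done'
time rm -rf /tmp/iotest
```

On ext2 both phases finish almost instantly; on a synchronously-updated
ufs mount the deletion phase in particular should be noticeably slower.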


To speed things up, one thing you could try is to turn on logging for
the ufs partition.  E.g. put the mount option "logging" in /etc/vfstab 
(or run "mount -o remount,logging /path/to/fs" manually to experiment).
See mount_ufs(1M) for more info and additional options to play with (e.g.
noatime and dfratime).
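Concretely, the remount experiment and the corresponding /etc/vfstab
entry look something like the following (the device and mount point
names here are placeholders, not from the original post):

```shell
# Try logging on an already-mounted ufs filesystem, without rebooting
# (/export/home is a placeholder mount point):
mount -o remount,logging /export/home

# To make it permanent, add "logging" to the mount-options field of the
# filesystem's line in /etc/vfstab, e.g.:
#   /dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export/home  ufs  2  yes  logging
```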
 
Logging (i.e. a journalling filesystem) is evidently able to buffer up a
number of transactions before going to the hard disk, and this speeds up
performance.  Having a journalling filesystem is also good, of course,
for data integrity and for avoiding fsck(1) runs.  So it is a pleasant
surprise to get a performance improvement as well: on a machine at work
I see about a 25X speedup in file creation and deletion after turning
on logging.


Another option, depending on the application(s) and the process you are
running, is to use a tmpfs filesystem instead of ufs.  E.g. /tmp is
tmpfs (or you can create your own tmpfs mount in /etc/vfstab), and that
filesystem delays disk I/O.  I find its file creation/deletion
performance comparable to Linux ext2.  If your situation allows the
heavy file creation/deletion activity to be done on tmpfs, then do it!
(Simple example: inside /tmp, unpack a tarball, then do configure; make;
make install.  That should be the fastest way to do it.)
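The tarball-in-/tmp example above, done entirely in tmpfs, would look
something like this sketch (the tarball path and package name are
placeholders):

```shell
# Build inside tmpfs-backed /tmp, where metadata-heavy operations
# (thousands of file creations) are fast:
cd /tmp
gzip -dc /path/to/package.tar.gz | tar xf -   # unpack the source tree
cd package
./configure && make && make install
```

Only the final install step needs to touch the slower ufs filesystem;
all the intermediate object-file churn stays in memory.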


You may also want to set the Solaris kernel parameter dnlc_dir_enable=1
in /etc/system.  This enables some directory caching and should help.
However, it is evidently enabled by default, so you are probably already
using it, but you might want to make sure it is on.  In general, you
can find a discussion of the Solaris kernel tuning parameters at:

        http://docs.sun.com/db/doc/806-4015/6jd4gh8ek

and there are also some performance tuning "blueprints" at:

        http://www.sun.com/solutions/blueprints/browsesubject.html

(BTW, many of the blueprint docs there are good reads for doing things 
on Linux boxes as well as Solaris)


The final option (that I know of) is to use the "fastfs" tweak by
Casper Dik:

        ftp://ftp.isc.org/isc/inn/unoff-contrib/fastfs.c

This opens up the disk partition device and performs an ioctl(2) (FIODIO)
to put the device into a deferred I/O mode.  The primary use of this is
to allow fast filesystem restores from backup, and so... it is not clear
to me how safe it is to use continuously on a production filesystem...

On the same machine at work, I found fastfs to give about the same speedup
as logging for file creation, and about a 50X speedup for file deletion.
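For completeness, once fastfs.c is compiled, my understanding is that it
is used roughly as follows; the mount point is a placeholder and the
exact arguments are from my reading of the source, so check the code
before relying on this:

```shell
# Hypothetical usage sketch of Casper Dik's fastfs tool: put a
# filesystem into deferred-I/O mode, do the bulk work, then switch
# it back and sync.  Deferred mode is DANGEROUS if the box crashes.
fastfs /export/home fast     # enable deferred I/O
# ... bulk restore / mass file creation or deletion here ...
fastfs /export/home slow     # back to normal synchronous metadata I/O
sync
```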

HTH,

Karl


On Mon, 28 Oct 2002, Tom Varga <tvarga at lsil.com> wrote:
> 
> I've been a linux user for about a decade and have always been amazed at how
> much faster file IO is on my linux box than on Solaris boxes that I have to use
> at work.
> 
> For example, if I have a large directory structure on a local partition with
> say thousands of files that I need to delete, I do the following :
> 
> rm -r directory
> 
> On my linux box, this happens nearly instantaneously whereas on a Solaris box,
> it can take minutes or more.  I can hear the disk head going crazy as if each
> and every file needs to be individually deleted.
...


