Tuning ext3 for a FC SAN?
Kenny Lussier
klussier at gmail.com
Tue Dec 9 10:58:02 EST 2008
On Tue, Dec 9, 2008 at 10:35 AM, mark <prgrmr at gmail.com> wrote:
> The problem is more likely with the RAID group and LUN layout than with the
> Linux file system. You should also verify that you have the latest Qlogic
> drivers for the fiber cards and that they have been certified by both the
> SAN vendor and Brocade to work with the versions of those products you are
> using.
Done that. I have the latest drivers for the QLogic cards, and I have
verified that the cards and the driver version are supported by
Brocade and the SAN vendor, and that those two products are fully
compatible with and supported by each other. I have also made sure
that my multipath setup is optimal, and I am using the latest
device-mapper-multipath available from Red Hat for RHEL 4.
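For what it's worth, the relevant piece of our /etc/multipath.conf looks roughly like this (a sketch only; the vendor and product strings below are placeholders, and the real attribute values should come from the array vendor's documentation):

```
# /etc/multipath.conf (sketch -- vendor/product strings are placeholders)
defaults {
        user_friendly_names yes
}
devices {
        device {
                vendor                "UNNAMED"     # array's SCSI INQUIRY vendor string
                product               "SANMODEL"    # array's product string
                path_grouping_policy  multibus      # spread I/O across both HBA paths
                path_checker          tur           # common checker; vendor docs may differ
        }
}
```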
> However, if there is a database involved (particularly if it is
> Oracle), all bets are off and you need to first test IO transfer rates
> outside of the db.
We are testing the I/O outside of the DB now. The raw test numbers
we've seen so far give us no indication that the database will perform
up to our requirements. Currently, I am seeing better performance from
an iSCSI volume on an EqualLogic than I am from the SAN. The database
performance is another set of tests that someone else will be doing,
but we are holding off on that until we see decent raw performance.
The database isn't Oracle, it's worse... DB2.
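For reference, the raw numbers come from IOZone rewrite runs along these lines (the mount points and file names are placeholders for our test volumes; iozone obviously has to be installed):

```shell
# Rewrite test against the SAN volume: -i 0 runs write/rewrite,
# -r sets the record size, -s the file size, -f the test file path.
iozone -i 0 -r 4k -s 32m -f /mnt/san/iozone.tmp

# Same test against the local RAID 1 disks for comparison:
iozone -i 0 -r 4k -s 32m -f /var/tmp/iozone.tmp
```

We sweep -s from 16m to 32m to match the file sizes quoted below.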
> Also, do you have software on the SAN to measure IO transfer rates that you
> can compare with the iostat output on the Linux box?
We are using their CLI and their system stats collector.
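On the Linux side, the numbers we line up against the array's stats collector come from iostat (the /dev/mapper device name is a placeholder; use whatever dm-multipath assigned):

```shell
# Extended per-device stats in kB, sampled every 5 seconds,
# for the multipathed SAN device only.
iostat -x -k 5 /dev/mapper/mpath0
```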
Thanks,
Kenny
> mark
>
> On Tue, Dec 9, 2008 at 10:27 AM, Kenny Lussier <klussier at gmail.com> wrote:
>>
>> Hi All,
>>
>> I am not a filesystem/performance expert by any means, so I am hoping
>> that I can pick up some tips and pointers here. We are currently
>> evaluating a SAN, and the performance is less than stellar. We have
>> simulated our production environment, which is:
>>
>> RHEL 4 (x86_64) u7 running the 2.6.9-67.ELsmp kernel
>> Dual quad-core Xeon 3.16GHz CPUs
>> 16GB of RAM.
>> Internal 15K SAS drives in RAID 1
>> Two single-port FC QLogic HBAs (4Gb/s model)
>> Two Brocade FC SAN switches
>> SAN from <un-named vendor> with 96 400GB 10K drives
>>
>> The problem that we are seeing is that the discrepancy in I/O
>> performance between local disk and the SAN throughput seems way too
>> high. Using IOZone, we are getting 80MB/sec throughput for rewrites of
>> 16-32MB files, with a 4K block size on the local disks. When running
>> the exact same tests against the SAN, we are seeing 14MB/sec
>> throughput. I expect to see a difference between local and remote
>> storage, but that seems to be outside the realm of normalcy.
>>
>> So, my question is, is there something that I need to do to tune the
>> ext3 filesystem? Are there options that I should use when I create the
>> file system to optimize it, or mount options that should be in fstab
>> to increase performance? Are there any pointers anyone may have for
>> optimizing a system for use as a database server connected to a FC
>> SAN?
>>
>> TIA,
>> Kenny
>> _______________________________________________
>> gnhlug-discuss mailing list
>> gnhlug-discuss at mail.gnhlug.org
>> http://mail.gnhlug.org/mailman/listinfo/gnhlug-discuss/