[nmglug] filesystem optimization on software raid 5
Jason Schaefer
js at jasonschaefer.com
Thu Jun 14 00:26:25 PDT 2007
It appears that the main things to be concerned with are filesystem
block size (default 4096 bytes, which is the maximum for ext3),
chunk size (the mdadm default is 64k) and stride, which is found by dividing
the raid chunk size by the filesystem block size (64/4 = 16). So in this case stride=16.
The chunk size is the minimum amount of contiguous data written to a
single disk before the array moves on to the next one, so larger files
will write faster with larger chunk sizes. I am working with both large
and small files, but since I want to optimize for the larger ones, I
might raise the chunk size from the default 64k to 128k. Block size
works along the same lines: a larger block size reduces fragmentation
(files spread across many blocks) and the time needed to retrieve
files, so I am going to stick with 4k, which is the maximum.
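Spelling out the stride arithmetic for both chunk sizes, just so I
don't fumble it later:

    stride = chunk size / filesystem block size
    default: 64k chunk  / 4k block  ->  stride=16
    planned: 128k chunk / 4k block  ->  stride=32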
Then there are some other tweaks I have found for mkfs.ext3. The -O
dir_index option speeds up lookups in large directories. I will use it
since this filesystem will hold many large directories.
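(If I understand the docs right, dir_index can also be switched on
after the fact with tune2fs, with e2fsck -D rebuilding the indexes for
existing directories, roughly:

#tune2fs -O dir_index /dev/md0
#e2fsck -fD /dev/md0

but I'd rather just set it at mkfs time.)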
The -T largefile4 option sets the inode ratio to one inode per 4MB of
filesystem space, which is a good option since most of the files will
be larger than 4MB, so I won't be wasting space on unused inodes.
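As far as I can tell, largefile4 is just a shortcut for the
bytes-per-inode ratio (I believe the normal default is one inode per
16k), so this should be roughly equivalent:

#mkfs.ext3 -i 4194304 /dev/md0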
I am also thinking of changing the reserved root percentage to 0%
(mkfs.ext3 -m 0) since this will be /home, and if it fills up it won't
affect system operation.
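If I change my mind later, the reserved percentage can be adjusted on
an existing filesystem with tune2fs, so it isn't a one-way decision:

#tune2fs -m 5 /dev/md0    (puts the default 5% back)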
Does anyone have any experience or suggestions on this? Have I made
any mistakes above? Any other optimizations I might have missed?
Oh, and one other thing... I was thinking of using LVM, but realized
that if I want to dynamically grow my partition, I would have to build
a new raid (md1) and then add it to the existing LVM on top of md0.
This is lame, and I would rather back up and rebuild my raid when the
time comes to upgrade. Anyone want to convince me otherwise?
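For reference, the LVM route I'm passing on would look roughly like
this (untested, and the volume group / logical volume names here are
made up):

#pvcreate /dev/md1
#vgextend vg0 /dev/md1
#lvextend -L +500G /dev/vg0/home
#resize2fs /dev/vg0/home

It works, but it means every future expansion is another whole md
array, which is what bugs me.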
Don't forget, meeting today, the 14th!
-Jason
P.S. Just to be complete, my full commands to do the above:
#mdadm --create /dev/md0 --level=5 -c 128 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
#mkfs.ext3 -O dir_index -T largefile4 -E stride=32 -m 0 /dev/md0
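And to sanity-check the result afterwards (chunk size, block size,
inode count, reserved blocks):

#mdadm --detail /dev/md0 | grep -i chunk
#tune2fs -l /dev/md0 | grep -iE 'block size|inode count|reserved block'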
Andres Paglayan wrote:
> what's your bandwidth?
> maybe you are tackling something which doesn't need to be addressed
> beyond a regular standard.
>
>
>
> Andres Paglayan
>
> --"Harmony is more important than being right"
> Bapak
>
>
>
>
> On Jun 13, 2007, at 6:32 PM, Jason Schaefer wrote:
>
>> So, back to "filesystem optimization" ??
>>
>>
>>
>> Ken Long wrote:
>>> On 13 Jun 2007 at 7:55, Aaron wrote:
>>>
>>>
>>>> RAID5, in general, is a bit dangerous. Software RAID5, even more so.
>>>> As one disk dies, it tends to produce some bad data, which, in turn,
>>>> corrupts the other drives. Some data is often lost.
>>>> I'm not an expert at the recovery tools... but theoretically,
>>>> you should only lose a little data if you can rebuild the rest OK.
>>>>
>>>
>>> People use raid 5 in their servers because it is more reliable. If
>>> you lose one drive you can keep on running with no loss of data
>>> while you schedule the drive replacement. That's the whole point.
>>>
>>> After the failed drive has been replaced no one is the wiser.
>>> (Except the poor system admin who has grown several more grey hairs
>>> over the whole incident.)
>>>
>>> Physical security, power protection and daily backups are also part
>>> of the overall strategy.
>>>
>>> Ken
>>>
>>>