<HTML><BODY style="word-wrap: break-word; -khtml-nbsp-mode: space; -khtml-line-break: after-white-space; ">I posted something on this topic earlier,<DIV>and Nick Frost gave me a lot of info to work through.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>You'll need to run the math,</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>but after seeing the rants about RAID cards going bad,</DIV><DIV>I'd be more inclined toward the JBOD hardware setting along with software RAID.</DIV><DIV>It looks like if you run hardware RAID and want a fail-proof setup,</DIV><DIV>you'll also need a spare RAID card, since the array format is proprietary to the card.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>I don't know how much CPU the software RAID will use,</DIV><DIV>but it looks like not much, and whatever CPU you have will always be</DIV><DIV>far more powerful than the chip on the RAID card.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV><A href="http://linux.yyz.us/why-software-raid.html">http://linux.yyz.us/why-software-raid.html</A></DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>The point is you might want to use a card to attach multiple drives</DIV><DIV>(an inexpensive JBOD kind of card),</DIV><DIV>but run those under software RAID.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>We always bought a spare drive along with the live ones.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>One of our servers has the spare already plugged into the card,</DIV><DIV>so if anything goes poof, the array will rebuild automatically.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>What I am dealing with now is the best way to have, say, a terabyte</DIV><DIV>and distribute that storage among other servers, e.g. 
vmware appliances.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV>The OpenFiler distro looked fine, but I don't know of anyone's experience using it or the alternatives.</DIV><DIV><BR class="khtml-block-placeholder"></DIV><DIV><BR><DIV><DIV>On Nov 9, 2006, at 11:03 AM, Tim Emerick wrote:</DIV><BR class="Apple-interchange-newline"><BLOCKQUOTE type="cite"><DIV style="font-family:times new roman, new york, times, serif;font-size:12pt"><DIV>I've been reading this article:</DIV> <DIV> </DIV> <DIV>Build a Cheap and Fast RAID 5 NAS (Tom's Networking)</DIV> <DIV><A href="http://www.tomsnetworking.com/2006/08/01/cheap_fast_diy_raid_5_nas/index.html">http://www.tomsnetworking.com/2006/08/01/cheap_fast_diy_raid_5_nas/index.html</A></DIV> <DIV> </DIV> <DIV>One of the reasons the author decided to use hardware RAID 5 was that in the event of a failure, the ability to rebuild is in the hardware BIOS; it is basically OS independent. He also notes that recovering from a failure using Linux software RAID appeared difficult, and he couldn't easily figure out how to rebuild the array after a failure.</DIV> <DIV> </DIV> <DIV>My small file server at work is running Debian, Samba, and software RAID 1 (mirror), and I use webmin to manage it all. I had a disk failure last week, and it seemed pretty trivial to remove the damaged drive from the array using webmin. When I received my replacement drive in the mail a few days later, I just popped it in, powered up the PC, started webmin, and added the new drive back to the array, where it immediately started to rebuild. Took maybe 1/2 hour to rebuild the 80gig mirror.</DIV> <DIV> </DIV> <DIV>I have 2 250gig drives on my server at home that are not RAIDed, and I have had failures. Luckily, using SMART and Knoppix-type tools, I was able to retrieve my data to another drive before the damage became irretrievable. 
Now I'm looking at making a terabyte NAS at home and of course want to go on the cheap.</DIV> <DIV> </DIV> <DIV>So, given my experience and the ease of rebuilding an array with webmin, I have a couple of questions.</DIV> <DIV> </DIV> <DIV>Is the advantage of using hardware RAID over software RAID, as the author claims, all that great? </DIV> <DIV> </DIV> <DIV>I have a stack of PentiumII-400 machines, one of which I could turn into a RAID 5 NAS, but is the processor up to the task? </DIV> <DIV> </DIV> <DIV>Is 128MB of RAM sufficient for my small 5-user network? </DIV> <DIV> </DIV> <DIV>Should I go ahead and spend the extra $100 to get an IDE RAID card like the article suggests, or try the software solution? </DIV> <DIV> </DIV> <DIV>Is any particular OS/distro better for making a NAS? BSD, Debian, Ubuntu, etc.</DIV> <DIV> </DIV> <DIV>Would be curious to hear your opinions.</DIV> <DIV> </DIV> <DIV>Tim</DIV></DIV><BR><DIV style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; ">_______________________________________________</DIV><DIV style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; ">nmglug mailing list</DIV><DIV style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; "><A href="mailto:nmglug@nmglug.org">nmglug@nmglug.org</A></DIV><DIV style="margin-top: 0px; margin-right: 0px; margin-bottom: 0px; margin-left: 0px; "><A href="http://www.nmglug.org/mailman/listinfo/nmglug">http://www.nmglug.org/mailman/listinfo/nmglug</A></DIV> </BLOCKQUOTE></DIV><BR></DIV></BODY></HTML>