RAID 5

Nicholas Leippe nick at byu.edu
Mon Dec 5 09:18:39 MST 2005


On Friday 02 December 2005 05:22 pm, Shane Hathaway wrote:
> While your point stands, the calculation is imprecise.  By this logic
> (1/1000 * 2 = 1/500), flipping a coin twice guarantees the coin will
> land on both sides, which observation disproves:
>
>    1/2 * 2 = 1
>

Yes, my stats couldn't be rustier.  I'm glad it was at least a close 
approximation.  I'm surprised Steve Meyers didn't catch me first. ;)
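
For what it's worth, a quick sanity check (Python, purely illustrative; the
1/1000 figure is just the per-drive failure estimate from earlier in the
thread):

# Compare the "n * p" shortcut against the exact probability that at
# least one of n independent drives fails: 1 - (1 - p)^n.
def naive(p, n):
    return n * p

def exact(p, n):
    return 1 - (1 - p) ** n

print(naive(1 / 1000.0, 2))   # 0.002     (= 1/500)
print(exact(1 / 1000.0, 2))   # ~0.001999 (= 1999/1000000)

# With a large p the shortcut falls apart, as the coin example shows:
print(naive(0.5, 2))          # 1.0 -- "guaranteed" failure, which is wrong
print(exact(0.5, 2))          # 0.75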

> However, your method yields a close approximation for small fractions.
> A simple but correct way to compute the reliability of a RAID 0 array is
> to subtract the probability of each drive failing from 1, yielding the
> probability of each drive surviving, then multiply those probabilities
> together to figure out the probability of the set surviving, then
> subtract from 1 again to figure out the probability of the set failing.
>   IOW:
>
> 1 - (1 - 1/1000) ^ 2 = 1999 / 1000000
>
> ... which is really close to 1/500.  The difference matters when you
> talk about a longer period of time than the 1/1000 estimate implies.
>
> BTW, has anyone tried RAID 6?  It's in recent Linux kernels.  It claims
> to survive the loss of any two drives in a set.
>

I have been very curious to know more about RAID 6 performance.  From my 
limited knowledge of it, it can't perform better than RAID 5 for writes (I 
wouldn't think reads would be impacted by any significant amount).  But 
with the added fault tolerance, it may be well worth it.
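
Back-of-the-envelope, assuming the usual read-modify-write path for small
writes (RAID 5 updates one data block plus one parity block; RAID 6 updates
one data block plus two parity blocks), the second parity adds roughly 50%
more I/Os per small write.  A rough sketch, not a benchmark:

# Rough I/O count for one small (read-modify-write) update, ignoring
# caching and full-stripe writes.
def rmw_ios(parity_blocks):
    reads = 1 + parity_blocks    # read old data + old parity block(s)
    writes = 1 + parity_blocks   # write new data + new parity block(s)
    return reads + writes

print("RAID 5: %d I/Os" % rmw_ios(1))  # 4
print("RAID 6: %d I/Os" % rmw_ios(2))  # 6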

> If anyone's interested, I've written about storage reliability in my
> blog, although I'm only discussing theory, not practice.
>
> http://hathawaymix.org/Weblog/2005-10-26

That's a nice write-up. Thanks.

-- 
Respectfully,

Nicholas Leippe
Sales Team Automation, LLC
1335 West 1650 North, Suite C
Springville, UT  84663 +1 801.853.4090
http://www.salesteamautomation.com


