Hard Disk IDs in Linux

Dan Egli ddavidegli at gmail.com
Sat Mar 16 03:04:28 MDT 2013


> For a home server I recommend RAID1 or RAID10 over RAID6.

Really? I guess between RAID6 and RAID10 it's not much different, but what
about someone who has, say, six or eight disks in the server? I'm curious
why you'd still recommend RAID10. Hypothetically speaking, let's assume I
wanted a server big enough to hold one year of data downloaded from the
net at approx 5 Mbps (with TCP overhead, that comes to roughly 1 MB every
2 seconds), 24/7/365. That's nearly 16 TB. A RAID6 could handle that with
six 4 TB drives; a RAID10 would need eight. I admit either is possible to
throw into a full tower case, but why spend the extra money on two more
drives just to build the RAID10? I am genuinely curious.
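For what it's worth, I sanity-checked that math with a quick Python sketch.
The RAID6 and RAID10 usable-capacity figures below are just the usual
"n - 2 drives" and "half the raw space" rules of thumb, nothing more exact:

    # Rough capacity math for the scenario above (ignores filesystem
    # overhead and the TB-vs-TiB difference).

    SECONDS_PER_YEAR = 60 * 60 * 24 * 365
    rate_mb_per_s = 0.5  # roughly 1 MB every 2 seconds at ~5 Mbps

    data_tb_per_year = rate_mb_per_s * SECONDS_PER_YEAR / 1_000_000
    print(f"data per year: ~{data_tb_per_year:.1f} TB")  # ~15.8 TB

    def raid6_usable(drives, drive_tb):
        # RAID6 spends two drives' worth of space on parity
        return (drives - 2) * drive_tb

    def raid10_usable(drives, drive_tb):
        # RAID10 mirrors everything, so only half the raw space is usable
        return drives // 2 * drive_tb

    print(raid6_usable(6, 4))   # 16 TB usable from six 4 TB drives
    print(raid10_usable(8, 4))  # 16 TB usable from eight 4 TB drives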





> What kind of chassis? Most good chassis will do hardware RAID and
> export the volumes. Though there are some bare disk arrays that simply
> export devices as SCSI LUNs. Either way you need a chassis with a power
> supply.



Well, that's not really an issue anymore, because I finally realized I
could wear my boss down with some basic math. I showed him, using simple
multiplication, how long it would take to fill the 120 TB array he wanted
(more than eight years to reach 25% capacity), and he FINALLY agreed that
we could do it much cheaper and easier by building a full tower PC and
filling it with hard disk drives. So we're going to order the parts soon.
Thank goodness for that. I'm still not sure which chassis he wanted; I
think he was thinking of going with a company like Aberdeen. I have
insufficient experience to say whether or not that was a good idea, but
thankfully it's a moot point now. I imagine we can fit about 10 disks in a
large case (I still have to research cases to find one that will hold as
many hard disks as possible), and make a RAID out of them.
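The argument itself was really just one multiplication. Something along
the lines of this sketch is all it took; note that the 3.5 TB/year ingest
rate below is only a placeholder for illustration, not the actual number
we plugged in:

    # Rough "time to fill" estimate: how long until an array reaches a
    # given fraction of its capacity at a steady ingest rate.

    def years_to_fill(capacity_tb, fraction, ingest_tb_per_year):
        return capacity_tb * fraction / ingest_tb_per_year

    # 3.5 TB/year is a hypothetical placeholder rate.
    print(f"{years_to_fill(120, 0.25, 3.5):.1f} years to reach 25% of 120 TB")
    # -> 8.6 years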





> I'm not sure how I feel about ZFS... ZFS is not a supported Linux file
> system. It's third-party, and licensing conflicts mean you have to
> compile the modules yourself every time the kernel is updated. Though
> this is largely automated these days with the dkms system that many
> distros use. And maybe there are binary repositories.

> I feel that ZFS in and of itself is stable and production-ready (I used
> it for years on Solaris without issue). But I'm not sure of the status
> of the zfs-on-linux project.



So what would you use? Be aware that he's REALLY keen on using a file
system that includes journaling and data deduplication. I don't know how
easy it's going to be to change his mind; it took nearly a week of
arguments before I got him to abandon the rackmount server idea. I'm well
aware of many of the advantages of file systems like ext4 and JFS, but try
convincing my boss of that. He's one of those people who hears about some
new idea, likes it, and wants it implemented, despite not knowing how it
works internally or what would be involved in implementing it.



> I get the impression your boss thinks the large disk array idea is on
> the same order of complexity as throwing disks in a box and setting up
> a software RAID.



EXACTLY. That's PRECISELY what he thought. He's all "Linux supports
software RAID! You've shown yourself that it can handle the number of
drives we'll have in the server. So why not just throw the drives in,
connect them to the various controller cables, and build the RAID?" and he
didn't want to hear that it would likely be MUCH more complicated than
that. But now that he's finally agreed to a full tower, I just have to be
sure we order either a motherboard with enough controller ports on it, or
a separate controller card to handle whatever the motherboard can't.



What kind of interface would you guys recommend? Now that he's resigned
himself to a PC tower case, he's thinking standard SATA. While I'm sure
that would work, is there a better idea?


Thanks!

