Hard Disk IDs in Linux

Dan Egli ddavidegli at gmail.com
Tue Mar 19 00:38:36 MDT 2013


Correct. The idea was to create a large storage point where everything
could be archived and referenced back to the projects as needed. My boss
was convinced that the projects were much larger than they are (he was
thinking in the high hundreds of gigabytes per project, when in fact it's
usually closer to the low tens), which is why he wanted the huge server
(and boy did it take some doing to get this information out of him.
Pulling an angry alligator's teeth would have been simpler!) and why it
took so much work and finally some math to convince him we didn't need one
that big. But either way, considering it's going to be very rare (less
than 1% chance) for an individual project to change once it's archived, we
really don't need snapshots.


At this point he's fairly convinced that we can do what we need with an
AMD-based motherboard (since they have, if memory serves, 8 SATA ports)
and an extra PCIe controller to handle the extra drives. It's still not
the solution I would have preferred, but it's far more acceptable than the
huge 8U/32x 4TB HDD setup he wanted at first.


I still wonder what file system most people would recommend for their
media server, though. Once I get back to Salt Lake I'm going to set up a
media server. I don't plan on using the Pi (although it would work) simply
because I plan on it having about four 3TB HDDs. I do want some failover
protection, since a lot of the media will be downloaded movies and games
or other software that I buy from various distributors, and I don't want
to take the time to re-download it all. It's going to become the PXE boot
host for the rest of my network, so anything that any computer on the
network downloads gets stored on the server. Yes, a good backup can solve
this issue too, but that would require external hard disks or some really
creative use of compression and re-writable Blu-Ray discs. Even dual-layer
DVDs only go up to about 9GB each. I don't recall exactly how large BD-R
discs are, but they hold a lot more than 9GB. Yes, offsite backup would
work too, but then I'm back to the time of downloading everything off the
'net again, even if it is from a central location. That's something I'm
trying to avoid in my home server/media server setup.
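To put rough numbers on the optical-disc option above: the message gives
about 9GB for a dual-layer DVD; single-layer BD-R is commonly cited as
25GB, though that figure is an assumption here, not something from the
thread. A quick sanity check on how many discs 6TB would take:

```python
# Rough disc-count arithmetic for backing up ~6TB of media.
# 9GB/disc (dual-layer DVD) comes from the message; 25GB/disc
# (single-layer BD-R) is an assumed figure for illustration.
import math

data_gb = 6000        # ~6TB of data to back up
dvd_dl_gb = 9         # dual-layer DVD capacity, per the message
bd_r_gb = 25          # single-layer BD-R capacity (assumed)

dvds = math.ceil(data_gb / dvd_dl_gb)
bds = math.ceil(data_gb / bd_r_gb)
print(dvds, bds)      # roughly 667 DVDs vs. 240 BD-Rs
```

Either way, it's a lot of discs, which is why external drives or a second
server tend to win for backups at this scale.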


If anyone has any other suggestions for good backup methods for up to 6TB
of data I'm all ears, as well as advice on what would be the best file
system. For me, data deduplication isn't an issue. I don't know enough
about Btrfs to say anything about it; the only experience I've had with it
is that I've heard of it. I suppose XFS or Ext4 would work too. Since I am
rarely going to upgrade the kernel, I also don't have a problem with
compiling the ZFS modules into the kernel to use ZFS. Others here
undoubtedly have more experience on this than I do, so I bow to your
wisdom and ask advice. :)


--- Dan


On Tue, Mar 12, 2013 at 9:35 PM, Michael Torrie <torriem at gmail.com> wrote:

> On 03/12/2013 12:28 AM, Dan Egli wrote:
> > The thing I was trying to accomplish with the larger array was the large
> > amount of disk space available. I.e. if I fill the enclosure with 32 4TB
> > drives and make a single raid 6 I wind up with 120TB of available
> storage.
> > On the other hand, if I were to make four equal arrays of 8 drives each
> > in raid 6 then I lose not just 8TB total but 8TB per array, or 32TB total,
> > leaving me with 96TB, which is still a good amount of storage, but also
> > quite literally only 75% of the disk capacity. If anyone knows a good way
> > to increase the disk capacity availability to raise it above 75%
> > (preferably back up to the 90% range) I'm all ears.
>
> The point of RAID is not to give you a large amount of storage, first
> and foremost.  I use it knowing I'm giving up raw storage for this
> security.  So forget about 90%.  You simply can't get 90% usable storage
> while still having any realistic amount of the redundancy and security
> RAID provides.
>
> Personally, if I had that many disks, I would do RAID10.  Basically
> that's striping across pairs of RAID1 disks.  Or use sets of 4 disks in
> a RAID-6 setup, and stripe across them.  Same efficiency as RAID10, but
> you can lose up to 2 out of any 4 disks.  Either way, 50% efficiency,
> but I'd sleep way better at night.
>
> With 32 disks the failure statistics mean you will be replacing disks on
> a fairly regular basis.
>
> Now I'm not sure why you need 120 TB in your home.  Or is this disk
> array of yours going to be for work?  Are you going to put the disks in
> a fibre channel array?  iSCSI?  Are you going to use some sort of fabric
> to connect the drives to your computers?  What file system do you plan
> to use?
>
> /*
> PLUG: http://plug.org, #utah on irc.freenode.net
> Unsubscribe: http://plug.org/mailman/options/plug
> Don't fear the penguin.
> */
>
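For reference, the usable-capacity figures traded in the quoted exchange
(120TB, 96TB, 75%, 50%) follow from simple arithmetic: RAID6 gives up two
drives' worth of space per array, RAID10 gives up half. A short sketch,
assuming 4TB drives as in the thread:

```python
# Usable capacity for the layouts discussed in the quoted message:
# RAID6 loses 2 drives' capacity per array; RAID10 keeps half.
def raid6_usable(drives, size_tb):
    return (drives - 2) * size_tb

drive_tb = 4
total_tb = 32 * drive_tb                      # 32 drives of 4TB = 128TB raw

one_big = raid6_usable(32, drive_tb)          # single 32-drive RAID6
four_arrays = 4 * raid6_usable(8, drive_tb)   # four 8-drive RAID6 arrays
raid10 = total_tb // 2                        # mirrored pairs, striped

print(one_big, four_arrays, raid10)           # 120 96 64
print(four_arrays / total_tb)                 # 0.75 usable fraction
```

The numbers confirm the point made in the reply: more redundancy always
costs usable fraction, and 90% usable isn't reachable with meaningful
protection at this drive count.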