File Compression methods

Dan Egli ddavidegli at gmail.com
Tue Oct 15 02:43:49 MDT 2013


On October 13, 2013, Nicholas Leippe wrote:

> How much of the 600GB is required by each user, each time they sit down
and

> log into one of these cloned machines?



The machines would be servers (or at least, automated processing stations).
Users wouldn't normally be logging into them. In fact, the only user
accounts they'll have on them are the root user and one standard user who
can log in to watch log files and do other non-administrative tasks (and of
course system users for programs that want to setuid after they start up).
They'll be completely headless unless there is a need to log in locally.
Even normal maintenance will be done remotely. They're more "set it and
forget it" boxes, really. The actual workstations that people will use are
already set up as diskless workstations using NFS, and they have a much
smaller data set, which is why the NFS setup could actually work.
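For context, a diskless-NFS setup like the one described usually amounts to an export on the server plus an nfsroot kernel parameter on the clients. This is a minimal sketch; the paths and addresses below are illustrative assumptions, not taken from this thread:

```shell
# On the NFS server: export a read-only root image to the workstation subnet.
# (/srv/nfsroot and 192.168.0.0/24 are hypothetical values.)
# /etc/exports:
#   /srv/nfsroot  192.168.0.0/24(ro,no_root_squash,no_subtree_check)
exportfs -ra            # reload exports after editing /etc/exports

# On each diskless client, the kernel mounts its root over NFS at boot,
# typically via a PXE/TFTP-supplied command line such as:
#   root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot ip=dhcp
```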


On Sun, Oct 13, 2013 at 3:53 AM, Nicholas Leippe <nick at leippe.com> wrote:

> How much of the 600GB is required by each user, each time they sit down and
> log in to one of these cloned machines?
>
> If it's typical--they only need the basic libs, shell, and dm--I would
> think NFS would be a great solution for this.
> You might consider using NFS with unionfs/aufs layered on top to store
> differences locally.
>
> Also, if you segregate the data by partition, only clone the root fs
> initially (for booting), you could NFS share the data so it's transferred
> on-demand (or if you must, clone it too, but doing that separately from the
> OS lets you do it at a later time and/or via a different mechanism even if
> you want).
>
> /*
> PLUG: http://plug.org, #utah on irc.freenode.net
> Unsubscribe: http://plug.org/mailman/options/plug
> Don't fear the penguin.
> */
>
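The union-mount idea Nicholas describes (he names unionfs/aufs; overlayfs, mainline since Linux 3.18, works the same way here) shares the read-only tree over NFS and keeps each machine's writes on local disk. A rough sketch, with hypothetical paths:

```shell
# Mount the shared tree read-only over NFS (server and path are hypothetical).
mount -t nfs -o ro 192.168.0.1:/srv/nfsroot /mnt/ro

# Local read-write layer; overlayfs requires an empty workdir on the
# same filesystem as upperdir.
mkdir -p /local/upper /local/work /merged

# Merge the layers: reads fall through to the NFS layer, writes land locally.
mount -t overlay overlay \
      -o lowerdir=/mnt/ro,upperdir=/local/upper,workdir=/local/work /merged
```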

