
Re: PROPOSAL for FHS: Mount points for CDs, floppies and alien OS



On Wed, 14 Jun 2000, Alan Cox wrote:

> > > Mounts should be by _volume_name_ or handy label not by device.
[I asked:]
> > Hmmm, do we then have /mnt/null for unnamed/unlabeled media?
> > Fall back on the device type and/or name?  Or what?
> 
> "or handy label". 

What sort of label is most handy under that scheme?  And what stays
handy and user-friendly when two volumes carry identical names?
(Imagine that somebody has just burned a copy of a "MyFiles" disc and
is running a verify outside the CD-burning program... or whatever other
example comes to mind.)  Why do I worry?  As a user, I don't really
want to have to guess where an automounter might put something.
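To make that worry concrete, here's a rough sketch of one possible
fallback scheme (Python, purely illustrative; the /mnt base, the
"unnamed-" prefix, and the "-2" suffixing are my own inventions, not
anything in the proposal):

    import os

    def pick_mountpoint(label, device, base="/mnt"):
        # Fall back on the device name for unlabeled media.
        name = label or "unnamed-" + os.path.basename(device)
        candidate = os.path.join(base, name)
        # Identical volume names draw -2, -3, ... suffixes, so a
        # second "MyFiles" disc lands at /mnt/MyFiles-2.
        n = 2
        while os.path.ismount(candidate):
            candidate = os.path.join(base, "%s-%d" % (name, n))
            n = n + 1
        return candidate

Workable, but the user still has to discover which suffix their disc
drew, which is exactly the guessing game I'd rather not play.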

> > > Another common location for remote mounts is /export/machinename/...
> > 
> > This is especially true of large, multi-user systems with lots
> > of NFS activity and/or a running automounter.

> Why is that a consideration?  On a SAN it's not always clear what is
> local or remote, and that can depend on 'configuration of the week' or
> even be changed by high-level I/O balancing tools 8)

That is, the hierarchy looks the same no matter whence its contents come,
correct? 
 
> I raised exports as a question if we should allow it too 

Hmmm... probably, since the two serve different purposes, and it's
desirable to accommodate as many reasonable schemes as possible.  For
software installation on a standalone workstation, it would be
convenient to deal with a simple (even if simplistic) standard
mountpoint.  For uniform, on-demand sharing, a different scheme may be
more convenient; a sketch of both follows.  As you point out, the line
between them can blur, and advantageously so.
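Concretely, the two schemes might look something like this (paths are
only for illustration, not a proposal):

    /mnt/cdrom                      the simple standalone mountpoint
    /exports/host-a/projects        per-host, on-demand sharing

and on a SAN the very same volume could plausibly surface under either.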

A fictional, complex example might be interesting (if not, just skip
this paragraph :)  Imagine, say, a future high-availability cluster
with shared SCSI buses: host-a:/dev/sdd and host-b:/dev/sdd are the
same physical device, and clustername:/<path>/volume is the NFS export.
[This example combines a bit of Tru64 clustering with Linux, is purely
hypothetical, and (being off the top of my head) potentially flawed.]
Now imagine host-a's SCSI controller dies, but both machines live
through the experience.  What happens to the mountpoints?  Obviously,
some additional undefined/hypothetical failover software is involved,
and external clients should keep seeing the same NFS export.  That
software needs to deal with host-a's mounts [and probably host-b's
too], hopefully in a way that makes compliance easy.
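A minimal sketch of what the takeover step might look like on the
surviving host (Python; every path and name here is hypothetical, and
it assumes /etc/exports on both hosts already lists the export):

    import subprocess

    def take_over_volume(device, mountpoint):
        # The device sits on the shared SCSI bus, so host-b can mount
        # what was host-a's disk -- at the SAME mountpoint host-a
        # used, keeping the NFS export path stable for clients.
        subprocess.check_call(["mount", device, mountpoint])
        # Re-sync the kernel's export table from /etc/exports.
        subprocess.check_call(["exportfs", "-ra"])

    # e.g. take_over_volume("/dev/sdd1", "/exports/clustername/volume")

The point being: if the spec's mountpoint naming is host-independent,
failover reduces to a remount, and the tool writer barely has to think
about it.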

Okay, "high level I/O balancing tools" is quicker, but I wanted to briefly
look at a scenario; someone writing such a tool should hopefully
be able to concentrate on the tool rather than worry about the spec.

With an /exports/host/<name> setup, it's tempting to suggest that
/mounts/name be a symbolic link to /exports/host/name.  I worry about
namespace collisions here, but that might turn out to be a feature:
you can repoint /mounts/name at whichever target is currently optimal.
Unless, of course, I haven't had enough coffee and I'm missing
something :-)
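The repointing itself is cheap and can even be done atomically, for
whatever that's worth (Python again, paths invented):

    import os

    def retarget(link, new_target):
        # Build the new symlink beside the old one, then rename it
        # into place; rename(2) is atomic, so a reader always sees
        # either the old target or the new one, never a missing link.
        tmp = link + ".new"
        os.symlink(new_target, tmp)
        os.rename(tmp, link)

    # e.g. retarget("/mounts/projects", "/exports/host-b/projects")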

Any better thoughts?

  -- John