
Re: replacing /usr with a new mountpoint



Douglas Allan Tutty wrote:
On Thu, May 03, 2007 at 11:54:10AM +0200, Martin Marcher wrote:
On 5/3/07, Douglas Allan Tutty <dtutty@porchlight.ca> wrote:
Somewhere in the debian documentation is a warning that after going to
single-user mode a return to multi-user is not guaranteed to work.
Too bad I'm trying to do all of that without actually rebooting (more
a matter of "because it should be possible" than a requirement).

Reboot into single user (with the -s option if there isn't a grub menu
item already) so that you know nothing under /usr is being used, mv /usr
to /oldusr, fix fstab so that the new /usr filesystem mounts on /usr,
then shutdown -r.  Of course, be careful not to use any binaries that
reside under /usr.  Stick with straight bash and other stuff under /bin.
Use the full path to make sure.
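
In commands, that amounts to roughly the following (a sketch only; the
fstab line uses /dev/hdb1 as a made-up example device for the new
filesystem):

  # at the single-user shell, using only /bin and /sbin binaries
  /bin/mv /usr /oldusr
  /bin/mkdir /usr
  # edit /etc/fstab with an editor outside /usr (nano is /bin/nano on
  # Debian) so the new filesystem mounts on /usr, for example:
  #   /dev/hdb1   /usr   ext3   defaults   0   2
  /sbin/shutdown -r now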
All of this is done and the system already works with the new /usr
mountpoint.  I'd just like to regain the space without rebooting - to be
honest, this is the whole point of this exercise.


I'm not understanding.  Do you mean that you mounted the new /usr over
the old /usr without emptying it first?  If so, and you insist on not
rebooting, then at least stop X and as much else as you can (as a
precaution), then umount /usr, which will expose the full old /usr
directory tree again.  Then mv /usr /oldusr, mkdir /usr, fix owners and
permissions to match /oldusr, remount /usr, and, if everything is
working, rm -rf /oldusr.
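
Spelled out, that sequence would look something like this (just a
sketch, run from a text console with X and other services stopped; the
chown/chmod lines assume /oldusr turns out to be root-owned, mode 755,
which is the usual case):

  umount /usr          # exposes the old, underlying /usr tree
  mv /usr /oldusr
  mkdir /usr
  chown root:root /usr
  chmod 755 /usr       # match whatever 'ls -ld /oldusr' shows
  mount /usr           # remounts the new filesystem from fstab
  # only after confirming everything still works:
  rm -rf /oldusr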

Note that existing running apps that have files from /usr open will
continue to work, since an open file's data isn't actually freed until
the file is closed, even if it has been unlinked in the meantime.

Which will prevent umounting /usr in order to mv the underlying /usr to /oldusr.

This is why it's necessary to go to single user mode, which 'should' kill any process with open files in the /usr tree.


Good luck,

Doug.



Since it's necessary to go "single user" anyway, what's the difference between getting there from runlevel 2 versus rebooting to it? All users need to be told to save work and log off, in either case. The only diff I can see would be for a large server system that could take "forever" to reboot.

Anyhow, as an exercise, you might want to consider going to runlevel 1 as noted earlier, then using 'ps' to see if there are any running processes that should have died but didn't, and 'fuser' to see if any of them are using the /usr tree in any way. You should also save a copy of the 'ps' output to a file for future reference. You can then kill whatever needs killing to allow the umount to succeed.
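
For example (a sketch; the file name is arbitrary, and 'fuser -km' should only be used if you're sure nothing important is left):

  telinit 1                        # drop to runlevel 1 (single user)
  ps aux > /root/ps-single.txt     # save the listing for later comparison
  fuser -vm /usr                   # list processes still using the /usr filesystem
  # kill anything it reports by PID, or, more drastically:
  # fuser -km /usr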

Then do the 'umount/mv/remount/remove' steps described earlier to regain the space and return to runlevel 2.
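
That is, once the umount/mv/mkdir/remount steps have gone through cleanly, something like:

  mount | grep ' /usr '     # confirm the new filesystem is on /usr
  telinit 2                 # back to multi-user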

When you go back to runlevel 2, the rc scripts will try to start processes that are, in some cases, already running; you should see errors from those scripts, though it's possible for some to fail to detect the already-running process, resulting in two copies executing.

Getting the interplay of rc scripts for various runlevels *and* runlevel transitions right is an arcane art, quite difficult to master (I make no claims as to mastery, just the difficulty of achieving it;). So, if you find that some programs end up with two copies running (you can check by comparing with the file created from the 'ps' output, above), you can manually kill them.
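
To check for duplicates, something like this would do (a sketch; it assumes the runlevel 1 listing was saved as /root/ps-single.txt as above, and note that some repeated names, like getty or bash, are perfectly normal):

  ps aux > /root/ps-multi.txt
  diff /root/ps-single.txt /root/ps-multi.txt | less
  # or just look for repeated command names:
  ps -eo comm= | sort | uniq -d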

Bob
