Re: LSB init scripts and multiple lines of output
On 6/1/06, martin f krafft <email@example.com> wrote:
I am faced with the problem of how to tackle multiline output from
an init.d script, which I have just converted to LSB. Since the
package is mdadm and RAID is kinda essential to those who have it
configured, I'd rather not hide information but give the user the
full picture.
In my ideal world, this is what it would look like:
Starting RAID devices ...
/dev/md0 has been started with 3 drives.
/dev/md1 has been started with 3 drives.
/dev/md2 assembled from 2 drives - need all 3 to start it
/dev/md3 assembled from 1 drive - not enough to start the array.
/dev/md4 has been started with 3 drives.
... done assembling RAID devices: failed.
I don't seem to be able to realise this with lsb-base, nor does it
seem that it even provides for this. The alternative -- all on one
line -- just seems rather uninviting:
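For what it's worth, the one-line style can at least be produced
cleanly with the lsb-base progress helpers. A minimal sketch, using
simplified stand-ins for log_daemon_msg / log_progress_msg /
log_end_msg (the real functions live in /lib/lsb/init-functions) and
invented per-device statuses:

```shell
#!/bin/sh
# Stand-ins for the real lsb-base helpers from /lib/lsb/init-functions;
# simplified here so the sketch runs anywhere.
log_daemon_msg()   { printf '%s:' "$1"; }
log_progress_msg() { printf ' %s' "$*"; }
log_end_msg()      { if [ "$1" -eq 0 ]; then echo ' done.'; else echo ' failed!'; fi; return "$1"; }

assemble_all() {
    log_daemon_msg "Assembling RAID devices"
    ret=0
    for md in md0 md1 md2 md3 md4; do
        # a real script would run mdadm --assemble "/dev/$md" here;
        # the statuses below are invented for illustration
        case $md in
            md3) log_progress_msg "$md(failed)"; ret=1 ;;
            *)   log_progress_msg "$md" ;;
        esac
    done
    log_end_msg $ret
}

assemble_all || :   # nonzero status signals the failure to the caller
```

With the real lsb-base functions the formatting differs slightly, but
the shape is the same: one line, one short token per device.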
Starting RAID devices ... /dev/md0 has been started with 3 drives,
/dev/md1 has been started with 3 drives, /dev/md2 assembled from
2 drives - need all 3 to start it, /dev/md3 assembled from 1 drive
- not enough to start the array, /dev/md4 has been started with
3 drives. failed.
Generally, I would not have a problem doing something like
Starting RAID devices ... failed (see log for details).
But the problem is quite simply that by the time the script runs,
/var may not be there, and neither is /usr/bin/logger.
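One workaround -- a sketch only, not what the script currently does --
is to buffer the verbose messages and flush them to logger(8) if it is
available, falling back to the console otherwise. The LOGGER variable
and the log_detail/flush_details names are my invention:

```shell
#!/bin/sh
# Buffer verbose per-device messages; flush to logger(8) if present,
# else to stderr. LOGGER, log_detail and flush_details are invented
# names, not part of any existing script.
LOGGER="${LOGGER:-/usr/bin/logger}"
BUFFER=""

log_detail() {
    BUFFER="${BUFFER}$1
"
}

flush_details() {
    [ -n "$BUFFER" ] || return 0
    if [ -x "$LOGGER" ]; then
        printf '%s' "$BUFFER" | "$LOGGER" -t mdadm-raid
    else
        printf '%s' "$BUFFER" >&2
    fi
    BUFFER=""
}

log_detail "/dev/md2 assembled from 2 drives - need all 3 to start it"
flush_details
```

The console fallback still clutters the boot output, of course, but at
least the detail isn't silently lost when /var isn't mounted yet.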
So what to do?
My current approach, which is to map short terms to the long errors,
is just too much of an obfuscating hack, and it runs to more than 80
characters as well:
Starting RAID devices ... md0 started, md1 started, md2
degraded+started, md3 degraded+failed, md4 started ... failed.
Starting RAID devices ... md0 ok, md1 ok, md2 2/3, md3 failed, md4
ok ... failed
I would go and check why just 2 out of 3 disks are ok in md2 and why
md3 failed. The only information missing from the output above is
whether md3 failed with 0 or 1 disks ok.
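That compact style is easy to generate mechanically, and printing the
found/needed count for every partial assembly would also answer the
0-or-1-disk question above. A sketch with invented inputs (deriving
the counts from mdadm's actual output is not shown):

```shell
#!/bin/sh
# Map a device's found/needed drive counts to a short status token.
# The counts are invented inputs; parsing them out of mdadm is not shown.
short_status() {
    dev=$1; found=$2; needed=$3
    if [ "$found" -ge "$needed" ]; then
        echo "$dev ok"
    elif [ "$found" -gt 0 ]; then
        echo "$dev $found/$needed"   # assembled but short of drives
    else
        echo "$dev failed"
    fi
}

line="Starting RAID devices ..."
for spec in "md0 3 3" "md1 3 3" "md2 2 3" "md3 1 3" "md4 3 3"; do
    set -- $spec
    line="$line $(short_status "$1" "$2" "$3"),"
done
echo "${line%,}"
# prints: Starting RAID devices ... md0 ok, md1 ok, md2 2/3, md3 1/3, md4 ok
```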
I really think that all those multiline messages are annoying and hard
to debug, since they show up not just in RAID services but almost
everywhere. If the service 'foo' isn't starting and you have no idea
why, because there's too much stuff to read during boot, it is easier
to just look at 'md3 failed' and associate it with the mount point
that hosts the files for that service. Unfortunately it seems that
common sense says otherwise, and people keep populating the boot
output more and more; as an admin it isn't useful for me, really.