
Re: mkisofs aborts but exit value is 0



Hi,

>>>>> I wrote:
>>>>> Of course, checkreading has to be done by a second
>>>>> computer meanwhile.
>>>> Joerg Schilling wrote:
>>>> Why do you believe that there is a difference?
>>> Volker Kuhlmann wrote:
>>> The first computer might not have enough bandwidth for burning and
>>> verifying.

My own machine can feed the burner via mkisofs at about
3.5x DVD speed, obviously limited by random access disk
performance. I.e. mkisofs can collect data fast enough to
keep the CPU 20% busy and the burner 90% busy.
There is enough bus bandwidth and CPU power left to
checksum a 10 MB/s stream simultaneously.
But see below. :(
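
For the record, the kind of pipeline I have in mind. A rough
sketch only, untested as written; the source directory, the
burner device and the temp paths are made up:

  # record the image size, then checksum the ISO stream while it
  # goes to the burner (bash process substitution, growisofs)
  blocks=$(mkisofs -R -J -quiet -print-size /srv/backup)
  mkisofs -R -J /srv/backup \
    | tee >(md5sum >/tmp/image.md5) \
    | growisofs -Z /dev/dvd=/dev/fd/0

  # later, preferably in the other drive or machine: read back
  # exactly the image size and compare with the recorded checksum
  dd if=/dev/dvd bs=2048 count="$blocks" | md5sum
  cat /tmp/image.md5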

>>> Also, when
>>> trying to ensure that a burned disk is readable in more than one drive,
>>> verifying it in a drive other than the one it was burnt in has been a
>>> good idea since day -1.

I can only agree.


>> Joerg Schilling wrote:
>> And how is this related to a filesystem snapshot?
> Volker Kuhlmann wrote:
> Not, I believe he was talking about checkreading the burnt disks, sorry
> if I misunderstood.

Possibly my fault.

I meant to be talking about the large time window of
a real-world DVD (or CD) level 0 backup. The remark about
checkreading on a second PC was mainly intended to
illustrate a time optimization which the operator can apply.

Although I rather wanted to brag about my own achievement
of resumable backups, the time window is indeed related
to snapshots in two respects:

1) Can I keep the snapshot up long enough?

My main concerns are a possible shutdown in the meantime,
and the disk space needed to keep two versions of a busy
filesystem over a few days. The changes, or rather the
saved old states from before the changes, have to be
stored somewhere, don't they?

2) Does a snapshot really decrease the probability
of backing up a file while it is in an ill state?
"Ill" in the sense that it is not in a valid persistent
state as its applications expect when they open it,
not "ill" in the sense of "a case for fsck".

I am still pondering the stochastic reasons why a
sudden snapshot should catch fewer randomly ill (see
above) file states than the slow scan done by a reading
backup program.
Relevant factors may be the byte range touched by a file
change between two consistent (backupable) states, the
time needed to perform that change, and the time the
backup program needs to read the changed area completely.
But in part these factors apply to a snapshot, too.
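
Just to fix ideas, a toy calculation with invented numbers:
suppose a file gets rewritten every T = 600 s and the rewrite
itself takes w = 1 s. A momentary snapshot catches the file in
mid-rewrite with probability about w/T = 1/600. A backup program
which needs r = 9 s to read that file gets a spoiled copy whenever
a rewrite overlaps its read, i.e. with probability roughly
(w + r)/T = 10/600. The total duration of the backup run shows up
in neither number; only the per-file windows do.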

I get the impression that it is not the overall size of
the backup time window which is essential for the risk.
It seems to be about the smaller time windows opened
by the applications and the backup program while they
operate on a particular file or fileset.
So it helps consistency to burn at the highest possible
speed. If one has fast independent disks, it might be
worthwhile to buffer the DVD image on disk in order to
increase the reading speed.
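
In the simplest form that would be (again only a sketch; the
scratch path and device are placeholders, and the scratch disk
should be independent of the source disk):

  # write the image to fast scratch space first, then burn it
  # at full speed and keep a checksum for later verification
  mkisofs -R -J -o /scratch/backup.iso /srv/backup
  md5sum /scratch/backup.iso >/scratch/backup.md5
  growisofs -Z /dev/dvd=/scratch/backup.iso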


Of course, the backup program's life is much easier
if it does not have to cope with vanishing files which
it saw in its planning phase but cannot find any more
when it is time to read them.
In that respect, snapshots are very desirable.

I wish I knew how to do one.
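
From what I read, on Linux with LVM2 it would go roughly like
this. Untested by me; the volume names are invented, and the
snapshot needs space only for the blocks that change while it
exists:

  # create a copy-on-write snapshot of the data volume
  lvcreate --size 5G --snapshot --name backupsnap /dev/vg0/data
  mount -o ro /dev/vg0/backupsnap /mnt/snap
  # ... run mkisofs / the backup against /mnt/snap ...
  umount /mnt/snap
  lvremove -f /dev/vg0/backupsnap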

Another advantage: a snapshot by itself does not solve
the problem of files belonging to busy databases, but one
can shut down the database server, take the snapshot, and
restart the database immediately.
Without a snapshot, one has to exclude the database
from the backup (or be careful at restore time) and
would have to back up the database files separately
while the service stays down for that whole time.

But actually it is better to have the DB server produce
a dump in a portable format and to back up that dump.
In that case, the advantage of a snapshot diminishes.
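
E.g. something like this, assuming PostgreSQL; the database
name and paths are placeholders:

  # let the server write a consistent dump while it keeps running
  pg_dump --format=custom mydb >/srv/backup/mydb.dump
  # MySQL users would do roughly:
  # mysqldump --single-transaction mydb >/srv/backup/mydb.sql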


A snapshot does not keep you safe if you don't know
exactly which services need to be shut down
temporarily.

The real solution would be a virtual single-user mode.
Hard to imagine how this could be achieved, though.
(Atomically fork the complete running system, including
 all filesystems and processes, shut one virtual copy
 down to single-user mode, do the backup there, and end
 that fork. I guess one would need virtual machines for
 that. Are any mainframers around here? Is this realistic?
 What to do with interactive applications?)

... hmm. How do I check whether a file is currently
opened by any process on the system? lsof?
How does it do that?
"... reads kernel memory in its search for open files ..."
"Search" does not sound good, and "kernel memory" does
not sound good either.
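
At least the usage looks simple enough, assuming lsof resp.
fuser are installed (the path is just an example):

  # list the processes which currently have this file open
  lsof /home/thomas/some.file
  # fuser from psmisc does a similar job
  fuser -v /home/thomas/some.file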


> Bill Davidsen wrote:
> As long as it is read on another drive the check is effective. While
> bandwidth problems are possible, they are pretty rare with recent
> hardware. DVD doesn't run very fast.

Now I just get jealous.

My system has a DVD burner and a DVD-ROM drive.
Since it has only two IDE controllers and the
system disk is IDE, both DVD drives have to share
the other IDE controller.
Either drive alone is faster than both together.

Up to about 8x CD speed they do not hamper each
other much. But at DVD speed or 10x CD I begin to get
(harmless) buffer underruns on the burner and
short stalls of the reading process.

It's suboptimal.
But I have unavoidable reasons to use vanilla SuSE
on a 700 dollar PC.


Have a nice day :)

Thomas


