Re: Issues burning BD discs from the command line - write failures
> Ehh, I'm very sure I've seen it with DVDs too, and the read-ahead size
> there was larger.
In that case we should try to reproduce the problem.
At least for the Linux kernel we would need another explanation of
why it misperceives the size of the medium in the first place.
In the case of CD it is obviously the MMC-compliant inclusion of two
non-data blocks at the end of TAO tracks.
The block device driver does know (at least roughly) the size of a CD.
I believe I can see the size determination in my old copy of the
kernel's drivers/scsi/sr.c:

  static void get_sectorsize(struct scsi_cd *cd)
  ...
        cmd[0] = READ_CAPACITY;
  ...
        the_result = scsi_execute_req(cd->device, cmd, DMA_FROM_DEVICE,
                                      buffer, 8, NULL, SR_TIMEOUT,
                                      ...);
  ...
        cd->capacity = 1 + ((buffer[0] << 24) |
                            (buffer[1] << 16) |
                            (buffer[2] << 8) |
                            buffer[3]);
This code matches the MMC description of the result of SCSI command
25h READ CAPACITY, which is supposed to tell the "capacity [...] with
respect to reading operations".
In the context of MMC, reading is not only reading of data, but also reading
of non-data sectors. Thus, READ CAPACITY counts the two non-data sectors
of TAO as readable. (Just not by command 2Bh READ(10), but by BEh READ CD.)
As said, the fault of Linux is not to handle the last two blocks
of CD tracks specially, or rather not to retry with single-block
reads after reading the last read-ahead chunk has failed.
It has to be aware that those two blocks may or may not be part of
the track's payload data. Some trial and error is unavoidable here.
But the error should not be forwarded to the user and it should not
eat up more than the two questionable blocks.
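The policy proposed here could be sketched roughly as follows. Pure
illustration, not a kernel patch: read_block_fn, track_end, sim_read
and the fixed 2048-byte block size are assumptions of the sketch.

```c
#include <stddef.h>

/* Hypothetical single-block read primitive: returns 0 on success. */
typedef int (*read_block_fn)(unsigned lba, unsigned char *buf);

/* Read the blocks [lba, lba+count) into buf (2048 bytes per block).
   Forgive a read failure only if it falls within the last two
   blocks before track_end (possible TAO run-out sectors) and stop
   quietly there. Returns the number of blocks delivered, or -1 on
   a genuine medium error that deserves to reach the user. */
static int read_with_runout_tolerance(read_block_fn read_block,
                                      unsigned lba, unsigned count,
                                      unsigned track_end,
                                      unsigned char *buf)
{
    unsigned i, got = 0;

    for (i = 0; i < count; i++) {
        if (read_block(lba + i, buf + (size_t) 2048 * i) == 0) {
            got++;
            continue;
        }
        if (lba + i >= track_end - 2)
            break;          /* questionable run-out block: no error */
        return -1;          /* real error in the payload area */
    }
    return (int) got;
}

/* Simulated drive for trying it out: a 1000-block track whose last
   two blocks (998, 999) are unreadable, as with TAO run-out. */
static int sim_read(unsigned lba, unsigned char *buf)
{
    (void) buf;
    return lba >= 998 ? -1 : 0;
}
```

A read of blocks 990 to 999 on the simulated track then yields the
8 good blocks without reporting an error.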
> > Nevertheless, that is a _read_ problem. Dale has a problem with
> > write errors.
> Sure, but you asked him to test afterwards by reading back.
I see. Well, if there is a read-ahead bug with DVD then the
checkreading by dd could indeed produce false i/o errors at the
very end of the track.
A safer proposal would then be
xorriso -outdev /dev/sr0 -check_media use=outdev --
If the medium is DVD+RW or BD-RE then there will be trailing stuff
anyway. One will have again to compute the size of the valid payload
like with my dd proposal, and then use -check_media option max_lba= :
xorriso -outdev /dev/sr0 -check_media max_lba=1700758 use=outdev --
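How to obtain such a max_lba number: if the medium content is an ISO
9660 filesystem, the image size in 2048-byte blocks stands in the
Primary Volume Descriptor, i.e. in block 16 of the medium, at byte
offset 80 (ECMA-119 stores it in both byte orders). A hedged sketch
of the decoding; reading block 16 from the device (e.g. by dd with
bs=2048 skip=16 count=1) is left to the caller, and
iso_volume_blocks() is a made-up name:

```c
#include <stdint.h>

/* Extract the volume space size (count of 2048-byte blocks) from a
   buffer holding block 16 of the medium, which is the ISO 9660
   Primary Volume Descriptor. ECMA-119 records the size both-endian
   at byte offset 80; this takes the little-endian copy.
   max_lba for -check_media is then size - 1. */
static uint32_t iso_volume_blocks(const uint8_t pvd[2048])
{
    return  (uint32_t) pvd[80]
         | ((uint32_t) pvd[81] << 8)
         | ((uint32_t) pvd[82] << 16)
         | ((uint32_t) pvd[83] << 24);
}
```

For an image of 1700759 blocks this yields max_lba=1700758, as in
the command above.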
(-outdev has to be used if the medium content is not an ISO 9660
filesystem. No writing will happen, because no xorriso command for
creating or changing an ISO image is used here. Moreover, xorriso
will not append data to a non-blank medium which it did not acquire
as input drive. So this is safe.)
> > mkudffs and cp.
> > But for what, particularly ?
> Random-file-access backups.
That's the reason why I began to develop xorriso.
It can record ACLs and xattr, can register MD5 checksums of medium
and of each single data file, does incremental backups based on
either MD5 or on inode properties, and can checkread its own backups
without the need for seeing the original files.
Example from man xorriso:
This changes the directory trees /projects and /personal_mail in the
ISO image so that they become exact copies of their disk counterparts.
ISO file objects get created, deleted or get their attributes adjusted.
ACL, xattr, hard links and MD5 checksums will be recorded. Accelerated
comparison is enabled at the expense of potentially larger backup size.
Only media with the expected volume ID or blank media are accepted.
Files with names matching *.o or *.swp get excluded explicitly.
When done with writing, the new session gets checked against its
recorded MD5 sums:
  $ xorriso \
       -abort_on FATAL \
       -for_backup -disk_dev_ino on \
       -assert_volid 'PROJECTS_MAIL_*' FATAL \
       -dev /dev/sr0 \
       -volid PROJECTS_MAIL_"$(date +%Y_%m_%d_%H%M%S)" \
       -not_leaf '*.o' -not_leaf '*.swp' \
       -update_r /home/thomas/projects /projects \
       -update_r /home/thomas/personal_mail /personal_mail \
       -commit -toc -check_md5 FAILURE -- -eject all
To be used several times on the same medium, whenever an update of
the two disk trees is desired. Begin with a blank medium and update
it until a run fails gracefully due to lack of remaining space; then
start over with a fresh blank medium.
[...] To apply zisofs compression to those data files which get
newly copied from the local filesystem, insert these commands
immediately before -commit :
-hardlinks perform_update \
-find / -type f -pending_data -exec set_filter --zisofs -- \
zisofs needs zlib and its development headers at compile time of xorriso.
Linux kernels usually detect zisofs and uncompress automatically.
> but wouldn't mind burning some larger disks.
BD-RE seems technically fine. Still a bit expensive, though.
> I used
> ext2 in the past, useless for reading from, but good enough for dd'ing
> back to disk before reading. With larger sizes that becomes a bit
xorriso has extraction commands which can sort extraction by the
block addresses of the files. This reads at the same speed as dd but
can skip over unwanted stuff.
First check which backup sessions are on the medium:
$ xorriso -outdev /dev/sr0 -toc
Then enable restoring of ACL, xattr and hard links. Load the desired
session and copy the file trees to disk. Avoid creating
/home/thomas/restored without rwx-permission.
  $ xorriso -for_backup \
       -load volid 'PROJECTS_MAIL_2008_06_19*' \
       -indev /dev/sr0 \
       -osirrox on:auto_chmod_on:sort_lba_on \
       -chmod u+rwx / -- \
       -extract /projects /home/thomas/restored/projects \
       -extract /personal_mail /home/thomas/restored/personal_mail \
       -rollback_end
The final command -rollback_end prevents an error message about the
altered image being discarded.
A potential problem is that block address sorting might need multiple
visits of the same directory, which might have restrictive
w-permissions in the backup.
So one has to allow xorriso to acquire w-permission on disk directories
to which it wants to restore files. Not a problem if the directory
itself stems from the backup. But unsafe if it already exists on
disk and shall really be write-protected.
Well, dd is not overly safe either.
Have a nice day :)