
Re: Problem mounting encrypted blu-ray disc or image



On Sat, 9 Jul 2022, B.M. wrote:

Verifying that your procedure with two UDF images is not the culprit would
help even if the result is boringly ok, as we expect. (Or we are in for
a surprise ...)

I don't have two UDF images.

I've not been following this closely, but I do something very similar and
have never had a problem.

However, immediately after burning the disc I verify it like this:


# Checksum of the image file that was burned.
fileSHA=$( sha1sum "$UDFIMAGE" | cut -d' ' -f1 )

# Checksum of the disc, reading exactly $maxsize 1-KiB blocks so the read
# stops at the end of the image rather than the end of the medium.
cdromSHA=$( dd status=progress if=/dev/cdrom bs=1k count=$maxsize |
            sha1sum | cut -d' ' -f1 )

STATUS=0

# Any mismatch means the disc does not reproduce the image faithfully.
[[ "$fileSHA" != "$cdromSHA" ]] && STATUS=1
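
One plausible way to set $maxsize - assuming it should equal the image
size in 1-KiB blocks - is:

# Hypothetical derivation: image size in bytes, converted to 1-KiB blocks.
maxsize=$(( $(stat -c %s "$UDFIMAGE") / 1024 ))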


It's unusual, but I have had instances where the burn has completed
without any issues but the verify has failed. When that happened I got
several failures close together - I've assumed faulty discs.

I write slightly more often than once a month on average and I'm now on
disc 90 - nearly 7 years (prior to that I was using DVD) - and I have
never had an issue accessing old backups (which I do from time to time).


Tim


In my script I create a file, put an encrypted UDF filesystem into it and
start writing compressed files into it. Unfortunately it can happen (and
has happened in the past) that the filesystem gets filled up completely.
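
In outline the setup part does this (a simplified sketch; LUKS and the
exact mkudffs call are illustrative assumptions, not copied from the
script):

  IMGFILE=/home/TMP_BKP/backup.img
  IMGSIZE=24064000K

  truncate -s $IMGSIZE $IMGFILE               # sparse image file
  cryptsetup luksFormat $IMGFILE              # encryption layer (assumption: LUKS)
  cryptsetup open $IMGFILE BDbackup           # -> /dev/mapper/BDbackup
  mkudffs /dev/mapper/BDbackup                # UDF filesystem inside the mapping
  mount /dev/mapper/BDbackup /mnt/BDbackup    # the lz-files are written here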

Besides that, I use a fully encrypted system with several partitions...
Extract from df -h:

Filesystem                Size  Used Avail Use% Mounted on
/dev/mapper/sdb2_crypt     28G   23G  3.0G  89% /
/dev/sdb1                 447M  202M  221M  48% /boot
/dev/mapper/var_crypt      27G   18G  8.4G  68% /var
/dev/mapper/vraid1-home   1.8T  1.5T  251G  86% /home
/dev/mapper/BDbackup      6.5M  6.5M  2.0K 100% /mnt/BDbackup

(I create the image file as /home/TMP_BKP/backup.img just because that's where
I have enough available space.)

After the boring outcome you have the unencrypted images for the next
step, namely to create /dev/mapper/BDbackup with a new empty image file
as base, to copy the images into it (e.g. by dd), and to close it.
Then try whether the two encrypted image files can be properly opened
as /dev/mapper/BDbackup and show mountable UDF filesystems.
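
(Roughly, per image - a sketch with hypothetical file names; substitute
whatever cryptsetup variant your script actually uses:)

  truncate -s $IMGSIZE encrypted.img
  cryptsetup luksFormat encrypted.img                # assumption: LUKS
  cryptsetup open encrypted.img BDbackup
  dd if=plain_udf.img of=/dev/mapper/BDbackup bs=1M  # copy the UDF image in
  cryptsetup close BDbackup

  cryptsetup open encrypted.img BDbackup             # re-open ...
  mount -t udf /dev/mapper/BDbackup /mnt/BDbackup    # ... and try to mount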

It's not only the burned disc which is not readable/mountable, it's
also the image I created before burning.

So we can exclude growisofs as culprit.

Might it be possible that, when my UDF filesystem gets filled completely,
the encryption gets damaged?

That would be a bad bug in the device-mapper code, and such a mishap is
hard to imagine. The UDF driver is supposed not to write outside its
filesystem data range. That range would be at most as large as the payload
of the device mapping.

Doesn't look like that - I tried the following several times:
Create a (much smaller) image file, put an encrypted filesystem in it, fill it
completely with either cp or dd, unmount it, close and re-open it with
cryptsetup, then check /dev/mapper/BDbackup: no problems, only hex zeros, and
it's mountable.
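
(Spelled out, the test was roughly - sizes and names illustrative:)

  truncate -s 100M test.img
  cryptsetup luksFormat test.img               # assumption: LUKS
  cryptsetup open test.img BDtest
  mkudffs /dev/mapper/BDtest
  mount /dev/mapper/BDtest /mnt/test
  dd if=/dev/zero of=/mnt/test/filler bs=1M    # runs until "No space left"
  umount /mnt/test
  cryptsetup close BDtest
  cryptsetup open test.img BDtest              # re-open ...
  mount -t udf /dev/mapper/BDtest /mnt/test    # ... still mountable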

Multi-disc backups are not
handled by my script; I have to intervene manually.

That's always a potential source of problems.

Do I get it right that your script copies files into the mounted UDF
and gets a "filesystem full" error?

What exactly are you doing next?
(From where to where are you moving the surplus files?
Does the first /dev/mapper device stay open while you create the encrypted
device for the second UDF filesystem? Anything I don't think of ...?)

If you want, you can have a look at my script; I attached it to this mail...

Basically, I use extended attributes (user.xdg.tags) to manage which folders
have to be backed up, and I write the last backup date into user.xdg.comment.
Comparing file timestamps with these backup dates allows for incremental
backups.
Then for each folder which should be backed up, I use tar and plzip, writing
into BKPDIR="/mnt/BDbackup".
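
Schematically the per-folder step looks like this (a sketch; the attribute
names are the real ones, the surrounding logic is simplified, and FOLDERS
is a hypothetical list of the tagged folders):

  BKPDIR=/mnt/BDbackup
  for dir in "${FOLDERS[@]}"; do
      # last backup date, stored in the extended attribute (empty if none)
      last=$( getfattr --only-values -n user.xdg.comment "$dir" 2>/dev/null )
      # archive only files modified since then, compress with plzip
      tar -c --newer-mtime="${last:-1970-01-01}" -C "$dir" . \
          | plzip > "$BKPDIR/$(basename "$dir").tar.lz"
      # record this run's date as the new backup date
      setfattr -n user.xdg.comment -v "$(date -Iseconds)" "$dir"
  done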

"Filesystem full" is not handled at all. Typically if this happens it's quite
late i.e. most folders are already backuped and I do the following:
- remove the last lz-file, I never checked if it is corrupted
- burn the image
- reset user.xdg.comment for not yet backuped folders manually
- execute the script again, burn the so created second image
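
(Resetting the attribute can be as simple as this - a sketch with an
obviously hypothetical path:)

  # remove the stored backup date so the next run treats the folder
  # as never backed up
  setfattr -x user.xdg.comment /path/to/folder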

Since this is quite ugly, I try to prevent it by moving very large lz-files
from /mnt/BDbackup to a temporary location outside of /mnt/BDbackup while the
script is running. When the "create lz-files" part of my script has finished,
I check whether there is sufficient space to move the large files back to
/mnt/BDbackup. If yes, I do this; if not, I leave them outside, burn the first
disc, then create a second image manually, put the large files into the empty
filesystem, and burn this disc as well. Not perfect at all, I know, but it's
working... and I do this about every 3 to 6 months. Besides that, it's just a
second kind of backup in addition to bi-weekly backups on external, also
encrypted HDDs. (I think with these two kinds of backups I'm doing enough to
save our precious personal files, images, videos etc., doing much more than
most people out there ;-)

Honestly I don't see where this process may corrupt the UDF fs or the
encryption. And I don't see an error / bug in my script either.

 Or is my filesystem too large?

25 "GB" would rather be too small to swim in the swarm of other cryptsetup
users.


-----------------------------------------------------------------------

Slightly off topic: A riddle about your UDF image sizes:
# There is an old comment in my script at this line, saying:
# let's try that: 24064000K
# 24438784K according to dvd+rw-mediainfo but creates at
# least sometimes INVALID ADDRESS FOR WRITE;
# alternative according to internet research: 23500M

An unformatted single layer BD-R has 12,219,392 blocks = 23866 MiB =
24,438,784 KiB.
But growisofs formats it by default to 11,826,176 blocks = 23098 MiB =
23,652,352 KiB.
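
(The arithmetic, with 2048 bytes per block, in shell terms:)

  echo $(( 12219392 * 2048 / 1024 ))          # 24438784 KiB
  echo $(( 12219392 * 2048 / 1024 / 1024 ))   # 23866 MiB
  echo $(( 11826176 * 2048 / 1024 / 1024 ))   # 23098 MiB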

Thanks for pointing this out; I didn't know that growisofs gives away a few
bytes... Do you know why that's the case?

growisofs_mmc.cpp emits a message in function bd_r_format():

  fprintf (stderr,"%s: pre-formatting blank BD-R for %.1fGB...\n",
           ioctl_device,(f[0]<<24|f[1]<<16|f[2]<<8|f[3])*2048.0/1e9);

Watch your growisofs run for it.

Never noticed this message, though; see my third-to-last paragraph below.

(Note that it talks of merchant's GB = 1 billion, not of programmer's
 GiB = 1,073,741,824. 23098 MiB = 24.220008448 GB)

IMGSIZE=24064000K
truncate -s $IMGSIZE $IMGFILE

The man page of truncate says that its "K" means 1024, i.e. KiB.
So your image has 23500 MiB, which is too large for the default format
as normally applied to BD-R by growisofs.
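
(One way out, if the default formatting is kept, would be to size the
image to the formatted capacity instead, e.g.:)

  # 11,826,176 blocks * 2 KiB = 23,652,352 KiB, the default-formatted capacity
  IMGSIZE=23652352K
  truncate -s $IMGSIZE $IMGFILE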

growisofs has a bug: it accepts burn jobs which fit into an unformatted BD-R
but then spoils them by applying its default format:
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=699186

So how come your growisofs run does not fail in the end?

There is an undocumented growisofs option to suppress BD-R formatting:
  -use-the-force-luke=spare=none
There is also
  -use-the-force-luke=spare=min
which (I guess) will bring 23610 MiB of payload.
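
(In a burn run that would look like this - a sketch, the device path
being an assumption:)

  # hypothetical invocation: burn the image without pre-formatting the BD-R
  growisofs -use-the-force-luke=spare=none -Z /dev/cdrom=$IMGFILE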

I didn't use these options.

With all that tried and learned, I'm going to try another full run of my
script, closely monitoring what's happening during the different steps. Not
today but when I have enough time, maybe in a week or even later.


(I take the occasion to point out that xorriso does not format BD-R
by default. I.e. default capacity is 23866 MiB.)
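
(A burn run equivalent to your growisofs command, via xorriso's cdrecord
emulation, would look roughly like this - a sketch, the device path being
an assumption:)

  # hypothetical: write the image file to the BD-R, which stays unformatted
  xorriso -as cdrecord -v dev=/dev/sr0 $IMGFILE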

Never heard of xorriso before - from my understanding I could use it
instead of growisofs, but with larger images?

General question:
Do you think I should completely change my script such that it creates lz-
files, encrypts each of them and then writes them on an unencrypted disc?

Thank you very much.

Best,
Bernd


