
Re: UDF, ext2 and -o loop created images



> 1. Have the bugs Volker encountered with UDF been fixed now? Volker,
>    what versions of kernel and udftools were buggy for you?

Whatever was current at the time... ;)
Seriously, must have been SuSE 8.2, kernel 2.4.20, udftools 1.0.0b2.

> 2. How does UDF fare as far as faithfully preserving all the ext2
>    attributes (uids, permissions, hard/soft links, pathname lengths,
>    etc.)?

That's the problem. I went off iso9660 because it doesn't restore
hardlinks (its other shortcomings are masked well by rockridge). With
udf back then I tried a cp -a into a loop-mounted udf fs, and when
comparing directories some symlinks were missing. Running rsync over it
fixed it, but I didn't feel like it was really backup quality. I can't
say whether things have improved but I somewhat doubt it because nobody
is working on it. Back then I found it impossible to dig up any good
information about udf, there wasn't even documentation which explained
the mkudffs options! mkisofs is only good for video disks and crude
backups because it doesn't retain uids.

You also get serious performance issues with Linux udf on an optical
disk when reading. Some very simple tests showed no gain for udf over
ext2 on average. Either one could kill your drive before you're done
reading, if you're unlucky. There is a good reason why mkisofs carefully
lays out the blocks for iso9660.

> 3. How exactly are we supposed to determine/control the size of the
>    filesystem and image file?
>    (a) Is there a table/chart that shows the number of blocks of all
>        the common optical media (74min CD, 80min CD, 4.7GB DVD, etc.)
>        or is there some way we can get this from the /dev (loaded with
>        "blank" media) itself?

All blanks I have ever come across (22 different types, I kept careful
record) had the same capacity: 2298496 for DVD-, and 2295104 for DVD+.
I've read the nominal capacity is 4.7*10^9 bytes, but that doesn't
divide evenly by 2048 (and the media's block size is 32k anyway, I
think), so there must be some fudge factor.
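A quick shell check of the numbers above (treating the nominal capacity
as 4.7*10^9 bytes, and using the block counts I recorded):

```shell
# 4.7*10^9 bytes does not divide evenly into 2048-byte sectors:
echo $(( 4700000000 % 2048 ))   # 1792, so "4.7GB" is only nominal
# Byte capacities implied by the observed block counts:
echo $(( 2298496 * 2048 ))      # DVD-: 4707319808 bytes
echo $(( 2295104 * 2048 ))      # DVD+: 4700372992 bytes
```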

dvd+rw-mediainfo (and cdrecord-prodvd) show the exact number of
available blocks on an inserted blank. Ditto cdrecord for CDs.

>    (b) The ISO filesystem seems not to have a problem with image files
>        that are smaller than the target media. Is this true with UDF
>        and ext2 as well?

Yes. It is independent of the filesystem. Treat the optical medium like
a hard disk - both offer a certain number of blocks, and how you use
those blocks is up to you. You can burn reiserfs if you feel like it. Or
tar. Or cat /usr/bin/* | growisofs -Z /dev/burner=/dev/stdin.

>       (The answer might seem to be an obvious "yes",
>        but I don't want to assume that the image file size is identical
>        to the concept of a partition.)

??? You have always been able to create a filesystem in /dev/hd3 which
is smaller than the size of /dev/hd3. Hard disk partition sizes are an
upper limit. Likewise for optical media size.

>    (c) If so, how do we best go about resizing the filesystem as well as
>        its containing image file to match the actual storage requirements
>        of the files to be archived? (So that our .img files can be smaller
>        than the target media if we don't fill it all.) IMHO, it seems to
>        me that the minute "mount -o loop" became possible under Linux, we
>        should have had the tools to manage filesystem and image file sizes
>        for all of the supported filesystem types.

In an ideal world, yes... Barring the obvious restrictions, like iso9660
never being writable (in practice). In the real world, volunteers are
always welcome. ;) It also seems to me that burning is a desktop
application, and Linux is still a bit lacking there. I can't find a
better explanation than that for the fact that when burning N blocks
onto a CD, Linux can't even read all of them back before strangling
itself (sounds incredible, but it's true).

The alternative would be to implement a filesystem which can achieve say
2/3 of burn speed when copying files into it, making use of DVD+RW and
random access writing.

I've settled on ext2, and have a bunch of scripts for creating the
filesystems. It's not too bad really when comparing with alternatives.
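The gist of those scripts is something like the following sketch. SRC,
IMG, /dev/burner and the ~10% overhead padding are all assumptions to
adjust for your own data and drive (and mount -o loop needs root):

```shell
#!/bin/sh
# Sketch only: build an ext2 image just big enough for a directory
# tree, then burn it. Paths and device name are placeholders.
SRC=/data/to-backup
IMG=/tmp/backup.img

# Data size in KB, padded ~10% for ext2 overhead, as 2048-byte blocks
KB=$(du -sk "$SRC" | cut -f1)
BLOCKS=$(( (KB + KB / 10) / 2 ))

dd if=/dev/zero of="$IMG" bs=2048 count="$BLOCKS"
mke2fs -F -b 2048 -m 0 "$IMG"     # -m 0: no reserved root blocks
mount -o loop "$IMG" /mnt
cp -a "$SRC"/. /mnt/
umount /mnt
growisofs -Z /dev/burner="$IMG"   # burn the finished image
```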

> Of course, I am sure things get a lot more complicated by the filesystem
> "growing" or appending required for multisession recording of nonrewritable
> media. UDF may allow for a "growudffs" approach, but I doubt if ext2 can.

"Growing" in multisessions requires a filesystem which is specifically
designed for this. Hard disk filesystems are not. Dunno about udf. For
anything but iso and udf you will have to rewrite the whole filesystem,
or use random access writing on DVD+RW.
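To illustrate the random-access route: DVD+RW accepts writes at
arbitrary 2048-byte offsets, so in principle only the changed region of
a previously burnt image needs rewriting. The device name, offset and
delta.img file are placeholder assumptions, not a tested recipe:

```shell
# Patch only the changed blocks of a filesystem image already on
# DVD+RW instead of reburning the whole disk; delta.img holds the
# replacement blocks, seek= is their offset on the medium.
dd if=delta.img of=/dev/burner bs=2048 seek=5000 conv=notrunc
```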

I don't consider CDs any more in this, their capacity is so small, you
can blockread, modify, and burn with a bit of scripting in little enough
time. Linux has missed the boat there.

Volker

-- 
Volker Kuhlmann			is possibly list0570 with the domain in header
http://volker.dnsalias.net/		Please do not CC list postings to me.


