
Re: dvd+rw-tools-5.20.4.10.8: File is too large



me > >That's why I think I cannot use mkisofs -stream-media-size
> >or Paul Serice's flyisofs.
Joerg Schilling > 
> "mkisofs -stream-media-size" has been designed to be used together with
> something like "star -tsize=700m -multivol -dump ... "

So I got that right, at least.


> >So my idea is to have some special option or form of pathspec
> >which allows me to let mkisofs cut the 2047 MB pieces out of
> >the original file directly. 
> 
> If you use star -tsize=700m -multivol, splitting _and_ reassembly
> of the file is handled by star automatically and does not force
> you to do strange things by hand.

As stated in the constraints of the backup task, tar is not an option.
Plain ISO with individual files is the goal in this case.

Large-file handling is not meant to be the main purpose. It is only
included to keep operation of the software simple.
The backup admin should not have to worry about what weirdly sized
files the users may have created ... as long as there are enough
media at hand.


> If you try to use mkisofs to be used to auto-split large files,

The "auto" aspect will not be the duty of mkisofs. I would only
need the "split" capability. My program would be willing to format
any pathspec or option which describes the desired file piece.

Currently it writes messages to itself like:

  -cut_file '/u/test/bigfile' 0 2047m '/dvdbuffer/split_dir/bigfile_1_2-01'

It executes them when the appropriate volume is due to be burned,
and then hands the result over to mkisofs as one of the pathspecs:
  ...
  /u/test/bigfile_1_2-01=/dvdbuffer/split_dir/bigfile_1_2-01
  ...
As said, this is operational; the only drawback is the need for the
buffer directory /dvdbuffer/split_dir.
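For illustration, the -cut_file step described above could be
sketched in plain shell. This is a hypothetical sketch, not the
actual program, and it relies on head -c (widespread on GNU and
BSD systems, though not strictly POSIX):

```shell
#!/bin/sh
# Hypothetical sketch of the -cut_file directive: copy "count"
# bytes of "source", starting at byte offset "start", into
# "target" inside the buffer directory.
# Usage: cut_file SOURCE START COUNT TARGET  (start/count in bytes)
cut_file() {
  source="$1"; start="$2"; count="$3"; target="$4"
  # tail -c +N starts output at byte N (1-based);
  # head -c M then keeps the first M bytes of that stream.
  tail -c +"$((start + 1))" "$source" | head -c "$count" > "$target"
}

# Example matching the directive in the text
# (2047m = 2047 * 1024 * 1024 bytes):
# cut_file /u/test/bigfile 0 $((2047 * 1024 * 1024)) \
#          /dvdbuffer/split_dir/bigfile_1_2-01
```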

My dream pathspec would map the four parameters
  target: /u/test/bigfile_1_2-01
  source: /u/test/bigfile
  start:  0
  count:  2047m
to a mkisofs action which grafts the desired 2047m bytes into the
ISO image under the target name, thus avoiding /dvdbuffer/split_dir.

(_1_2 gives the piece number and total number; -01 is a unique
 counter to avoid name collisions in /dvdbuffer/split_dir.)


> then it still doesn't handle reassembly.

There will be supporting software for this. The filenames of the
parts indicate quite clearly which is which, so the file can be
restored without special tools, too. (A major constraint of my
project.)
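Indeed, restoring really needs nothing beyond cat. This sketch
assumes the <name>_<piece>_<total>-<counter> naming from above,
and fewer than ten pieces, so that the shell's glob sort order
equals the piece order:

```shell
#!/bin/sh
# Sketch of restoring a split file without special tools.
# Assumes part names of the form <name>_<piece>_<total>-<counter>;
# with ten or more pieces the piece number would need zero padding
# for glob order to stay correct (e.g. _02_12 rather than _2_12).
reassemble() {
  name="$1"
  cat "$name"_*_*-* > "$name"
}

# Example: reassemble bigfile
# concatenates bigfile_1_2-01, bigfile_2_2-01 back into bigfile.
```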

In at least one not so uncommon scenario, the split file parts
will be superior to tar archive chunks containing whole large files:
if the hard disk filesystem where you unpack your backup cannot
store large files (e.g. ext2 and ReiserFS as installed in summer 2000),
then you will get into trouble with a 5 GB monster.
The split file parts from the ISO would slide smoothly onto the hard
disk, and one can later deal with concatenating them at some
other place.

As Andy states:
"You don't know where you would *have to* access backuped data"
(I'm 95 % in agreement with that; I just would not rule out _any_
reason, only _any general_ reason, for having possibly incompatible
files in backups.)


Thank you, Joerg, for your attention. I will not annoy you by
insisting.

I now know that I did not overlook such a possibility in man mkisofs.
The backup admins will either have to ban big files from the backup
(automatically, of course, and loudly reported), provide 4.5 GB
of disk space, or use a tool like shunt. Clear alternatives.


Have a nice day :)

Thomas


