scdbackup@gmx.net wrote:
>> I began naively rounding each file size up to the first multiple of
>> 2048,

> It looks like a fixed-size attribute record is associated with each
> single data file. Maybe one should read the specs ... but in real life
> the effect of growing files or newly added files during the backup is
> much more annoying. With 50+ CDs at speed 4 you have to expect two
> workdays for a backup. In that time a lot can happen on a disk.
Actually you may have to round up AND add a fixed amount to each file. I
promise I'll try to get our web server up over the weekend and put an
older version of breaker.pl up elsewhere. It handles most of these
problems; the old version I have doesn't produce path-lists, but that's
easy to do manually.
>> after I had already rounded up to 4096 instead of 2048, I calculated
>> the required space to be:
>>   4695896064 bytes / 2048 = 2292918 blocks
>>   ... 2249218 extents written (4393 Mb)

> Luckily it is normal for a reasonable estimate to be underbid by
> mkisofs. With a fixed add of 1400 I can get pretty near the final
> outcome without underestimating it too often or by too much.

>> stat each file and directory, round the size of each up to 4096, add
>> to the running total, stop adding files to the pathlist when I reach
>> 4.7 * 10^9 - 4 * 10^6

> You are aware that mkisofs does accept whole directories? It is not
> necessary to mention every single file.
Yes, but listing every file makes the calculation vastly easier.
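Something along these lines is all it takes -- an untested sketch, not
breaker.pl itself. The 2048-byte extent size, the 1400-byte per-file
fudge, and the capacity margin are just the numbers quoted above, not
anything taken from the ISO-9660 spec:

#!/usr/bin/perl
# Rough image-size estimator / pathlist builder (sketch).
use strict;
use warnings;
use File::Find;

my $BLOCK    = 2048;          # ISO-9660 extent size
my $PER_FILE = 1400;          # per-file fudge for directory records, as above
my $CAPACITY = 4.7e9 - 4e6;   # target size minus a safety margin

my $total = 0;
my @pathlist;

find(sub {
    # directories are ignored here; the recipe above rounds them up too
    return unless -f $_;
    my $size = -s _;
    # round the data up to whole extents, then add the fudge
    my $need = $BLOCK * int(($size + $BLOCK - 1) / $BLOCK) + $PER_FILE;
    return if $total + $need > $CAPACITY;   # full: skip the rest
    $total += $need;
    push @pathlist, $File::Find::name;
}, @ARGV ? @ARGV : '.');

print "$_\n" for @pathlist;
print STDERR "estimated $total bytes\n";

As written it keeps scanning, so later, smaller files can still sneak in
after the first overflow; set a flag and return early instead if you
want a strict cut-off.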
>> Does anyone know the real, correct method?

> I would be curious too. If the formula is deterministic (for which I
> have no experimental proof ;)) then I would like to incorporate it
> into my program.
Most of these backup tools use a round-up for the data plus a fixed
value per file for the metadata (directory info). I don't have the
exact values; I rarely cut it that close, since I aim for equal size on
all media instead of the last one being short.
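For the equal-size part, a greedy deal is usually close enough:
estimate each file as above, then hand the files out biggest-first,
always onto the emptiest disc. Another sketch -- split_evenly and the
[name, bytes] pairs are made-up names for illustration, and n_discs
would come from ceil(total / capacity):

use strict;
use warnings;

# @files is a list of [name, estimated_bytes] pairs, with the bytes
# already rounded up per file as in the earlier sketch.
sub split_evenly {
    my ($n_discs, @files) = @_;
    my @used = (0) x $n_discs;              # bytes assigned to each disc
    my @disc = map { [] } 1 .. $n_discs;    # file names per disc
    for my $f (sort { $b->[1] <=> $a->[1] } @files) {
        # pick the least-full disc so far
        my ($i) = sort { $used[$a] <=> $used[$b] } 0 .. $n_discs - 1;
        $used[$i] += $f->[1];
        push @{ $disc[$i] }, $f->[0];
    }
    return (\@disc, \@used);
}

Biggest-first greedy isn't optimal, but with thousands of files the
discs typically come out nearly equal, which beats one short disc at
the end.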
-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979