
Re: 5.1 updates wanted, and CD remastered size w/ hardlink script



Hi Ronny,

On Tue, Jan 02, 2007 at 10:23:27PM +0100, Ronny Standtke wrote:
> Hi Klaus,
> 
> > The main problem concerning "rsync-friendlyness" are the
> > blockwise-compressed KNOPPIX cloop image files. These are optimized for
> > best possible compression, and differ almost 100% in each image.
> > Therefore, I don't think there is a way to make them more
> > rsync-friendly.  :-/
> 
> First let me confess that I only have dangerous smattering in this topic. 
> Would it be possible before cloop image creation to identify 
> files/blocks/ranges that are unchanged compared to the previous image? If so 
> could these ranges be mapped to the same blocks during compression?

No, and no. Even if you remove only a single file, the entire filesystem
layout can change (at least for iso9660, which is more FAT-like, and
especially for btree+ filesystems like reiserfs).

Compression makes it worse, even if you use a sortlist for files like we
do. After changing something at the beginning of the filesystem,
everything after it shifts in an unpredictable way.

You may want to run a binary diff (bsdiff or xdelta) between the 5.1.1
and 5.1.0 images. On the CD, about 100 megs (at most) have changed, yet
I am quite sure that you will get over 90% difference. The rest is the
bootfiles and static data.
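The effect is easy to demonstrate. The following is a small sketch (not
Knoppix code; block size and sample data are made up) of what blockwise
compression, as cloop uses it, does to similarity: removing a few bytes
near the start shifts every block boundary, so virtually no compressed
block matches the old image anymore.

```python
# Sketch: why a tiny change makes blockwise-compressed images differ
# almost completely. We split the data into fixed-size blocks and
# compress each block independently, like cloop does.
import zlib

BLOCK = 65536  # assumed block size for the demonstration

def compress_blocks(data):
    """Compress data block by block, as a blockwise image format would."""
    return [zlib.compress(data[i:i + BLOCK]) for i in range(0, len(data), BLOCK)]

# Some moderately compressible "filesystem" content.
data_a = b"".join(b"file-%06d contents\n" % i for i in range(100000))
# Remove 20 bytes near the start: everything after it shifts.
data_b = data_a[:100] + data_a[120:]

blocks_a = compress_blocks(data_a)
blocks_b = compress_blocks(data_b)

# Count compressed blocks that are byte-identical at the same position.
identical = sum(1 for a, b in zip(blocks_a, blocks_b) if a == b)
print("%d of %d compressed blocks identical" % (identical, len(blocks_a)))
```

Every block's input has shifted by 20 bytes, so not a single compressed
block survives unchanged, which is exactly what rsync's rolling checksum
runs into.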

The only way to save download bandwidth would be to offer "incremental"
cloop images (which is possible thanks to unionfs/aufs), containing
whiteouts for deleted files plus the added files. Of course, your new
DVD would constantly INCREASE in size with every update, since the
removed files are never really removed, just made invisible.
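To make the whiteout idea concrete, here is a hypothetical sketch (not
the real aufs implementation; the layer dictionaries and the WHITEOUT
marker are invented for illustration): each update is an extra layer,
and a deletion only adds a marker, so the stored data never shrinks.

```python
# Hypothetical sketch of union/whiteout semantics: deleting a file in a
# new layer only hides it; the old layer still carries its data.

WHITEOUT = object()  # marker: "this file is deleted in this layer"

def union_view(layers):
    """Merge layers (oldest first); whiteouts hide files from older layers."""
    view = {}
    for layer in layers:
        for name, content in layer.items():
            if content is WHITEOUT:
                view.pop(name, None)   # file becomes invisible...
            else:
                view[name] = content
    return view

base   = {"kernel": b"2.6.19",   "oldtool": b"version 1"}
update = {"kernel": b"2.6.19.1", "oldtool": WHITEOUT, "newtool": b"version 1"}

merged = union_view([base, update])
print(sorted(merged))  # oldtool is gone from the merged view

# ...but the base layer still stores the "deleted" file's bytes:
stored = sum(len(v) for layer in (base, update)
             for v in layer.values() if v is not WHITEOUT)
print("bytes still stored across layers:", stored)
```

The merged view looks like a normal filesystem, yet the sum of layer
sizes only ever grows, which is the DVD-bloat problem described above.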

> 
> > Apparently this was one of the reasons the bug wasn't noticed in all
> > tests: our testers are mostly "power-users" who use the faster keyboard
> > shortcuts, rather than clicking around. Therefore, simply nobody ever
> > clicked on the switcher... ;-)
> 
> This is almost an academic example for an automatic (unit) test! :-)
> You can write a test that automatically "clicks" on the switcher and triggers 
> the bug. Then you fix the bug and run the test again. Now the test must pass. 

I disagree. This case is also an excellent academic example where an
automatic unit test would most likely NOT have found the bug, since the
click on the switcher worked perfectly and did what it was supposed to
do: change to the next desktop. Since there is no kicker anymore on the
next desktop, there are no further automatic click tests for it, and no
errors either. For the automatic testing routine, the test went through
without errors, but the human in front of the screen is now in a
difficult situation.
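The trap can be shown in a few lines. This is a toy model (the Desktop
and Switcher classes are invented for illustration, not real KDE code):
the test asserts exactly what the switcher promises, the assertion
passes, and the actual bug sits outside everything the test looks at.

```python
# Toy model: a unit test on the switcher "passes" while the user is
# left stranded on a desktop without a kicker panel.

class Desktop:
    def __init__(self, has_kicker):
        self.has_kicker = has_kicker

class Switcher:
    def __init__(self, desktops):
        self.desktops = desktops
        self.current = 0

    def click(self):
        # The switcher does its one job: advance to the next desktop.
        self.current = (self.current + 1) % len(self.desktops)

# Desktop 2 is missing its kicker panel -- the actual bug.
switcher = Switcher([Desktop(has_kicker=True), Desktop(has_kicker=False)])

switcher.click()
assert switcher.current == 1   # the unit test: switching works -> PASS

# ...but nothing checked what the human immediately sees:
print("kicker present:", switcher.desktops[switcher.current].has_kicker)
```

The test is correct for what it specifies; the specification simply
never mentions the kicker, so automation reports success.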

Besides, since the manual (hotkey) switch worked, it is also not
necessarily the case that an automatic click (i.e. software-generated
event) would have caused the same behaviour.

I have a few real-life examples ready where automatic unit testing
fails (i.e. finds no bugs) in cases that human testers would have
immediately recognized as wrong. I also teach this stuff, occasionally.
Software engineering is not (always) the solution to all problems;
instead, it can very well cause new problems that would not have
appeared when just continuing to use the "good old" evolutionary
programming model (which is used by many free software projects,
including Knoppix, so we are not alone with bugs that survive for
decades, or reappear). ;-)

> Now whenever you change something you run this test again. This way you 

I agree that there should be a "checklist" for betatesters, but then,
imagine how long testing the DVD version will take before we can do a
small bugfix update (and check again).

> be done by the developer with a single mouse click. This is even much faster, 
> more reliable and not so tedious and boring as manual tests during a beta 
> release. Anybody else enthusiastic about automatic testing now? ;-)

I am, as soon as you send me that automatic testing engine that finds all
bugs in the DVD version before the release. ;-)

Unfortunately, it seems this isn't even possible for the Linux kernel
alone.

Also a problem: In order to run unit tests, you have to (in most cases)
modify the software you want to test. Imagine recompiling 9000+ programs
with your testing libraries. After each modification in the testing
engine, again. Adding regression tests within the software itself can
also introduce new bugs.

So, automatic testing DOES have its downsides. This can be proven by
really large software projects of really large software companies that
STILL show bugs in spite of EVERYTHING having been run through extensive
unit testing for each release.


Just brainstorming. :-)

-Klaus


