Hi,

On Mon, Aug 27, 2012 at 02:28:50PM +0200, Tomas Dohnalek wrote:
> Hi,
>
> I have started to work on automating the comparison of outputs from the
> glibc testsuite, because of the pointless manual work when a new version
> arrives and we want to do as much as possible upstream.  To begin the
> improvement process, we want to add a simple one-line PASS/FAIL output
> for each test and also, apart from the usual `make check', add a
> `make installcheck' rule for testing system binaries.
>
> I guess that you are having the same difficulties, so I want to ask how
> you deal with the current state.  Have you made any internal tool for
> comparing results, or are you doing it manually?

My glibc test check automation consists of filtering "make -k check"
output (e.g. with
  sed -n 's|^make[^/]*/[^\]*/\([^]/]\+/[^]/]\+\)\] Error 1$|\1|p'
) and comparing it with a reference file for the given architecture.  Any
difference between the filtered output and the reference file is an error
that requires manual handling.  In other words, it's regression testing
adapted to real life, where some tests are known to fail in some
environments.

-- 
ldv
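P.S. The filter-and-diff workflow above can be sketched roughly as follows.
The file names and the sample log fragment are made up for illustration;
only the sed expression is the one quoted in the mail (it uses GNU sed's
\+ extension):

```shell
#!/bin/sh
# Rough sketch of the workflow: reduce "make -k check" output to the
# names of failing tests, then diff against a per-architecture reference
# file of known failures.  All file contents here are fabricated.
set -e

# Fabricated fragment of "make -k check" output.
cat > check.log <<'EOF'
make[2]: Entering directory `/build/glibc/math'
make[2]: *** [/build/glibc/math/test-float.out] Error 1
make[2]: *** [/build/glibc/nptl/tst-cancel.out] Error 1
EOF

# Fabricated reference list of known failures for this architecture.
cat > failures.expected <<'EOF'
math/test-float.out
nptl/tst-cancel.out
EOF

# Extract "subdir/test" from each "*** [...] Error 1" line and sort.
sed -n 's|^make[^/]*/[^\]*/\([^]/]\+/[^]/]\+\)\] Error 1$|\1|p' check.log \
  | sort > failures.actual

# Any difference from the reference file requires manual handling.
if diff -u failures.expected failures.actual; then
    echo "no unexpected results"
else
    echo "test results differ from reference; manual review required" >&2
    exit 1
fi
```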