
Re: test suite feedback/questions



Hi Matt,

Thanks for the feedback.

Matt Taggart writes:
> 
> on August 1st. The test system is a base Debian "unstable" install. Here's 
> some feedback/questions.
> 
> - When running "install.sh" I get the following error,
>      "*ERROR*: Must have byacc and not bison installed as yacc on the system."
> LSB.tools/README points this out as well. It would be nice if the install 
> script just dealt with it or at least gave you reasonable hints as to how to 
> properly resolve the situation.

I'll expand the comment to suggest a few ways of working around this.
The fix can be a bit distro specific (eg on Debian you can point the
yacc alternative at byacc via /etc/alternatives). If someone has some
spare time they could look at changing the yacc code; if you just
remove the detection check, the build of vsxgen will break halfway
through.
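For Debian, something like this should work (a sketch only, assuming
the byacc package registers itself with the alternatives system):

    apt-get install byacc
    update-alternatives --config yacc

Other distros will need their own recipe.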

Unless people really want it, I'm hesitant to have the install script
automatically install byacc on the system (there are lots of
distro-specific issues here), and I dislike fiddling with people's
systems anyway.

> - When running /home/tet/setup.sh ...
>   - every question is followed by a visible \c.

I think Andrew Josey went through the scripts recently and fixed these
up. I'll be putting a new tet/vsxgen tarball up in the next few days
which fixes other issues too.

>   - In "Determining missing #defines and #includes ..." I got,
>      *** Starting signal.h
>      Missing: #define NSIG (-1)   /* user supplied: (highest_signal_number + 1) */
>      *** Completed signal.h

This output is part of the vsxgen build; depending on which test
suite you install, a few other errors are displayed as well. The
scripts automatically fix up these problems (where they know about
them). If you look at SRC/vsxconfig.sh you should see it has been
changed to:

#define  NSIG  (_NSIG)

I think it's an issue with header file non-compliance in glibc.

> - The tests hang on
>      "10:29:15  Execute /tset/PTHR.os/procprim/sigwait/T.sigwait"
>   a CTRL-C gets past it. Should this test have some sort of timeout?

Yes. On some systems there is also a hang in an LSB.os test. We need
to sort these out.

> - Having the "report" formatted for printing by default makes it hard to read
>     online. I would like to see the "report" that's created by default
>     formatted for online reading and the one formatted for printing called
>     "report.killatree" (the "report" of my first run was 714 pages). Even
>     better would be HTML where you could click on things in the table to get
>     to results.

The report is generated from journal files in the subdirectories of
/home/tet/test_sets/results. Maybe Andrew would know if there are more
journal file analysers around? There is a dres awk script which can do
brief summaries of failures. I've also written a small journal file
parser (attached to this mail) which generates summaries similar to
dres and also lets you intelligently `diff' two journal files.
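
As a rough example of driving it (assuming you save the attachment
as, say, jsummary.pl; the journal paths below are just placeholders):

    perl jsummary.pl results/0001e/journal
    perl jsummary.pl -d old/journal new/journal

The first form prints a failure summary for a single run; the second
compares two runs (-d lists every differing result rather than only
those that change the overall pass/fail status).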

> - (nitpick) Why is vsx0's $HOME "/home/tet/test_sets"?  I assume this was so 
> that scripts could refer to $HOME by using tilde but it kind of obfuscates
> and  makes things "hard for humans"(tm).

IIRC this is an artifact of the way some of the vsx-pcts tests are
written; the home directory tests require it. There might be a way
around it.

Regards,

Chris.
-- 
yeohc@au1.ibm.com
IBM OzLabs Linux Development Group
Canberra, Australia
#!/usr/bin/perl -w

# Tool to summarise a journal file generated by TET 
# or to determine the difference in test results between two
# journals.
#
# (C) Copyright 2001 The Free Standards Group, Inc.
#
# 21/5/2001 Chris Yeoh, IBM
#
#

use strict;
use Getopt::Std;

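# Result codes as they appear in 220| (test purpose result) journal
# lines.  0-7 are the standard TET result codes; the 10x entries are
# additional codes used by the test suites.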
my(%StateMap) = (
 0 => "PASS",
 1 => "FAIL",
 2 => "UNRESOLVED",
 3 => "NOTINUSE",
 4 => "UNSUPPORTED",
 5 => "UNTESTED",
 6 => "UNINITIATED",
 7 => "UNREPORTED",
 101 => "WARNING",
 102 => "FIP",
 103 => "NOTIMP",
 104 => "UNAPPROVE");

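# Collapse each result name into an overall PASS or FAIL bucket, used
# for the pass/fail totals and for deciding which diffs are significant.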
my(%PFMap) = (
  "PASS" => "PASS",
  "FAIL" => "FAIL",
  "UNRESOLVED" => "FAIL",
  "NOTINUSE" => "PASS",
  "UNSUPPORTED" => "PASS",
  "UNTESTED" => "PASS",
  "UNINITIATED" => "FAIL",
  "UNREPORTED" => "FAIL",
  "WARNING" => "PASS",
  "FIP" => "PASS",
  "NOTIMP" => "PASS",
  "UNAPPROVE" => "PASS",
  "MISSING" => "FAIL",
  "UNKNOWN" => "FAIL");
  

# Analyses one journal file to get statistics on pass/failure
#
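# Returns a hash ref containing per-state counts (STATE_SUMMARY),
# per-test results (TESTS), overall pass/fail totals, and the test
# system/date/time recorded in the journal header.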
sub GatherStats($)
{
  my($journalFile) = shift;
  my($stats) = {};
  my($loop);
  my($line);
  local(*JFILE);

  # Initialise
  $stats->{STATE_SUMMARY} = {};
  foreach $loop (keys %StateMap)
  {
    $stats->{STATE_SUMMARY}{$StateMap{$loop}} = 0;
  }
  $stats->{STATE_SUMMARY}{TEST_ERROR} = 0;
  $stats->{STATE_SUMMARY}{UNKNOWN} = 0;
  $stats->{TOTAL_TESTS_PASSED} = 0;
  $stats->{TOTAL_TESTS_FAILED} = 0;

  # Analyse file
  open(JFILE, $journalFile) ||  die "Could not open file: $journalFile\n";
  
  my($testName);
  my($testNum);
  my(@line);
  my($testState);
  while (defined($line=<JFILE>))
  {
    # Look for system info
    if ($line =~ /^0\|(.*)\|/)
    {
      @line = split(/ /, $1);
      $stats->{TEST_DATE} = $line[2];
      $stats->{TEST_TIME} = $line[1];
    }
    elsif ($line=~ /^30\|.*VSX_SYS=(.*)$/)
    {
      $stats->{TEST_SYSTEM} = $1;
    }

    # Look for test results
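    # Journal records are pipe-delimited: <code>|<fields>|<text>.
    # After splitting on spaces, $line[0] is "<code>|<first field>":
    #   10|  test case start:     $line[1] holds the test name
    #   400| IC start:            $line[1] holds the test number
    #   220| test purpose result: $line[2] holds the result code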
    @line = split(/ /, $line);
    if ($line[0] =~ /^10\|/) { $testName = $line[1]; }
    elsif ($line[0] =~ /^400\|/) { $testNum = $line[1]; }
    elsif ($line[0] =~ /^220\|/)
    {
      # Test state report
      $testState = exists($StateMap{$line[2]}) ? $StateMap{$line[2]} 
        : "UNKNOWN";

      $stats->{STATE_SUMMARY}{$testState}++;
      $stats->{TESTS}{$testName}{$testNum} = $testState;
      
      $PFMap{$testState} eq "PASS" ? $stats->{TOTAL_TESTS_PASSED}++ 
	  : $stats->{TOTAL_TESTS_FAILED}++;
    }
  }

  close(JFILE);
  return $stats;
}


#----------------------------------------------------------------------
# Find the difference in test results between the two journals
sub DiffJournals($$)
{
  my($j1) = shift;
  my($j2) = shift;
  my($diffStats) = {};
  my($testName);
  my($testNum);

  $diffStats->{TESTS} = {};
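  # Each differing test is stored as "<state in j1>,<state in j2>";
  # a test present in only one journal is paired with MISSING.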

  foreach $testName (sort keys %{$j1->{TESTS}})
  {
    foreach $testNum (sort {$a <=> $b} keys %{$j1->{TESTS}{$testName}})
    {
      if (exists($j2->{TESTS}{$testName}) 
	  && exists($j2->{TESTS}{$testName}{$testNum}))
      {
	if ($j1->{TESTS}{$testName}{$testNum} 
	    ne $j2->{TESTS}{$testName}{$testNum})
	{
	  $diffStats->{TESTS}{$testName}{$testNum} = 
	      "$j1->{TESTS}{$testName}{$testNum}," .
	      "$j2->{TESTS}{$testName}{$testNum}";
	}
      }
      else
      {
	$diffStats->{TESTS}{$testName}{$testNum} = 
	    "$j1->{TESTS}{$testName}{$testNum},MISSING";
      }
    }
  }

  # Check reverse
  foreach $testName (sort keys %{$j2->{TESTS}})
  {
    foreach $testNum (sort {$a <=> $b} keys %{$j2->{TESTS}{$testName}})
    {
      if (! (exists($j1->{TESTS}{$testName}) 
	   && exists($j1->{TESTS}{$testName}{$testNum})))
      {
	$diffStats->{TESTS}{$testName}{$testNum} = 
	    "MISSING,$j2->{TESTS}{$testName}{$testNum}";
      }
    }
  }

  
  return $diffStats;
}

#----------------------------------------------------------------------
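# Print the tests whose results differ between the two journals.  By
# default only differences that change the overall PASS/FAIL status
# are shown; with -d every differing result is printed.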
sub PrintDiffSummary($$)
{
  my($diffStats) = shift;
  my($isDetailed)  = shift;

  my($testName);
  my($testNum);
  my($state1, $state2);

  foreach $testName (sort keys %{$diffStats->{TESTS}})
  {
    foreach $testNum (sort {$a <=> $b} keys %{$diffStats->{TESTS}{$testName}})
    {
      ($state1, $state2) = split(/,/, $diffStats->{TESTS}{$testName}{$testNum});
      if (($PFMap{$state1} ne $PFMap{$state2}) || $isDetailed)
      {
	print "$testName $testNum $diffStats->{TESTS}{$testName}{$testNum}\n";
      }
    }
  }
}


#----------------------------------------------------------------------
#
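# Print the failing tests (or every test with -d), followed by overall
# statistics for the run.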
sub PrintSummary($$)
{
  my($stats) = shift;
  my($isDetailed) = shift;

  my($testState);

  my($testName);
  my($testNum);
  foreach $testName (sort keys %{$stats->{TESTS}})
  {
    foreach $testNum (sort {$a <=> $b} keys %{$stats->{TESTS}{$testName}})
    {
      if ( ($PFMap{$stats->{TESTS}{$testName}{$testNum}} eq "FAIL")
	   || $isDetailed )
      {
	print "$testName $testNum $stats->{TESTS}{$testName}{$testNum}\n";
      }
    }
  }

  print "\n\n";
  print "Test system: $stats->{TEST_SYSTEM}\n";
  print "Test was run: $stats->{TEST_DATE} $stats->{TEST_TIME} \n";

  print "Total Tests Passed: $stats->{TOTAL_TESTS_PASSED}\n";
  print "Total Tests Failed: $stats->{TOTAL_TESTS_FAILED}\n";


  foreach $testState (sort keys %{$stats->{STATE_SUMMARY}})
  {
    print "$testState: $stats->{STATE_SUMMARY}{$testState}\n";
  }


}

#----------------------------------------------------------------------
# Main bit

my(%options);

getopts('dh', \%options);

if (exists($options{'h'}) || ($#ARGV!=0 && $#ARGV!=1))
{
  print STDERR <<"EOM"
Usage: $0 [-h] [-d] journal [journal2]
    -h Display Help
    -d Detailed Summary
    When one journal file is supplied a summary of the tests
    is output. When two journal files are supplied the difference
    between the two is shown.

EOM
    ;
    exit(0);
}

if ($#ARGV==1)
{
  my($stats1);
  my($stats2);
  my($diffStats);
  $stats1 = GatherStats($ARGV[0]);
  $stats2 = GatherStats($ARGV[1]);
  $diffStats = DiffJournals($stats1, $stats2);
  PrintDiffSummary($diffStats, exists($options{'d'}));
}
else
{
  my($stats);
  $stats = GatherStats($ARGV[0]);
  PrintSummary($stats, exists($options{'d'}));
}

