GCC has its own testsuite that can be run after you have compiled GCC. The official documentation is available here: http://gcc.gnu.org/install/test.html

Basic steps

  1. Install the prerequisites: DejaGnu, Tcl, and Expect. They are packaged by most GNU/Linux distributions (see the example after this list).

  2. Run the testsuite

       cd objdir
       make -k check
  3. Analyze the results. Some tests may already fail or be unsupported even without your patch; you should only worry about new failures introduced by your patch. The simplest approach is to have two build directories: one with the regression testsuite results of a pristine copy of GCC (objdir_pristine) and another with the testsuite results of your patched copy (objdir_patched). Then call:

       gcc_src/contrib/compare_tests  objdir_pristine objdir_patched
  4. New failures need to be investigated and fixed. If you added new tests, make sure that they were run and passed. You can find detailed logs in the objdir_patched/*/testsuite/*/*.log files (for example, gcc/testsuite/gcc/gcc.log).
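
For step 1, the prerequisites can usually be installed from your distribution's package manager. For example (the package names below are an assumption and may vary between distributions):

sudo apt-get install dejagnu expect tcl    # Debian/Ubuntu (assumed package names)
sudo dnf install dejagnu expect tcl        # Fedora (assumed package names)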

Increase the verbosity

Because the testsuite tends to be long-running, you may wish to increase the verbosity level so that you get feedback as to its progress. You can do this by adding "-v" to RUNTESTFLAGS. Each "-v" added increases the verbosity level by one, so specify it multiple times if you would like more output. Example:

make -k check-gcc RUNTESTFLAGS="-v -v"

This can be combined with the other RUNTESTFLAGS options mentioned in the official documentation. Example:

make -k check-gcc RUNTESTFLAGS="compile.exp=2004* -v -v"

Testing a different target from the current host

In some circumstances it is possible to run tests locally for a target different from the current host (for example, darwin8 target tests on a darwin9 host system). To achieve this, you might need to provide a 'sysroot' pointing at the libraries and headers for the target.

This can be accomplished by a command like this:

make -k check-gcc RUNTESTFLAGS="CFLAGS_FOR_TARGET=--sysroot=/path/to/target/root --target_board=unix/-other/-options"

You might wish to use:

CFLAGS_FOR_TARGET='$CFLAGS_FOR_TARGET --sysroot=/path/to/target/root'

if CFLAGS_FOR_TARGET is already set for your test case(s).
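
Substituting that into the command above gives something like the following sketch (the sysroot path and board options are placeholders; you may need to adjust the quoting so that the existing value of CFLAGS_FOR_TARGET is expanded as intended):

make -k check-gcc RUNTESTFLAGS="CFLAGS_FOR_TARGET='$CFLAGS_FOR_TARGET --sysroot=/path/to/target/root' --target_board=unix/-other/-options"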

Testing with a simulator

For running the testsuite on a simulator (useful for cross targets), see http://gcc.gnu.org/simtest-howto.html and the related page Building_Cross_Toolchains_with_gcc, which has some useful links.

Interpretation of testsuite results

Normal testsuite results usually contain a few FAILs (unexpected failures), which can make it hard to determine whether the changes being tested actually broke something. There are several ways to deal with this situation:

Using validate_failures.py

The script <src>/contrib/testsuite-management/validate_failures.py can be used to maintain a list of known/expected failures outside of DejaGNU. This is useful when working in a branch with relatively stable failures, which you have determined to be "ignorable".

Since modifying DejaGnu test files to mark XFAILs is not always trivial, validate_failures.py offers a lightweight approach that can support lists for multiple targets.

The idea is to create a manifest file that contains the FAIL, XPASS, and UNRESOLVED lines from the output of make check. You can cut and paste that output into the manifest file (or generate it with the --manifest option described below) and then use validate_failures.py to decide whether the failures in a new test run are ignorable or not.
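
A manifest is simply a plain-text file of such result lines, one per test; for example (the test names here are purely illustrative):

FAIL: gcc.dg/example-1.c (test for excess errors)
XPASS: gcc.dg/example-2.c scan-assembler-not call
UNRESOLVED: gcc.dg/example-3.c compilation failed to produce executable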

$ <src>/contrib/testsuite-management/validate_failures.py --help
Usage: This script provides a coarser XFAILing mechanism that requires no
detailed DejaGNU markings.  This is useful in a variety of scenarios:

- Development branches with many known failures waiting to be fixed.
- Release branches with known failures that are not considered
  important for the particular release criteria used in that branch.

The script must be executed from the toplevel build directory.  When
executed it will:

1- Determine the target built: TARGET
2- Determine the source directory: SRCDIR
3- Look for a failure manifest file in
   <SRCDIR>/contrib/testsuite-management/<TARGET>.xfail
4- Collect all the <tool>.sum files from the build tree.
5- Produce a report stating:
   a- Failures expected in the manifest but not present in the build.
   b- Failures in the build not expected in the manifest.
6- If all the build failures are expected in the manifest, it exits
   with exit code 0.  Otherwise, it exits with error code 1.


Options:
  -h, --help            show this help message and exit
  --build_dir=BUILD_DIR
                        Build directory to check (default = .)
  --manifest            Produce the manifest for the current build (default =
                        False)
  --force               When used with --manifest, it will overwrite an
                        existing manifest file (default = False)
  --verbosity=VERBOSITY
                        Verbosity level (default = 0)
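
For example, a typical session might look roughly like this (run from the toplevel build directory; <src> stands for the GCC source directory as above):

cd objdir
make -k check
# Generate a manifest of the current (expected) failures;
# --force overwrites an existing manifest file:
<src>/contrib/testsuite-management/validate_failures.py --manifest
# ... apply your patch, rebuild, and re-run the testsuite ...
make -k check
# Check the new results against the manifest: exits with code 0 if
# only expected failures are present, 1 if there are new failures:
<src>/contrib/testsuite-management/validate_failures.py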

Documentation on writing testcases

See How to prepare a testcase

Running the testsuite on Cygwin

The testsuite does run on Cygwin, albeit slowly, and it is likely to hit a bug in Cygwin's process information management. To avoid that, anyone who wants to test on Cygwin is advised to build the Cygwin DLL manually with this patch applied.

Compile time and memory utilization testing

This is not necessary in most cases, but it may be crucial if your changes could have a significant impact on compile time or memory use. In any case, the more testing, the better: Compile time and memory utilization testing
