This is the mail archive of the gcc@gcc.gnu.org
mailing list for the GCC project.
Re: QMTest and the G++ testsuite
- From: Mark Mitchell <mark at codesourcery dot com>
- To: "Joseph S. Myers" <jsm28 at cam dot ac dot uk>
- Cc: "gcc at gcc dot gnu dot org" <gcc at gcc dot gnu dot org>
- Date: Mon, 20 May 2002 09:33:17 -0700
- Subject: Re: QMTest and the G++ testsuite
--On Monday, May 20, 2002 04:52:24 PM +0100 "Joseph S. Myers" wrote:
> On Mon, 20 May 2002, Mark Mitchell wrote:
>> 2. Support for direct comparisons between test results. QMTest can
>> easily answer the question "Did I break anything relative to the
>> results I have from yesterday?"
> How? This doesn't seem to be covered in the README. What's the
> equivalent of running diff against saved .sum files?
You do "qmtest run -O <previous results>". We don't have a direct
diff utility yet, although that's something we'd like to add. The
"make qmtest-g++" target currently simulates the DejaGNU methodology
(taking XFAILs from the tests themselves) because I didn't want to
change the paradigm *and* the tool at once.
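The diff utility mentioned above could be approximated by hand. Here is a
minimal sketch of the idea (my own illustration, not QMTest's actual API or
result-file format): treat each run as a mapping from test name to outcome
and report what changed.

```python
# Hypothetical sketch: compare two test runs, each represented as a
# mapping from test name to outcome string. This illustrates the
# "did I break anything relative to yesterday?" comparison; it is
# not how QMTest stores or reads its results.

def diff_results(previous, current):
    """Return (regressions, progressions) between two runs."""
    regressions = {}   # passed before, does not pass now
    progressions = {}  # did not pass before, passes now
    for name, outcome in current.items():
        prev = previous.get(name, "UNTESTED")
        if prev == "PASS" and outcome != "PASS":
            regressions[name] = (prev, outcome)
        elif prev != "PASS" and outcome == "PASS":
            progressions[name] = (prev, outcome)
    return regressions, progressions

yesterday = {"foo1.C": "PASS", "foo2.C": "FAIL"}
today     = {"foo1.C": "FAIL", "foo2.C": "PASS"}
regs, progs = diff_results(yesterday, today)
```

With the sample data, foo1.C shows up as a regression and foo2.C as a
progression, which is the same question "diff against saved .sum files"
answers for DejaGNU.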
>> I've attached the file README.QMTEST, now present in the "testsuite"
>> directory, that explains how to try out "make qmtest-g++" instead of
>> "make check-g++".
> When this is nonexperimental (i.e., when QMTest is the recommended or only
> way to run tests), could the docs please go in the internals manual
> (sourcebuild.texi) rather than having yet more miscellaneous READMEs about
> the place?
> Should people working on testsuites intended to go in GCC at some later
> date (e.g. ACATS, the GNU Pascal testsuite) now work towards QMTest
> harnesses for them, rather than DejaGnu ones?
That's for the community to decide.
I think before we make a policy decision we need to get some collective
experience with QMTest. I think the consensus will eventually be that
it is better than DejaGNU for our purposes, and so QMTest will be the
right answer -- but clearly we are not there yet.
>> With QMTest, each source file is considered a single test. If any
>> of the seven sub-tests fail, the entire test is considered to fail.
>> However, QMTest does present information about *why* the test
>> failed, so the same information is effectively available.
>> It is true that, therefore, causing an already failing test to "fail
>> more" is not immediately detectable through additional unexpected
>> failure messages when using QMTest. On the other hand, most people
>> seem to think of each source file as "a test", not "twelve tests",
>> so the model QMTest uses may be more natural.
> How does this work where a single file contains both tests expected to
> pass and those expected to fail? (That is, in such a case, what results
> does QMTest give if all tests are as expected, or if a test meant to work
> is broken, or if a test expected to fail starts passing?)
Each test has an outcome: PASS, FAIL, ERROR, or UNTESTED.
Each test can also have an expected outcome.
In your scenario, the test outcome would be FAIL. (At least one of the
component tests fails.) The expected outcome would also be FAIL -- you
expected one of the component tests to fail. So, the result would be
XFAIL -- even if a different set of failures occurred. If all the
component tests passed you would instead get an XPASS.
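In code, the classification described above amounts to something like the
following (a sketch of the model for illustration, not QMTest's actual
implementation): combine the actual outcome with the expected one.

```python
# Sketch of the outcome model: an actual outcome (PASS, FAIL, ERROR,
# UNTESTED) is compared against an expected outcome. This is my own
# illustration of the rules described in the text, not QMTest source.

def classify(outcome, expected="PASS"):
    """Label a test result relative to its expected outcome."""
    if outcome == "FAIL" and expected == "FAIL":
        return "XFAIL"       # expected failure: still failing
    if outcome == "PASS" and expected == "FAIL":
        return "XPASS"       # unexpected pass
    if outcome == expected:
        return "expected"    # result matches expectation
    return "unexpected"      # anything else is a surprise
```

Note that classify("FAIL", "FAIL") yields XFAIL regardless of *which*
component failed, which is exactly the "fail more" blind spot discussed
above.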
There's a philosophical skew between DejaGNU and QMTest on this point;
QMTest thinks a single test case should test a single thing, whereas
with DejaGNU you can test three things in a single test case. There's
no easy way around this skew -- but it's also not clear to me that in
practice this would actually matter for us. We already think in QMTest
terms; mostly people say "foo1.C fails", not "the third little
testlet in foo1.C fails".
We could always rewrite the testcase as three testcases, if we care that
much about each of the sub-results.
Mark Mitchell email@example.com
CodeSourcery, LLC http://www.codesourcery.com