This is the mail archive of the
gcc@gcc.gnu.org
mailing list for the GCC project.
Re: Small update to reversed_comparison_code
- To: mrs at windriver dot com
- Subject: Re: Small update to reversed_comparison_code
- From: kenner at vlsi1 dot ultra dot nyu dot edu (Richard Kenner)
- Date: Tue, 13 Mar 01 22:20:13 EST
- Cc: gcc at gcc dot gnu dot org
So, in your case above, I would argue that your statement of what the
testing system is saying is wrong: it doesn't say that there is
something wrong with this patch. Rather, it says there is a
regression; it now fails when your patch is in the tree, and that much
is true. The fix _could_ be to buy a better machine, it could be to
improve the testcase, it could be to fix the testing infrastructure,
or it could be to fix the patch. The testing system cannot tell you
which, nor can one infer an answer from the testing system.
Exactly. And that's the risk with all sorts of tests. You see this
quite commonly with medical tests, for example. There is an argument
that says that if it's hard to interpret the result of a test, the
results are likely to be *misinterpreted* and that can be worse than
not doing the test at all.
I happen to think that should one add a warning to gcc that causes 100
testcases in the testsuite to fail, they should _not_ check the work
in without also fixing the test framework or the testcases at the same
time.
Clearly. I don't think anybody will dispute that.
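As a sketch of the kind of testcase fix meant here (assuming the
DejaGnu-style dg directives the GCC testsuite uses), a testcase that
starts tripping a newly added warning can be updated to expect that
warning rather than being left to FAIL; the function below is a
hypothetical example, not one from the actual testsuite:

```c
/* Hypothetical testcase: if a new warning (here, -Wparentheses via
   -Wall) starts firing on the assignment below, the test would begin
   to FAIL.  Annotating the offending line with a dg-warning directive
   records the diagnostic as expected, so the testsuite passes again
   without suppressing the warning itself.  */
/* { dg-do compile } */
/* { dg-options "-Wall" } */

int f(int x)
{
    int y;
    if (y = x)          /* { dg-warning "suggest parentheses" } */
        return 1;
    return 0;
}
```

The dg directives live in ordinary C comments, so the file still
compiles as plain C; only the DejaGnu harness interprets them.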