


Re: [PATCH] PR C/12466 -Wold-style-definition incorrectly warns with ellipsises


On Wed, 1 Oct 2003, Kaveh R. Ghazi wrote:
>  > From: Hans-Peter Nilsson <hp@bitrange.com>
>  > >  > *Never add stuff to a testcase, particularly not new tests*.

By that I meant "Never add stuff to an existing file,
particularly do not add *new tests* in existing files".
I thought it'd appear the same, but I guess it didn't.

>  > On Wed, 1 Oct 2003, Kaveh R. Ghazi wrote:
>  > > I think we should make a distinction between regression testcases that
>  > > (used to) crash the compiler and feature testcases that ensure a
>  > > feature, flag or optimization/transformation works properly.
>  > Not as long as they look the same to the observer of test-suite
>  > results, and not as long as the modified test-suite would fail
>  > for an "unenhanced" compiler, where it used to succeed.

> You haven't given a reason for your assertion.  Presumably the
> enhanced compiler and the enhanced testcase are checked in at the same
> time.  So the observer of the testsuite would see no new failures.

Hum, well, I was thinking along the lines of comparing the
test-results of an installed compiler with those of a new
version, without having to worry about which combinations of
compiler and test are supposed to be valid.  I recall
discussions of checking against installed versions of gcc, so
it seems some people do run tests this way.

This apparently doesn't satisfy your need for a rationale, but
I think it's simpler to follow the same rules for bugs as for
features *when you can*.  For one thing, it's too easy to
mistake test-cases for bugs in features (whatever "features"
means) for tests of new features.

>  > > In the latter case, as we modify or enhance the feature in question we
>  > > should update the existing testcase to reflect the new reality.
>  >
>  > When you modify the feature, I agree you need to modify the
>  > original test-cases.  But when you enhance the feature you *add
>  > new test-cases* for the enhancement.
>
> Again, no rationale is given for your position.

The test gcc.c-torture/compile/simd-3.c fails on cris-axis-elf.
I haven't investigated it, but it's there, and I can keep an eye
on it to see that the failure stays confined to that test, and I
can compare the failure with other toolchain ports where I have
access to hardware or a simulator.  Hum, it's in c-torture.  An
ICE test?  A "feature error" or just a bug; a bug in the front
end or the back end?  Judging by history, the simd tests seem to
have been added as feature tests.  So the categories aren't
really separate, methinks, and that's ok with me, as long as
each test stays separate from new tests and is placed according
to its original intent.  If people were to add tests to that
supposed feature-test file, I couldn't tell whether a failure
there was new.  I might also not be able to tell when an old
failure is fixed, since new ones would show up as the same
single FAIL.

> Adding new test-cases is orthogonal to adding new files when
> considered by itself.

I don't see how to "consider it by itself" usefully; the two are
too closely related to be separated, except in special cases
like error tests in dg-tests.  So the effect is usually not
orthogonal: new test-cases in the same file look like the same
failure, while they show up separately if put in separate files.
That's it.
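
For what it's worth, here's a rough sketch of why error tests
under dg are the special case (the file is invented for
illustration, not an actual test in the tree): each dg-error
directive is matched and reported on its own result line, so
several small checks can share a file without collapsing into
one FAIL.

  /* Hypothetical dg-style error test; each dg-error directive
     below produces its own PASS/FAIL line in gcc.sum, so the
     two checks stay distinguishable even though they share a
     file.  */
  /* { dg-do compile } */

  int bad1 (void) { return x; }  /* { dg-error "undeclared" } */
  void bad2 (void) { 1 = 2; }    /* { dg-error "lvalue" } */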

> We keep ICE testcases separate

In what way?  By the directory and/or filename, you mean?
That's just a clue to the original intent of the test.  Today's
feature test is tomorrow's regression test.  Whether it ICEs or
not isn't that important.

> because they
> test for individual bugs/regressions and we wish to isolate bugs, plus
> bugs can be very platform specific.  So a testcase with N bugs in it
> may pass on one box but fail on several others all for different
> reasons and different bugs.

That's a reason to keep them separate, so you don't have M tests
that trigger N bugs and show up as a single FAIL result.  The
more separate they are, the better you can track them.

> However feature tests (like for warnings) usually either just work or
> they don't.  If the feature's tests logically belong together, it
> serves no purpose other than to add bloat to put them in separate
> files.

Bug tests and feature tests aren't really separate enough to
support that kind of distinction, except right at the start.
Both usually either PASS or FAIL; the reason for a FAIL isn't
apparent without investigation.  Besides, after some time a
feature test may FAIL because of rot in the machinery it uses,
and then the feature test suddenly appears to be an ICE test.
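
To make that concrete, here's roughly what a feature test for
the warning in the subject line might look like (the directives
and contents are my assumptions, not the committed testcase for
PR c/12466); if it FAILs a year from now, the result line alone
won't tell you whether the warning logic regressed or the front
end now ICEs on the input.

  /* Sketch of a -Wold-style-definition feature test,
     hypothetical, not the actual testcase for PR c/12466.  */
  /* { dg-do compile } */
  /* { dg-options "-Wold-style-definition" } */

  void f (int a, ...) { }  /* Prototyped definition with an
                              ellipsis; should not warn.  */
  void g (a) int a; { }    /* { dg-warning "old-style" } */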

I recall old tests for nested functions that are excellent base
tests for checking that you don't introduce bad stuff when you
fiddle with e.g. the nonlocal goto machinery for a port.  I
can't really classify them as either regression tests or
feature tests.  Some originate as ICE tests, some as feature
tests.
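
For the record, here's the flavor of test I mean (a sketch I
wrote for this mail, not one of the actual old tests): a nested
function doing a nonlocal goto back into its containing
function, which exercises exactly the machinery a port
maintainer can break.

  /* Sketch of a nested-function / nonlocal-goto test (my own
     example, not from the testsuite): check () jumps back to a
     label in its containing function.  */
  extern void abort (void);
  extern void exit (int);

  static int
  f (int x)
  {
    __label__ failure;
    void check (int v) { if (v != 42) goto failure; }
    check (x);
    return 0;
   failure:
    return 1;
  }

  int
  main (void)
  {
    if (f (42) != 0 || f (13) != 1)
      abort ();
    exit (0);
  }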

No simd test other than gcc.c-torture/compile/simd-3.c fails on
cris-axis-elf.  I'm happy they're kept so separate.

I really don't think bloat of the test-suite is an issue
anywhere.  It shouldn't trump rules of simplicity and separation
as long as you don't bloat it with resource hogs like
gcc.c-torture/compile/20001226-1.c.

I must say I didn't expect such strong disagreement, considering
that you say you generally agree with the philosophy of "never
adding stuff to a testcase^Wtestfile, particularly not (adding)
new tests".  Oh well, much ado...  I'm rambling, so I'd better
send this before every single paragraph says the same thing...

brgds, H-P

