This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH 01/15] Selftest framework (unittests v4)


On Tue, Nov 24, 2015 at 01:44:34PM -0700, Jeff Law wrote:
> On 11/19/2015 11:44 AM, Bernd Schmidt wrote:
> >On 11/19/2015 07:08 PM, David Malcolm wrote:
> >>gcc_assert terminates the process and no further testing is done,
> >>whereas the approach the kit takes is to run as much of the testsuite
> >>as possible, and then fail if any errors occurred.
> >
> >Yeah, but let's say someone is working on bitmaps and one of the bitmap
> >tests fails, it's somewhat unlikely that cfg will also fail (or if it
> >does, it'll be a consequence of the earlier failure). You debug the
> >issue, fix it, and run cc1 -fself-test again to see if that sorted it out.
> >
> >As I said, it's a matter of taste and style and I won't claim that my
> >way is necessarily the right one, but I do want to see if others feel
> >the same.
> I was originally going to say that immediate abort would be the preferred
> method of operation, but as I thought more about it....
In general I really dislike over-engineering, and I kind of agree that running
all the tests is that.  However, looking at all the halfway decent test
systems I've dealt with, I think they all supported it, so my guess is it's
likely we'd end up adding this some day.  Combining that with it not being
too terrible to support, it seems kind of harmless to build it in.
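
(To be concrete about the "not too terrible" part: the difference is roughly a
failure counter and a non-fatal check instead of gcc_assert.  A minimal sketch,
with made-up names rather than whatever the kit actually uses:)

  #include <cstdio>
  #include <cstdlib>

  /* Hypothetical sketch: record failures and keep going, then fail the
     whole run at the end instead of aborting on the first problem.  */
  static int selftest_failures;

  #define SELFTEST_CHECK(COND)                          \
    do {                                                \
      if (!(COND))                                      \
        {                                               \
          fprintf (stderr, "%s:%d: FAIL: %s\n",         \
                   __FILE__, __LINE__, #COND);          \
          selftest_failures++;                          \
        }                                               \
    } while (0)

  /* Called once after every test has run.  */
  static void
  finish_selftests (void)
  {
    if (selftest_failures != 0)
      {
        fprintf (stderr, "%d selftest failure(s)\n", selftest_failures);
        exit (1);
      }
  }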

> I think this really is a question of how the tests are likely used.  I kind
> of expect that most of the time they'll be used as part of an early sanity
> test.
> 
> So to continue with the bitmap botch causing a CFG failure, presumably the
> developer was mucking around in the bitmap code already and when they see
> the CFG test failure, they're going to suspect they've mucked up the bitmap
> code in some way.
> 
> The first question should then be did the bitmap tests pass or fail and if
> they passed, then those tests clearly need extending :-)
> 
> >
> >>The patch kit does use a lot of "magic" via macros and C++.
> >>
> >>Taking registration/discovery/running in completely the other direction,
> >>another approach could be a completely manual approach, with something
> >>like this in toplev.c:
> >>
> >>   bitmap_selftest ();
> >>   et_forest_selftest ();
> >>   /* etc */
> >>   vec_selftest ();
> >>
> >>This has the advantage of being explicit, and the disadvantage of
> >>requiring a bit more typing.
> The one advantage of explicit registration I see is the ability to order the
> tests so that the lowest level data structures are tested first, moving to
> increasingly more complex stuff.
> 
> But if we're in a mode of run everything, then ordering won't be that
> important.
> 
> In the end I think I lean towards run everything with automatic
> registration/discovery.  But I still have state worries.  Or to put it
> another way, given a set of tests, we should be able to run them in an
> arbitrary order with no changes in the expected output or pass/fail results.

I haven't looked at the details of the auto-registration, but assuming it's
more or less the standard sort of thing, I'd think we should be able to
support randomizing the test order pretty easily.  If we can easily run
the tests in a couple hundred random orders every month, I'd think the
odds of inter-test dependencies are fairly low.
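
Roughly the shape I have in mind (all names below are made up, not the kit's
actual API): each test registers itself through a file-scope object, and the
runner can shuffle the collected tests with a user-supplied seed before
running them.

  #include <algorithm>
  #include <cstdlib>
  #include <vector>

  /* Hypothetical sketch of automatic registration plus randomized order.  */
  typedef void (*selftest_fn) (void);

  static std::vector<selftest_fn> &
  selftest_registry ()
  {
    static std::vector<selftest_fn> tests;
    return tests;
  }

  /* A file-scope instance of this in each source file adds its test,
     e.g.  static selftest_registrar bitmap_reg (bitmap_selftest);  */
  struct selftest_registrar
  {
    selftest_registrar (selftest_fn fn) { selftest_registry ().push_back (fn); }
  };

  /* Run everything; a nonzero seed shuffles the order first, so running
     with a couple hundred different seeds should flush out ordering bugs.  */
  static void
  run_all_selftests (unsigned seed)
  {
    std::vector<selftest_fn> tests = selftest_registry ();
    if (seed != 0)
      {
        /* Simple seeded Fisher-Yates shuffle.  */
        srand (seed);
        for (size_t i = tests.size (); i > 1; i--)
          std::swap (tests[i - 1], tests[rand () % i]);
      }
    for (size_t i = 0; i < tests.size (); i++)
      tests[i] ();
  }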

Trev

> 
> jeff
> 

