This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH] Delete powerpcspe


On Thu, Dec 13, 2018 at 5:49 PM Jeff Law <law@redhat.com> wrote:
>
> On 12/12/18 10:33 AM, Segher Boessenkool wrote:
> > On Wed, Dec 12, 2018 at 11:36:29AM +0100, Richard Biener wrote:
> >> On Tue, Dec 11, 2018 at 2:37 PM Jeff Law <law@redhat.com> wrote:
> >>> One way to deal with these problems is to create a fake simulator that
> >>> always returns success.  That's what my tester does for the embedded
> >>> targets.  That allows us to do reliable compile-time tests as well as
> >>> the various scan-whatever tests.
> >>>
> >>> It would be trivial to start sending those results to gcc-testresults.
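The fake-simulator idea above can be sketched as a trivial wrapper; the script name and how it is wired into a board file are illustrative assumptions, not Jeff's actual setup:

```shell
#!/bin/sh
# fake-sim: a stand-in "simulator" that ignores the test binary and
# all of its arguments and always reports success.  Execution tests
# then "pass" whenever they compile and link, which is what makes
# compile-time and scan-* tests reliable on targets with no hardware
# and no working simulator.
exit 0
```

A dejagnu board for such a target would then point its simulator entry at this script instead of a real `run` binary.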
> >>
> >> I think it would be more useful if the execute testing would be
> >> reported as UNSUPPORTED rather than simply PASS w/o being
> >> sure it does.
> >
> > Yes.
> Yes, but I don't think we've got a reasonable way to do that in the
> existing dejagnu framework.
>
>
> >
> >> But while posting to gcc-testresults is a sign of testing tracking
> >> regressions (and progressions!) in bugzilla and caring for those
> >> bugs is far more important...
> >
> > If results are posted to gcc-testresults then other people can get a
> > feel whether the port is deteriorating, and at what rate.  If no results
> > are posted we just have to assume the worst.  Most people do not have
> > the time (or setup) to test it for themselves.
> Yup.  I wish I had the time to extract more of the data the tester is
> gathering and produce this kind of info.
>
> I have not made it a priority to try and address all the issues I've
> seen in the tester.  We have some ports that are incredibly flaky
> (epiphany for example), and many that have a lot of failures, but are
> stable in their set of failures.
>
> My goal to date has mostly been to identify regressions.  I'm not even
> able to keep up with that.  For example s390/s390x have been failing for
> about a week with their kernel builds.  sparc, i686, aarch64 are
> consistently tripping over regressions.  ia64 hasn't worked since we put
> in qsort consistency checking, etc etc.

Yeah :/

I wonder if we could set up auto-(simulator)-testing for all supported
archs (and build testing for all supported configs) on the CF
(with the required scripting in contrib/ so it's easy to replicate).  I'd
test only released snapshots, to keep the load reasonable,
and besides posting to gcc-testresults also post test-result
differences to gcc-regression?

That said, can we document how to simulator-test $target in
a structured way somewhere?  Either by means of (a) script(s)
in contrib/ or by simple documentation in a new gcc/testing.texi
or on the wiki?

You at least seem to have some sort of scripting for some targets?
Esp. having target boards and simulator configs would be nice
(and pointers where to look for simulators).
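For reference, a minimal simulator board file in dejagnu's usual style looks roughly like this; it is modeled on the stock arm-sim baseboard shipped with dejagnu, and the exact entries and values are illustrative:

```tcl
# arm-sim.exp: dejagnu baseboard for testing via the GNU simulator.
# Selected with: make check RUNTESTFLAGS="--target_board=arm-sim"
load_generic_config "sim"     ;# generic simulator plumbing
process_multilib_options ""
setup_sim arm                 ;# locate the simulator "run" binary
set_board_info gcc,stack_size 16384
set_board_info gcc,no_trampolines 1
```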

Richard.

> Jeff

