This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [testsuite] Run guality tests on Solaris
- From: Rainer Orth <ro at CeBiTec dot Uni-Bielefeld dot DE>
- To: Jeff Law <law at redhat dot com>
- Cc: Jakub Jelinek <jakub at redhat dot com>, gcc-patches at gcc dot gnu dot org, Alexandre Oliva <aoliva at redhat dot com>, Mike Stump <mikestump at comcast dot net>
- Date: Mon, 02 Feb 2015 14:18:32 +0100
- Subject: Re: [testsuite] Run guality tests on Solaris
- Authentication-results: sourceware.org; auth=none
- References: <yddlhknjc4e dot fsf at lokon dot CeBiTec dot Uni-Bielefeld dot DE> <54C949C7 dot 9080902 at redhat dot com> <20150128204635 dot GB1746 at tucnak dot redhat dot com> <yddk306lfe3 dot fsf at CeBiTec dot Uni-Bielefeld dot DE> <54CB2080 dot 7070106 at redhat dot com> <20150130081914 dot GW1746 at tucnak dot redhat dot com> <54CBDE72 dot 3040406 at redhat dot com>
Jeff Law <law@redhat.com> writes:
> On 01/30/15 01:19, Jakub Jelinek wrote:
>>
>> The biggest problem is that what fails and what does not varies between
>> targets and between optimization levels. Right now we have no way to xfail
>> test XYZ for -Os on x86_64-linux and for -O2 and -O3 on i686-linux (ia32),
>> and the lists would become very large. Some tests in guality are xfailed
>> just in case, even when they actually XPASS on many targets.
> I thought we added that kind of capability a while back. There's still
> significant potential for them to get unwieldy. The hope would be that
> we'd have a set for x86, x86_64, aarch64, etc., but not have to do anything
> special for the OS.
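For reference, the capability being discussed is expressed through DejaGnu
target selectors on xfail directives in the test source. A rough sketch of
what a per-target XFAIL could look like in a guality-style test follows; the
PR number "NNNNN" and the target triplet are placeholders, not real entries,
and only the directive syntax itself is standard gcc/testsuite usage:

```c
/* { dg-do run } */
/* { dg-options "-g" } */
/* Placeholder PR number and target triplet, for illustration only.  */
/* { dg-xfail-run-if "PR debug/NNNNN" { *-*-solaris2* } } */

volatile int keep;

int
main (void)
{
  int x = 36;
  keep = x;   /* keep x live so the debugger can inspect it */
  return 0;
}
```

Selectors can combine target triplets with effective-target keywords, but as
noted above there is no selector granularity for individual optimization
levels within a single torture run.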
I fear this won't suffice: it certainly will depend on the debug format
used, and even so there are differences between Linux/x86 and
Solaris/x86, both using ELF and DWARF (perhaps a DWARF-4 vs. DWARF-2
difference?). And Darwin/x86 with Mach-O will certainly differ again
(not currently noticeable since the guality tests are disabled there
wholesale).
>> The way to look for regressions in the guality area, at least as I do it
>> regularly, is just compare test_summary results.
>> If we'd disable this by default, I'm sure our debug quality would sink very
>> quickly.
> Yup. But it'd still be nicer if our test runs were cleaner.
Very true. I wonder how best to go forward with filing PRs for the
failures: one PR per failing test may be overkill, but grouping the
failures by common root cause would require lots of analysis.
Rainer
--
-----------------------------------------------------------------------------
Rainer Orth, Center for Biotechnology, Bielefeld University