This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: a quick few questions


>>>>> ">" == Tom Lord <lord@emf.net> writes:

>> We have some questions about the automated overnight testing.
>> 1) Why is it only overnight?   Why not more frequently?

AFAIK each tester runs independently on its own schedule.  I believe
Geoff's is continuous, for instance.

>> 2) Is it easy to speed up the testing with more hw and/or
>>    with infrastructure additions (e.g., using distcc)?
>>    Or, is it generally believed that they run about as fast
>>    as they could be expected to?    If they can be sped 
>>    up, by how much?

Bootstrap can be sped up in a few ways.  You can trade memory for time
with --enable-libgcj-multifile.  You can use --disable-static (probably
only wise if you're only interested in java, though).  My impression is
that the distcc/ccache changes got stuck, but I don't recall.  (They
wouldn't help with libgcj anyway, which is currently a major time sink
in the build.)
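
Purely as an illustration, a configure invocation along these lines is
what I have in mind (the multifile switch lives in libjava's configury;
double-check the exact spelling against the install docs):

    # Separate object directory; trade memory for compile time in
    # libjava and skip building the static libraries.
    mkdir obj && cd obj
    ../gcc/configure --enable-languages=c,c++,java \
        --enable-libgcj-multifile --disable-static
    make bootstrap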

Testing can be sped up a bit with additional hardware.  I think you
can already run the target library test suites in parallel, and you
can run multilib tests in parallel.
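
Roughly, assuming GNU make and enough CPUs and RAM (adjust -j to
taste):

    # -j lets make run the various target library "check" targets
    # (libstdc++, libjava, ...) side by side; -k keeps going past
    # failures so you still get a complete set of .sum files.
    make -k -j4 check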

What you can't do currently is run parts of a single test suite (e.g.,
gcc's) in parallel.  Ben Elliston has been talking about fixing this,
though.
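
The closest you can get today is to narrow a run to particular .exp
files by hand, e.g. (the .exp name below is just an example):

    # Run only the chosen part of the gcc test suite; it still
    # runs serially inside a single runtest process.
    make check-gcc RUNTESTFLAGS="compile.exp"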

However...

>>    If it could be sped up considerably, even if only on
>>    one or two platforms, then one idea is to make a subset
>>    of the testing an _enforced_ part of the commit process,
>>    with commits that would cause regressions being rejected.

There are situations where you don't want this, e.g. comment fixes,
indentation cleanups, documentation changes.  For these a plain build
usually suffices.  For a fix to libgcj, it doesn't make sense to do a
full test run, since any regressions will necessarily be confined to
the libgcj test suite; I assume the same is true for libstdc++.

This matters because it would mean increasing the cost of making some
sorts of changes.

Also, I think implementing the "0 FAIL" rule would be a necessary
prerequisite to doing something like this.  Otherwise everyone has to
agree on a test baseline for commits and keep updating that baseline
as results change, not to mention that test results vary by target.
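
For the baseline comparison itself the tree already has a helper;
something like the following, assuming you kept the .sum file from a
known-good run and that the paths match your object directory layout:

    # Report new FAILs (and vanished PASSes) relative to a saved
    # baseline run.
    contrib/compare_tests baseline/gcc.sum objdir/gcc/testsuite/gcc.sum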

>>    A variation on this is to continue to allow commits without
>>    enforced testing, but then have an auto-created branch from
>>    that that lags by some amount of time but contains only
>>    revisions that certainly pass certain tests.

Or the monotone idea, which is to have the auto-testers tag revisions
as "known good" according to criteria they determine.  It is useful to
have multiple levels of goodness, e.g. "bootstrap on x86", "bootstrap
on <exotic platform>", "no C regressions on x86-x-mips", "no java
regressions", etc., so that users can pick for themselves what they
want to check out.
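
As a sketch of the tester's side of that (tag-revision is a made-up
command standing in for whatever cert/tag mechanism the version
control system provides, and I'm assuming compare_tests signals
regressions through its exit status):

    #!/bin/sh
    # Hypothetical auto-tester: attach one "goodness" tag per
    # criterion the revision actually met.
    rev=$1

    make bootstrap || exit 1
    tag-revision "$rev" bootstrap-x86-linux

    # Plain FAILs don't make "make check" exit nonzero, so the
    # stricter tag comes from comparing .sum files to a baseline.
    make -k check
    if contrib/compare_tests baseline/gcc.sum gcc/testsuite/gcc.sum
    then
        tag-revision "$rev" no-gcc-regressions-x86-linux
    fi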

For instance, libgcj hackers ordinarily want to check out something
that is "pretty good" -- can bootstrap -- but they may not care
whether there has been some random mauve regression.  Occasional
hackers might want something similar.  Regular core gcc hackers may
prefer to be closer to the bleeding edge.

If the testers run frequently enough, you could even ask for "version
with no fails on x86 linux and which bootstraps on ppc linux", though
I suspect the commit rate is high enough that it would be unusual for
two testers to pick the same revision to test.  (Though of course one
could purposely set up multiple testers in an organization that
synchronize in just this way...)
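
Just to illustrate the selection step, assuming each tester publishes
a plain list of the revision ids it has blessed (one id per line; the
file names here are invented):

    # Revisions blessed by both testers, assuming neither list
    # repeats an id; pick the newest of these to check out.
    sort no-fails-x86-linux.list bootstrap-ppc-linux.list | uniq -d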

Tom

