This is the mail archive of the mailing list for the GCC project.
Re: GCC Buildbot
On 21/09/17 14:18, Christophe Lyon wrote:
>> If this is something of interest, then we will need to understand what
>> is required, among those:
>> - which machines we can use as workers: we certainly need more worker
>> (previously known as slave) machines to test GCC in different
>> configurations.
> To cover various archs, it may be more practical to build cross-compilers,
> using "cheap" x86_64 builders, and relying on qemu or other simulators
> to run the tests. I don't think the GCC compile farm can offer powerful
> enough machines for all the archs we want to test.
Interesting suggestion. I haven't had the opportunity to look at the
compile farm. However, it could be interesting to have a mix of workers:
native compile-farm machines plus some x86_64 ones doing cross compilation
and running the tests under qemu.
> It's not as good as using native hardware, but this is often faster.
> And it does not prevent us from using native hardware for weekly
> bootstraps, for instance.
>> - what kind of build configurations do we need and what they should do:
>> for example, do we want to build gcc standalone against system (the one
>> installed in the worker) binutils, glibc, etc or do we want a builder to
>> bootstrap everything?
> Using the system tools is OK for native builders, maybe not when building
> cross toolchains. I think it's way safer to stick to given binutils/glibc/newlib versions
> and monitor only gcc changes. There are already frequent regressions,
> and it's easier to be sure they are related to gcc changes only.
> And have a mechanism to upgrade such components after checking
> the impact on the gcc testsuite.
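Pinning the other components while tracking only gcc could be as simple as checking out fixed release tags for the prerequisites. The tags below are hypothetical known-good choices, not recommendations:

```shell
# Illustrative: pin prerequisite components at fixed releases so that
# any new regression can only come from gcc changes.
BINUTILS_TAG=binutils-2_29   # hypothetical known-good tag
GLIBC_TAG=glibc-2.26         # hypothetical known-good tag

git clone --branch "$BINUTILS_TAG" --depth 1 \
    git://sourceware.org/git/binutils-gdb.git
git clone --branch "$GLIBC_TAG" --depth 1 \
    git://sourceware.org/git/glibc.git

# Only gcc tracks the moving branch; upgrading a pinned component
# becomes an explicit, separately-tested change.
git clone git://gcc.gnu.org/git/gcc.git
```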
> In Linaro we have a job tracking all master branches, it is almost
> always red :(
Oh, that's surprising; I would have hoped that all the master branches
would work most of the time. Do you know if there's a specific reason
for this?
>> - Currently we have a force build which allows people to force a build
>> on the worker. This requires no authentication and can certainly be
>> abused. We can add some sort of authentication, for example only
>> allowing users with a gcc.gnu.org email. For now, it's not a problem.
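If we do want to gate the force-build endpoint, Buildbot's authorization layer can restrict it to a role derived from the authenticated user's email domain, which matches the gcc.gnu.org idea above. A sketch against the Buildbot 0.9+ plugin API (the role name and the wiring into `c['www']` would need adjusting to the actual master configuration):

```python
# Sketch: restrict "force build" to users authenticated with a
# gcc.gnu.org address. Assumes Buildbot 0.9+; role name is arbitrary.
from buildbot.plugins import util

authz = util.Authz(
    allowRules=[
        # Only users holding the "developers" role may force builds.
        util.ForceBuildEndpointMatcher(role="developers"),
    ],
    roleMatchers=[
        # Grant that role to anyone whose account is in the gcc.gnu.org domain.
        util.RolesFromDomain(developers=["gcc.gnu.org"]),
    ],
)
# c['www']['authz'] = authz  # wired into the master configuration
```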
>> - We are building gcc for C, C++, and ObjC (which is the default). Shall we
>> add more languages to the mix?
>> - the gdb buildbot has a feature I have disabled (the TRY scheduler)
>> which allows people to submit patches to the buildbot; the buildbot
>> applies them to the current svn revision, then builds and tests the
>> result. Would we want something like this?
> I think this is very useful.
> We have something like that both at Linaro and ST.
> On a few occasions, I did manually submit other people's patches
> for testing after they submitted them to gcc-patches@. It always
> caught a few problems in some less common configurations.
I wonder how feasible it would be to automatically extract the patches,
run the testing, and post the results back to the patch's thread... just
something that occurred to me. I haven't investigated it yet.
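Extracting a patch from list mail is mostly mechanical for the simple case. A first cut could pull an inline unified diff out of a message with the stdlib email parser; the sample message below is made up, and real submissions (attachments, `git format-patch` series) would need more handling:

```python
# Minimal sketch: pull an inline unified diff out of a mailing-list message.
# Handles only the simple inline-patch case, not attachments or patch series.
import email
from email import policy

def extract_patch(raw_message: str) -> str:
    """Return the unified-diff portion of a plain-text mail body, or ''."""
    msg = email.message_from_string(raw_message, policy=policy.default)
    body = msg.get_body(preferencelist=("plain",))
    if body is None:
        return ""
    text = body.get_content()
    # A unified diff starts at the first "--- " header line.
    idx = text.find("\n--- ")
    return text[idx + 1:] if idx != -1 else ""

# Made-up example message for illustration.
sample = """\
From: dev@example.org
Subject: [PATCH] fix frobnication

Bootstrapped on x86_64. OK for trunk?

--- a/gcc/frob.c
+++ b/gcc/frob.c
@@ -1 +1 @@
-old line
+new line
"""
patch = extract_patch(sample)
```

The extracted `patch` string could then be fed to the TRY scheduler or applied with `git am`/`patch -p1` before a build.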
>> - buildbot can notify people if the build fails or if there's a test
>> regression. Notifications can be sent to IRC and email, for example. What
>> would people prefer to have as the settings for notifications?
> I've recently seen complaints on the gdb list because the buildbot
> was sending notifications to too many people. I'm afraid that this
> is going to be a touchy area if the notifications contain too many
> false positives.
I discussed this with Pedro and Sergio and it was due to a bug in the
configuration that Sergio fixed, so notifications don't need to contain
false positives, unless of course there's a bug. I will try to avoid
making the same mistake and spamming GCC developers.
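One way to keep the noise down from the start is to configure the mail reporter to fire only on breakage and to send to a list address rather than to every committer. A sketch in the Buildbot 0.9/1.x-style API (addresses are placeholders; newer Buildbot versions moved some of this into report generators):

```python
# Sketch: notify only on failures and new breakage, and send to a single
# list address instead of mailing every interested committer.
from buildbot.plugins import reporters

mail = reporters.MailNotifier(
    fromaddr="buildbot@example.org",                  # placeholder sender
    mode=("failing", "problem"),                      # failures / new breakage only
    extraRecipients=["gcc-testresults@gcc.gnu.org"],  # illustrative recipient
    sendToInterestedUsers=False,                      # avoid mass notifications
)
# c['services'].append(mail)
```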
>> - an example of a successful build is:
>> This build shows several Changes because several new commits landed
>> between the start and finish of the build. Properties shows, among other
>> things, the test results. Responsible Users lists the people who were
>> involved in the changes for the build.
>> I am sure there are lots of other questions and issues. Please let me
>> know if you find this interesting and what you would like to see
> To summarize, I think such bots are very valuable, even if they only
> act as post-commit validations.
> But as other people expressed, the main difficulty is what to do with
> the results. Analyzing regression reports to make sure they are
> not false positives is very time consuming.
> Having a buggy bisect framework can also lead to embarrassing
> situations, like when I blamed a C++ front-end patch for a regression
> in fortran ;-)
> Most of the time, I consider it more efficient for the project to warn
> the author of the patch that introduced the regression than to try to
> fix it myself. Except for the most trivial ones, doing so resulted
> several times in duplicated effort and wasted time. But of course, there are many
> more efficient gcc developers than me here :)
I think that's the point. I mean, as soon as a regression or build failure
is noticed, the buildbot should notify the right people of what happened
and those need to take notice and fix it or revert their patch. If
someone submits a patch, is notified it breaks GCC and does nothing,
then we have a bigger problem.
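On the bisect side, `git bisect run` automates blame assignment over a gcc git mirror, and keeping the trigger script honest (skip unbuildable revisions, check the testsuite summary rather than the `make check` exit code) helps avoid the kind of cross-component misattribution Christophe mentions. A rough sketch with placeholder revisions and test name:

```shell
# Sketch of automated regression bisection; BAD_REV/GOOD_REV and the
# test name are placeholders, and the .sum path may vary by tree layout.
git bisect start BAD_REV GOOD_REV
git bisect run sh -c '
    mkdir -p build && cd build || exit 125             # 125 = skip revision
    ../configure --enable-languages=c,c++ --disable-bootstrap || exit 125
    make -j"$(nproc)" || exit 125                       # unbuildable: skip
    make check-gcc RUNTESTFLAGS="dg.exp=pr12345.c"
    # "make check" succeeds even when tests FAIL, so inspect the summary.
    ! grep -q "^FAIL:" gcc/testsuite/gcc/gcc.sum
'
git bisect reset
```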
> Regarding the cpu power, maybe we could have free slots in
> some cloud? (travis? amazon?, ....)
Any suggestions on how to get these free slots? :)
Thanks for all the great suggestions and tips in your email.