This is the mail archive of the mailing list for the GCC project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

a quick few questions

As a design and advocacy exercise, on the arch list we're playing
through some thought experiments to design a GCC-project deployment
of arch.  We're trying to get a reasonable sense of the scale and
scope of GCC (e.g., what's the commit rate to mainline?).  We're
trying to design a solution that would initially match the "feel"
of your existing practices closely, but that would also let those
practices be elaborated and evolved as the comfort level grew.
Great fun (for us)!

Can anyone answer some questions about your testing infrastructure?

We understand (hopefully correctly) that GCC mainline is tested
nightly on a fairly large number of platforms, with results reported.
These runs use the GCC test suite, and we understand that they take a
fairly long time.

We also understand that some volunteer testers do nightly or otherwise
periodic tests of their own, such as building the BSD kernel.

We have some questions about the automated overnight testing.

1) Why is it only overnight?   Why not more frequently?

   I can't guess whether it's any one, or a combination, of:

   a) tradition
   b) hw (money and slowest-supported-platform) limits
   c) not wanting to read results more frequently
   d) having found that more frequent runs would simply
      generate too much noise (reports of bugs that would
      have been fixed quickly anyway)
   e) something else entirely

2) Is it easy to speed up the testing with more hw and/or
   with infrastructure additions (e.g., using distcc)?
   Or is it generally believed that the runs are already
   about as fast as can reasonably be expected?  If they
   can be sped up, by how much?
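   For what it's worth, here's a rough sketch in Python (purely
   illustrative; the test names and worker count are invented, and
   this is not how DejaGnu actually dispatches tests) of the kind of
   sharding that distcc-style parallelism relies on: partition the
   suite across workers so each test runs exactly once, with wall
   time then bounded by the slowest shard.

```python
# Illustrative only: round-robin sharding of a testsuite across workers.
# Test names and worker count are invented for the example.

def shard_tests(tests, num_workers):
    """Partition tests so each worker gets a disjoint subset."""
    shards = [[] for _ in range(num_workers)]
    for i, test in enumerate(tests):
        shards[i % num_workers].append(test)
    return shards

tests = ["compile/t%03d.c" % n for n in range(100)]
shards = shard_tests(tests, 4)

# Every test lands in exactly one shard, so with equal-cost tests an
# N-worker run takes roughly 1/N the wall time of a serial run.
assert sorted(t for s in shards for t in s) == sorted(tests)
print([len(s) for s in shards])
```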

   If the testing could be sped up considerably, even if only
   on one or two platforms, then one idea is to make a subset
   of it an _enforced_ part of the commit process, with commits
   that would cause regressions being rejected.  A variation is
   to continue to allow commits without enforced testing, but to
   maintain an auto-created branch that lags mainline by some
   amount of time and contains only revisions known to pass
   certain tests.
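   To make the lagging-branch idea concrete, here's a minimal Python
   sketch (the revision names and test predicate are entirely made
   up; this is a thought experiment, not proposed GCC infrastructure):
   mainline accepts every commit, while a second pointer advances
   only through revisions that pass the test subset.

```python
# Minimal sketch of a "tested" branch that lags mainline and contains
# only revisions that pass a chosen test subset.  The revision list
# and the passes_tests predicate are invented for illustration.

def advance_tested_branch(mainline, tested_head, passes_tests):
    """Advance past consecutive passing revisions; stop at a failure."""
    head = tested_head
    for rev in mainline[tested_head:]:
        if not passes_tests(rev):
            break          # mainline keeps the commit; tested branch waits
        head += 1
    return head

mainline = ["r100", "r101", "r102", "r103"]   # all commits, good or bad
broken = {"r102"}                             # pretend r102 regresses
tested_head = advance_tested_branch(mainline, 0, lambda r: r not in broken)

# The tested branch stops just short of the first failing revision.
print(mainline[:tested_head])
```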

   (Pre-commit testing, as the coding standards require, is only
   probabilistic.  Nothing guarantees that the tree after the commit
   will match the tree that was tested prior to a commit.  The high
   commit rate practically ensures that the two trees _won't_ be
   identical.  A minor issue in practice, sure, but we're thinking
   about what it would take to close that gap anyway (and then what
   utility there would be in doing so).)
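   The gap can be seen with a toy model (Python; the trees, files,
   and patches are all invented): two developers each test a patch
   against the same base revision, but the second commit lands on top
   of the first, producing a tree that neither of them ever tested.

```python
# Toy model of the tested-tree / committed-tree gap.  A "tree" is just
# a dict of file -> contents; file names and patches are invented.

base = {"a.c": "v1", "b.c": "v1"}

def apply_patch(tree, patch):
    new = dict(tree)
    new.update(patch)
    return new

patch_alice = {"a.c": "v2"}   # Alice tests base + her patch
patch_bob = {"b.c": "v2"}     # Bob also tests against plain base

tested_by_alice = apply_patch(base, patch_alice)
tested_by_bob = apply_patch(base, patch_bob)

# But the commits land in sequence, so mainline ends up with both:
mainline = apply_patch(apply_patch(base, patch_alice), patch_bob)

# The resulting tree matches neither tree that was actually tested.
assert mainline != tested_by_alice and mainline != tested_by_bob
```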

