
Re: Proposal on how to study g++ compile-time regression and progression since 2.95.2 and initial analysis


> I propose two possible representative sources and give current results
> for a Secondary Evaluation Platform.  I think it is quite important to
> break this study into two parts: header processing speed (since some
> C++ projects are clearly dominated by many small source files each
> implementing a small part of a total program or library) and code
> generation speed (since some C++ projects are clearly dominated by
> template crunching and other heavy lifting, etc.).

...you also reference memory use, which is a third part, and one that is
also important. Plus the runtime performance of the generated code. I
don't see why all four items shouldn't be considered.
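
For the record, here's roughly how I'd collect the time and memory
numbers for a single compile. This is only a sketch: it assumes GNU
time is installed as /usr/bin/time, and input.ii is a placeholder for
whatever preprocessed test file we settle on.

    # CPU/elapsed time plus peak memory for one compilation
    /usr/bin/time -v g++ -c input.ii -o /dev/null

The "Maximum resident set size" line in the -v output approximates the
compiler's peak memory use (where the OS actually reports it via
getrusage), and the User/System/Elapsed lines give the time. Runtime
performance of the generated code is a separate measurement and needs
a benchmark that actually runs, like stepanov_v1p2.C.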

> 1. Proposed C++ test one studies the time/memory to process an
> extended, but typical and standard, set of header files and is
> generated as follows (note: we preprocess with the 3.0 release

>     $srcdir/libstdc++-v3/testsuite/17_intro/headers.cc | \

Ha. This is really not a fair comparison, but it is a good test.
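For anyone who wants to try it by hand, something along these lines
should approximate the idea (this is not the exact pipeline quoted
above, just a sketch; the g++-3.0 binary name is a placeholder and the
-I flags for the v3 include directories are omitted):

    # preprocess the header-inclusion test with the 3.0 compiler...
    g++-3.0 -E $srcdir/libstdc++-v3/testsuite/17_intro/headers.cc \
        > headers.ii

    # ...then time how long each compiler under test takes to chew it
    /usr/bin/time -v g++ -fsyntax-only headers.ii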

> 2. Proposed C++ test two studies time/memory to produce object code
> from C++ code instead of focusing on header processing speed as in C++
> test one.  Any large body of representative C++ code that uses no or
> few standard headers could work for this test.  stepanov_v1p2.C would
> appear to be too small.  POOMA would appear to fit the bill especially
> since it is already being used as a real-world regression test for
> gcc3.  

Seems like a good choice.
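
POOMA is obviously too big to paste into a mail, but for a feel of
what "template crunching" means here, even a toy like the following
(purely illustrative, and nowhere near POOMA's load) spends its
compile time on template instantiation rather than header processing:

    # generate and time a tiny template-instantiation test
    cat > fib.cc <<'EOF'
    // compile-time Fibonacci: no headers, all template work
    template<int N> struct Fib {
        static const int value = Fib<N-1>::value + Fib<N-2>::value;
    };
    template<> struct Fib<0> { static const int value = 0; };
    template<> struct Fib<1> { static const int value = 1; };
    int main() { return Fib<24>::value; }
    EOF
    time g++ -c fib.cc -o /dev/null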

-benjamin

