This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Resuming SPEC performance tracking at RedHat


 People from the GCC community noticed that GCC performance tracking at
RedHat stopped after Diego left RedHat.  As I understand, this tracking
was helpful for some of them.  Therefore we decided to resume GCC
performance tracking at RedHat.  This work is based on Diego Novillo's
scripts (Diego, thanks for the good scripts) with the following
changes:

 o Only an Intel Core2 machine is used now.  I don't plan to do
   performance tracking on other machines (like PPC, AMD, and Itanium)
   because they are already tested by SUSE, and I don't want to
   duplicate that work or tie up rare, non-mainstream machines which
   could be used for development purposes at RedHat.

 o SPEC2000 will be tested in 64-bit and 32-bit (-m32) modes.

 o I am not going to use SPEC95 for performance tracking.  Its
   benchmarks are too small to reflect the size of modern programs.

 o I've added SPEC2006 performance tracking.  It will be done once a
   week because SPEC2006 takes more than 10 times longer to run.
   SPEC2006 will be run only with -O2 -mtune=generic and only in
   64-bit mode.

 o Now SPEC2000 and SPEC2006 will be run 3 times (as is required for
   reportable results).  This permits generating graphs with error
   marks (the minimal and maximum values will be presented on the
   graph; for geom. mean scores, the geom. mean of the minimal scores
   and the geom. mean of the maximum scores will be presented).  I
   think it will be useful even if it may look ugly.

 o Sometimes people saw an unexpected drop or increase in scores.
   Usually it was the result of one or more SPEC program failures.
   Now scores with failed programs will be marked on the graphs.

 o I removed the elapsed compilation times, which were measured with
   one-second accuracy.  I found them not useful, especially for
   individual programs (e.g. the compilation time of gzip was 1, 2,
   or 3 seconds depending on how lucky we were).

 o I've added CPU (user time) compilation time for the whole of SPEC
   only.  In general it makes no sense to measure it for individual
   benchmarks because of runspec's start-up time (measuring it without
   the runspec start-up time would require modifying the SPEC
   testsuite).

   The compiler is built with release checking, so the results show
   how the release would look.

 o I am going to do release performance tracking too.  There is now a
   comparison of the 3.2 through 4.2 releases.
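As an illustration of how the error marks described above could be
computed, here is a minimal sketch in Python.  The benchmark names and
scores are made up for illustration only, and this is not Diego's
actual scripting, just the stated rule: take the geom. mean of the
per-benchmark minimal scores and of the maximal scores across the 3
runs.

```python
import math

def geom_mean(values):
    """Geometric mean of a list of positive scores."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical scores from 3 runs of each benchmark (illustrative only).
runs = {
    "gzip": [812.0, 820.0, 816.0],
    "gcc":  [1103.0, 1110.0, 1098.0],
    "mcf":  [655.0, 649.0, 652.0],
}

# Per-benchmark min/max across the 3 runs give the error marks.
low  = geom_mean([min(scores) for scores in runs.values()])
high = geom_mean([max(scores) for scores in runs.values()])
mid  = geom_mean([geom_mean(scores) for scores in runs.values()])

print("geom. mean %.1f (error marks %.1f .. %.1f)" % (mid, low, high))
```

By construction the low mark never exceeds the high mark, so the graph's
error bars always bracket the central geom. mean score.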

The results for SPEC2000 will be ready at 6am ET for 64-bit mode and
at 12pm for 32-bit mode, from Monday to Saturday.  The results for
SPEC2006 will be ready at 2pm on Sunday.

There is a possibility that the results will not be posted next week,
because I'll be on vacation and because the site used is in a beta
state, but they will be posted later, not lost.  I hope the Intel
server survives such rigorous testing too.

The performance tracking results are at

https://vmakarov.108.redhat.com/nonav/spec/index.html

Vlad

