


Re: Benchmark suite...


> 
> There are two types of benchmarks we can run on the compiler.
> 
> First, the benchmarks used to study generated code. These benchmarks
> should be small and simple. They are intended to find places where the
> compiler is creating inefficient code.

That's exactly what I am shooting for. I want to create a suite where you can
see how, for example, using lea in place of inc behaves, and where it is a hit
and where a miss.
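
Something like the kernel below is what I have in mind; this is only a rough
sketch, the function name and loop shape are made up:

/* Illustrative micro-benchmark: a byte-summing loop whose pointer and
   counter updates can be done either with inc/add or folded into lea on
   i386.  Timing both code generation choices shows where each wins.  */
int
sum_bytes (const unsigned char *p, int n)
{
  int s = 0;
  while (n-- > 0)
    s += *p++;		/* induction variable update: inc/add vs. lea */
  return s;
}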
> 
> Second, the benchmarks used to study a whole system. These need to be big
> and complicated, to emulate real-life applications. Finding out why they

That's not my goal. The only purpose of adding more complex ones (like the XaoS
loop and so on) is that Jeff pointed out that some optimizations (like register
allocation or possibly gcse) don't show themselves much on simple code.
They are in a separate file, not targeted for the main tuning. The idea was that
you might tune the simple cases and then check whether it still works well for
the complex ones too.
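
For example, a kernel along these lines (purely illustrative, not from the
suite) keeps enough values live at once that register allocation and spilling
start to matter, which a three-line loop never shows:

/* Illustrative kernel with many simultaneously live values, so register
   allocation decisions (and spills) become visible on i386.  */
void
mix8 (int *out, const int *in, int n)
{
  int i;
  for (i = 0; i + 8 <= n; i += 8)
    {
      int a = in[i],     b = in[i + 1], c = in[i + 2], d = in[i + 3];
      int e = in[i + 4], f = in[i + 5], g = in[i + 6], h = in[i + 7];
      out[i]     = a + h;  out[i + 1] = b + g;
      out[i + 2] = c + f;  out[i + 3] = d + e;
      out[i + 4] = a * e;  out[i + 5] = b * f;
      out[i + 6] = c * g;  out[i + 7] = d * h;
    }
}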
> 
> I believe that we want to use only the former type of benchmark programs,
> simple benchmarks, which test function-calling, CSE, eliminating unused
> code, branch-prediction, deducing the range of a variable to eliminate
> superfluous comparisons....  In many ways, we want this to be the
> equivalent of the test suite, a lot of short programs which don't break
> the compiler, but do aid in measuring the resultant code. So, any given
> optimization can be judged based on how it affects the micro-benchmarks in
> the suite.... Ideally, each one should be a single function, and only a
> few dozen lines of code.
> 
> Some of the tests have to be lengthy and complicated, to help gauge
> things like register-spilling, and other similar optimizations that are
> only used in complicated code..
> 
> Thus, some of the LISP suites might be fairly good.. (TAK would be ideal
> for measuring function-call overhead.) I would suggest using the Gabriel
> benchmarks
> (http://www-cgi.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/lisp/code/bench/gabriel/0.html)

Good idea, I will look at them.
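
For reference, the TAK kernel mentioned above is tiny; a sketch of it in C
(using the usual Takeuchi definition, with a typical invocation of
tak (18, 12, 6)) would be:

/* Takeuchi function: nearly all of the work is function calls, which
   makes it a reasonable probe of calling-sequence overhead.  */
int
tak (int x, int y, int z)
{
  if (y >= x)
    return z;
  return tak (tak (x - 1, y, z),
	      tak (y - 1, z, x),
	      tak (z - 1, x, y));
}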

> 
> I think that this is absolutely the wrong idea. We want to have a test
> suite that, instead of detecting bugs, tests how well the optimizer is
> doing.

The purpose of the more complex tests was to stress the optimizer. Many
optimizations seem to do well in simple benchmarks (for example, enabling
-fschedule-insns on i386) but lose in more complex tests. I have these complex
tests separated out into an "unsorted" category, and their only purpose is to
show this case. Because it is hard to write such a complex benchmark, the
complex internal loops of some real programs should be good for this IMO. I am
not planning to add more of them in the near future.
> 
> How does quicksort help with that? What is it supposed to test?
> (function-invocation speed? If so, TAK would be better, it is simpler and
> all function invocation.)

For function invocation I have recursive hanoi, which does mainly just this
operation...

The quicksort test is just a few lines (not more than a page) and it tests
mainly the internal loop comparing numbers. But agreed, we should have two
tests, the first clearing an array and the second doing the recursion. I will
change that.
Interestingly, exactly this benchmark shows quite strange behaviour :)
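
Something along these lines is what I mean by splitting it; a rough sketch
only, the names are made up:

/* Part 1: the array-filling loop on its own.  */
void
fill_array (int *a, int n)
{
  int i;
  for (i = 0; i < n; i++)
    a[i] = (i * 7919) & 0xffff;	/* cheap pseudo-random data */
}

/* Part 2: the recursive part on its own, so call overhead and the inner
   comparison loop can be measured separately.  */
void
qsort_rec (int *a, int lo, int hi)
{
  int i, j, pivot;

  if (lo >= hi)
    return;
  i = lo; j = hi; pivot = a[(lo + hi) / 2];
  while (i <= j)
    {
      while (a[i] < pivot) i++;
      while (a[j] > pivot) j--;
      if (i <= j)
	{
	  int t = a[i]; a[i] = a[j]; a[j] = t;
	  i++; j--;
	}
    }
  qsort_rec (a, lo, j);
  qsort_rec (a, i, hi);
}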

I've added some other simple loops. Of course I will look closely at the tests
and probably remove those which are useless or redundant.
I needed enough tests to make the suite a bit useful; that's why I added more
tests than I probably want to keep.
> 
> Here is my contribution of a couple benchmarks in this philosophy:
Thank you very much!
They are very interesting.

Honza

