This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: optimisation question
- From: Marcin Dalecki <martin at dalecki dot de>
- To: Joe Buck <Joe dot Buck at synopsys dot COM>
- Cc: gcc at gcc dot gnu dot org, Robert Dewar <dewar at adacore dot com>, "Remy X.O. Martin" <vsxo at hotmail dot com>
- Date: Tue, 1 Feb 2005 01:00:13 +0100
- Subject: Re: optimisation question
- References: <20050131152659.0dc92d6c@portia.local> <41FE6ADB.3070706@adacore.com> <20050131234317.GA4737@synopsys.com>
On 2005-02-01, at 00:43, Joe Buck wrote:
Unless you've done profiling and determined that you have a critical inner loop, it's best to optimize your code's form for readability and maintainability, rather than to tweak it based on a belief you haven't tested that compilers will "like it better".
Optimization without profiling and measurement is like swimming without water. Code-level optimization can give you only linearly better results; if you really want to work miracles you should look at a *much* higher abstraction level, where algorithmic changes pay off. Most micro-optimizations that applied 10 years ago will frequently do no good these days due to changes in hardware as well. Take caches, for example.
Or instruction scheduling - it turns out it's best done at runtime by the CPU instead of by the compiler. Yes, I know there is a CPU from Intel promising miracles once the still-to-be-developed compiler starts working at full gear, frequently called Itanic. It will sink. Static code analysis just can't get you around the fundamental complexities of the underlying theory: most things simply can't be analyzed in reasonable time. Thus most compilers simply end up as a heap of heuristics applied one after another. However, the set of heuristics used by commercial-grade products is really HUGE these days. It's hard to beat them by just looking at the assembler code yourself.