Re: Can we speed up the gcc_target structure?


Ian Lance Taylor <ian@wasabisystems.com> writes:

> Back in the old days, gcc had a lot of code which was conditionally
> compiled with #ifdef.  That was ugly, but the resulting code was fast.
> Over time, a lot of the parameters checked with #ifdef were converted
> into macros which were checked at runtime using if.  That was less
> ugly, and, since the macros normally had constant values, when gcc was
> compiled with an optimizing compiler, the code was just as fast in the
> normal case.  When it was slower, it was generally because the
> compiler was doing something it couldn't do before.
[...]
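
To make that history concrete, here is a minimal, self-contained sketch
of the two styles (the macro and function names are invented for
illustration, not taken from real GCC sources).  With optimization, the
runtime test on a constant macro folds away and the result is as direct
as the old #ifdef'd code:

#include <stdio.h>

/* In a real port this would come from the target's header; making it a
   compile-time constant is what lets the optimizer delete the dead
   branch.  */
#define TARGET_FANCY_ADDRESSING 1

static int gen_fancy_address (int op) { return op * 8; }
static int gen_plain_address (int op) { return op + 1; }

static int
gen_address (int op)
{
  /* Newer style: an ordinary runtime test.  Because the macro expands
     to a constant, an optimizing build compiles this to a direct call
     to gen_fancy_address, matching the old #ifdef version.  */
  if (TARGET_FANCY_ADDRESSING)
    return gen_fancy_address (op);
  else
    return gen_plain_address (op);
}

int
main (void)
{
  printf ("%d\n", gen_address (5));
  return 0;
}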

Any change along these lines loses what I think is a critical property
of the target vector, which is that modifications to the target-specific
code on the far side of it do *not* require recompilation of the
entire machine-independent compiler.

I consider it a desirable and achievable goal to be able to swap out
the entire back end without rebuilding any of the optimizers; this
entails having *everything* go through the target vector or some other
sort of link-time interface.  (For instance, I see no need to change
the way recog.c interacts with insn-recog.c for this purpose.)
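
For concreteness, here is a stripped-down sketch of the target-vector
idea as one toy program; the hook names are made up for the example and
this is not the real struct gcc_target layout.  The machine-independent
side compiles only against the struct declaration, so a different back
end can supply a different initialized vector at link time without the
optimizers being rebuilt:

#include <stdio.h>
#include <stdbool.h>

/* Shared declaration, visible to the machine-independent compiler.  */
struct target_hooks
{
  bool (*cannot_modify_jumps_p) (void);
  int  (*branch_cost) (bool speed_p);
};

extern const struct target_hooks targetm_example;

/* Back-end side: in a real compiler this lives in its own object file;
   replacing that file and relinking swaps the whole back end.  */

static bool
example_cannot_modify_jumps_p (void)
{
  return false;
}

static int
example_branch_cost (bool speed_p)
{
  return speed_p ? 3 : 1;
}

const struct target_hooks targetm_example =
{
  example_cannot_modify_jumps_p,
  example_branch_cost
};

/* Machine-independent side: only ever calls through the vector.  */

int
main (void)
{
  if (!targetm_example.cannot_modify_jumps_p ())
    printf ("branch cost when optimizing for speed: %d\n",
            targetm_example.branch_cost (true));
  return 0;
}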

Furthermore, while a 3% measured speed hit is a concern, I think that
trying to win it back by undoing the targetm transformation - in the
object files, if not in the source code - is barking up the wrong
tree.  Instead we should be looking for ways to avoid having targetm
hooks in critical paths in the first place.  In my experience, that is
a much more fruitful source of optimization opportunities.
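
As a sketch of what that kind of rework looks like (again with invented
names, not code from an actual GCC pass): the usual win is to query a
hook once per pass or per loop rather than once per insn, so the hot
loop itself contains no indirect calls at all.

#include <stdio.h>

struct target_hooks
{
  int (*branch_cost) (void);
};

static int example_branch_cost (void) { return 3; }
static const struct target_hooks targetm_example = { example_branch_cost };

/* The pattern to avoid: an indirect hook call on every iteration of a
   hot loop.  */
static int
count_expensive_slow (const int *costs, int n)
{
  int count = 0;
  for (int i = 0; i < n; i++)
    if (costs[i] > targetm_example.branch_cost ())
      count++;
  return count;
}

/* The rework: query the hook once, outside the loop, so the critical
   path is straight-line code.  */
static int
count_expensive_fast (const int *costs, int n)
{
  int threshold = targetm_example.branch_cost ();
  int count = 0;
  for (int i = 0; i < n; i++)
    if (costs[i] > threshold)
      count++;
  return count;
}

int
main (void)
{
  int costs[] = { 1, 4, 2, 7, 3, 9 };
  int n = sizeof costs / sizeof costs[0];
  printf ("%d %d\n", count_expensive_slow (costs, n),
          count_expensive_fast (costs, n));
  return 0;
}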

zw

