This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: gcc compile-time performance
- From: "David S. Miller" <davem at redhat dot com>
- To: shebs at apple dot com
- Cc: dberlin at dberlin dot org, dhazeghi at pacbell dot net, neil at daikokuya dot demon dot co dot uk, ak at suse dot de, gcc at gcc dot gnu dot org
- Date: Fri, 17 May 2002 09:57:53 -0700 (PDT)
- Subject: Re: gcc compile-time performance
- References: <Pine.LNX.email@example.com> <3CE52EFB.68C0809D@apple.com>
From: Stan Shebs <firstname.lastname@example.org>
Date: Fri, 17 May 2002 09:25:32 -0700
> That's my personal suspicion too, but no, I don't have any real
> evidence. The lack of hot spots in profiling is a strong hint.
>
> One oddball idea I've thought about is to functionize all the
> tree and rtl macros and run a profile on that, to see which
> macros are the most used/abused.
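
A minimal sketch of that idea, using a stand-in node type rather than
GCC's real tree (every name below is illustrative): compiled with -pg,
the functionized accessor shows up in gprof with its own call counts
and time, which the macro version hides.

    /* Stand-in for a tree/rtl node; not GCC's real layout.  */
    struct node { int code; };

    /* Macro version: expands inline, so its cost is smeared across
       every caller and never appears as a hot spot of its own.  */
    #define NODE_CODE(N) ((N)->code)

    /* Functionized version: kept out of line so that -pg/gprof can
       count calls and attribute time to the accessor itself.  */
    static int __attribute__ ((noinline))
    node_code (const struct node *n)
    {
      return n->code;
    }
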
I know that the subreg-byte changes added a lot of overhead,
particularly via the subreg_regno_offset() function (which was
an inline macro in my original diffs).
The divisions are what kill it. That overhead could be eliminated
if all the mode sizes were powers of 2 and we had some
GET_MODE_SIZE_LOG2() interface. Then we just transform all the
divides there into shifts.
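
Roughly, assuming a hypothetical GET_MODE_SIZE_LOG2() backed by a
table generated alongside the existing mode tables (the enum and
values below are illustrative, not existing GCC code):

    /* Illustrative modes with power-of-2 byte sizes 1, 2, 4, 8.  */
    enum mode { M_QI, M_HI, M_SI, M_DI };
    static const unsigned char mode_size_log2[] = { 0, 1, 2, 3 };

    #define GET_MODE_SIZE_LOG2(MODE) ((int) mode_size_log2[(int) (MODE)])

    /* The divides become shifts and the remainders become masks:
         byte / size  ==>  byte >> GET_MODE_SIZE_LOG2 (mode)
         byte % size  ==>  byte & ((1 << GET_MODE_SIZE_LOG2 (mode)) - 1)  */
    static inline unsigned int
    byte_to_word_offset (unsigned int byte, enum mode m)
    {
      return byte >> GET_MODE_SIZE_LOG2 (m);
    }

The transformation is only valid while every mode size really is a
power of 2; any odd-sized mode would still need the division fallback.
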
> Then there's the extreme approach of having maintainers only
> accept patches that either remove code or make the compiler run
> faster.

There is a better way: have maintainers work on approval of such
changes faster than approval of other changes :-)