This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
Re: GCC 3.3 compile speed regression - AN ANSWER
- From: <tm_gccmail at mail dot kloo dot net>
- To: Linus Torvalds <torvalds at transmeta dot com>
- Cc: jsm28 at cam dot ac dot uk, gcc at gcc dot gnu dot org
- Date: Tue, 11 Feb 2003 17:21:04 -0800 (PST)
- Subject: Re: GCC 3.3 compile speed regression - AN ANSWER
On Tue, 11 Feb 2003, Linus Torvalds wrote:
> On Tue, 11 Feb 2003 firstname.lastname@example.org wrote:
> > On Tue, 11 Feb 2003, Linus Torvalds wrote:
> > >
> > > The thing is, if you want to make gcc faster, you have to bite the
> > > bullet and throw out code that doesn't perform well. And you have to
> > > _remove_ phases of optimization, instead of adding new ones. Having
> > > different phases where you operate on different kinds of data structures
> > > (ie tree -> ssa -> rtl) is just fundamentally slow, as you have to
> > > marshall the data into the right format for the next phase (which is
> > > likely bad for caches too).
> > Well, if you apply this line of reasoning to Linux, then you could remove
> > stuff such as the virtual filesystem layer, since all it does is convert
> > data from one format to another.
> Well, it doesn't, actually.
> One of the things the VFS layer is very careful about is to try to _share_
> the data as much as at all possible (which is quite a lot, actually)
> between all layers. It basically never copies data - EVER.
The data copy overhead may be nil. I suspect the code execution overhead
is not nil.
> The thing is, when it comes to the kernel, we _have_ looked at
> performance. In fact, that's usually the first thing we look at,
> especially when it comes to data structures. Because data structures
> really are the thing that determine how fast you can go.
> Also, I think you'll find that newer kernels tend to be _faster_ than
> older ones. And yes, that's again because it's one of the main concerns.
> So I think your argument falls flat on its face.
It depends on your evaluation metric.
GCC supports more targets with every release, so by that metric, it is
improving with every release.
> > There is a reason why the code exists in multiple formats during
> > compilation; some transformations are easier done in some formats than
> > others.
> And this will slow things down. There's no question about it. At dubious
> gain - because you _can_ plot a line where gcc is getting slower and
> slower to compile over the last few years, but you can _not_ plot a
> line which shows the resulting improvement in resultant code quality.
You're measuring quality by one criterion on a single testcase.
GCC has improved greatly in areas such as C++ conformance, C++ abstraction
reduction, Java support, and function inlining, which may not necessarily
be relevant to you.
Similarly, we can evaluate Linux quality by another criterion, such as
the number of lines of source code in the kernel.
By that metric, Linux keeps getting steadily larger and larger, which is
probably not very good. It takes more disk space on mirrors, and makes
updates more difficult for kernel hackers on dialup connections.
Or for variety, you can compare 2.0.34 and 2.5.x running on an 8 megabyte
386-based machine. I suspect that 2.5.x will not run faster than 2.0.34 on
such a machine.
> One of the main things that people complained about in the kernel mailing
> list was that not only did the compiler get slower, but the code
> _generated_ also got bigger and slower. That is indeed what the thread got
> started about.
Well, it would be more helpful if they actually tracked down and reported
specific incidents rather than complaining in general.
You probably get annoyed when people complain that kernel X+1 is now much
slower than kernel X without specifying the circumstances and/or
performing any investigation. It's pretty much the same situation for
compiler people as well. If people point out specific instances where code
is slow, and can analyze it and figure out why, then people can probably
fix it.