This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.
RE: gcc compile-time performance
- From: dewar at gnat dot com (Robert Dewar)
- To: dewar at gnat dot com, gcc at gcc dot gnu dot org, scott at coyotegulch dot com
- Date: Sat, 18 May 2002 11:11:08 -0400 (EDT)
- Subject: RE: gcc compile-time performance
> What counts on the Space Shuttle (I never mentioned "station") is code
> accuracy, actually. I was using it as an example to counter your belief that
> the only folk who use "old" computers are hobbyists. And you ignored the
> other examples I gave from industry and education.
Actually, code performance is important too. Oddly enough, code accuracy is
probably not absolutely critical. Why not? Because in a safety-critical
environment (say, something certified at FAA Level A), you have to
look at the object code anyway (you can't trust the compiler, or even the
linker). Furthermore, you tend to use a VERY simple subset of the language,
where it is relatively unlikely that the compiler gets things wrong, and
you tend to work with low optimization levels, since you want source-to-object
traceability.
One problem with GCC in the safety-critical world is the very poor quality of
its unoptimized code. We generally try to get people to use at least -O1, since
in practice we find this helps traceability, but this is a segment that is
very optimization-averse (they wanted a requirement in the Ada RM that
any compiler claiming Annex H compliance must have a mode in which absolutely
no optimization is performed at all -- this got rejected since it is
semantic nonsense -- but it shows the level of concern).
In any case, the point I was making was that this is a discussion about
compile-time performance (see subject line), and the use of slow target
processors in embedded environments is, I still maintain, irrelevant
to that particular discussion.