Breaking up the 'parse' time into 'parse/bison' and 'parse/other'

Tim Josling tej@melbpc.org.au
Wed Feb 26 09:52:00 GMT 2003


Andy,

I assume you didn't track the earlier discussions.

The problem is that the existing messages exaggerate the slowness of bison and
mislead people.

I suggested that the extra system calls would be excessive, but others pointed
out that the overhead is in fact very low. There is already a call around each
token to measure the lex time, and this does not produce a measurable effect.

I confirmed this by taking out all the timevar calls: it made no difference to
the CPU time. However, I will be measuring the impact of the change; otherwise
I would be justly pilloried.

I would agree that gprof is a good tool, and I do use it (see the discussion
earlier this month). I haven't been able to get any of the hardware-oriented
tools going on my system at this stage, due to kernel versions etc.

The report you referred to about the parser - did it quantify the number of
cache misses and the resulting percentage impact?

Tim Josling

Andi Kleen wrote:
> 
> Tim Josling <tej@melbpc.org.au> writes:
> 
> > Recapping: Option -Q gives you a breakup of the time spent in the various
> > compiler phases. It misleadingly, according to me, gives a high number for the
> > parser because it also counts all the code in the parse 'actions'. So people
> > are always suggesting we rewrite the parsers in native code.
> 
> Cache line profiling showed that the parser causes excessive cache misses
> for its LALR(1) tables. Cache misses are slow.
> 
> > The plan is I will produce a patch that breaks parse time into "parse/bison"
> > and "parse/actions", or some such wording.
> 
> Slowing it down even more by adding thousands of system calls?
> (A system call is much slower than an ordinary function call on most
> operating systems). Probably not a good idea.
> 
> If you want accurate profiling results use a real profiler.
> 
> -Andi
