


Re: gcc compile-time performance


Marc Espie wrote:-

> Specifically, there ought to be a reasonable interface to the preprocessor,

Like cpp_get_token ()?  8-)  The compiler never interfaces with the
lexer, as the macro expander sits on top of it.
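
Roughly, the client side looks like this (reader set-up is elided,
process_token is a made-up consumer, and the exact signature has
shifted a little between cpplib versions):

    #include "cpplib.h"

    extern void process_token (const cpp_token *);

    /* Pull fully macro-expanded tokens until EOF.  The client never
       touches the lexer itself; cpp_get_token runs the macro
       expander on top of it.  */
    static void
    consume_tokens (cpp_reader *pfile)
    {
      const cpp_token *token;

      while ((token = cpp_get_token (pfile))->type != CPP_EOF)
        process_token (token);
    }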

> and I would tend to think it ought to be feasible to just swap one set of
> preprocessor functions to another.  That's right, that means setting
> things up so that both preprocessors are part of gcc.
> I assume that most things will be opaque to the compiler itself: after all,
> all it should see would be a set of tokens, with opaque ways to compare 
> values (where compare is to be taken in an abstract way, e.g., via a
> hash table?),
> and a way to spew out token values (identifiers).
> 
> If I understand things correctly, the bandwidth bottleneck is at the front
> of the preprocessor. Tokens are already much easier on the processing
> power. Sure, getting through a function pointer may slow things down a bit,
> but I don't think it will slow things down that badly, and definitely
> not the two-fold slow-down you seem to suggest ubiquitous multibyte
> support would cause...
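
Concretely, I read that as a table of entry points the compiler only
ever calls through, something like this (the names are invented for
illustration; nothing of the sort exists in cpplib today):

    #include "cpplib.h"

    /* One table per preprocessor implementation; the compiler never
       calls a lexer function directly.  */
    struct preproc_ops
    {
      const cpp_token *(*get_token) (cpp_reader *);
      /* "Compare" in the abstract sense, e.g. by hash table identity
         rather than by spelling.  */
      int (*tokens_equal) (const cpp_token *, const cpp_token *);
      const char *(*spell_token) (const cpp_token *);
    };

    /* Chosen once at start-up: the plain single-byte lexer by
       default, the multibyte-aware one only when asked for.  */
    extern const struct preproc_ops single_byte_ops, multibyte_ops;
    static const struct preproc_ops *pp_ops = &single_byte_ops;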

That might be a way to do it.  Duplicating the code of the whole lexer
would worry me, but if we isolated just the bits that care about mbchar
support into a single file (which isn't quite the case at present) and
used conditional compilation, we could compile that file twice into two
different object files, I guess.
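
Something like this, say (MULTIBYTE, lex_mbchar and the file name are
purely illustrative; the tree isn't laid out this way at present):

    /* cpplex.c: the only file that knows about mbchars.  Every
       character fetch funnels through one macro, so the same source
       builds both ways:

         cc -c cpplex.c -o cpplex-sb.o
         cc -c -DMULTIBYTE cpplex.c -o cpplex-mb.o  */

    #include "cpplib.h"

    #ifdef MULTIBYTE
    #define GET_CHAR(buffer) lex_mbchar (buffer)  /* slow, mb-aware path */
    #else
    #define GET_CHAR(buffer) (*(buffer)->cur++)   /* fast byte fetch */
    #endif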

Neil.

