This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: [PATCH] Reduce GC overhead of the C++ lexer buffer


Richard Guenther wrote:

> CP_LEXER_BUFFER_SIZE is the number of elements in the cp_token GGC vector:
> 
>   /* Create the buffer.  */
>   alloc = CP_LEXER_BUFFER_SIZE;
>   buffer = GGC_NEWVEC (cp_token, alloc);
> 
> To reduce the GC overhead (the unused part at the end of a GC page), the
> amount of memory allocated needs to be (close to) a power of two.  Thus
> the division by sizeof(cp_token).  The number 262144 was chosen as a
> reasonable compromise between 32-bit hosts (20 bytes per cp_token) and
> 64-bit hosts (larger), so that the number of initial tokens is around the
> original value of 10000.  But I can easily halve this number, if requested.
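
For concreteness, the sizing being described amounts to roughly the
following (a sketch reconstructed from the description above, not the
exact patch text):

  /* Size the token buffer so that the total allocation is about
     256 KiB (262144 bytes), keeping the unused tail of a GC page
     small whatever the per-token size turns out to be.  */
  #define CP_LEXER_BUFFER_SIZE (262144 / sizeof (cp_token))

  /* Create the buffer.  */
  alloc = CP_LEXER_BUFFER_SIZE;
  buffer = GGC_NEWVEC (cp_token, alloc);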

I think the value (8K) is OK, but I still don't understand the division.
Why not say 8K?  You're taking a power of two and dividing it by
something; that's only going to end up being a power of two if the thing
you're dividing by is also a power of two.  In this case, it is (on my
machine cp_token has size 32), but that won't necessarily be true in the
future?
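
To make the arithmetic concrete (my numbers, using the sizes already
mentioned): with a 32-byte cp_token, 262144 / 32 = 8192 tokens, and
8192 * 32 = 262144 bytes, exactly a power of two; with a 20-byte
cp_token, 262144 / 20 = 13107 tokens, and 13107 * 20 = 262140 bytes,
which is no longer an exact power of two, though it is still within
one token of 256 KiB.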

-- 
Mark Mitchell
CodeSourcery
mark@codesourcery.com
(650) 331-3385 x713

