This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: [PATCH] Reduce GC overhead of the C++ lexer buffer


Michael Matz wrote:
> Hi,
> 
> On Mon, 19 Jun 2006, Mark Mitchell wrote:
> 
>>> At the moment we have
>>>
>>> cp/parser.c:292 (cp_lexer_new_main)  0: 0.0%  82369152:33.3%  0: 0.0% 
>>> 15784576:37.2%  7
>>>
>>> which is due to the fact that the initial lexer buffer size is nowhere
>>> near a power-of-two value and we keep gc-reallocating the vector, doubling
>>> its size.
>>>
>>> With the following patch, this overhead is removed nearly completely
>>>
>>> cp/parser.c:266 (cp_lexer_new_main)                       0: 0.0%         
>>> 36: 0.0%          0: 0.0%          4: 0.0%          1
>>>
>>> Comments?
>> This is a good result -- but what test case did you use?
> 
> Everything with many functions.  In this case it was tramp3d-v4.

Good, thanks.  Please make sure to mention test cases in future.

>> Also, can you explain why it's important to use a power of two here?
> 
> Because all tokens are read up front, and if the buffer doesn't fit, its
> size is doubled, hence the non-power-of-two-ness increases.  As the whole
> buffer lives in garbage collected space and ggc-page.c makes sure to only
> allocate power-of-two space, this wastes huge amounts of memory (never
> touched, though) if the input size is not a power of two.

Thanks for the explanation.  The patch is OK, thanks!

-- 
Mark Mitchell
CodeSourcery
mark@codesourcery.com
(650) 331-3385 x713
