This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.



Re: Something about the mt_allocator.


On Sun, 2004-02-08 at 23:18, Felix Yen wrote:
> Re: false sharing.
> > Simply due to the fact that I do not know enough about this topic,
> > especially across a variety of platforms. Suggestions/comments are
> > as always appreciated!
> 
> I eventually concluded that no action is required.  

Why?

> I don't think 
> there's any way to address false sharing in this style of allocator.  
> The fundamental premise is that recycling blocks without coalescing 
> them is a good idea.  

This would just help prevent external fragmentation (possibly).
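As a rough illustration of the premise being discussed (only a sketch I put
together, not mt_alloc's actual code): a fixed-size free list that hands
freed blocks back verbatim, never merging neighbouring free blocks into a
larger one.

#include <cstdlib>

// One free list per block size; freed blocks are recycled as-is.
struct free_node { free_node* next; };

class size_class_pool
{
    free_node*  _free;   // recycled blocks of size _size
    std::size_t _size;   // fixed block size (must be >= sizeof(free_node))

public:
    explicit size_class_pool(std::size_t size) : _free(0), _size(size) { }

    void* allocate()
    {
        if (_free)                 // reuse a recycled block if we have one
        {
            free_node* p = _free;
            _free = p->next;
            return p;
        }
        return std::malloc(_size); // otherwise fall back to the underlying malloc
    }

    void deallocate(void* p)
    {
        // push onto the free list; adjacent free blocks are never coalesced
        free_node* n = static_cast<free_node*>(p);
        n->next = _free;
        _free = n;
    }
};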

> This style of recycling probably does interfere 
> with whatever the underlying allocator is doing to prevent false 
> sharing.  

How?

> Does this cost outweigh the benefit?  It depends on how 
> recycling is done, how often it's used, and other factors including the 
> quality of the underlying allocator.  So we just need test results 
> demonstrating that the new allocator outperforms new_allocator (and 
> pool_allocator).

AFAICS, the allocator (mt_alloc) makes no effort to ensure that the
block of size (4K - 4*sizeof(void*)) is 4K aligned. It may very well
begin in the middle of an existing page, which may be needed by 2
threads running on 2 different processors at the same time. Also, on
systems where the default malloc does not have a 4*4 byte overhead
(maybe a 2 or 3 * 4 byte overhead instead), the new memory will not
always be page aligned even if the first request was. And what if the
user makes an allocation of their own, say 8 bytes, in between 2
requests by mt_alloc<>? What to do then?
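
To make that concrete, here is a quick check one could run (just a test
sketch of mine, nothing from the actual allocator): on most mallocs the
returned blocks are not 4K aligned, and the offset drifts with the
allocator's own per-block bookkeeping, so a block can straddle a page or
cache line that another thread is also using.

#include <cstdio>
#include <cstdlib>
#include <cstdint>

int main()
{
    // Ask the underlying malloc for a few blocks of the size mt_alloc
    // requests and see where each one falls within a 4K page.
    const std::size_t block = 4096 - 4 * sizeof(void*);
    for (int i = 0; i < 4; ++i)
    {
        void* p = std::malloc(block);
        std::printf("block %d at %p, offset into 4K page: %zu\n",
                    i, p, (std::size_t)((std::uintptr_t)p % 4096));
        // blocks deliberately kept live so consecutive calls show how the
        // allocator's own per-block overhead shifts the page offset
    }
    return 0;
}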

-- 
	-Dhruv Matani.
http://www.geocities.com/dhruvbird/



