This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.
Re: hash policy patch
- From: Paolo Carlini <paolo dot carlini at oracle dot com>
- To: François Dumont <frs dot dumont at gmail dot com>
- Cc: Paolo Carlini <pcarlini at gmail dot com>, "libstdc++ at gcc dot gnu dot org" <libstdc++ at gcc dot gnu dot org>
- Date: Fri, 16 Sep 2011 03:00:38 +0200
- Subject: Re: hash policy patch
- References: <4E2F1A56.3010000@free.fr> <4E2F204B.6060207@oracle.com> <4E31C6CE.2070906@free.fr> <7B3982F6-FEAA-4023-AC36-84B10A513651@oracle.com> <4E3849E9.5000505@free.fr> <4E5FD090.8070102@oracle.com> <4E6A6C60.2090204@gmail.com> <4E710FC7.2040606@gmail.com>
Hi,
And here is this one again:
2011-09-14  François Dumont  <fdumont@gcc.gnu.org>

	* include/bits/hashtable.h (_Hashtable<>::__rehash_policy(const
	_RehashPolicy&)): Commit the modification of the policy only if no
	exception occurred.
	* testsuite/23_containers/unordered_set/max_load_factor/robustness.cc:
	New.
Ok... but:
+ us.max_load_factor(.5f);
+ VERIFY( us.max_load_factor() == .5f );
as we discussed already (didn't we?), this kind of VERIFY is in general
very brittle (even if, on the widespread base-2 systems, we are probably
lucky in this *specific* case): please just remove it, I don't think
we'll miss much anyway.
I also wondered whether, in the __rehash_policy method, we shouldn't
rehash as soon as __n_bkt != _M_bucket_count, rather than only when
__n_bkt > _M_bucket_count. Users might change the max load factor
precisely to reduce the number of buckets...
I should find the time to check C++11 about this. I'll let you know my
opinion ASAP.
Paolo.