This is the mail archive of the mailing list for the libstdc++ project.
Re: [Bug libstdc++/61107] stl_algo.h: std::__inplace_stable_partition() doesn't process the whole data range
- From: Christopher Jefferson <chris at bubblescope dot net>
- To: Jonathan Wakely <jwakely at redhat dot com>
- Cc: François Dumont <frs dot dumont at gmail dot com>, "libstdc++ at gcc dot gnu dot org" <libstdc++ at gcc dot gnu dot org>, gcc-patches <gcc-patches at gcc dot gnu dot org>
- Date: Wed, 12 Nov 2014 14:56:08 +0000
- Subject: Re: [Bug libstdc++/61107] stl_algo.h: std::__inplace_stable_partition() doesn't process the whole data range
- Authentication-results: sourceware.org; auth=none
- References: <5441800A dot 1040609 at gmail dot com> <546124F8 dot 4050003 at gmail dot com> <20141110214511 dot GF5191 at redhat dot com> <546138B1 dot 7000304 at gmail dot com> <20141110222000 dot GH5191 at redhat dot com>
I did suggest this change, so I feel I should defend it!
Our testing of many algorithms is woefully slim; that is how (for
example) the segfaulting bug in std::nth_element made it into a
release -- the tests for that algorithm were terrible, and basically
didn't exercise the functionality on enough possible inputs.
I consider a series of random inputs to be a good practical way of
getting decent code coverage and performing a basic sanity test, without
an excessive amount of coding. While these tests aren't showing
anything yet:
a) We didn't know that until after they were written and executed, and
b) They might help catch problems in the future, particularly if other
algorithms change their underlying functionality.
I recently added a set of similar tests for a number of algorithms.
If you have an alternative suggestion for better testing I'd be happy
to hear it, but I think the algorithms need something beyond just one
or two hardwired inputs.
On 10 November 2014 22:20, Jonathan Wakely <email@example.com> wrote:
> On 10/11/14 23:14 +0100, François Dumont wrote:
>> I introduced the random tests after Christopher Jefferson's request to
>> have more intensive tests on those algos. Is it the whole set of tests
>> using random numbers that you don't like, or just the usage of mt19937?
> The use of random numbers in general.
>> If it's the latter, is this new version, using the usual random_device
>> I have used so far, any better?
> That would be much worse because failures would not be reproducible!
>> If it is the whole usage of random numbers that you don't like, I will
>> simply get rid of the new test files.
> Did the new tests fail before your fix to stl_algo.h?
> If yes, you could extract the values generated in the case that fails
> and add a test using those values (this is what I should have done for
> the leaking set tests).
> If no, they aren't really testing anything useful.