This is the mail archive of the mailing list for the libstdc++ project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: [Bug libstdc++/61107] stl_algo.h: std::__inplace_stable_partition() doesn't process the whole data range

I did suggest this change, so I feel I should defend it!

Our testing of many algorithms is woefully slim; that is how (for
example) the segfaulting bug in std::nth_element got through into a
release -- the tests for that algorithm were terrible, and basically
didn't exercise the functionality on enough possible inputs.

I consider a series of random inputs to be a good practical way of
getting decent code coverage and performing a basic sanity test,
without the need for an excessive amount of coding. While these tests
aren't showing anything yet:

a) We didn't know that until after they were written and executed, and
b) They might help catch problems in future, particularly in other
algorithms or when underlying functionality changes.

I recently added a set of similar tests for a number of algorithms.

If you have an alternative suggestion for better testing I'd be happy
to hear it, but I think the algorithms need something beyond just one
or two hardwired inputs.


On 10 November 2014 22:20, Jonathan Wakely <> wrote:
> On 10/11/14 23:14 +0100, François Dumont wrote:
>>    I introduced the random tests after Christopher Jefferson's request
>> to have more intensive tests on those algos. Is it the whole set of
>> tests using random numbers that you don't like, or just the usage of
>> mt19937?
> The use of random numbers in general.
>> If the latter, is this new version using the usual random_device I
>> have used so far any better?
> That would be much worse because failures would not be reproducible!
>> If it is the whole usage of random numbers that you don't like I will
>> simply get rid of the new tests files.
> Did the new tests fail before your fix to stl_algo.h?
> If yes, you could extract the values generated in the case that fails
> and add a test using those values (this is what I should have done for
> the leaking set tests).
> If no, they aren't really testing anything useful.
