This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.



Re: [PATCH][libstdc++-v3 parallel mode] Avoid taking address of dereferenced random access iterator


On 03/10/2011 11:37 AM, Jonathan Wakely wrote:
On 10 March 2011 09:47, Johannes Singler wrote:
The attached patch solves a conformance problem in the parallel mode
helper routine multiseq_partition.  I have added a test case for it.
multiseq_selection has similar problems, but is unused, so I plan to remove
it completely (which might call for renaming the file and the test).

Please update the copyright date in the changed file as well.

Done.


Should I use unique_ptr (or alloca, or something similar) here for a better
exception safety (this routine is not parallel itself)?

unique_ptr is C++0x only; auto_ptr would work. But I see the other heap allocations in that function are already unguarded, so there doesn't seem to be much point guarding one and not the others. How about defining a local RAII type (and combining the three allocations into one), e.g.

       struct _Guard
       {
           _DifferenceType* _M_ns;

           ~_Guard() { delete[] _M_ns; }
       } __guard = { };

       __guard._M_ns = new _DifferenceType[__m*3];

       _DifferenceType* __ns = __guard._M_ns;
       _DifferenceType* __a = __guard._M_ns + __m;
       _DifferenceType* __b = __guard._M_ns + 2*__m;
       _DifferenceType __l;
       _DifferenceType __l;

That ensures the _Guard destructor will clean up on exiting the
function, so you can remove the delete statements.
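
For illustration only, here is a minimal self-contained sketch of the guard idea being discussed; the names __example and __work_that_may_throw are placeholders, not from the patch, and the typedef merely stands in for the real _DifferenceType:

       #include <cstddef>

       typedef long _DifferenceType;

       // Hypothetical helper standing in for work that may throw.
       void __work_that_may_throw() { }

       void
       __example(std::size_t __m)
       {
           struct _Guard
           {
               _DifferenceType* _M_ns;

               ~_Guard() { delete[] _M_ns; }   // runs on return and during unwinding
           } __guard = { };                    // zero-initialises _M_ns

           // One allocation carved into three logical arrays of length __m each.
           __guard._M_ns = new _DifferenceType[__m * 3];

           _DifferenceType* __ns = __guard._M_ns;
           _DifferenceType* __a  = __guard._M_ns + __m;
           _DifferenceType* __b  = __guard._M_ns + 2 * __m;

           __work_that_may_throw();            // even if this throws, the guard frees the memory

           // No explicit delete[] statements needed.
       }

The key point is that ~_Guard() runs during stack unwinding as well as on normal return, which is what makes the allocation exception safe without any explicit cleanup.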

Well, isn't it a bit ugly to define such a guard anew every time?
In other places, parallel mode uses std::vector, but I guess that is also discouraged for internal use, since it adds the <vector> dependency.


Johannes
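
For comparison only, a minimal sketch of the std::vector form mentioned above (placeholder names again; assumes __m > 0, and pulls in the <vector> header, which is the dependency concern):

       #include <cstddef>
       #include <vector>

       typedef long _DifferenceType;

       void
       __example(std::size_t __m)
       {
           // One vector, freed automatically, carved into three logical arrays.
           std::vector<_DifferenceType> __storage(__m * 3);

           _DifferenceType* __ns = &__storage[0];
           _DifferenceType* __a  = &__storage[0] + __m;
           _DifferenceType* __b  = &__storage[0] + 2 * __m;

           // ... use __ns, __a, __b; no explicit delete[] anywhere ...
       }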

