This is the mail archive of the mailing list for the libstdc++ project.

Index Nav: [Date Index] [Subject Index] [Author Index] [Thread Index]
Message Nav: [Date Prev] [Date Next] [Thread Prev] [Thread Next]
Other format: [Raw text]

Re: merge branch profile-stdlib

On Fri, Jun 12, 2009 at 6:20 PM, Benjamin Kosnik <> wrote:
> Silvius, I wanted to give you some quick feedback on your branch this
> week. I haven't been following this branch super closely but have
> checked it out, and started to read your documentation and build/play
> with it. However, I will be gone until mid-week and unable to follow-up
> immediately.
> So, take these as preliminary notes. Sorry if they are rushed or
> confused.
> There are some interesting ideas here. I would like some more time to
> look at the way this is organized, and think about the way options
> are presented to users. But in general, I think that there is merit in
> your approach.
> > I would like to merge the profile-stdlib branch into trunk.  I made
> > the changes requested by your previous reviews:
> > - Gave up on adding a runtime library.  The diagnostic implementation
> > was moved from profc++/ to include/profile/impl/.
> > - Reverted driver changes so that the interface is -D_GLIBCXX_DEBUG,
> > consistent with current extensions.
> I think you mean _GLIBCXX_PROFILE here.
> > Before I go with a fine comb, could you please take a look at the
> > branch and let me know if there are any other major issues?
> In profile_mode.xml, need to revisit for -fprofile-stdlib status. Also,
> can replace "stdlib" and "stdlibc++" with libstdc++.

The documentation has not been updated for a while.  I will do that ASAP.

> > The big picture organization is:
> > - All profile extension headers live in include/profile/.  They
> > include profile/base.h.
> > - All diagnostic implementations live in include/profile/impl/.  They
> > are included by profile/impl/profiler.h.
> > - profile/base.h includes profile/impl/profiler.h.  This is the only
> > direct connection between include/profile/ and include/profile/impl/.
> >
> Useful, thanks.
> > - The relation with debug and parallel extensions has not been
> > defined.
> I'm currently of the mind that all these extensions should be mutually
> exclusive. Thoughts?

I agree.  I will add a preprocessor check in config.h.
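
For reference, a minimal sketch of what that check could look like (the
exact error wording is my assumption; the macro names are the existing
_GLIBCXX_DEBUG, _GLIBCXX_PARALLEL, and _GLIBCXX_PROFILE):

```cpp
// Hypothetical config.h fragment: treat the debug, parallel, and
// profile modes as mutually exclusive, as discussed above.
#if defined(_GLIBCXX_PROFILE) \
    && (defined(_GLIBCXX_DEBUG) || defined(_GLIBCXX_PARALLEL))
# error _GLIBCXX_PROFILE cannot be used with _GLIBCXX_DEBUG or _GLIBCXX_PARALLEL
#endif
```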

> > - We are using vector and unordered_map in the implementation, with
> > default allocators.  This can cause infinite cycles if say the
> > application code uses libstdc++ containers to gather allocation
> > statistics.
> As with Jonathan, this strikes me as something to be fixed sooner
> rather than later. As you say, the design flaw is understood (custom
> allocators for internal vector and unordered_map instances.)

OK.  I will fix this on the branch.
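
For the record, the usual fix is along these lines: give the internal
containers an allocator that goes straight to malloc/free, so the
bookkeeping never re-enters an instrumented allocation path.  A rough
sketch (the names are mine, not the branch's):

```cpp
#include <cstdlib>
#include <new>
#include <vector>

// Hypothetical sketch: a minimal allocator that bypasses operator new
// by calling malloc/free directly, so the profile machinery's own
// containers cannot recurse into instrumented allocation.
template <typename T>
struct malloc_allocator {
  typedef T value_type;

  malloc_allocator() {}
  template <typename U> malloc_allocator(const malloc_allocator<U>&) {}

  T* allocate(std::size_t n) {
    void* p = std::malloc(n * sizeof(T));
    if (!p) throw std::bad_alloc();
    return static_cast<T*>(p);
  }
  void deallocate(T* p, std::size_t) { std::free(p); }
};

template <typename T, typename U>
bool operator==(const malloc_allocator<T>&, const malloc_allocator<U>&)
{ return true; }
template <typename T, typename U>
bool operator!=(const malloc_allocator<T>&, const malloc_allocator<U>&)
{ return false; }

// An internal statistics container would then be declared as:
typedef std::vector<int, malloc_allocator<int> > stat_vector;
```

The same allocator argument would be applied to the internal
unordered_map instances.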

> > - The machine-specific performance model component is not on the
> > branch.  I decided to treat it as a separate component and add it
> > later.  The decisions are based on generic operation performance
> > ratios.
> Can we get some visibility here in terms of a specific instance of a
> machine-specific model? This seems very hand-wavy. Where are you going
> with this?
> Certainly, "Cost Model: Math goes here"
> is not sufficient.

It's not clear to me how to integrate the module that creates the cost
model parameters for a given machine.  The cost model is a database
with data points such as "operation=map<int,int>::insert
initial_size=100 average_time=100ns".

The cost model generator is a collection of C++ programs that exercise
various library operations and record execution times into a database.
My take at this point is to just define a precise format for this
database and provide a set of default values in case the database is
not provided.  It's not clear how to distribute the cost model
generator.  Do you have any suggestions?

Also, there is another component whose distribution is unclear.  We
produce a trace that needs to be interpreted at some point.  We can do
the interpretation when the program ends, but it would also be useful
as a standalone tool, so that we could process traces from several
executions in order

> > - Many diagnostics have not been implemented yet.
> Is there a list of possible directions for future work? I see a lot
> already. I'm interested in where you are going with

We have some future work outlined in
Yes, there's a lot more that can be done.  For programs that reference
memory through libstdc++, this is a good handle at the right
abstraction level, so there are many potential uses.

> Anyway.
> Stepping back a bit, now that you have this, can you show how it's
> applied to some kind of C++ source base? How would this be integrated
> into a user's build system? How are you using it in the real world? Has
> it been useful? What kind of expectations should we have about execution
> speed w/ profile-stdlib active, given that you had concerns about debug
> mode overhead?
> best,
> benjamin

I will do it on the SPEC apps that use the standard C++ library, using
the SPEC harness, and get back with details and overhead measurements.

Our overhead control mechanism is based on:
- compile time switches (turn on/off each diagnostic or classes of
diagnostics with, e.g., -D_GLIBCXX_PROFILE_VECTOR_TO_LIST=1)
- run time switches (turn on/off each diagnostic or classes of
diagnostics with, e.g., "set GLIBCXX_PROFILE_VECTOR_TO_LIST=1")
- run time stack trace depth limit.
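
As a concrete illustration of the run time switches, a minimal sketch
of how a diagnostic could check its environment variable (the function
name is my invention; only the variable naming follows the example
above):

```cpp
#include <cstdlib>
#include <cstring>

// Hypothetical sketch: a diagnostic is enabled at run time when its
// environment variable is set to a non-zero value, e.g.
//   GLIBCXX_PROFILE_VECTOR_TO_LIST=1
bool diagnostic_enabled(const char* env_name) {
  const char* value = std::getenv(env_name);
  if (!value) return false;             // unset: diagnostic stays off
  return std::strcmp(value, "0") != 0;  // any value but "0" enables it
}
```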
There are two reasons why we need overhead control.  First, it is very
important for continuous testing infrastructures, where (1) you don't
care exactly where an error occurs, only whether one was introduced,
and (2) you want robustness and minimal overhead, because you're
running tests continuously.  Second, some applications simply behave
differently if you introduce too much overhead.
All this will keep me busy for a while.  I'll get back when all the
issues have been addressed.

Thank you!
