This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.



Re: implementation of std::thread::hardware_concurrency()



On 7 Nov 2011, at 14:52, Jonathan Wakely wrote:


> On 7 November 2011 14:40, Iain Sandoe wrote:
>> so there's a reason to use sysctlbyname (and use hw.logicalcpu or
>> similar, maybe).
>> [unless that's just a buggy sysconf]

> Well if that's how they want to play it then I'm not even going to
> think about changing that code without being able to test it on a
> real system!

fair ;-)
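
A rough sketch of the hw.logicalcpu route suggested above (purely
illustrative, not the code under discussion; the sysctl name and the
zero-on-failure convention are assumptions):

#include <sys/types.h>
#include <sys/sysctl.h>

// Query the number of logical processors currently online on Darwin.
// Returns 0 when the value cannot be determined, which is the
// "unknown" convention used by std::thread::hardware_concurrency().
static unsigned int
darwin_logical_cpus ()
{
  int count = 0;
  size_t size = sizeof (count);
  if (sysctlbyname ("hw.logicalcpu", &count, &size, nullptr, 0) != 0
      || count < 1)
    return 0;
  return static_cast<unsigned int> (count);
}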


> What does "switch to dual cpu" actually mean?  Disable hyperthreading?
> Disable one core on each die to save power?  Disable one die to save
> power?  Lie about the number of cores so buggy software doesn't get
> confused?

It's done via a control panel intended as a developer's tool, so that
code can be tested on machines with different capabilities from the
development box.
I don't think the average Joe User would need or want to do this...


> I think someone with access to a Darwin box and motivation to improve
> it will have to make any improvements.  I would suggest using
> sysctlnametomib and caching the result, then using sysctl, to avoid
> the overhead of sysctlbyname parsing the name on every call.

yeah ... if only one could ulimit -hoursinday unlimited ...
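
A sketch of the sysctlnametomib-plus-caching idea (again illustrative
only; the std::call_once wrapper is just one assumed way to make the
one-time name lookup thread-safe):

#include <sys/types.h>
#include <sys/sysctl.h>
#include <mutex>

// Resolve "hw.logicalcpu" to a numeric MIB once, then reuse it so that
// later calls skip the name parsing done by sysctlbyname.
static unsigned int
cached_logical_cpus ()
{
  static int mib[CTL_MAXNAME];
  static size_t miblen = 0;
  static std::once_flag once;

  std::call_once (once, []
    {
      size_t len = CTL_MAXNAME;
      if (sysctlnametomib ("hw.logicalcpu", mib, &len) == 0)
        miblen = len;
    });

  if (miblen == 0)
    return 0;  // name lookup failed

  int count = 0;
  size_t size = sizeof (count);
  if (sysctl (mib, static_cast<unsigned int> (miblen), &count, &size,
              nullptr, 0) != 0 || count < 1)
    return 0;
  return static_cast<unsigned int> (count);
}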


> At some point in the near future (no pun intended) I want to enhance
> std::async so it checks the system load and maybe the
> hardware_concurrency when deciding whether to run an asynchronous task
> in a new thread or not, so I want std::thread::hardware_concurrency()
> to be as fast as possible.


For the 99.99% case, I suspect that assuming the number of CPUs online
== the number in the box is safe on Darwin (for normal end users).
An interested dev can do what you suggest...


Iain
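
Purely as an illustration of the std::async heuristic described above
(none of this is libstdc++ code; getloadavg and the "spawn while the
load is below the concurrency" rule are assumptions for the sketch):

#include <cstdlib>   // getloadavg
#include <thread>

// Give an asynchronous task its own thread only while the 1-minute
// load average is below the reported hardware concurrency; fall back
// to spawning when either value is unavailable.
static bool
worth_spawning_thread ()
{
  unsigned int hw = std::thread::hardware_concurrency ();
  double load[1];
  if (hw == 0 || getloadavg (load, 1) != 1)
    return true;
  return load[0] < hw;
}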

