This is the mail archive of the
mailing list for the GCC project.
Re: Git and GCC. Why not with fork, exec and pipes like in linux?
- From: "J.C. Pizarro" <jcpiza at gmail dot com>
- To: "Jon Smirl" <jonsmirl at gmail dot com>, "Linus Torvalds" <torvalds at linux-foundation dot org>
- Cc: "Jeff King" <peff at peff dot net>, "Nicolas Pitre" <nico at cam dot org>, "Daniel Berlin" <dberlin at dberlin dot org>, "Harvey Harrison" <harvey dot harrison at gmail dot com>, "David Miller" <davem at davemloft dot net>, ismail at pardus dot org dot tr, gcc at gcc dot gnu dot org, git at vger dot kernel dot org
- Date: Thu, 6 Dec 2007 20:25:28 +0100
- Subject: Re: Git and GCC. Why not with fork, exec and pipes like in linux?
On 2007/12/06, "Jon Smirl" <firstname.lastname@example.org> wrote:
> On 12/6/07, Linus Torvalds <email@example.com> wrote:
> > On Thu, 6 Dec 2007, Jeff King wrote:
> > >
> > > What is really disappointing is that we saved only about 20% of the
> > > time. I didn't sit around watching the stages, but my guess is that we
> > > spent a long time in the single threaded "writing objects" stage with a
> > > thrashing delta cache.
> > I don't think you spent all that much time writing the objects. That part
> > isn't very intensive, it's mostly about the IO.
> > I suspect you may simply be dominated by memory-throughput issues. The
> > delta matching doesn't cache all that well, and using two or more cores
> > isn't going to help all that much if they are largely waiting for memory
> > (and quite possibly also perhaps fighting each other for a shared cache?
> > Is this a Core 2 with the shared L2?)
> When I last looked at the code, the problem was in evenly dividing
> the work. I was using a four core machine and most of the time one
> core would end up with 3-5x the work of the lightest loaded core.
> Setting pack.threads up to 20 fixed the problem. With a high number of
> threads I was able to get a 4hr pack to finish in something like
> A scheme where each core could work a minute without communicating to
> the other cores would be best. It would also be more efficient if the
> cores could avoid having sync points between them.
> Jon Smirl
For multicore CPUs, don't divide the work into threads.
Divide the work into processes instead!
Tips, tricks and hacks: use fork, exec, pipes and other IPC mechanisms such as
mutexes, shared memory, file locks, semaphores, RPCs, sockets, etc.
to access the file-locked database concurrently and in parallel.
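As a minimal sketch of the "file-locked database" idea, cooperating worker
processes can serialize their writes with POSIX fcntl() record locks. The
file name, record format and helper function here are made up for
illustration; this is not git's actual code.

```c
/* Sketch: serialize access to a shared "database" file with POSIX
 * fcntl() record locks, so cooperating worker processes don't step
 * on each other.  Illustrative only -- not git's actual code. */
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Append one record under an exclusive whole-file write lock.
 * Returns 0 on success, -1 on error. */
int append_locked(const char *path, const char *record)
{
    int fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;

    struct flock lk = { .l_type = F_WRLCK, .l_whence = SEEK_SET,
                        .l_start = 0, .l_len = 0 };   /* lock whole file */
    if (fcntl(fd, F_SETLKW, &lk) < 0) {               /* block until held */
        close(fd);
        return -1;
    }

    if (write(fd, record, strlen(record)) < 0) {      /* critical section */
        close(fd);
        return -1;
    }

    lk.l_type = F_UNLCK;                              /* release the lock */
    fcntl(fd, F_SETLK, &lk);
    close(fd);
    return 0;
}
```

F_SETLKW blocks until the lock is granted, so any number of forked workers
can call this concurrently without corrupting the file.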
For an Intel Quad Core, for example (4 cores), you need a parent process and 4
child processes linked to the parent with pipes.
The parent process can be
* non-threaded, using select/epoll/libevent, or
* threaded, using Pth (GNU Portable Threads), NPTL (from Red Hat), or whatever.
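The non-threaded variant of the layout above can be sketched as follows: a
parent forks one worker per core, each connected back by a pipe, and the
parent multiplexes the pipes with select(). The worker count, messages and
function name are placeholders for illustration, not git's delta code.

```c
/* Sketch: parent process + 4 forked workers, one pipe per worker,
 * parent multiplexing the read ends with select().  Placeholder
 * work items -- illustrative only. */
#include <stdio.h>
#include <sys/select.h>
#include <sys/wait.h>
#include <unistd.h>

#define NWORKERS 4

int run_workers(void)
{
    int pipes[NWORKERS][2];

    for (int i = 0; i < NWORKERS; i++) {
        if (pipe(pipes[i]) < 0)
            return -1;
        pid_t pid = fork();
        if (pid < 0)
            return -1;
        if (pid == 0) {                       /* child: do a work slice */
            close(pipes[i][0]);
            char msg[64];
            int n = snprintf(msg, sizeof msg, "worker %d done\n", i);
            write(pipes[i][1], msg, n);       /* report back via pipe */
            _exit(0);
        }
        close(pipes[i][1]);                   /* parent keeps read end */
    }

    int open_fds = NWORKERS;
    while (open_fds > 0) {                    /* multiplex with select() */
        fd_set rd;
        FD_ZERO(&rd);
        int maxfd = -1;
        for (int i = 0; i < NWORKERS; i++) {
            if (pipes[i][0] >= 0) {
                FD_SET(pipes[i][0], &rd);
                if (pipes[i][0] > maxfd)
                    maxfd = pipes[i][0];
            }
        }
        if (select(maxfd + 1, &rd, NULL, NULL, NULL) < 0)
            return -1;
        for (int i = 0; i < NWORKERS; i++) {
            if (pipes[i][0] >= 0 && FD_ISSET(pipes[i][0], &rd)) {
                char buf[64];
                ssize_t n = read(pipes[i][0], buf, sizeof buf - 1);
                if (n > 0) {
                    buf[n] = '\0';
                    printf("parent got: %s", buf);
                } else {                      /* EOF: worker finished */
                    close(pipes[i][0]);
                    pipes[i][0] = -1;
                    open_fds--;
                }
            }
        }
    }
    while (wait(NULL) > 0)                    /* reap all children */
        ;
    return 0;
}
```

Because the workers share nothing but their pipes, there are no sync points
between cores until each one reports back, which is exactly the property Jon
asked for.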