This is the mail archive of the mailing list for the GCC project.
Re: [PATCH] gcc parallel make check
- From: David Malcolm <dmalcolm at redhat dot com>
- To: Mike Stump <mikestump at comcast dot net>
- Cc: VandeVondele Joost <joost dot vandevondele at mat dot ethz dot ch>, Jakub Jelinek <jakub at redhat dot com>, "gcc at gcc dot gnu dot org" <gcc at gcc dot gnu dot org>, "fortran at gcc dot gnu dot org" <fortran at gcc dot gnu dot org>, "gcc-patches at gcc dot gnu dot org" <gcc-patches at gcc dot gnu dot org>
- Date: Wed, 10 Sep 2014 16:38:32 -0400
- Subject: Re: [PATCH] gcc parallel make check
On Wed, 2014-09-10 at 11:19 -0700, Mike Stump wrote:
> On Sep 9, 2014, at 8:14 AM, VandeVondele Joost
> <firstname.lastname@example.org> wrote:
> > Attached is a further revision of the patch, now dealing with
> So when last I played in this area, I wanted a command line tool that
> would bin-pack from the command line. I would then grab the seconds
> per .exp, and bin-pack to a fixed N, where N was the core
> count or something related to it, like N+1, N*1.1+1, N*2, or ceil(N*1.1).
> Then, I would just have 60-100 bins, and that -j64 run would be nicer.
> The only reason why I didn't push that patch up was I didn't know of
> any such program. :-( I mention this in case someone knows of such a
> tool that is open source, hopefully GNU software. The idea being, if
> a user has 64 cores or wants the .exp files to be more balanced on
> their target, they can be bothered to download the tool; if they don't
> have it, they get something a little more static.
> Another way is to just make the buckets 60 seconds apiece. That way,
> on a nice box, it's 60 seconds to test; otherwise, the test time is at
> most 1 minute unbalanced.
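For what it's worth, the bin-packing Mike describes is easy to sketch with the classic longest-processing-time-first heuristic. The .exp names and timings below are made up for illustration; a real tool would read the measured seconds-per-.exp data:

```python
# Greedy LPT bin-packing: place the longest-running .exp files first,
# always into the currently least-loaded bin.  Timings are invented.
import heapq

def pack(times, nbins):
    """Pack {name: seconds} into nbins, balancing total time per bin."""
    # Min-heap of (accumulated seconds, bin index, member list).
    bins = [(0.0, i, []) for i in range(nbins)]
    heapq.heapify(bins)
    for name, secs in sorted(times.items(), key=lambda kv: -kv[1]):
        total, i, members = heapq.heappop(bins)
        members.append(name)
        heapq.heappush(bins, (total + secs, i, members))
    return sorted(bins, key=lambda b: b[1])  # back in bin order

times = {"dg.exp": 900, "compile.exp": 600, "execute.exp": 550,
         "vect.exp": 300, "lto.exp": 250, "guality.exp": 100}
for total, _, members in pack(times, 3):
    print(total, members)
```

LPT is not optimal, but it guarantees no bin exceeds the optimum by more than the longest single .exp, which is exactly the "at most one .exp unbalanced" behavior being discussed.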
Perhaps this is a silly question, but has anyone tried going the whole
way and not having buckets at all, going to an extremely fine-grained
approach: split out all of the dejagnu work into three phases:
(A) test discovery; write out a fine-grained Makefile in which *every*
testcase is its own make target (to the extreme limit of
parallelizability e.g. on the per-input-file level)
(B) invoke the Makefile, with -jN; each make target invokes dejagnu for
an individual testcase, and gets its own .log file
(C) combine the results
That way all parallelization in (B) relies on "make" to do the right
thing in terms of the total number of running jobs, available cores, load
average, etc., albeit with a performance hit for all of the extra
reinvocations of "expect" (and a reordering of the results, but we can
impose a stable sort in phase (C) I guess).
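A minimal sketch of what phase (A) might emit, with phase (C) as a merge rule; the runtest invocation, log layout, and helper names here are assumptions for illustration, not GCC's actual harness:

```python
# Phase (A): generate a Makefile in which every testcase is its own
# target; phase (C) merges the per-test logs in discovery order, which
# gives the stable ordering of results mentioned above.
def make_rules(testcases):
    """Return Makefile text: one target per testcase, plus a merge rule."""
    logs = ["log/%04d.log" % i for i in range(len(testcases))]
    out = ["all: %s" % " ".join(logs),
           # Phase (C): concatenate in discovery order (a stable sort).
           "\tcat %s > gcc.log" % " ".join(logs),
           ""]
    for log, test in zip(logs, testcases):
        out.append("%s:" % log)
        # Phase (B): one dejagnu reinvocation per testcase (assumed syntax).
        out.append("\truntest --tool gcc %s > %s 2>&1" % (test, log))
    return "\n".join(out) + "\n"

print(make_rules(["gcc.dg/dg.exp=pr60000.c", "gcc.dg/dg.exp=pr60001.c"]))
```

Running the result with "make -jN" (or "-l" for load-average capping) then gets all of the scheduling for free, at the cost of one expect startup per testcase.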
Has anyone tried this?
Hope this is constructive.