Questions about LTO infrastructure and pragma omp target

Jakub Jelinek jakub@redhat.com
Tue Sep 3 18:54:00 GMT 2013


On Tue, Sep 03, 2013 at 10:29:56PM +0400, Michael V. Zolotukhin wrote:
> > The idea, as we discussed it at the GNU Tools Cauldron's Acceleration
> > BoF, is that the host program (for at least some acceleration devices)
> > will be responsible for loading the acceleration device's code to the
> > device, using some support library that is specific to each acceleration
> > device
> Unfortunately, I missed the Cauldron, though I'm familiar with the
> general idea and now I'm trying to clarify details.
> 
> > and for that it is useful to have the code readily accessible
> > in the host program, and thus link it in as "data".
> Oh, if we just link the target binary as a data section into the host
> binary, then I see no problems in that, it seems absolutely feasible
> with the existing infrastructure.  I just thought (seemingly it was
> incorrect) that we're speaking about linking of target code with the
> host code.

No.  The rough idea is that you emit the accelerator-related subset of CUs
into (specially named) LTO sections, and when linking a binary you collect
all those sections from all the input object files, compile them (ideally
separately, without -flto), link them together, and finally embed the result
into a data section of the executable.  Similarly, when linking a shared
library, you build a target shared library and embed it in a data section of
the host shared library.  It is a kind of fat binary/shared library.  Each of
the accelerators would use a different data-section name, so they could
coexist.
For the MIC, you'd then use COI to create the binary or shared libraries
from the (ro)data section memory image.  Other accelerators would use
whatever mechanism they support.  Perhaps for the MIC it should be a shared
library even in the binary case, with some stub binary kept in a data
section of the libgomp plugin, because the scheme really should work even
when the host binary itself doesn't contain any #pragma omp target at all
but the shared libraries it loads do.

	Jakub
