Offloading Support in GCC

Using GCC

GCC supports the following offload-device types (arguments are passed by -foffload-options= as described further below):

In general:

Using the GPU as stand-alone system:

Building and Obtaining GCC


Offload Support by GCC version

GCC 5 and later support two offloading configurations:

GCC 7 and later additionally support:

GCC 10 supports additionally:

GCC 11 supports additionally:

GCC 12 changed:

GCC 13 changed:

GCC 14 changed:

GCC 15 changed:

/!\ This needs to be updated some more for OpenACC.

Terminology

Host compiler — a regular compiler. Not to be confused with build/host/target configure terms.

Accel compiler — a compiler that reads intermediate representation from the special LTO sections, and generates code for the accelerator device. Also called the "offload compiler" or "target compiler".

OpenMP — Open Multi-Processing, supporting vector, thread and offloading directives/pragmas.

OpenACC — Open Accelerators, supporting offloading directives/pragmas.

Building host and accel compilers

Note that many Linux distributions support offloading compilers, which typically ship in additional packages. Most distributions build GCC such that OpenMP target regions and OpenACC compute regions are not offloaded (i.e. they run on the host) unless -foffload=<targets> has been specified explicitly.

The host and offload compilers need to be able to find each other. This is achieved by installing the offload compiler into special locations, and informing each about the presence of the other. All available offload compilers must first be configured with "--enable-as-accelerator-for=host-triplet", and installed into the same prefix as the host compiler. Then the host compiler is built with "--enable-offload-targets=target1,target2,..." which identifies the offload compilers that have already been built and installed.

The install locations for the offload compilers differ from those of a normal cross toolchain, by the following mapping:

bin/$target-gcc

->

bin/$host-accel-$target-gcc

lib/gcc/$target/$ver/

->

lib/gcc/$host/$ver/accel/$target

It may be necessary to compile offload compilers with a sysroot, since otherwise install locations for libgomp could clash (maybe that library needs to move into lib/gcc/..?)

A target needs to provide a mkoffload tool if it wishes to be usable as an accelerator. It is installed as one of EXTRA_PROGRAMS, and the host lto-wrapper knows how to find it from the paths described above. mkoffload will invoke the offload compiler in LTO mode to produce an offload binary from the host object files, then post-process this to produce a new object file that can be linked in with the host executable. It can find the host compiler by examining the COLLECT_GCC environment variable, and it must take care to clear this and certain other environment variables when executing the offload compiler so as to not confuse it.

Compilation process

Host compiler performs the following actions:

  1. After #pragma omp target lowering and expansion, a new outlined function with the attribute "omp declare target" emerges; it will later be compiled by both the host and accel compilers to produce two versions (or N+1 versions in the case of N different accel targets).
    The decls of all global variables marked with the "omp declare target" attribute, as well as the decls of the outlined target regions, are inserted into the offload_vars and offload_funcs arrays.

  2. The expansion phase replaces pragmas with corresponding calls to the runtime library libgomp (GOMP_target{,_ext}, GOMP_target_data{,_ext} + GOMP_target_end_data, GOMP_target_update{,_ext}, GOMP_target_enter_exit_data). These calls are preceded by initialization of special structures, containing arguments for outlined functions (.omp_data_arr.*, .omp_data_sizes.*, .omp_data_kinds.*).

  3. During the ipa_write_summaries pass, the intermediate representation of the outlined functions is streamed out into the .gnu.offload_lto_* sections of the "fat" object file. This object file may also contain .gnu.lto_* sections for regular link-time optimization.
    The decls from offload_funcs and offload_vars are also streamed out, into the .gnu.offload_lto_.offload_table section. Later an accel compiler will read this section to produce the target's mapping table.

  4. In the omp_finish_file function, the addresses from offload_funcs and offload_vars are written into the .gnu.offload_funcs and .gnu.offload_vars sections, respectively.
    Optionally, if -flto is present, the decls from offload_funcs and offload_vars are streamed out into the .gnu.lto_.offload_table section. Later the host compiler in LTO mode will use them to produce the final host table with addresses.

  5. When all source files are compiled, the pre-linker driver collect2 is invoked. It runs the linker, which loads the linker plugin liblto_plugin.so, which in turn runs lto-wrapper. Without offloading, lto-wrapper is called for link-time recompilation if at least one object file contains .gnu.lto_* sections. If some files contain offloading sections, the linker plugin executes lto-wrapper even if there are no .gnu.lto_* sections. Offloading without the linker plugin is not supported.

  6. lto-wrapper runs mkoffload for each accel target, specified during the configuration.

  7. mkoffload runs the accel compiler, which reads IR from the .gnu.offload_lto_* sections and compiles it for the accel target. Then mkoffload packs this target code (the image) into a special section of a new host object file. The object file produced by mkoffload contains a constructor that calls GOMP_offload_register{,_ver} to identify itself at run time. Arguments to that function are a symbol called __OFFLOAD_TABLE__ (provided by libgcc and unique per shared object), a target identifier, and some other data needed for the particular target (a pointer to the image, and a table with information about the mappings between host and offload functions and variables).

  8. The linker adds the new object files produced by mkoffload to the list of host input object files.

Address mapping tables

This example shows how the tables with addresses are created. It consists of 3 source files: apple.c, banana.c and citron.c. Each of them contains 2 outlined target regions *._omp_fn.{0,1}. Global variables are handled in a similar manner. There are 3 different ways of compilation, which are described in detail below.

Section name

Content

.gnu.lto_.offload_table

Contains IR of the decls for the host compiler

.gnu.offload_lto_.offload_table

Contains IR of the decls for the accel compiler

.gnu.offload_funcs

Contains addresses in the host binary

<Target section>

Contains addresses in the target image
* For Intel MIC targets the addresses are stored in ELF binary similar to host addresses
* For Nvidia PTX targets the addresses are stored in PTX assembly

All files without -flto

First,

gcc -c -fopenmp apple.c banana.c citron.c

produces 3 object files with the following sections:

apple.o

Section name

Content

.gnu.offload_lto_.offload_table

apple._omp_fn.0
apple._omp_fn.1

.gnu.offload_funcs

apple._omp_fn.0
apple._omp_fn.1

banana.o

Section name

Content

.gnu.offload_lto_.offload_table

banana._omp_fn.0
banana._omp_fn.1

.gnu.offload_funcs

banana._omp_fn.0
banana._omp_fn.1

citron.o

Section name

Content

.gnu.offload_lto_.offload_table

citron._omp_fn.0
citron._omp_fn.1

.gnu.offload_funcs

citron._omp_fn.0
citron._omp_fn.1

Next,

gcc -fopenmp apple.o banana.o citron.o

runs an accel compiler, which reads IR from .gnu.offload_lto_.offload_table and produces the final target table:

Target image

Section name

Content

<Target section>

apple._omp_fn.0
apple._omp_fn.1
banana._omp_fn.0
banana._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1

Finally, the host linker joins these 3 objects and therefore .gnu.offload_funcs sections into the host binary:

Host binary

Section name

Content

.gnu.offload_funcs

<__offload_func_table>
apple._omp_fn.0
apple._omp_fn.1
banana._omp_fn.0
banana._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1
<__offload_funcs_end>

__offload_func_table and __offload_funcs_end are special symbols, defined in crtoffloadbegin.o and crtoffloadend.o respectively.

All files with -flto

First,

gcc -c -fopenmp -flto apple.c banana.c citron.c

produces 3 object files with the following sections:

apple.o

Section name

Content

.gnu.lto_.offload_table

apple._omp_fn.0
apple._omp_fn.1

.gnu.offload_lto_.offload_table

apple._omp_fn.0
apple._omp_fn.1

banana.o

Section name

Content

.gnu.lto_.offload_table

banana._omp_fn.0
banana._omp_fn.1

.gnu.offload_lto_.offload_table

banana._omp_fn.0
banana._omp_fn.1

citron.o

Section name

Content

.gnu.lto_.offload_table

citron._omp_fn.0
citron._omp_fn.1

.gnu.offload_lto_.offload_table

citron._omp_fn.0
citron._omp_fn.1

Next,

gcc -fopenmp apple.o banana.o citron.o

runs an accel compiler, which produces the final target table, like in the previous case:

Target image

Section name

Content

<Target section>

apple._omp_fn.0
apple._omp_fn.1
banana._omp_fn.0
banana._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1

Next, the host compiler is executed in LTO WPA mode: it reads IR from the .gnu.lto_.offload_table sections of apple.o, banana.o and citron.o, and writes the joint table into .gnu.lto_.offload_table in the temporary object ccXXXXXX.ltrans0.o:

ccXXXXXX.ltrans0.o

Section name

Content

.gnu.lto_.offload_table

apple._omp_fn.0
apple._omp_fn.1
banana._omp_fn.0
banana._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1

In case of multiple partitions the joint table is written into the first partition only.

Next, the host compiler is executed in LTO LTRANS mode. It reads the temporary table from .gnu.lto_.offload_table and writes the final table into the final object ccXXXXXX.ltrans0.ltrans.o:

ccXXXXXX.ltrans0.ltrans.o

Section name

Content

.gnu.offload_funcs

apple._omp_fn.0
apple._omp_fn.1
banana._omp_fn.0
banana._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1

Finally, the host linker joins crtoffloadbegin.o, ccXXXXXX.ltrans0.ltrans.o and crtoffloadend.o:

Host binary

Section name

Content

.gnu.offload_funcs

<__offload_func_table>
apple._omp_fn.0
apple._omp_fn.1
banana._omp_fn.0
banana._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1
<__offload_funcs_end>

Some files with and some without -flto

First,

gcc -c -fopenmp banana.c
gcc -c -fopenmp -flto apple.c citron.c

produces 3 object files with the following sections:

apple.o

Section name

Content

.gnu.lto_.offload_table

apple._omp_fn.0
apple._omp_fn.1

.gnu.offload_lto_.offload_table

apple._omp_fn.0
apple._omp_fn.1

banana.o

Section name

Content

.gnu.offload_lto_.offload_table

banana._omp_fn.0
banana._omp_fn.1

.gnu.offload_funcs

banana._omp_fn.0
banana._omp_fn.1

citron.o

Section name

Content

.gnu.lto_.offload_table

citron._omp_fn.0
citron._omp_fn.1

.gnu.offload_lto_.offload_table

citron._omp_fn.0
citron._omp_fn.1

Next, while running

gcc -fopenmp apple.o banana.o citron.o

the linker plugin creates a list of objects with offload sections and passes it to lto-wrapper. The order must be exactly the same as the final order after recompilation and linking. In this example it is: apple.o, citron.o and banana.o. Therefore, the accel compiler will produce the following target table:

Target image

Section name

Content

<Target section>

apple._omp_fn.0
apple._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1
banana._omp_fn.0
banana._omp_fn.1

Next, the host compiler recompiles the LTO objects (apple.o and citron.o) into ccXXXXXX.ltrans0.ltrans.o with the following table:

ccXXXXXX.ltrans0.ltrans.o

Section name

Content

.gnu.offload_funcs

apple._omp_fn.0
apple._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1

Finally, the host linker joins all objects in this order: crtoffloadbegin.o, ccXXXXXX.ltrans0.ltrans.o, banana.o, crtoffloadend.o; with the following host table:

Host binary

Section name

Content

.gnu.offload_funcs

<__offload_func_table>
apple._omp_fn.0
apple._omp_fn.1
citron._omp_fn.0
citron._omp_fn.1
banana._omp_fn.0
banana._omp_fn.1
<__offload_funcs_end>

Compilation without -flto

Offloading-related steps are marked in bold.

Compilation with -flto

Compilation options

/!\ The syntax below works with all GCC versions. In GCC 12, the option was split into `-foffload=` and `-foffload-options=`, avoiding the side effects of -foffload=<target>=-option and giving clearer semantics.

The main option to control offloading is:

  1. -foffload=<targets>=<options>
    By default, GCC builds offload images for all offload targets specified at configure time, using the non-target-specific options passed to the host compiler. (However, most Linux distributions disable offloading by default, i.e. regions are executed on the host, so -foffload=<targets> must be given explicitly to enable offloading to accelerators.) This option controls the offload targets and the options used for them. It can be used in a few ways:

    • -foffload=disable
      Tells GCC to disable offload support. Target regions will be run in host fallback mode.

    • -foffload=<targets>
      Tells GCC to build offload images for <targets>. They will be built with non-target-specific options passed to host compiler.

    • -foffload=<options>
      Tells GCC to build offload images for all targets specified in configure. They will be built with non-target-specific options passed to host compiler plus <options>.

    • -foffload=<targets>=<options>
      Tells GCC to build offload images for <targets>. They will be built with non-target-specific options passed to host compiler plus <options>.

    <targets> are separated by commas. Several <options> can be specified by separating them with spaces. Options specified by -foffload are appended to the end of the option set, so in case of conflicts they take priority. The -foffload flag can be specified several times; this is necessary to specify different <options> for different <targets>.

    /!\ Before GCC 14, you may need to specify -foffload=-lm and for Fortran -foffload=-lgfortran, if the offloaded code uses math functions or Fortran-library procedures.

    /!\ If you use atomics directly or indirectly, you may need to specify -foffload=-latomic or, if only one target needs it, e.g., -foffload=nvptx-none=-latomic.

For AMD GCN devices, you additionally have to specify the GPU to be used: -march=<name>, where <name> is either fiji (third generation), gfx900 or gfx906 (fifth-generation VEGA), or gfx908 or gfx90a (CDNA). To apply this setting only to the AMD GCN offloading target, and not to the host (-march=…) or to all offloading targets (as with -foffload=-march=…), use -foffload=amdgcn-amdhsa=<options>; for instance: -foffload=amdgcn-amdhsa="-march=gfx906". [NOTE: the target triplet is set when building the compiler and may differ between vendors; it can also be, e.g., 'amdgcn-unknown-amdhsa'.]

There are also several internal options, which should not be specified by the user:

  1. -foffload-abi=[lp64|ilp32]
    This option is generated by the host compiler. It tells mkoffload (and the offload compiler) which ABI is used in the streamed GIMPLE, because the host and offload compilers must use the same ABI.

  2. -foffload-objects=/tmp/ccxxxha
    This option is generated by the linker plugin. It is used to pass the list of object files with offloading sections to lto-wrapper.

Examples

Runtime support in libgomp

libgomp plugins

libgomp is designed to be independent of the accelerator type it works with. To make this possible, plugins are used: libgomp itself contains only a generic interface and callbacks into the plugin for invoking target-dependent functionality. Plugins are shared objects implementing the set of routines listed below.

Common for OpenMP and OpenACC:

GOMP_OFFLOAD_get_name
GOMP_OFFLOAD_get_caps
GOMP_OFFLOAD_get_type
GOMP_OFFLOAD_get_num_devices
GOMP_OFFLOAD_init_device
GOMP_OFFLOAD_fini_device
GOMP_OFFLOAD_version
GOMP_OFFLOAD_load_image
GOMP_OFFLOAD_unload_image
GOMP_OFFLOAD_alloc
GOMP_OFFLOAD_free
GOMP_OFFLOAD_dev2host
GOMP_OFFLOAD_host2dev

OpenMP specific:

GOMP_OFFLOAD_run
GOMP_OFFLOAD_async_run
GOMP_OFFLOAD_dev2dev

OpenACC specific:

GOMP_OFFLOAD_openacc_parallel
GOMP_OFFLOAD_openacc_register_async_cleanup
GOMP_OFFLOAD_openacc_async_test
GOMP_OFFLOAD_openacc_async_test_all
GOMP_OFFLOAD_openacc_async_wait
GOMP_OFFLOAD_openacc_async_wait_async
GOMP_OFFLOAD_openacc_async_wait_all
GOMP_OFFLOAD_openacc_async_wait_all_async
GOMP_OFFLOAD_openacc_async_set_async
GOMP_OFFLOAD_openacc_create_thread_data
GOMP_OFFLOAD_openacc_destroy_thread_data
GOMP_OFFLOAD_openacc_get_current_cuda_device
GOMP_OFFLOAD_openacc_get_current_cuda_context
GOMP_OFFLOAD_openacc_get_cuda_stream
GOMP_OFFLOAD_openacc_set_cuda_stream

libgomp gets the list of offload targets at configure time (specified by --enable-offload-targets=target1,target2,...). During offload initialization, it tries to load plugins named libgomp-plugin-<target>.so.1 from the standard dynamic linker paths. The plugins can use third-party target-dependent libraries to perform the low-level interaction with the accel devices. E.g., the plugin for Intel MIC devices uses liboffloadmic.so to implement the libgomp callbacks, and the plugin for Nvidia PTX devices uses libcuda.so.

Address translation

When #pragma omp target is expanded, the host_addr of the outlined function is passed to GOMP_target{,_ext}. If the target device is not available, libgomp just performs host fallback using host_addr. But to run the function on the target, it needs to translate host_addr into the corresponding target_addr. The idea is to have [ host_addr, size ] arrays in the .gnu.offload_{funcs,vars} sections, ordered exactly the same as the corresponding [ target_addr ] arrays inside the target images (size is needed only for vars).

To maintain this host_addr -> target_addr mapping at runtime, each device descriptor gomp_device_descr contains a splay tree. When gomp_init_device performs initialization, it walks the whole array; in each iteration it picks the n-th host pair host_start/host_end plus the corresponding n-th target pair tgt_start/tgt_end, and inserts them into the splay tree.

Pointer mapping kinds

Libgomp pointer mapping kinds (notes)

Execution process

When an executable or dynamic shared object is loaded, it calls GOMP_offload_register{,_ver} N times, where N is the number of accel images embedded in this exec/dso. This function stores the pointers to the images, and other data needed by the accel plugin, into offload_images.

The first call to GOMP_target{,_ext}, GOMP_target_data{,_ext}, GOMP_target_update{,_ext} or GOMP_target_enter_exit_data performs the corresponding device initialization: it calls GOMP_OFFLOAD_init_device from the plugin, and then stores the address mapping table in the splay tree.

In the case of Intel MIC, GOMP_OFFLOAD_init_device creates a new process on the device, and then offloads the accel images with type == OFFLOAD_TARGET_TYPE_INTEL_MIC. All accel images, even those inside the executable, are dynamic shared objects, which are loaded into the newly created process.

GOMP_target{,_ext} looks up the host_addr passed to it in the splay tree and passes the corresponding target_addr to the plugin's GOMP_OFFLOAD_run function.

Partial Offloading

Partial offloading means that for some of the potentially offloadable regions, offloadable code is not created. For example:

  1. Parts of an application are compiled with offloading enabled, but other parts with offloading disabled.
  2. Usage of constructs in the offloading region that cannot be supported:
    • nvptx, for example, doesn't support setjmp/longjmp, exceptions (?), alloca, computed goto, or non-local goto;

    • hsa offloading fails if the compiler can't "gridify" certain loops.
  3. The compiler determines that offloading is not feasible. For example, if no parallelism is usable in an offloading region, single-threaded offloading execution will typically be slower than host-fallback execution because of hardware characteristics. Also, on a non-shared memory system, offloading incurs data copy penalties.

In shared memory offloading configurations, the run-time system can just use host-fallback. If not expected by a user, this may incur a performance regression, but the program semantics will not be affected (unless in the offloading region the program makes use of any program constructs that exhibit different behavior when executing in offloaded vs. host-fallback mode). Doing host-fallback in non-shared memory offloading configurations however may lead to hard-to-find problems, if a user expects that all offloading regions are executed on the device, but in fact some of them are silently executed on the host with different data environment.

If offloaded code is expected to be run on an accelerator, but that code is not in fact available, the run-time system will (silently) resort to host-fallback execution.

Therefore it is important in such cases to emit compile-time diagnostics.

OpenMP, for example, doesn't guarantee that all target regions are executed on the device. But in that case a user can't be sure that some library function will always offload (because the library might be replaced by a fallback version), and they will have to write something like:

map_data_to_target ();
some_library1_fn_with_offload ();
get_data_from_target ();   /* ! */
send_data_to_target ();    /* ! */
some_library2_fn_with_offload ();
get_data_from_target ();   /* ! */
send_data_to_target ();    /* ! */
some_library3_fn_with_offload ();
unmap_data_from_target ();

It may be worth discussing whether there should be a way to allow the run-time system to deduce what data needs to be resynced on target region entries/exits in the presence of fallback execution; explicit copying via map(from/to:...) is too big a hammer for that.

In non-shared memory offloading configurations, it is a user error to compile parts of an application with offloading enabled but other parts with offloading disabled. The compiler/run-time system is not expected to "fix up" any resulting conflicts in data management.

Currently, the compilation process (host compiler) will stop if there is an error in any offload compilation. It is under discussion to change this (at least depending on some option): either downgrade all errors in the offloading compiler into warnings that just result in the offloading image for the particular accelerator not being created, or issue errors, but still allow the linking.

How to build an offloading-enabled GCC

Patches enabling OpenMP 4.0 offloading to Intel MIC have been merged to trunk. They include general infrastructure changes, the mkoffload tool, a libgomp plugin, the Intel MIC runtime offload library liboffloadmic, and an emulator. This emulator sits beneath liboffloadmic and reproduces the behavior of MIC's hardware and software stack, allowing offloaded code to run in a separate address space on the host machine. The emulator consists of 4 shared libraries, which replace the COI and MYO libraries from the Intel Manycore Platform Software Stack (MPSS). For real offloading, the user is supposed to add the path to the MPSS libraries to LD_LIBRARY_PATH; this will override the emulator libraries at runtime.

tschwinge is using his gcc-playground build scripts, in particular for GCC trunk offloading-enabled builds see the branches big-offload/master (bootstrap, all languages), light-offload/master (no bootstrap, only C, C++, Fortran). Clone, populate [...]/source-{gcc,newlib,nvptx-tools} (for example, using git worktree, git-new-workdir, or symlinks to existing source trees), and then invoke the RUN script, or similar.

In the following instructions, note that DESTDIR specifies where the toolchain is to be installed. In the steps below, DESTDIR is set to /install, although any directory with sufficient write permissions should work, so long as DESTDIR is set to an absolute path. Furthermore, during install, DESTDIR may be populated with usr/local/ subdirectories. If your system creates a DESTDIR/usr/local, and assuming that DESTDIR is /install as in the examples below, be sure to replace /install/bin with /install/usr/local/bin and set LD_LIBRARY_PATH to /install/usr/local/lib64 when you follow steps 3 and 4 below.

1. Building accel compiler:

For Intel MIC:

../configure --build=x86_64-intelmicemul-linux-gnu --host=x86_64-intelmicemul-linux-gnu --target=x86_64-intelmicemul-linux-gnu --enable-as-accelerator-for=x86_64-pc-linux-gnu
make
make install DESTDIR=/install

For Nvidia PTX:

(Also see https://gcc.gnu.org/install/specific.html#nvptx-x-none)

First set up nvptx-tools. Note that ptxas must be in your PATH:

  ${NVPTX_TOOLS_SRC}/configure
  make
  make install DESTDIR=/install

Next, insert a symbolic link to nvptx-newlib's newlib directory into the directory containing the gcc sources. Then proceed to build the nvptx offloading gcc. Note that /install/usr/local/bin needs to be in your PATH:

../configure --target=nvptx-none --enable-as-accelerator-for=x86_64-pc-linux-gnu --with-build-time-tools=[install-nvptx-tools]/nvptx-none/bin --disable-sjlj-exceptions --enable-newlib-io-long-long
make
make install DESTDIR=/install

Finally, remove the newlib symlink from the gcc sources directory.

For AMD GCN:

There's no prebuilt assembler for GCN, nor any GNU binutils port, so the first step is to build LLVM configured for AMDGPU:

cmake -D 'LLVM_TARGETS_TO_BUILD=X86;AMDGPU' -D LLVM_ENABLE_PROJECTS=lld $srcdir/llvm-13.0.1/llvm
make
# don't install

For GCC 11 and earlier, please use LLVM 9 (the LLVM included with ROCm is not supported). For GCC 12 onwards, please use LLVM 13.0.1 (LLVM 9 support will be dropped in GCC 13, and LLVM 10, 11, 12, and 13.0 are not compatible with any version of GCC). Users of the devel/omp/gcc-11 development branch should use LLVM 13.0.1 only, since 2022-05-24.

Most of the LLVM tools are unnecessary; they're not where GCC can find them, and they have the wrong names (from GCC's point of view). So make a few copies into the directory where you intend to install GCC:

cp -a llvmobj/bin/llvm-ar /install/usr/local/amdgcn-amdhsa/bin/ar
cp -a llvmobj/bin/llvm-ar /install/usr/local/amdgcn-amdhsa/bin/ranlib
cp -a llvmobj/bin/llvm-mc /install/usr/local/amdgcn-amdhsa/bin/as
cp -a llvmobj/bin/llvm-nm /install/usr/local/amdgcn-amdhsa/bin/nm
cp -a llvmobj/bin/lld /install/usr/local/amdgcn-amdhsa/bin/ld

Next, insert a symbolic link to the Newlib source's "newlib" directory into the directory containing the gcc sources. The Newlib version needs to be contemporaneous with GCC, at least until the ABI is finalized. Then build GCC and Newlib together, as follows:

ln -s $newlibsrc/newlib ../newlib
../configure --target=amdgcn-amdhsa --enable-languages=c,lto,fortran --disable-sjlj-exceptions --with-newlib --enable-as-accelerator-for=x86_64-pc-linux-gnu --with-build-time-tools=/install/usr/local/amdgcn-amdhsa/bin --disable-libquadmath
make
make install DESTDIR=/install
rm ../newlib

In order to run compiled code on the GPU you will need to install the ROCm drivers and libraries.

2. Building host compiler:

../configure --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --target=x86_64-pc-linux-gnu --enable-offload-targets=x86_64-intelmicemul-linux-gnu=/install/prefix,nvptx-none=/install/usr/local/,amdgcn-amdhsa=/install/usr/local/ --with-cuda-driver=[cuda_install_path]
make
make install DESTDIR=/install

If you install both compilers without DESTDIR, then there is no need to specify the paths to accel install trees in the --enable-offload-targets option.

3. Building an application:

/install/bin/gcc -fopenmp test.c
/install/bin/gcc -fopenacc test.c

4. Running an application using the Intel MIC emulator:

export LD_LIBRARY_PATH="/install/lib64/"
./a.out

This creates 2 processes on the host: the a.out process and the "target" process.

KNL instructions can be emulated by running target process under Intel Software Development Emulator (SDE):

export LD_LIBRARY_PATH="/install/lib64/"
/install/bin/gcc -fopenmp -Ofast -foffload="-march=knl" test.c
OFFLOAD_EMUL_RUN="sde -knl --" ./a.out

The debugger can be attached to the target process by:

OFFLOAD_EMUL_RUN=gdb ./a.out

..., and multiple devices can be emulated by:

OFFLOAD_EMUL_KNC_NUM=2 ./a.out # For GCC 5
OFFLOAD_EMUL_NUM=2 ./a.out     # For GCC 6

Running 'make check'

make check-target-libgomp

Known issues

Debugging offload compiler invocations

Intel MIC offloading

Troubleshooting

See also

Offloading (last edited 2024-09-02 16:26:46 by AndrewStubbs)