Offloading Support in GCC
Using GCC
GCC supports the following offload-device types (arguments are passed by -foffload-options= as described further below):
AMD GPUs (Radeon GCN/RDNA, Instinct CDNA) - see the AMD GCN specific compiler flags and the offloading specific information for AMD GCN
- Supported are gfx900 (Vega 10) and gfx906 (Vega 20) GCN5 devices, CDNA1 Instinct MI100 series devices (gfx908), CDNA2 Instinct MI200 series devices (gfx90a), and the consumer cards gfx90c (GCN5), gfx1030 and gfx1036 (RDNA2), and gfx1100 and gfx1103 (RDNA3).
GCC calls this target "GCN" for historic reasons, but CDNA/Instinct devices are also supported. See AMD GCN -march= for the supported hardware.
Note that Fiji (gfx803) support has been removed from GCC 15, and was deprecated in GCC 14 (it needed to be explicitly enabled when GCC was built using --with-arch=fiji); for GCC ≤ 13, see link for ROCm and LLVM remarks.
NVidia GPUs (nvptx) - see nvptx specific compiler flags and offload specific information for nvptx
In general:
- Offloading is enabled by default when compiling with OpenMP or OpenACC enabled
Which of those offload-device types are supported is a GCC configure option; with distribution builds, optional additional packages are required to enable it (see the next section)
- No hardware-vendor libraries (like CUDA or ROCm) are required for compilation. At run time, if the hardware library is not available and/or no suitable offload device is available, host fallback is used. Compiling with code generation for both Nvidia (nvptx) and AMD GPUs in the same binary is supported; whether such a program then runs on the host, on Nvidia GPUs, or on AMD GPUs (or on both) is decided at run time.
Disabling offloading and specifying offload-compile flags is described at https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html#index-foffload
For GCC ≤ 13, you may need to additionally specify -foffload-options=-lgfortran and/or -foffload-options=-lm during linking. (Since GCC 14, those are auto-linked if linked on the host.)
In GCC ≤ 11, pass the arguments to -foffload= instead of -foffload-options=.
OpenACC is enabled by -fopenacc, see OpenACC and https://gcc.gnu.org/onlinedocs/gcc/C-Dialect-Options.html#index-fopenacc
OpenMP is enabled by -fopenmp (a subset is enabled by -fopenmp-simd), see https://gcc.gnu.org/projects/gomp/ (which also includes a per-GCC-version implementation status)
Information about the environment variables, implementation and device specific options can be found in the GOMP manual at https://gcc.gnu.org/onlinedocs/libgomp/ (for mainline)
Additional manuals - also for older GCC versions - can be found at https://gcc.gnu.org/onlinedocs/gcc/
Using the GPU as stand-alone system:
Compile using the embedded compiler. The C compiler is typically named x86_64-pc-linux-gnu-accel-amdgcn-amdhsa-gcc or x86_64-pc-linux-gnu-accel-nvptx-none-gcc. Other compilers like gfortran and g++ might be available, depending on how the offload GCC was configured.
- To run the program:
AMD GPUs: Locate gcn-run and then run gcn-run ./a.out - assuming the compilation above produced a.out as executable.
The gcn-run tool is part of the GCC offload build and installed alongside the offload compiler. It might be located at /usr/lib64/gcc/x86_64-*/*/accel/amdgcn-amdhsa/gcn-run or <install-dir>/libexec/gcc/x86_64-pc-linux-gnu/*/accel/amdgcn-amdhsa/gcn-run (where /*/ denotes the GCC version).
Nvidia GPUs: Locate nvptx-none-run and then run nvptx-none-run ./a.out - assuming the compilation above produced a.out as executable.
The nvptx-none-run binary is part of nvptx-tools and is, hence, located in its install directory. (nvptx-none-run-single is a shell script which uses flock to ensure only one nvptx-none-run-single is active at a time.)
Building and Obtaining GCC
General build information: https://gcc.gnu.org/install/ - especially for AMD and NVidia GPU support, see also the notes at https://gcc.gnu.org/install/specific.html
Simplified how-to; see the links above and further below for more details and special cases:
Some development libraries are needed, in particular GMP, MPFR, MPC and ISL (see the install link above); alternatively, run ./contrib/download_prerequisites to build them in-tree.
Get Newlib, unpack it, and symlink Newlib's 'newlib' subdirectory into the GCC source directory (e.g. ln -s ../newlib-cygwin/newlib .); this builds it alongside the compiler for nvptx and amdgcn but not for the host. – Recommended version: the newest release or the git version; see the link for details and additional requirements.
For Nvidia support, you additionally need nvptx-tools – either as a distro package or built and installed yourself. Add its 'bin' subdirectory to your PATH for the build (only).
- For AMD support, you need 'lld' and llvm-mc/llvm-ar/llvm-nm/llvm-ranlib of LLVM 15 or higher – those have to be available as 'amdgcn-amdhsa-{ld,as,ar,nm,ranlib}' in the PATH during the build (only); you may want to use some temporary directory for this.
First build the AMD and/or Nvidia GPU support by running configure with, e.g., the following options – use an empty directory for each build:
AMD GPU: --prefix=Your_Install_Path --target=amdgcn-amdhsa --enable-as-accelerator-for=x86_64-pc-linux-gnu
Nvidia GPU: --prefix=Your_Install_Path --target=nvptx-none --with-arch=sm_70 --enable-as-accelerator-for=x86_64-pc-linux-gnu – where --with-arch sets the default ISA to sm_70.
Then build the host compiler with offloading support: --prefix=Your_Install_Path --enable-offload-targets=nvptx-none,amdgcn-amdhsa (drop the target you did not build above) – again, use an empty directory for the build
- After 'make -j N' (uses N processes concurrently) and 'make install' in all build directories, link/copy into the install directory:
- AMD GPU: Copy (or sym-link) LLVM's 'lld' and llvm-mc/llvm-ar/llvm-nm/llvm-ranlib to the install's 'libexec/gcc/x86_64-pc-linux-gnu/*/accel/amdgcn-amdhsa/' directory as 'ld', 'as', 'ar', 'nm', and 'ranlib', respectively.
- Nvidia GPU: Copy (or sym-link) the nvptx-none-{ar,as,ld,nm,ranlib} from the install's 'bin' directory to 'libexec/gcc/x86_64-pc-linux-gnu/*/accel/nvptx-none/' as {ar,as,ld,nm,ranlib} (i.e. without the nvptx-none- prefix).
That's all – you can now call '<install-dir>/bin/gcc' to compile (or add '<install-dir>/bin' to the PATH). You may want to add '<install-dir>/lib64' to the LD_LIBRARY_PATH before executing the generated binaries.
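The steps above can be condensed into the following build-script sketch (AMD GCN variant); all paths are placeholders, the GCC source tree is assumed to be in ../gcc with Newlib symlinked in as described, and it is not meant to be run verbatim:

```shell
# Provide the LLVM tools under the names GCC expects, for the build only:
mkdir -p /tmp/amdgcn-bin
for t in ld:lld as:llvm-mc ar:llvm-ar nm:llvm-nm ranlib:llvm-ranlib; do
  ln -s "$(command -v ${t#*:})" "/tmp/amdgcn-bin/amdgcn-amdhsa-${t%%:*}"
done
export PATH=/tmp/amdgcn-bin:$PATH

# Offload compiler (empty build directory):
mkdir build-amdgcn && cd build-amdgcn
../gcc/configure --prefix=Your_Install_Path --target=amdgcn-amdhsa \
                 --enable-as-accelerator-for=x86_64-pc-linux-gnu
make -j 8 && make install
cd ..

# Host compiler with offloading support (another empty build directory):
mkdir build-host && cd build-host
../gcc/configure --prefix=Your_Install_Path \
                 --enable-offload-targets=amdgcn-amdhsa
make -j 8 && make install
```

Afterwards, copy or symlink the LLVM tools into the install tree as described in the AMD GPU step above.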
- Linux distributions:
Distros usually have one GCC as the system default and often additionally ship older or newer GCC versions (or both).
GCC can be built with --enable-offload-defaulted, as done by distributions, which makes offloading optional (for building or running).
On openSUSE/SUSE, install additionally cross-{nvptx,amdgcn}-gcc13 – newer GCC can also be found at https://build.opensuse.org/project/show/devel:gcc
On Debian/Ubuntu, install additionally gcc-13-offload-{nvptx,amdgcn} – on Ubuntu, newer GCC can also be found at https://launchpad.net/~ubuntu-toolchain-r/+archive/ubuntu/test
On Fedora/RHEL, install additionally {gcc,libgomp}-offload-{nvptx,amdgcn} (amdgcn support since Fedora 40)
- Besides GCC mainline, the OG branch can be of interest:
- Provides features not yet in a release - either backported from mainline or only on the OG (and scheduled for mainline inclusion)
Available as source in the devel/omp/gcc-14 (OG14) branch in the GCC git repository
An older OG12 build with AMD GCN support is available at https://sourcery.sw.siemens.com/GNUToolchain/subscription57188
For AMD, build scripts exists at https://github.com/ROCm-Developer-Tools/og
Spack (https://spack.io/) is a package management system for installing a variety of common software packages and libraries. Some of the GCC Spack packages support offloading as well.
HPC Container Maker (https://github.com/NVIDIA/hpc-container-maker), hpccm for short, provides a language for creating container recipes for Docker and Singularity. It ships with a variety of building blocks for common compilers and libraries. In particular, the gnu building block, which will install GCC, has an option for building with OpenACC support. For more information, check the documentation (https://github.com/NVIDIA/hpc-container-maker/blob/master/docs/building_blocks.md#gnu).
Offload Support by GCC version
GCC 5 and later support two offloading configurations:
OpenMP to Intel MIC targets (upcoming Intel Xeon Phi products codenamed KNL) as well as MIC emulation on host.
OpenACC to Nvidia PTX targets.
GCC 7 and later additionally support:
OpenMP to Nvidia PTX targets.
GCC 10 supports additionally:
OpenMP and OpenACC offloading to AMD GCN targets (the non-offloading back-end was introduced in GCC 9).
GCC 11 supports additionally:
- Added support for AMD GCN gfx908 GPUs
GCC 12 changed:
Support for AMD HSAIL was removed, use AMDGCN instead.
GCC 13 changed:
- Added support for AMD GCN gfx90a GPUs (MI200 series)
- Removed Intel MIC support.
GCC 14 changed:
- Deprecated (+ disabled by default) support for AMD Fiji (gfx803) GPUs due to removed ROCm and deprecated LLVM support
- Added support for the consumer cards gfx90c (GCN5), gfx1030, gfx1036 (RDNA2), gfx1100 and gfx1103 (RDNA3)
GCC 15 changed:
- Removed AMD Fiji (gfx803) GPU support.
This needs to be updated some more for OpenACC.
Terminology
Host compiler — a regular compiler. Not to be confused with build/host/target configure terms.
Accel compiler — a compiler that reads intermediate representation from the special LTO sections, and generates code for the accelerator device. Also called the "offload compiler" or "target compiler".
OpenMP — open multi-processing, supporting vector, thread and offloading directives/pragmas.
OpenACC — open accelerators, supporting offloading directives/pragmas.
Building host and accel compilers
Note that many Linux distributions support offloading compilers, which typically ship in additional packages. Most build GCC such that OpenMP target/OpenACC sections are not offloaded (i.e. run on the host) — unless -foffload=<targets> has been specified explicitly.
See Installing GCC for generic build information
See also Target specific installation notes for nvptx and AMD GCN
The host and offload compilers need to be able to find each other. This is achieved by installing the offload compiler into special locations, and informing each about the presence of the other. All available offload compilers must first be configured with "--enable-as-accelerator-for=host-triplet", and installed into the same prefix as the host compiler. Then the host compiler is built with "--enable-offload-targets=target1,target2,..." which identifies the offload compilers that have already been built and installed.
The install locations for the offload compilers differ from those of a normal cross toolchain, by the following mapping:
bin/$target-gcc -> bin/$host-accel-$target-gcc
lib/gcc/$target/$ver/ -> lib/gcc/$host/$ver/accel/$target
It may be necessary to compile offload compilers with a sysroot, since otherwise install locations for libgomp could clash (maybe that library needs to move into lib/gcc/..?)
A target needs to provide a mkoffload tool if it wishes to be usable as an accelerator. It is installed as one of EXTRA_PROGRAMS, and the host lto-wrapper knows how to find it from the paths described above. mkoffload will invoke the offload compiler in LTO mode to produce an offload binary from the host object files, then post-process this to produce a new object file that can be linked in with the host executable. It can find the host compiler by examining the COLLECT_GCC environment variable, and it must take care to clear this and certain other environment variables when executing the offload compiler so as to not confuse it.
Compilation process
Host compiler performs the following actions:
After #pragma omp target lowering and expansion, a new outlined function with the attribute "omp declare target" emerges — it will be later compiled both by host and accel compilers to produce two versions (or N+1 versions in case of N different accel targets).
The decls for all global variables marked with the "omp declare target" attribute, as well as decls for outlined target regions, are inserted into the offload_vars and offload_funcs arrays.
The expansion phase replaces pragmas with corresponding calls to the runtime library libgomp (GOMP_target{,_ext}, GOMP_target_data{,_ext} + GOMP_target_end_data, GOMP_target_update{,_ext}, GOMP_target_enter_exit_data). These calls are preceded by initialization of special structures containing arguments for the outlined functions (.omp_data_arr.*, .omp_data_sizes.*, .omp_data_kinds.*).
During the ipa_write_summaries pass the intermediate representation of outlined functions is streamed out into the .gnu.offload_lto_* sections of the "fat" object file. This object file also may contain .gnu.lto_* sections for the regular link-time optimizations.
Also the decls from offload_funcs and offload_vars are streamed out into the .gnu.offload_lto_.offload_table section. Later an accel compiler will read this section to produce the target's mapping table.
In the omp_finish_file function the addresses from offload_funcs and offload_vars are written into the .gnu.offload_funcs and .gnu.offload_vars sections correspondingly.
Optionally, if -flto is present, the decls from offload_funcs and offload_vars are streamed out into the .gnu.lto_.offload_table section. Later the host compiler in LTO mode will use them to produce the final host table with addresses.
When all source files are compiled, the pre-linker driver collect2 is invoked. It runs the linker, which loads the linker plugin liblto_plugin.so, which runs lto-wrapper. Without offloading, lto-wrapper is called for link-time recompilation if at least one object file contains .gnu.lto_* sections. If some files contain offloading, the linker plugin will execute lto-wrapper even if there are no .gnu.lto_* sections. Offloading without the linker plugin is not supported.
lto-wrapper runs mkoffload for each accel target, specified during the configuration.
mkoffload runs accel compiler, which reads IR from the .gnu.offload_lto_* sections and compiles it for the accel target. Then mkoffload packs this target code (image) into the special section of a new host's object file. The object file produced with mkoffload should contain a constructor that calls GOMP_offload_register{,_ver} to identify itself at run-time. Arguments to that function are a symbol called __OFFLOAD_TABLE__ (provided by libgcc and unique per shared object), a target identifier, and some other data needed for a particular target (a pointer to the image, a table with information about mappings between host and offload functions and variables).
Linker adds new object files, produced by mkoffloads, to the list of host's input object files.
Address mapping tables
This example shows how the tables with addresses are created. It consists of 3 source files: apple.c, banana.c and citron.c. Each of them contains 2 outlined target regions *._omp_fn.{0,1}. Global variables are handled in a similar manner. There are 3 different ways of compilation, which are described in detail below.
Section name | Content
.gnu.lto_.offload_table | Contains IR of the decls for the host compiler
.gnu.offload_lto_.offload_table | Contains IR of the decls for the accel compiler
.gnu.offload_funcs | Contains addresses in the host binary
<Target section> | Contains addresses in the target image
All files without -flto
First,
gcc -c -fopenmp apple.c banana.c citron.c
produces 3 object files with the following sections:
apple.o:
Section name | Content
.gnu.offload_lto_.offload_table | apple._omp_fn.0, apple._omp_fn.1
.gnu.offload_funcs | apple._omp_fn.0, apple._omp_fn.1

banana.o:
Section name | Content
.gnu.offload_lto_.offload_table | banana._omp_fn.0, banana._omp_fn.1
.gnu.offload_funcs | banana._omp_fn.0, banana._omp_fn.1

citron.o:
Section name | Content
.gnu.offload_lto_.offload_table | citron._omp_fn.0, citron._omp_fn.1
.gnu.offload_funcs | citron._omp_fn.0, citron._omp_fn.1
Next,
gcc -fopenmp apple.o banana.o citron.o
runs an accel compiler, which reads IR from .gnu.offload_lto_.offload_table and produces the final target table:
Target image:
Section name | Content
<Target section> | apple._omp_fn.0, apple._omp_fn.1, banana._omp_fn.0, banana._omp_fn.1, citron._omp_fn.0, citron._omp_fn.1
Finally, the host linker joins these 3 objects and therefore .gnu.offload_funcs sections into the host binary:
Host binary:
Section name | Content
.gnu.offload_funcs | <__offload_func_table>
__offload_func_table and __offload_funcs_end are special symbols, defined in crtoffloadbegin.o and crtoffloadend.o respectively.
All files with -flto
First,
gcc -c -fopenmp -flto apple.c banana.c citron.c
produces 3 object files with the following sections:
apple.o:
Section name | Content
.gnu.lto_.offload_table | apple._omp_fn.0, apple._omp_fn.1
.gnu.offload_lto_.offload_table | apple._omp_fn.0, apple._omp_fn.1

banana.o:
Section name | Content
.gnu.lto_.offload_table | banana._omp_fn.0, banana._omp_fn.1
.gnu.offload_lto_.offload_table | banana._omp_fn.0, banana._omp_fn.1

citron.o:
Section name | Content
.gnu.lto_.offload_table | citron._omp_fn.0, citron._omp_fn.1
.gnu.offload_lto_.offload_table | citron._omp_fn.0, citron._omp_fn.1
Next,
gcc -fopenmp apple.o banana.o citron.o
runs an accel compiler, which produces the final target table, like in the previous case:
Target image:
Section name | Content
<Target section> | apple._omp_fn.0, apple._omp_fn.1, banana._omp_fn.0, banana._omp_fn.1, citron._omp_fn.0, citron._omp_fn.1
Next, the host compiler is executed in LTO WPA mode, i.e. it reads the IR from the .gnu.lto_.offload_table sections of apple.o, banana.o and citron.o, and writes the joint table into .gnu.lto_.offload_table in the temporary object ccXXXXXX.ltrans0.o:
ccXXXXXX.ltrans0.o:
Section name | Content
.gnu.lto_.offload_table | apple._omp_fn.0, apple._omp_fn.1, banana._omp_fn.0, banana._omp_fn.1, citron._omp_fn.0, citron._omp_fn.1
In case of multiple partitions the joint table is written into the first partition only.
Next, the host compiler is executed in LTO LTRANS mode. It reads the temporary table from .gnu.lto_.offload_table and writes the final table into the final object ccXXXXXX.ltrans0.ltrans.o:
ccXXXXXX.ltrans0.ltrans.o:
Section name | Content
.gnu.offload_funcs | apple._omp_fn.0, apple._omp_fn.1, banana._omp_fn.0, banana._omp_fn.1, citron._omp_fn.0, citron._omp_fn.1
Finally, the host linker joins crtoffloadbegin.o, ccXXXXXX.ltrans0.ltrans.o and crtoffloadend.o:
Host binary:
Section name | Content
.gnu.offload_funcs | <__offload_func_table>
Some files with and some without -flto
First,
gcc -c -fopenmp banana.c
gcc -c -fopenmp -flto apple.c citron.c
produces 3 object files with the following sections:
apple.o:
Section name | Content
.gnu.lto_.offload_table | apple._omp_fn.0, apple._omp_fn.1
.gnu.offload_lto_.offload_table | apple._omp_fn.0, apple._omp_fn.1

banana.o:
Section name | Content
.gnu.offload_lto_.offload_table | banana._omp_fn.0, banana._omp_fn.1
.gnu.offload_funcs | banana._omp_fn.0, banana._omp_fn.1

citron.o:
Section name | Content
.gnu.lto_.offload_table | citron._omp_fn.0, citron._omp_fn.1
.gnu.offload_lto_.offload_table | citron._omp_fn.0, citron._omp_fn.1
Next, while running
gcc -fopenmp apple.o banana.o citron.o
the linker plugin creates a list of objects with offload sections and passes it to lto-wrapper. The order must be exactly the same as the final order after recompilation and linking. In this example it is: apple.o, citron.o and banana.o. Therefore, the accel compiler will produce the following target table:
Target image:
Section name | Content
<Target section> | apple._omp_fn.0, apple._omp_fn.1, citron._omp_fn.0, citron._omp_fn.1, banana._omp_fn.0, banana._omp_fn.1
Next, the host compiler recompiles the LTO objects (apple.o and citron.o) into ccXXXXXX.ltrans0.ltrans.o with the following table:
ccXXXXXX.ltrans0.ltrans.o:
Section name | Content
.gnu.offload_funcs | apple._omp_fn.0, apple._omp_fn.1, citron._omp_fn.0, citron._omp_fn.1
Finally, the host linker joins all objects in this order: crtoffloadbegin.o, ccXXXXXX.ltrans0.ltrans.o, banana.o, crtoffloadend.o; with the following host table:
Host binary:
Section name | Content
.gnu.offload_funcs | <__offload_func_table>
Compilation without -flto
Offloading-related steps are marked in bold.
gcc
  cc1                           # Compile first source file into plain asm + intermediate representation for accel
  as                            # Assemble this asm + IR into an object file
  ...                           # Compile and assemble all remaining source files
  collect2                      # Pre-linker driver
    collect-ld                  # Simple wrapper over ld
      ld with liblto_plugin.so  # Perform linking
        lto-wrapper             # Is called from liblto_plugin.so
          intelmic/mkoffload    # Prepare offload image for Intel MIC devices
            accel_gcc           # Read target IR from all objects and produce target DSO
            objcopy             # Save target DSO in a special section of a new host object file
          .../mkoffload         # Prepare images for other targets
          ...
Compilation with -flto
gcc
  cc1                           # Compile first source file into plain asm + intermediate representation + IR for accel
  as                            # Assemble this asm + IR into a temporary object file
  ...                           # Compile and assemble all remaining source files
  collect2                      # Pre-linker driver
    collect-ld                  # Simple wrapper over ld
      ld with liblto_plugin.so  # Perform linking
        lto-wrapper             # Is called from liblto_plugin.so
          gcc
            lto1                # Perform whole program analysis and split into new partitions
          gcc
            lto1                # Perform local transformations in the first object file
            as                  # Assemble into final object code
          ...                   # Perform local transformations in each partitioned object file
          intelmic/mkoffload    # Prepare offload image for Intel MIC devices
            accel_gcc           # Read target IR from all partitions and produce target DSO
            objcopy             # Save target DSO in a special section of a new host object file
          .../mkoffload         # Prepare images for other targets
          ...
Compilation options
The syntax below works with all GCC versions. In GCC 12, the option was split into -foffload= (selecting the targets) and -foffload-options= (passing options to them), avoiding the side effects of -foffload=target=-option and having clearer semantics.
The main option to control offloading is:
-foffload=<targets>=<options>
By default, GCC will build offload images for all offload targets specified in configure with the non-target-specific options passed to the host compiler. (However, in most Linux distributions offloading is disabled by default (executed on the host) and -foffload=<targets> is required at compile time to enable offloading to accelerators.) This option is used to control offload targets and options for them. It can be used in a few ways:

-foffload=disable
    Tells GCC to disable offload support. Target regions will be run in host fallback mode.
-foffload=<targets>
    Tells GCC to build offload images for <targets>. They will be built with the non-target-specific options passed to the host compiler.
-foffload=<options>
    Tells GCC to build offload images for all targets specified in configure. They will be built with the non-target-specific options passed to the host compiler plus <options>.
-foffload=<targets>=<options>
    Tells GCC to build offload images for <targets>. They will be built with the non-target-specific options passed to the host compiler plus <options>.
<targets> are separated by commas. Several <options> can be specified by separating them with spaces. Options specified via -foffload are appended to the end of the option set, so in case of option conflicts they take priority. The -foffload flag can be specified several times, and you have to do that to specify different <options> for different <targets>.
Before GCC 14, you may need to specify -foffload=-lm and for Fortran -foffload=-lgfortran, if the offloaded code uses math functions or Fortran-library procedures.
If you use atomics directly or indirectly, you may need to specify -foffload=-latomic or, if only one target needs it, e.g., -foffload=nvptx-none=-latomic.
For AMD GCN devices, you additionally have to specify the GPU to be used: -march=<name>, where name is either fiji (third generation; removed in GCC 15), gfx900 or gfx906 (fifth-generation Vega), or gfx908 or gfx90a (CDNA). To apply this setting only to the AMD GCN offloading target, and not to the host (-march=…) or to all offloading targets (as with -foffload=-march=…), use -foffload=amdgcn-amdhsa=<options>. For instance: -foffload=amdgcn-amdhsa="-march=gfx906". [NOTE: the target triplet can be set when building the compiler and might differ between vendors; it can also be, e.g., 'amdgcn-unknown-amdhsa'.]
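With the GCC 12 split syntax mentioned above, the same GPU selection can be written as follows (a sketch; test.c is a placeholder source file):

```shell
# GCC >= 12 spelling: -foffload= selects the target,
# -foffload-options= passes the per-target flags.
gcc -fopenmp -foffload=amdgcn-amdhsa \
    -foffload-options=amdgcn-amdhsa=-march=gfx906 test.c
```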
Also there are several internal options, which should not be specified by the user:

-foffload-abi=[lp64|ilp32]
    This option is generated by the host compiler. It is supposed to tell mkoffload (and the offload compiler) which ABI is used in the streamed GIMPLE, because host and offload compilers must have the same ABI.
-foffload-objects=/tmp/ccxxxha
    This option is generated by the linker plugin. It is used to pass the list of object files with offloading to lto-wrapper.
Examples
gcc -fopenmp -c -O2 test1.c
gcc -fopenmp -c -O1 -msse -foffload=-mavx test2.c
gcc -fopenmp -foffload="-O3 -v" test1.o test2.o
In this example the offload images will be built with the following options: "-O2 -mavx -O3 -v" for targets specified in configure.
gcc -fopenmp -foffload=x86_64-intelmicemul-linux-gnu="-mavx2" -foffload=nvptx-none -foffload="-O3" -O2 test.c
In this example 2 offload images will be built: for MIC with "-O2 -mavx2 -O3" and for PTX with "-O2 -O3".
gfortran -fopenmp -foffload=amdgcn-amdhsa="-march=gfx900" -foffload=-lgfortran -Ofast test.f90
In this example, the AMD GCN offload image will be built for Vega GPUs (-march=gfx900) and libgfortran will be linked into the offload image. (Note: Since GCC 14, the libgfortran linking happens automatically when linking with gfortran.)
Runtime support in libgomp
libgomp plugins
libgomp is designed to be independent of the accelerator type it works with. To make this possible, plugins are used: libgomp itself contains only a generic interface and callbacks into the plugin for invoking target-dependent functionality. Plugins are shared objects implementing the set of routines listed below.
Common for OpenMP and OpenACC:
GOMP_OFFLOAD_get_name
GOMP_OFFLOAD_get_caps
GOMP_OFFLOAD_get_type
GOMP_OFFLOAD_get_num_devices
GOMP_OFFLOAD_init_device
GOMP_OFFLOAD_fini_device
GOMP_OFFLOAD_version
GOMP_OFFLOAD_load_image
GOMP_OFFLOAD_unload_image
GOMP_OFFLOAD_alloc
GOMP_OFFLOAD_free
GOMP_OFFLOAD_dev2host
GOMP_OFFLOAD_host2dev
OpenMP specific:
GOMP_OFFLOAD_run GOMP_OFFLOAD_async_run GOMP_OFFLOAD_dev2dev
OpenACC specific:
GOMP_OFFLOAD_openacc_parallel
GOMP_OFFLOAD_openacc_register_async_cleanup
GOMP_OFFLOAD_openacc_async_test
GOMP_OFFLOAD_openacc_async_test_all
GOMP_OFFLOAD_openacc_async_wait
GOMP_OFFLOAD_openacc_async_wait_async
GOMP_OFFLOAD_openacc_async_wait_all
GOMP_OFFLOAD_openacc_async_wait_all_async
GOMP_OFFLOAD_openacc_async_set_async
GOMP_OFFLOAD_openacc_create_thread_data
GOMP_OFFLOAD_openacc_destroy_thread_data
GOMP_OFFLOAD_openacc_get_current_cuda_device
GOMP_OFFLOAD_openacc_get_current_cuda_context
GOMP_OFFLOAD_openacc_get_cuda_stream
GOMP_OFFLOAD_openacc_set_cuda_stream
libgomp gets the list of offload targets at configure time (specified by --enable-offload-targets=target1,target2,...). During offload initialization, it tries to load plugins named libgomp-plugin-<target>.so.1 from the standard dynamic linker paths. The plugins can use third-party target-dependent libraries to perform low-level interaction with the accel devices. E.g., the plugin for Intel MIC devices uses liboffloadmic.so for implementing the libgomp callbacks, and the plugin for Nvidia PTX devices uses libcuda.so.
Address translation
When #pragma omp target is expanded, the host_addr of outlined function is passed to GOMP_target{,_ext}. If target device is not available, libgomp just performs host fallback using host_addr. But to run the function on target, it needs to translate host_addr into the corresponding target_addr. The idea is to have [ host_addr, size ] arrays in .gnu.offload_{funcs,vars} sections which are ordered exactly the same as corresponding [ target_addr ] arrays inside the target images (size is needed only for vars).
To keep this host_addr -> target_addr mapping at runtime, each device descriptor gomp_device_descr contains a splay tree. When gomp_init_device performs initialization, it walks the whole array and in each iteration picks n-th host pair host_start/host_end plus corresponding n-th target pair tgt_start/tgt_end, and inserts it into the splay tree.
Pointer mapping kinds
Libgomp pointer mapping kinds (notes)
Execution process
When an executable or dynamic shared object is loaded, it calls GOMP_offload_register{,_ver} N times, where N is the number of accel images embedded into this exec/DSO. This function stores the pointers to the images and other data needed by the accel plugin into offload_images.
The first call to GOMP_target{,_ext}, GOMP_target_data{,_ext}, GOMP_target_update{,_ext} or GOMP_target_enter_exit_data performs corresponding device initialization: it calls GOMP_OFFLOAD_init_device from the plugin, and then stores address mapping table in the splay tree.
In case of Intel MIC, GOMP_OFFLOAD_init_device creates a new process on the device, and then offloads the accel images with the type == OFFLOAD_TARGET_TYPE_INTEL_MIC. All accel images, even inside the executable, represent dynamic shared objects, which are loaded into the newly created process.
GOMP_target{,_ext} looks up the host_addr passed to it in the splay tree and passes corresponding target_addr to plugin's GOMP_OFFLOAD_run function.
Partial Offloading
Partial offloading means that for some of the potentially offloadable regions, offloadable code is not created. For example:
- Parts of an application are compiled with offloading enabled, but other parts with offloading disabled.
- Usage of constructs in the offloading region that cannot be supported:
- nvptx, for example, doesn't support setjmp/longjmp, exceptions (?), alloca, computed goto, or non-local goto;
- hsa offloading fails if the compiler can't "gridify" certain loops.
- The compiler determines that offloading is not feasible. For example, if no parallelism is usable in an offloading region, single-threaded offloading execution will typically be slower than host-fallback execution because of hardware characteristics. Also, on a non-shared memory system, offloading incurs data copy penalties.
In shared memory offloading configurations, the run-time system can just use host-fallback. If not expected by a user, this may incur a performance regression, but the program semantics will not be affected (unless in the offloading region the program makes use of any program constructs that exhibit different behavior when executing in offloaded vs. host-fallback mode). Doing host-fallback in non-shared memory offloading configurations however may lead to hard-to-find problems, if a user expects that all offloading regions are executed on the device, but in fact some of them are silently executed on the host with different data environment.
If offloaded code is expected to be run on an accelerator, but that code is not in fact available, the run-time system will (silently) resort to host-fallback execution.
Therefore it is important in such cases to emit compile-time diagnostics.
OpenMP, for example, doesn't guarantee that all target regions must be executed on the device; but in that case a user can't be sure that some library function will always offload (because the library might be replaced by a fallback version), and they will have to write something like:
map_data_to_target ();
some_library1_fn_with_offload ();
get_data_from_target ();   /* ! */
send_data_to_target ();    /* ! */
some_library2_fn_with_offload ();
get_data_from_target ();   /* ! */
send_data_to_target ();    /* ! */
some_library3_fn_with_offload ();
unmap_data_from_target ();
It may be worth discussing whether there should be a way to allow the run-time system to deduce what data needs to be resynced on target region entries/exits in presence of fallback execution; explicit copying via map(from/to:...) is too big a hammer for that.
In non-shared memory offloading configurations, it is a user error to compile parts of an application with offloading enabled but other parts with offloading disabled. The compiler/run-time system are not expected to "fix up" any possible conflicts in data management.
Currently, the compilation process (host compiler) will stop if there is an error in any offload compilation. It is under discussion to change this (at least depending on some option): either downgrade all errors in the offloading compiler into warnings that just result in the offloading image for the particular accelerator not being created, or issue errors, but still allow the linking.
How to build an offloading-enabled GCC
Patches enabling OpenMP 4.0 offloading to Intel MIC have been merged to trunk. They include general infrastructure changes, the mkoffload tool, a libgomp plugin, the Intel MIC runtime offload library liboffloadmic, and an emulator. The emulator sits beneath liboffloadmic and reproduces the MIC's hardware and software stack behavior, allowing offloaded code to run in a separate address space on the host machine. It consists of 4 shared libraries that replace the COI and MYO libraries from the Intel Manycore Platform Software Stack (MPSS). For real offloading, the user is expected to add the path to the MPSS libraries to LD_LIBRARY_PATH; this overrides the emulator libraries at run time.
tschwinge is using his gcc-playground build scripts, in particular for GCC trunk offloading-enabled builds see the branches big-offload/master (bootstrap, all languages), light-offload/master (no bootstrap, only C, C++, Fortran). Clone, populate [...]/source-{gcc,newlib,nvptx-tools} (for example, using git worktree, git-new-workdir, or symlinks to existing source trees), and then invoke the RUN script, or similar.
In the following instructions, note that DESTDIR specifies where the toolchain is to be installed. In the steps below, DESTDIR is set to /install, although any directory with sufficient write permissions should work, so long as DESTDIR is set to an absolute path. Furthermore, during installation DESTDIR may be populated with a usr/local/ subtree. If your system creates a DESTDIR/usr/local, and assuming that DESTDIR is /install as in the examples below, be sure to replace /install/bin with /install/usr/local/bin and set LD_LIBRARY_PATH to /install/usr/local/lib64 when you follow steps 3 and 4 below.
1. Building accel compiler:
For Intel MIC:
../configure --build=x86_64-intelmicemul-linux-gnu --host=x86_64-intelmicemul-linux-gnu --target=x86_64-intelmicemul-linux-gnu --enable-as-accelerator-for=x86_64-pc-linux-gnu
make
make install DESTDIR=/install
For Nvidia PTX:
(Also see https://gcc.gnu.org/install/specific.html#nvptx-x-none)
First set up nvptx-tools. Note that ptxas must be in your PATH:
${NVPTX_TOOLS_SRC}/configure
make
make install DESTDIR=/install
Next, insert a symbolic link to nvptx-newlib's newlib directory into the directory containing the GCC sources. Then proceed to build the nvptx offloading GCC. Note that DESTDIR/usr/local/bin (here, /install/usr/local/bin) needs to be in your PATH:
../configure --target=nvptx-none --enable-as-accelerator-for=x86_64-pc-linux-gnu --with-build-time-tools=[install-nvptx-tools]/nvptx-none/bin --disable-sjlj-exceptions --enable-newlib-io-long-long
make
make install DESTDIR=/install
Finally, remove the newlib symlink from the gcc sources directory.
For AMD GCN:
There's no prebuilt assembler for GCN, nor any GNU binutils port, so the first step is to build LLVM configured for AMDGPU:
cmake -D 'LLVM_TARGETS_TO_BUILD=X86;AMDGPU' -D LLVM_ENABLE_PROJECTS=lld $srcdir/llvm-13.0.1/llvm
make  # don't install
For GCC 11 and earlier, please use LLVM 9 (the LLVM included with ROCm is not supported). For GCC 12 onwards, please use LLVM 13.0.1 (LLVM 9 support will be dropped in GCC 13, and LLVM 10, 11, 12, and 13.0 are not compatible with any version of GCC). Users of the devel/omp/gcc-11 development branch should use LLVM 13.0.1 only, since 2022-05-24.
Most of the LLVM tools are unnecessary; the ones that are needed are not where GCC can find them, and they have the wrong names (from GCC's point of view), so copy a few of them into the directory where you intend to install GCC:
cp -a llvmobj/bin/llvm-ar /install/usr/local/amdgcn-amdhsa/bin/ar
cp -a llvmobj/bin/llvm-ar /install/usr/local/amdgcn-amdhsa/bin/ranlib
cp -a llvmobj/bin/llvm-mc /install/usr/local/amdgcn-amdhsa/bin/as
cp -a llvmobj/bin/llvm-nm /install/usr/local/amdgcn-amdhsa/bin/nm
cp -a llvmobj/bin/lld /install/usr/local/amdgcn-amdhsa/bin/ld
Next, insert a symbolic link to the Newlib source's "newlib" directory into the directory containing the GCC sources. The Newlib version needs to be contemporaneous with GCC, at least until the ABI is finalized. Then build GCC and Newlib together, as follows:
ln -s $newlibsrc/newlib ../newlib
../configure --target=amdgcn-amdhsa --enable-languages=c,lto,fortran --disable-sjlj-exceptions --with-newlib --enable-as-accelerator-for=x86_64-pc-linux-gnu --with-build-time-tools=/install/usr/local/amdgcn-amdhsa/bin --disable-libquadmath
make
make install DESTDIR=/install
rm ../newlib
In order to run compiled code on the GPU you will need to install the ROCm drivers and libraries.
2. Building host compiler:
../configure --build=x86_64-pc-linux-gnu --host=x86_64-pc-linux-gnu --target=x86_64-pc-linux-gnu --enable-offload-targets=x86_64-intelmicemul-linux-gnu=/install/prefix,nvptx-none=/install/usr/local/,amdgcn-amdhsa=/install/usr/local/ --with-cuda-driver=[cuda_install_path]
make
make install DESTDIR=/install
If you install both compilers without DESTDIR, then there is no need to specify the paths to accel install trees in the --enable-offload-targets option.
3. Building an application:
/install/bin/gcc -fopenmp test.c
/install/bin/gcc -fopenacc test.c
4. Running an application using the Intel MIC emulator:
export LD_LIBRARY_PATH="/install/lib64/"
./a.out
This creates two processes on the host: the a.out process and the "target" process.
KNL instructions can be emulated by running target process under Intel Software Development Emulator (SDE):
export LD_LIBRARY_PATH="/install/lib64/"
/install/bin/gcc -fopenmp -Ofast -foffload="-march=knl" test.c
OFFLOAD_EMUL_RUN="sde -knl --" ./a.out
The debugger can be attached to the target process by:
OFFLOAD_EMUL_RUN=gdb ./a.out
..., and multiple devices can be emulated by:
OFFLOAD_EMUL_KNC_NUM=2 ./a.out  # For GCC 5
OFFLOAD_EMUL_NUM=2 ./a.out      # For GCC 6
Running 'make check'
- configure, make and install accel compiler (see #1)
- configure and make host compiler (see #2)
- From the host gcc build directory run:
make check-target-libgomp
Known issues
In-tree testing is not supported yet when an accel compiler is not installed. RFC patch.
If something goes wrong during the offloading compilation, the host binary is not created. However, it's possible to continue compilation in such cases. Patch is here.
If someone builds an accel compiler without --enable-languages, or with --enable-languages other than c,c++,fortran,lto, then the bin directory will contain redundant drivers. Fix is here.
For OpenACC offloading, -foffload=disable does not do the right thing.
We should get rid of the (only) handful of ENABLE_OFFLOADING and ACCEL_COMPILER preprocessor conditionals, http://mid.mail-archive.com/874mh43i7q.fsf@kepler.schwinge.homeip.net.
Offloading compilation is slow, http://mid.mail-archive.com/87shzfa6z1.fsf@hertz.schwinge.homeip.net. Supposedly, because of having to invoke several tools (LTO streaming -> mkoffload -> offload compilers, assemblers, linkers -> combine the resulting images; but we have not done a detailed analysis on that).
Debugging offload compiler invocations
Intel MIC offloading
Intel MIC does not require a special sysroot or build-time tools, therefore the accel compiler should be configured as native (with the same triplet in the --build, --host and --target options). It is probably better, however, to configure it as a cross compiler.
The host GCC build references/depends on the Intel MIC offloading compiler's installation directory (which thus has to be built and installed earlier), http://mid.mail-archive.com/878uaq68fn.fsf@kepler.schwinge.homeip.net.
- Add support for OpenACC offloading.
Troubleshooting
- ICE from LTO functions: ensure that the host and offload compilers are built from the same sources (LTO binaries are highly version specific).
- AMD Fiji (GFX803) devices don't work at all (rocminfo says "hsa api call failure"): downgrade the drivers to ROCm 3.8.
- Error messages from the amdgcn assembler: ensure you installed a supported version of LLVM.
See also
Accelerator BoF (GNU Tools Cauldron 2014) video
OpenMP 4 Offloading Features implementation in GCC (Kirill Yukhin, GNU Tools Cauldron 2015) slides, video
Compiling for HSA accelerators with GCC (Martin Jambor, GNU Tools Cauldron 2015) slides, video
OpenACC & PTX (Nathan Sidwell, GNU Tools Cauldron 2015) video
Accelerator BoF (GNU Tools Cauldron 2015) slides1, slides2, video
Improving OpenACC kernels support in GCC (Thomas Schwinge, GNU Tools Cauldron 2017) slides
Future Direction of OpenACC (Cesar Philippidis, GNU Tools Cauldron 2018) slides video
Some overview about how the offloading works in the OpenACC, OpenMP, Offloading and GCC presentation (GNU Tools Cauldron 2022) OpenMP-OpenACC-Offload-Cauldron2022-1.pdf