This is the mail archive of the gcc@gcc.gnu.org
mailing list for the GCC project.
Re: trying out openacc 2.0
- From: Mark Farnell <mark dot farnell at gmail dot com>
- To: gcc at gcc dot gnu dot org
- Date: Thu, 18 Dec 2014 08:38:03 +1300
- Subject: Re: trying out openacc 2.0
- Authentication-results: sourceware.org; auth=none
- References: <CADD2D=6jL75k=0sTc2WvSBfsfUs8ennZMD=tT_Yu55Ur-gk+RA at mail dot gmail dot com> <20141217111021 dot GA21619 at physik dot fu-berlin dot de>
But it would be highly unusual to build the compiler only for the
accelerator; 99% of the time you build for both the host and the
accelerator. So why can't we simplify the build process by letting
users specify the host architecture and list all the accelerators at
./configure time, so that the user only needs to invoke the build once
to build everything?
I guess the script could make a separate build directory for each host
and accelerator architecture, so that the object files of the different
architectures do not mix. This would make the build process much more
convenient.
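For reference, the multi-pass build being discussed looks roughly like the following sketch, based on the configure options described on the GCC Offloading wiki page; the source path, prefix, and build directories are illustrative, not canonical:

```shell
# Separate build directories keep the object files of each
# architecture apart, as suggested above.

# 1. Build the accelerator (nvptx) compiler first.
mkdir build-nvptx && cd build-nvptx
../gcc-src/configure \
    --target=nvptx-none \
    --enable-as-accelerator-for=x86_64-pc-linux-gnu \
    --prefix=/opt/gcc-offload
make && make install
cd ..

# 2. Then build the host compiler, telling it where the
#    offload compiler lives.
mkdir build-host && cd build-host
../gcc-src/configure \
    --enable-offload-targets=nvptx-none=/opt/gcc-offload \
    --prefix=/opt/gcc-offload
make && make install
```

A wrapper script could drive both passes from one command line, which is essentially the simplification proposed here.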
On Thu, Dec 18, 2014 at 12:10 AM, Tobias Burnus wrote:
> Mark Farnell wrote:
>> So what parameters will I need to pass to ./configure if I want to
>> support PTX offloading?
> Pre-remark: I think that the https://gcc.gnu.org/wiki/Offloading page will be
> updated, once the support has been merged to the trunk.
> I think using the triplet "nvptx-unknown-none" instead of
>> So if I want to have CPU, KNL and PTX, do I end up building three compilers?
> That's my understanding. For the offloading/target sections, you then
> generate code for the host (as fallback) and for one or multiple
> accelerators. At invocation time, you then decide which accelerator to
> use (KNL, PTX or host fallback), assuming that you targeted both
> accelerators during the build.
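The invocation-time choice described above can be sketched as follows, assuming a GCC built with nvptx offloading; the source file name is illustrative. `-foffload=` selects which accelerator code is embedded at compile time, and the standard OpenACC `ACC_DEVICE_TYPE` environment variable picks the device at run time:

```shell
# saxpy.c contains an OpenACC region; the binary carries both
# nvptx code and the host fallback.
gcc -fopenacc -foffload=nvptx-none -O2 saxpy.c -o saxpy

# Decide which device to use when the program runs,
# not when it is built:
ACC_DEVICE_TYPE=nvidia ./saxpy   # offload to the GPU
ACC_DEVICE_TYPE=host   ./saxpy   # force the host fallback
```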
>> Finally, is the nvptx-tools project mentioned in Tobias's page aiming
>> at replacing the CUDA toolchain?
> Depends on what you mean by "CUDA toolchain"; the purpose of those tools is
> limited. Namely, "as" just does some reordering and "ld" reduces the number
> of files by combining the PTX files. The actual 'assembler' is in the CUDA
> runtime. However, GCC (plus the few auxiliary tools) replaces the compilers
> (nvcc etc.), as that task is done by GCC itself.
>>> Also, are other GPUs such as the AMD ATI and the built-in GPUs such as
>>> the Intel GPU and AMD fusion supported?
> There was some work underway to support OpenACC with OpenCL as output,
> which is then fed to the OpenCL runtime library. The OpenACC part of
> that work ended up in gomp-4_0-branch and is hence not lost. I don't
> recall whether there was a branch or patch for the OpenCL support part.
> For AMD's HSA, see Jakub's email.