This is the mail archive of the gcc@gcc.gnu.org mailing list for the GCC project.



Re: OpenACC


Dear all,

I trust Mentor currently has a very good relationship with NVIDIA. I
have friends at NVIDIA too and respect them. But for the public
good, please note that by favoring PTX as an external format you
take responsibility for the resulting vendor fragmentation and for
wasting the multi-year community effort to come up with something
more neutral. Please consider SPIR by the Khronos Group [1] as the
first-priority target. Although it is not yet fully adopted by
vendors, both AMD and NVIDIA are moving in this direction, e.g. by
using LLVM IR (very similar to SPIR) as the pre-PTX representation
in CUDA, as the IR for NVIDIA OpenCL, and as the IR for AMD OpenCL.

[1] http://www.khronos.org/registry/cl/specs/spir_spec-1.0-provisional.pdf

Best,
- D.

On 11/21/2013 08:42 PM, Jerome Glisse wrote:
> On Wed, Nov 20, 2013 at 03:41:17PM -0700, Nathan Sidwell wrote:
>> Hi, there seems to have been some confusion about the OpenACC 
>> development that we're currently engaged in.  I thought I'd
>> write here to clarify some things.
>> 
>> As Thomas previously announced, we're working on an
>> implementation of OpenACC 2.0 for x86-64/Linux host systems and
>> PTX accelerator devices.  OpenACC is specified at
>> http://www.openacc-standard.org/. There are several proprietary
>> implementations of OpenACC targeting a variety of accelerator
>> devices.  This is an opportunity to make OpenACC available in a
>> free software compiler.
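
For readers unfamiliar with OpenACC: it is a directive-based model,
so accelerated regions are marked with pragmas in otherwise ordinary
C/C++/Fortran. A minimal sketch in C (the function and array names
here are made up, not taken from the GCC work):

    /* Offload a vector addition to an accelerator. */
    void vec_add(int n, const float *a, const float *b, float *c)
    {
      #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
      for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
    }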
>> 
>> For this development the accelerated code will be PTX -- one has
>> to start somewhere.  PTX is an ISA for a virtual machine.  Its 
>> specification is public and available at 
>> http://docs.nvidia.com/cuda/parallel-thread-execution/index.html.
>> 
>> To get PTX code executed on a compute device (currently) requires
>> use of Nvidia's driver library.  That library is available for
>> zero cost and its API is documented at 
>> http://docs.nvidia.com/cuda/cuda-driver-api/index.html.
>> Although all those links contain the name 'cuda', don't be misled
>> by that -- it's an accident of history.
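
To make the above concrete, here is a minimal sketch of loading and
running a PTX module through the CUDA Driver API (error checking
omitted; the kernel name "my_kernel" is hypothetical, and this is
not the actual plugin code from the OpenACC work):

    #include <cuda.h>

    void run_ptx(const char *ptx_text)
    {
      CUdevice   dev;
      CUcontext  ctx;
      CUmodule   mod;
      CUfunction fn;

      cuInit(0);                         /* initialise the driver      */
      cuDeviceGet(&dev, 0);              /* first GPU in the system    */
      cuCtxCreate(&ctx, 0, dev);         /* create a context on it     */
      cuModuleLoadData(&mod, ptx_text);  /* JIT the PTX to device code */
      cuModuleGetFunction(&fn, mod, "my_kernel");

      /* launch one block of 128 threads, no kernel arguments */
      cuLaunchKernel(fn, 1, 1, 1, 128, 1, 1, 0, NULL, NULL, NULL);
      cuCtxSynchronize();

      cuModuleUnload(mod);
      cuCtxDestroy(ctx);
    }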
>> 
>> Nothing in this project is preventing others from working on
>> OpenACC support for different accelerator devices.  Neither will
>> anyone be forced to build OpenACC support -- as with OpenMP,
>> there will be a configuration option allowing one to configure a
>> compiler without it.
>> 
>> Targeting PTX, an ISA for use with a single manufacturer's
>> devices, is not different from targeting the other
>> single-manufacturer ISAs that GCC already supports.  It is, of
>> course, a steering committee decision as to whether a new backend
>> is acceptable once it meets technical review.
> 
> First, sorry, I have not had time to look at your code, so maybe
> your code already answers my worries.
> 
> I do not worry too much about using PTX as a target (I know one
> can implement a PTX backend for all the major GPUs, modulo maybe
> some small oddities that are NVIDIA-specific).
> 
> What worries me is that no one is thinking about how to bundle the
> end result, i.e. do you add a new ELF section that holds PTX code
> that can then be lowered at runtime, and also provide fallback CPU
> code for all those functions, so that the program can start
> running with the CPU code and switch to the GPU code as soon as
> the runtime is done generating the final GPU code?
> 
> If you add a new ELF section, maybe something PTX-like, stripped
> of any NVIDIA specificity, would be better.
> 
> An uglier solution is to store the PTX in the data section as some
> static variable and have the program forcibly link with a specific
> runtime that could then fetch it from the data section and do the
> magic.
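
For concreteness, a rough sketch of the "embed the IR in the object
file" idea being discussed -- not an existing GCC mechanism; the
section name ".gnu.offload_ptx" and the PTX fragment are made up:

    /* The compiler would emit the device IR into a dedicated,
       well-known ELF section that a runtime can locate and lower. */
    static const char offload_ptx[]
      __attribute__((section(".gnu.offload_ptx"), used)) =
      ".version 3.1\n"
      ".target sm_30\n"
      /* ... PTX for the offloaded functions would go here ... */
      "";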
> 
> 
> Basically, my point is that if we want transparent use of the GPU,
> then we want to define some standard ELF section which relies on
> some standard runtime library.
> 
> If everyone starts hiding some high-level representation of code
> inside the data section and linking with some custom,
> vendor-specific runtime, then we encourage fragmentation and a
> bunch of other bad things.
> 
> I would really like to see work done toward agreeing on a
> high-level representation of functions that could be stored in a
> new ELF section, with the compiler providing a CPU fallback for
> all of them.
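
A minimal sketch of the CPU-fallback dispatch being asked for here
(again only an illustration, not an existing interface; all names
are hypothetical):

    typedef void (*vec_add_fn)(int, const float *, const float *,
                               float *);

    /* CPU fallback the compiler would always emit. */
    static void vec_add_host(int n, const float *a, const float *b,
                             float *c)
    {
      for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];
    }

    /* Starts out pointing at the CPU code; the runtime can swap in
       a stub that launches the device version once it has finished
       lowering the embedded IR for the local hardware. */
    static vec_add_fn vec_add_impl = vec_add_host;

    void vec_add(int n, const float *a, const float *b, float *c)
    {
      vec_add_impl(n, a, b, c);
    }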
> 
> 
> Otherwise we will just keep encouraging every hardware company to
> come up with its own solution in its own corner. Some will be
> better than others, but worst of all, some executables might not
> work anywhere because they can now start relying on some closed
> library.
> 
> 
> Cheers, Jerome
> 
>> 
>> GCC supports systems with proprietary runtimes.  Historically
>> GCC had to work with proprietary C libraries -- for instance, I
>> started in GCC development using a sparc-solaris system.  Now
>> that Linux has become so prevalent, and its C library is glibc,
>> there's the opportunity to build GCC with and for free software
>> libraries. However, that in itself, hasn't caused any of the
>> non-free host or target OS support to be removed. Nor should it,
>> IMHO, prevent GCC from adding support for systems that have
>> proprietary components (IIUC some CPUs rely on an opaque blob of
>> microcode in order to function).
>> 
>> As many of you will know, CodeSourcery, which Mentor purchased a
>> few years ago, has been contributing to GCC and other GNU
>> projects for over 15 years.  Several Sourcerers are maintainers
>> of particular pieces of the GNU project (mainly toolchain
>> components).
>> 
>> nathan
>> 
>> -- Nathan Sidwell

