Bug 40028 - RFE - Add GPU acceleration library to gcc
Summary: RFE - Add GPU acceleration library to gcc
Status: UNCONFIRMED
Alias: None
Product: gcc
Classification: Unclassified
Component: middle-end
Version: 4.5.0
Importance: P3 enhancement
Target Milestone: ---
Assignee: Not yet assigned to anyone
URL:
Keywords:
Depends on:
Blocks:
 
Reported: 2009-05-05 16:18 UTC by Rob
Modified: 2011-06-14 21:38 UTC
CC: 2 users

See Also:
Host: *
Target: *
Build: *
Known to work:
Known to fail:
Last reconfirmed:


Description Rob 2009-05-05 16:18:42 UTC
RFE - It would be great if gcc had a couple of GPU acceleration libraries
(ATI / NVidia) that it could use to speed up programs, similar to what is
done here:
http://www.pgroup.com/resources/accel.htm

"
The PGI 8.0 x64+GPU compilers automatically analyze whole program 
structure and data, split portions of the application between the 
x64 CPU and GPU as specified by user directives, and define and 
generate an optimized mapping of loops to automatically use the 
parallel cores, hardware threading capabilities and SIMD vector 
capabilities of modern GPUs. 

In addition to directives and pragmas that specify regions of code 
or functions to be accelerated, the PGI Fortran and C compilers 
will support user directives that give the programmer fine-grained 
control over the mapping of loops, allocation of memory, and 
optimization for the GPU memory hierarchy. 

The PGI compilers generate unified x64+GPU object files and 
executables that manage all movement of data to and from the GPU 
device while leveraging all existing host-side utilities—linker, 
librarians, makefiles—and require no changes to the existing 
standard HPC Linux/x64 programming environment.
"


A demo of a program written in the OpenCL language is here: http://www.youtube.com/watch?v=r1sN1ELJfNo&feature=channel_page

The "GPGPU Programming Developer" Webpage is here:
http://gpgpu.org/developer

Some applications can be run hundreds of times faster; see this page at NVidia:
http://www.nvidia.com/object/cuda_home.html


If we could use run-time linking to select either the ATI or NVidia 
(PlayStation?) library at run time, then gcc would remain portable and 
offer the speedup on any platform that utilizes a graphics card with 
a GPU (not just x86).
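
Here is a minimal sketch of that run-time selection in C using dlopen();
the backend library names and the gpu_saxpy entry point are invented
placeholders, not real ATI/NVidia interfaces (link with -ldl):

#include <dlfcn.h>
#include <stddef.h>

typedef void (*saxpy_fn)(int n, float a, const float *x, float *y);

saxpy_fn load_gpu_saxpy(void)
{
    /* Try each vendor backend in turn. */
    const char *backends[] = { "libgpu-nvidia.so", "libgpu-ati.so" };
    for (size_t i = 0; i < sizeof backends / sizeof *backends; i++) {
        void *handle = dlopen(backends[i], RTLD_NOW | RTLD_LOCAL);
        if (!handle)
            continue;                 /* this vendor's library absent */
        saxpy_fn fn = (saxpy_fn)dlsym(handle, "gpu_saxpy");
        if (fn)
            return fn;                /* found a usable GPU backend */
        dlclose(handle);
    }
    return NULL;  /* no GPU found: caller falls back to the CPU path */
}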

The middle end could attempt to determine which pieces of code (functions, 
inlinable groups of code, loops, etc.) would be best to offload to the 
GPU (if a supported one were detected), and the resulting program would 
run much faster for most people by using the GPU as a coprocessor.
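
Purely to illustrate the kind of decision involved (all names and
thresholds below are invented, not actual gcc middle-end code), such a
heuristic might weigh compute against data-transfer cost:

#include <stdbool.h>

struct loop_info {
    long trip_count;                 /* estimated iteration count */
    long bytes_transferred;          /* data copied to/from the device */
    long flops_per_iter;             /* arithmetic per iteration */
    bool has_loop_carried_dependence;
};

/* Offload only loops that are embarrassingly parallel and do enough
   work to amortize the host<->device copy cost; the factor of 50 is
   an arbitrary illustrative threshold. */
static bool worth_offloading(const struct loop_info *l)
{
    if (l->has_loop_carried_dependence)
        return false;                /* not safely parallelizable */
    long total_flops = l->trip_count * l->flops_per_iter;
    return total_flops > 50 * l->bytes_transferred;
}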

Thanks,
Rob
Comment 1 Andrew Pinski 2009-05-05 16:25:10 UTC
Yes GPU libraries would be nice but this needs a lot of work to begin with.  First you have to support the GPUs.  This also amounts to doubling the support.  If you really want them, since this is open source, start contributing.

Comment 2 Rob 2009-05-18 17:36:36 UTC
(In reply to comment #1)
> Yes GPU libraries would be nice but this needs a lot of work to begin with. 
> First you have to support the GPUs.  This also amounts to doubling the
> support. If you really want them, since this is open source, start
> contributing. 

I'm planning a full hardware upgrade in the coming months, and I
intend to get an expensive graphics card to try this. Some of the newest
cards will run at over a PetaFLOP (only for "embarrassingly parallel"
code - http://en.wikipedia.org/wiki/Embarrassingly_parallel ).
Some of the newest Motherboards will accept _FOUR_ Graphics Cards.

It seems less expensive to use GPUs and recompile a few apps than to 
purchase a motherboard with multiple CPUs or to find a chip faster 
than the 'i7'.

If we could "only double" our computer's speed, this endeavor
would be well worth doing. I suspect that Fortran's vector math
could be converted easily and would benefit greatly.

Look for this feature in gcc in a few years (sooner with everyone's help).

Rob
Comment 3 Rob 2009-05-20 13:10:07 UTC
> Some of the newest cards will run at over a PetaFLOP ...
I meant a TeraFLOP :( .
Comment 4 Rob 2009-10-07 11:21:47 UTC
(In reply to comment #1)
> Yes GPU libraries would be nice but this needs a lot of work to begin with. 
> First you have to support the GPUs.  This also amounts to doubling the support.
>  If you really want them, since this is open source, start contributing.


Here is a contribution from my buds at NVidia ...


Quote from the Article:

"... support for native execution of C++. For the first time in history, a GPU can run C++ code with no major issues or performance penalties ..."


nVidia GT300's Fermi architecture unveiled: 512 cores, up to 6GB GDDR5 
http://www.brightsideofnews.com/news/2009/9/30/nvidia-gt300s-fermi-architecture-unveiled-512-cores2c-up-to-6gb-gddr5.aspx


That should be more than 3/4 of the job done; only took 6 months.

Rob