This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug middle-end/40028] RFE - Add GPU acceleration library to gcc
- From: "rob1weld at aol dot com" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: 18 May 2009 17:36:37 -0000
- Subject: [Bug middle-end/40028] RFE - Add GPU acceleration library to gcc
- References: <bug-40028-13830@http.gcc.gnu.org/bugzilla/>
- Reply-to: gcc-bugzilla at gcc dot gnu dot org
------- Comment #2 from rob1weld at aol dot com 2009-05-18 17:36 -------
(In reply to comment #1)
> Yes GPU libraries would be nice but this needs a lot of work to begin with.
> First you have to support the GPUs. This also amounts to doubling the
> support. If you really want them, since this is open source, start
> contributing.
I'm planning a full hardware upgrade in the coming months and intend
to get a high-end Graphics Card to try this. Some of the newest cards
deliver over a teraFLOP, though only for "embarrassingly parallel"
code ( http://en.wikipedia.org/wiki/Embarrassingly_parallel ).
Some of the newest Motherboards will accept _FOUR_ Graphics Cards.
It seems less expensive to use GPUs and recompile a few applications
than to buy a Motherboard with multiple CPUs or to hunt for a chip
faster than the 'i7'.
If we could "only double" our computer's speed, this endeavor
would be well worth doing. I suspect that Fortran's vector math
could be converted easily and would benefit greatly.
Look for this feature in gcc in a few years (sooner with everyone's help).
Rob
--
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=40028