This is the mail archive of the
gcc-patches@gcc.gnu.org
mailing list for the GCC project.
Re: [RFC] [nvptx] Try to cope with cuLaunchKernel returning CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES
- From: Alexander Monakov <amonakov at ispras dot ru>
- To: Thomas Schwinge <thomas at codesourcery dot com>
- Cc: Nathan Sidwell <nathan at codesourcery dot com>, gcc-patches at gcc dot gnu dot org, Bernd Schmidt <bschmidt at redhat dot com>, Jakub Jelinek <jakub at redhat dot com>
- Date: Tue, 19 Jan 2016 17:07:17 +0300 (MSK)
- Subject: Re: [RFC] [nvptx] Try to cope with cuLaunchKernel returning CUDA_ERROR_LAUNCH_OUT_OF_RESOURCES
- References: <1453195932 dot 96 dot 0 dot 59001766349 dot issue17226 at mentor dot com> <87oacheqlz dot fsf at hertz dot schwinge dot homeip dot net> <alpine dot LNX dot 2 dot 20 dot 1601191600540 dot 24832 at monopod dot intra dot ispras dot ru>
On Tue, 19 Jan 2016, Alexander Monakov wrote:
> > ... to determine an optimal number of threads per block given the number
> > of registers (maybe just querying CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK
> > would do that already?).
>
> I have implemented that for OpenMP offloading, but also, since CUDA 6.0 there's
> the cuOcc* (occupancy query) interface, which lets one simply ask the driver
> about the per-function launch limit.
Sorry, I should have mentioned that CU_FUNC_ATTRIBUTE_MAX_THREADS_PER_BLOCK is
indeed sufficient for limiting threads per block, which translates trivially
into a workers-per-gang limit in OpenACC. IMO it's also a cleaner approach in
this case than iterative backoff (if, again, the implementation is free to do
that).
When I mentioned cuOcc*, I was thinking of finding an optimal number of
blocks per device, which is a different story.
Alexander