See also: CUDADevelopersGuide

Detection of CUDA

CUDA detection at run time by any script (such as recon-all) is possible by executing a binary called cudadetect. If it exits with status 0, CUDA is available; otherwise it is not.

The following csh-style pseudocode shows how one can use the detection scheme in their own scripts:

cudadetect
if ($status == 0) then
  <cuda_enabled_binary> <options>
else
  <normal_binary> <options>
endif
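The same scheme can be written as a runnable sh sketch; the wrapper function name is illustrative, not part of FreeSurfer:

```shell
#!/bin/sh
# Run the CUDA build of a tool when cudadetect exits 0, otherwise
# fall back to the normal build. Both binary names are passed in.
run_with_cuda_fallback() {
    cuda_bin=$1
    cpu_bin=$2
    shift 2
    if cudadetect >/dev/null 2>&1; then
        "$cuda_bin" "$@"
    else
        "$cpu_bin" "$@"
    fi
}

# Usage (hypothetical):
#   run_with_cuda_fallback mri_em_register_cuda mri_em_register <options>
```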

There are a few reasons why cudadetect might not detect your CUDA setup.

* The LD_LIBRARY_PATH (Linux) or DYLD_LIBRARY_PATH (Mac OS X) environment variable doesn't include the path where the CUDA libraries reside, usually /usr/local/cuda/lib. Check whether these variables contain the CUDA library path; it is best to set them in .profile or .cshrc.

* cudadetect works by calling dlopen(), a library function that loads an arbitrary shared library at run time; cudadetect uses it to load libcudart and call cudaGetDeviceCount. If either the name of the library or the name of the function changes, cudadetect might stop working. Currently, cudadetect works for CUDA version 2.0, but effort will be taken to ensure that cudadetect calls the correct library and API whenever CUDA is upgraded. See also the section in CUDADevelopersGuide about how to install libcuda on a machine that doesn't have a CUDA GPU card (to support building CUDA code on such machines).

Changes to enable CUDA during configuration (configure.in)

This section gives an overview of how configure.in is changed and what it exports. More often than not, one need not tamper with configure.in -- it has already been changed to accommodate CUDA.

* CUDA support is enabled by passing the --with-cuda=<path of cuda> switch to ./configure. If the CUDA installation is in /usr/local/cuda (usually the default path), the ./configure command would include the switch --with-cuda=/usr/local/cuda. Note, however, that configure looks for /usr/local/cuda and uses it if found, so --with-cuda=/usr/local/cuda is not strictly necessary. The script then checks whether it can find nvcc, the CUDA compiler. This is a sanity check: if nvcc is found, ./configure assumes CUDA exists and exports variables that are helpful in compiling CUDA programs (and which are used in Makefile.am).
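Roughly what the nvcc sanity check amounts to; the prefix variable and the output text here are illustrative, not configure's actual code:

```shell
#!/bin/sh
# Mimic configure's sanity check: look for the nvcc compiler under
# the CUDA prefix (default /usr/local/cuda, as with --with-cuda).
cuda_prefix=${CUDA_PREFIX:-/usr/local/cuda}
if [ -x "$cuda_prefix/bin/nvcc" ]; then
    nvcc_found=yes
else
    nvcc_found=no
fi
echo "nvcc under $cuda_prefix: $nvcc_found"
```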

Flags which configure.in exports

These exports (for example NVCC, NVCCFLAGS, CUDA_CFLAGS, CUDA_LIBS, the BUILDCUDA conditional, and the FS_CUDA define) are typically used in the Makefile.am and source files of any binary which has a CUDA replacement. In the source:

#ifdef FS_CUDA
  #include "cudaproc.h"
#endif // FS_CUDA

Specific rules in Makefile.am can then be used to compile the .cu file where the cuda_procedure() usually resides. cudaproc.h contains the declaration of the cuda_procedure() function. More in the next section.

If a large number of functions are to be replaced this way, a separate source file should be written with a _cuda suffix. For instance, if foo_file.c is re-written with functions calling CUDA code, the new file is named foo_file_cuda.c.

Tweaking the Makefile.am of the binary

Listing of the Makefile.am of mri_em_register, to illustrate how it changes:



bin_PROGRAMS = mri_em_register
mri_em_register_LDADD= $(addprefix $(top_builddir)/, $(LIBS_MGH))

## ----
## ----

if BUILDCUDA
# BUILDCUDA is defined if configure finds CUDA
# rules for building cuda files
.cu.o:
        $(NVCC) -o $@ -c $< $(NVCCFLAGS) $(AM_CFLAGS) $(MNI_CFLAGS)

bin_PROGRAMS += mri_em_register_cuda
mri_em_register_cuda_SOURCES = mri_em_register.c \
        mri_em_register_cuda.cu
mri_em_register_cuda_CFLAGS = $(AM_CFLAGS) $(CUDA_CFLAGS) -DFS_CUDA
mri_em_register_cuda_CXXFLAGS = $(AM_CFLAGS) $(CUDA_CFLAGS) -DFS_CUDA
mri_em_register_cuda_LDADD = $(addprefix $(top_builddir)/, $(LIBS_CUDA_MGH)) $(CUDA_LIBS)
mri_em_register_cuda_LDFLAGS = $(OS_LDFLAGS)
mri_em_register_cuda_LINK = $(LIBTOOL) --tag=CC $(AM_LIBTOOLFLAGS) \
        $(LIBTOOLFLAGS) --mode=link $(CCLD) $(mri_em_register_cuda_CFLAGS) \
        $(CFLAGS) $(mri_em_register_cuda_LDFLAGS) $(LDFLAGS) -o $@
endif

# Our release target. Include files to be excluded here. They will be
# found and removed after 'make install' is run during the 'make
# release' target.
include $(top_srcdir)/Makefile.extra

As one can notice, the Makefile.am of a CUDA-enabled binary differs from a normal one only in the if BUILDCUDA .. endif block.


Steps to CUDA-ize a binary

Suppose you have a CUDA replacement for a binary (like the mri_em_register case above). As a guideline, add a block like the following to its Makefile.am:

if BUILDCUDA
# rules for building cuda files
.cu.o:
        $(NVCC) -o $@ -c $< $(NVCCFLAGS) $(AM_CFLAGS)

bin_PROGRAMS += dummy_cuda
dummy_cuda_SOURCES = dummy.c \
        dummy_cuda.cu
dummy_cuda_CFLAGS = $(AM_CFLAGS) $(CUDA_CFLAGS) -DFS_CUDA
dummy_cuda_LDADD = $(addprefix $(top_builddir)/, $(LIBS_CUDA_MGH)) $(CUDA_LIBS)
dummy_cuda_LINK = $(LIBTOOL) --tag=CC $(AM_LIBTOOLFLAGS) \
        $(LIBTOOLFLAGS) --mode=link $(CCLD) $(dummy_cuda_CFLAGS) \
        $(CFLAGS) $(dummy_cuda_LDFLAGS) $(LDFLAGS) -o $@
endif

At the top of your 'main' program, you will want to acquire the CUDA device. Users can set the FREESURFER_CUDA_DEVICE environment variable to specify which of their GPUs should be used (they can find the device numbers with the deviceQuery SDK program). Your 'main' routine should be modified as follows:

// In the preamble
#ifdef FS_CUDA
#include "devicemanagement.h"
#endif // FS_CUDA

// And for main
int main( int argc, char **argv ) {
  // Variable declarations
  // ...

#ifdef FS_CUDA
  AcquireCUDADevice();
#endif // FS_CUDA

  // Original program continues
  // ...
If your program just uses accelerated routines from libutils_cuda, this is all you need to do.


When it is necessary for CUDA code to replace or optimize a routine in the dev/utils directory (i.e., libutils), the situation is handled as follows:

For example, mrifilter.c contains CUDA code within #ifdef FS_CUDA blocks. When libutils_cuda.a is built, all the utils files are compiled with FS_CUDA defined, so the programmer simply replaces the CPU code with GPU code where needed.

Then, a binary, like mri_em_register_cuda or mri_ca_register_cuda, just links against libutils_cuda.a (see the examples above).


A lot of ideas on Autotools and CUDA integration were taken from beagle-lib.

DevelopersGuide/CUDAEnabling (last edited 2010-04-23 17:33:26 by NickSchmansky)