Linking Scheme code to data-parallel CUDA-C code

Date
2014-02-28
Author
Chowdhury, AKM Rasheduzzaman
Type
Thesis
Degree Level
Masters
Abstract
In Compute Unified Device Architecture (CUDA), programmers must manage the memory operations, synchronization, and utility functions of the Central Processing Unit (CPU) programs that control and issue data-parallel, general-purpose programs running on a Graphics Processing Unit (GPU). NVIDIA Corporation developed the CUDA framework to enable the development of data-parallel GPU programs that accelerate scientific and engineering applications, providing a language extension of C called CUDA-C. A foreign-function interface comprising Scheme and CUDA-C constructs extends the Gambit Scheme compiler and enables linking of Scheme and data-parallel CUDA-C code, supporting high-performance parallel computation with reasonably low runtime overhead. We provide six test cases, implemented in both Scheme and CUDA-C, to evaluate the performance of our implementation in Gambit; it shows 0–35% overhead in the usual case. Our work enables Scheme programmers to develop expressive programs that control and issue data-parallel programs running on GPUs, while also reducing hands-on memory management.
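
To make the kind of linkage described above concrete, the following is a minimal sketch of how a separately compiled CUDA-C routine might be exposed to Gambit Scheme through the compiler's standard c-declare and c-lambda FFI forms. The wrapper name saxpy_gpu, its signature, and the build arrangement are assumptions introduced for illustration only; they are not the thesis's actual interface.

    ;; Minimal sketch, assuming an nvcc-compiled object file linked in at build
    ;; time that exposes a plain C entry point wrapping the kernel launch:
    ;;   void saxpy_gpu(int n, float a, float *x, float *y);
    ;; The (hypothetical) wrapper is assumed to copy x and y to the GPU,
    ;; launch the kernel, and copy the result back into y.

    (c-declare "extern void saxpy_gpu(int n, float a, float *x, float *y);")

    ;; Bind the C wrapper as an ordinary Scheme procedure via Gambit's FFI.
    (define saxpy-gpu!
      (c-lambda (int float (pointer float) (pointer float)) void
                "saxpy_gpu"))

A caller would obtain the two float pointers from foreign buffers it has allocated and filled by hand; automating that kind of memory management is the overhead the thesis's interface aims to reduce.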
Degree
Master of Science (M.Sc.)
Department
Computer Science
Program
Computer Science
Supervisor
Dutchyn, Christopher
Committee
Tse, John S.; Jamali, Nadeem; Roy, Chanchal
Copyright Date
December 2013
Subject
Data parallelism
GPGPU
Scheme
Skeletons
CUDA
Linking