TY - GEN
T1 - Porting irregular reductions on heterogeneous CPU-GPU configurations
AU - Huo, Xin
AU - Ravi, Vignesh T.
AU - Agrawal, Gagan
PY - 2011
Y1 - 2011
N2 - Heterogeneous architectures play a significant role in High Performance Computing (HPC) today, with the popularity of accelerators such as GPUs and the trend towards integrating CPUs and GPUs. Developing applications that can effectively use these architectures is a major challenge. In this paper, we focus on one of the dwarfs in the Berkeley view on parallel computing, namely the irregular applications arising from unstructured grids. We consider the problem of executing the irregular reductions in these applications on heterogeneous architectures comprising a multi-core CPU and a GPU. We have developed a Multi-level Partitioning Framework with the following features: 1) it supports GPU execution of irregular reductions even when the dataset size exceeds the device memory, 2) it enables pipelining of the partitioning performed on the CPU with the computations on the GPU, and 3) it supports dynamic distribution of work between the multi-core CPU and the GPU. Our extensive evaluation with two different irregular applications demonstrates the effectiveness of our approach.
UR - http://www.scopus.com/inward/record.url?scp=84858060473&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=84858060473&partnerID=8YFLogxK
U2 - 10.1109/HiPC.2011.6152715
DO - 10.1109/HiPC.2011.6152715
M3 - Conference contribution
AN - SCOPUS:84858060473
SN - 9781457719516
T3 - 18th International Conference on High Performance Computing, HiPC 2011
BT - 18th International Conference on High Performance Computing, HiPC 2011
PB - IEEE Computer Society
T2 - 18th International Conference on High Performance Computing, HiPC 2011
Y2 - 18 December 2011 through 21 December 2011
ER -