TY - GEN
T1 - MoHA
T2 - 2020 International Conference for High Performance Computing, Networking, Storage and Analysis, SC 2020
AU - Xing, Haoyuan
AU - Agrawal, Gagan
AU - Ramnath, Rajiv
N1 - Funding Information:
This work was funded by NSF awards SHF-1526386, CCF-1629392, CCF-2007793 and OAC-2034850.
Publisher Copyright:
© 2020 IEEE.
Copyright:
Copyright 2021 Elsevier B.V., All rights reserved.
PY - 2020/11
Y1 - 2020/11
N2 - Heterogeneous, dense computing architectures consisting of several accelerators, such as GPUs, attached to general-purpose CPUs are now integral to High-Performance Computing (HPC) systems. However, these architectures pose severe memory and I/O constraints on computations involving in-situ analytics. This paper introduces MoHA, a framework for in-situ analytics that is designed to efficiently use the limited resources available on heterogeneous platforms. MoHA achieves this efficiency through the extensive use of bitmaps as a compressed or summary representation of simulation outputs. Our specific contributions in this paper include the design of bitmap generation and storage methods suitable for GPUs, the design and efficient implementation of a set of key operators for MoHA, and demonstrations of how several real queries on real datasets can be implemented using these operators. We demonstrate that MoHA reduces I/O transfer as well as overall processing time when compared to a baseline that does not use compressed representations.
AB - Heterogeneous, dense computing architectures consisting of several accelerators, such as GPUs, attached to general-purpose CPUs are now integral to High-Performance Computing (HPC) systems. However, these architectures pose severe memory and I/O constraints on computations involving in-situ analytics. This paper introduces MoHA, a framework for in-situ analytics that is designed to efficiently use the limited resources available on heterogeneous platforms. MoHA achieves this efficiency through the extensive use of bitmaps as a compressed or summary representation of simulation outputs. Our specific contributions in this paper include the design of bitmap generation and storage methods suitable for GPUs, the design and efficient implementation of a set of key operators for MoHA, and demonstrations of how several real queries on real datasets can be implemented using these operators. We demonstrate that MoHA reduces I/O transfer as well as overall processing time when compared to a baseline that does not use compressed representations.
KW - Accelerator architecture
KW - Data compression
KW - High performance computing
KW - Indexes
KW - Query processing
KW - Scientific computing
UR - http://www.scopus.com/inward/record.url?scp=85102369106&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102369106&partnerID=8YFLogxK
U2 - 10.1109/SC41405.2020.00086
DO - 10.1109/SC41405.2020.00086
M3 - Conference contribution
AN - SCOPUS:85102369106
T3 - International Conference for High Performance Computing, Networking, Storage and Analysis, SC
BT - Proceedings of SC 2020
PB - IEEE Computer Society
Y2 - 9 November 2020 through 19 November 2020
ER -