TY - JOUR
T1 - Programming Irregular Applications
T2 - Runtime Support, Compilation and Tools
AU - Saltz, Joel
AU - Chang, Chialin
AU - Edjlali, Guy
AU - Hwang, Yuan Shin
AU - Moon, Bongki
AU - Ponnusamy, Ravi
AU - Sharma, Shamik
AU - Sussman, Alan
AU - Uysal, Mustafa
AU - Agrawal, Gagan
AU - Das, Raja
AU - Havlak, Paul
PY - 1997
Y1 - 1997
N2 - In this chapter, we present a summary of the runtime support, compiler, and tools development efforts of the CHAOS group at the University of Maryland. The principal focus of the CHAOS group's research has been to develop tools, compiler runtime support, and compilation techniques to help scientists and engineers develop high-speed parallel implementations of codes for irregular scientific problems (i.e., problems that are unstructured, sparse, adaptive, or block structured). We have developed a series of runtime support libraries (CHAOS, CHAOS++) that carry out the preprocessing and data movement needed to efficiently implement irregular and block structured scientific algorithms on distributed memory machines and networks of workstations. Our compilation research has played a major role in demonstrating that it is possible to develop data parallel compilers able to make effective use of a wide variety of runtime optimizations. We have also been exploring ways to support interoperability between sequential and parallel programs written using different languages and programming paradigms.
AB - In this chapter, we present a summary of the runtime support, compiler, and tools development efforts of the CHAOS group at the University of Maryland. The principal focus of the CHAOS group's research has been to develop tools, compiler runtime support, and compilation techniques to help scientists and engineers develop high-speed parallel implementations of codes for irregular scientific problems (i.e., problems that are unstructured, sparse, adaptive, or block structured). We have developed a series of runtime support libraries (CHAOS, CHAOS++) that carry out the preprocessing and data movement needed to efficiently implement irregular and block structured scientific algorithms on distributed memory machines and networks of workstations. Our compilation research has played a major role in demonstrating that it is possible to develop data parallel compilers able to make effective use of a wide variety of runtime optimizations. We have also been exploring ways to support interoperability between sequential and parallel programs written using different languages and programming paradigms.
UR - http://www.scopus.com/inward/record.url?scp=7244235085&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=7244235085&partnerID=8YFLogxK
U2 - 10.1016/S0065-2458(08)60707-X
DO - 10.1016/S0065-2458(08)60707-X
M3 - Article
AN - SCOPUS:7244235085
SN - 0065-2458
VL - 45
SP - 105
EP - 153
JO - Advances in Computers
JF - Advances in Computers
IS - C
ER -