Efficient parallel algorithms can be made robust

Paris C. Kanellakis, Alex A. Shvartsman

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

25 Scopus citations


The efficient parallel algorithms proposed for many fundamental problems, such as list ranking, computing preorder numberings and other functions on trees, or integer sorting, are very sensitive to processor failures. The requirement of efficiency (commonly formalized using Parallel-time × Processors as a cost measure) has led to the design of highly tuned PRAM algorithms which, given the additional constraint of simple processor failures, unfortunately become inefficient or even incorrect. We propose a new notion of robustness that combines efficiency with fault tolerance. For the common case of fail-stop errors, we develop a general (and easy-to-implement) technique to make robust many efficient parallel algorithms, e.g., algorithms for all the problems listed above. More specifically, for any dynamic pattern of fail-stop errors with at least one surviving processor, our method increases the original algorithm cost by at most a multiplicative factor polylogarithmic in the input size.
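The core fault-tolerance idea in the abstract, that surviving processors absorb the work of crashed ones so the computation completes whenever at least one processor survives, can be illustrated with a small simulation. This is a minimal sketch of a Write-All style task under fail-stop failures; the function name, the round structure, and the naive linear rescan for unfinished work are illustrative assumptions, not the authors' construction (which keeps the overhead polylogarithmic rather than linear):

```python
def robust_write_all(n, p, fail_round):
    """Simulate the Write-All task (set all n cells to 1) on p fail-stop
    processors. fail_round maps a processor id to the round at which it
    crashes permanently. Surviving processors repeatedly rescan for
    unfinished cells and claim them, so the task completes as long as at
    least one processor survives. Returns (cells, total_work), where
    total_work counts processor-steps spent across all rounds.
    """
    cells = [0] * n
    alive = set(range(p))
    total_work = 0
    rnd = 0
    while not all(cells):
        rnd += 1
        # Fail-stop: crashed processors disappear and never return.
        alive -= {q for q, r in fail_round.items() if r <= rnd}
        if not alive:
            raise RuntimeError("all processors failed; task cannot finish")
        todo = [i for i, c in enumerate(cells) if c == 0]
        # Naive allocation: each survivor claims one unfinished cell per
        # round. (Efficient robust algorithms use a smarter allocation
        # scheme to keep the total overhead polylogarithmic in n.)
        for k, q in enumerate(sorted(alive)):
            if k < len(todo):
                cells[todo[k]] = 1
            total_work += 1  # every live processor spends a step
    return cells, total_work
```

For example, with 8 cells and 4 processors where processors 1, 2, and 3 crash in rounds 1, 2, and 3, the lone survivor still finishes the task, at the cost of extra rounds of rescanning.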

Original language: English (US)
Title of host publication: Proc Eighth ACM Symp Princ Distrib Comput
Publisher: ACM
Number of pages: 11
ISBN (Print): 0897913264
State: Published - 1989
Externally published: Yes
Event: Eighth Annual ACM Symposium on Principles of Distributed Computing - Edmonton, Alberta, Canada
Duration: Aug 14, 1989 - Aug 16, 1989

Publication series

Name: Proceedings of the Annual ACM Symposium on Principles of Distributed Computing



ASJC Scopus subject areas

  • Software
  • Hardware and Architecture
  • Computer Networks and Communications

