Scalable Parallel Computing: Technology, Architecture, Programming
Kai Hwang and Zhiwei Xu
McGraw-Hill, Boston, 1998, 802 pp.
ISBN 0-07-031798-4, $97.30
This text is an in-depth introduction to the concepts of parallel computing. Designed for use in university-level computer science courses, it covers the scalable architecture and parallel programming of symmetric multiprocessors, clusters of workstations, massively parallel processors, and Internet-based metacomputing platforms. Hwang and Xu give an excellent overview of these topics while keeping the text easily comprehensible.
The text is organized into four parts. Part I covers scalability and clustering. Part II deals with the technology used to construct a parallel system. Part III pertains to the architecture of scalable systems. Finally, Part IV presents methods of parallel programming across various platforms and languages.
The first chapter presents different models of scalability, divided among resources, applications, and technology. It defines three abstract models (PRAM, BSP, and the phase parallel model) and five physical models (PVP, SMP, MPP, DSM, and COW systems). Chapter 2 introduces the ideas behind parallel programming, including processes, tasks, threads, and environments. Chapter 3 introduces performance issues and metrics.
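As one illustration of the kind of metric Chapter 3 covers, the classic fixed-workload speedup bound (Amdahl's law) can be written as follows; the notation here is ours and is offered only as a representative example, not as the book's own presentation:

S(n) = \frac{1}{\alpha + (1 - \alpha)/n}

Here \alpha is the fraction of the workload that must run serially and n is the number of processors. With \alpha = 0.05 and n = 64, for example, S(64) is roughly 15.4, and the speedup can never exceed 1/\alpha = 20 no matter how many processors are added.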
Opening Part II, Chapter 4 traces the history of microprocessor types and their application in the architectures of current systems. Chapter 5 deals with the issues of distributed memory, discussing several models such as UMA, NORMA, CC-NUMA, COMA, and DSM. Chapter 6 presents gigabit networks, switched interconnects, and other high-speed networking architectures used to construct clusters. Chapter 7 discusses the overheads introduced by parallel computing, including thread management, synchronization, and efficient communication between nodes.
In Part III, Chapters 8, 9, and 11 compare various types of scalable systems (SMP, CC-NUMA, clusters, and MPP). The comparisons are based on hardware architecture, system software, and the special features that make each system unique. Chapter 10 compares various research and commercial clusters, with an in-depth study of the Berkeley NOW, IBM SP2, and Digital TruCluster systems.
Chapter 12 opens Part IV with a detailed treatment of parallel programming paradigms. Chapter 13 discusses communication between processors using message-passing programming, with libraries such as MPI and PVM. Chapter 14 studies the data parallel approach, with an emphasis on Fortran 90 and HPF.
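To give a flavor of the message-passing style that Chapter 13 describes, the sketch below uses standard MPI calls to pass a single integer between two processes; it is our own minimal illustration, not an example taken from the book.

/* Minimal MPI message-passing sketch (illustrative only, not from the text):
   rank 0 sends an integer to rank 1. Compile with an MPI C compiler, e.g. mpicc. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);                 /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (size >= 2) {
        if (rank == 0) {
            value = 42;
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);           /* send to rank 1 */
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);  /* receive from rank 0 */
            printf("rank 1 received %d from rank 0\n", value);
        }
    }

    MPI_Finalize();                         /* shut down the MPI runtime */
    return 0;
}

Every process runs the same program, and the rank returned by MPI_Comm_rank determines whether it sends or receives, a common SPMD pattern in MPI programs.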
Through detailed examples, Hwang and Xu have created a well-written introduction to parallel computing. The authors are distinguished for their contributions to this field. The text is grounded in current research and presents techniques that are in use in industry today.
Bin Cong, Shawn Morrison and Michael Yorg,
Department of Computer Science
California Polytechnic State University at San Luis Obispo