Asynchrony in Parallel Computing: From Dataflow to Multithreading
Abstract
The paper presents an overview of parallel computing models, architectures, and research projects that are based on asynchronous instruction scheduling. It starts with pure dataflow computing models and traces the historical development of several ideas (i.e., single-token-per-arc dataflow, tagged-token dataflow, explicit token store, threaded dataflow, large-grain dataflow, RISC dataflow, cycle-by-cycle interleaved multithreading, block-interleaved multithreading, and simultaneous multithreading) that resulted in modern multithreaded superscalar processors. The paper shows that a unification of the von Neumann and dataflow models is possible and preferable to treating them as two unrelated, orthogonal computing paradigms. Today's dataflow research incorporates more explicit notions of state into the architecture, while von Neumann models use many dataflow techniques to improve the latency-hiding capabilities of modern multithreaded systems.
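To make the central idea of asynchronous instruction scheduling concrete, the following is a minimal sketch (not taken from the paper; all names and the graph are illustrative) of the firing rule in a pure single-token-per-arc dataflow model: an instruction executes as soon as all of its operand tokens have arrived, with no program counter imposing an order.

```python
# Illustrative toy dataflow interpreter: nodes fire when all input tokens are
# present (data-driven scheduling), not when control flow reaches them.
from collections import namedtuple

Node = namedtuple("Node", ["op", "inputs", "outputs"])  # inputs/outputs are arc names

def run_dataflow(nodes, initial_tokens):
    tokens = dict(initial_tokens)   # arc name -> value (single token per arc)
    fired = set()
    progress = True
    while progress:
        progress = False
        for name, node in nodes.items():
            # Firing rule: every input arc carries a token and the node has not fired yet.
            if name not in fired and all(arc in tokens for arc in node.inputs):
                args = [tokens.pop(arc) for arc in node.inputs]  # consume input tokens
                result = node.op(*args)
                for arc in node.outputs:                         # emit result token(s)
                    tokens[arc] = result
                fired.add(name)
                progress = True
    return tokens

# Example: compute (a + b) * (a - b); "add" and "sub" have no mutual ordering
# constraint and could fire in either order (or in parallel on real hardware).
graph = {
    "add": Node(lambda x, y: x + y, ["a1", "b1"], ["s"]),
    "sub": Node(lambda x, y: x - y, ["a2", "b2"], ["d"]),
    "mul": Node(lambda x, y: x * y, ["s", "d"], ["out"]),
}
print(run_dataflow(graph, {"a1": 5, "b1": 3, "a2": 5, "b2": 3}))  # -> {'out': 16}
```

The sketch captures only the data-driven firing discipline; tagged-token and explicit-token-store machines refine how operand tokens are matched, and the multithreaded models discussed in the paper apply the same latency-hiding idea at thread rather than instruction granularity.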
Article Details
Section: Overview Papers