Practical Aspects of High-Level Parallel Programming

Computational science applications are increasingly complex to develop and require ever more computing power. Parallel and grid computing are answers to this growing need. High-level languages offer a high degree of abstraction, which eases the development of complex systems; being based on formal semantics, they even make it possible to certify the correctness of critical parts of an application. Algorithmic skeletons, parallel extensions of functional languages such as Haskell and ML, parallel logic and constraint programming, parallel execution of declarative programs such as SQL queries, and related approaches have produced methods and tools that improve the price/performance ratio of parallel software and broaden the range of target applications. This special issue presents recent work by researchers in these fields. The articles are extended and revised versions of papers presented at the first international workshop on Practical Aspects of High-Level Parallel Programming (PAPP), affiliated with the International Conference on Computational Science (ICCS 2004). The PAPP workshops focus on practical aspects of high-level parallel programming: the design, implementation and optimization of high-level programming languages and tools (performance predictors working on high-level parallel/grid source code, visualisations of abstract behaviour, automatic hotspot detectors, high-level grid resource managers, compilers, automatic generators, etc.), applications in all fields of computational science, and benchmarks and experiments. The PAPP workshops are aimed both at researchers involved in the development of high-level approaches for parallel and grid computing and at computational science researchers who are potential users of these languages and tools.
One concern in the development of parallel programs is predicting the performance of a program from its source code, in order to optimize it or to match the resources it needs to those offered by the architecture. In their paper, Evaluating the performance of pipeline-structured parallel programs with skeletons and process algebra, Anne Benoît et al. propose a framework for evaluating the performance of structured parallel programs using skeletons and process algebra. Frédéric Gava, in External Memory in Bulk-Synchronous Parallel ML, extends the Bulk Synchronous Parallel ML library with input/output operations on disks, together with a corresponding extension of the Bulk Synchronous Parallel model. Another direction of research is to set constraints on the resources used by programs. Stephen Gilmore et al. designed and developed Camelot, a resource-bounded functional programming language that compiles to Java byte code to run on the Java Virtual Machine. Their paper, Extending resource-bounded functional programming languages with mutable state and concurrency, extends Camelot with language support for Camelot-level threads and extends the existing resource-bounded type system to provide safety guarantees about the heap usage of Camelot threads. Franck Pommereau's previous work concerns high-level Petri nets with a notion of time, called causal time, used for the specification and verification of systems with time constraints. In his paper, Petri nets as Executable Specifications of High-Level Timed Parallel Systems, he takes a step toward using this formalism for execution purposes: an algorithm for executing a restricted class of high-level Petri nets with causal time. High-level programming languages aim at easing the programming of systems, but this should not hinder the predictability and efficiency of programs.
Joël Falcou and Jocelyn Sérot designed a high-level C++ library for programming the SIMD unit of PowerPC processors, which is much simpler to use than lower-level, hardware-specific libraries while achieving very good efficiency. Their EVE library is thus a very good practical choice for programming such hardware.

I would like to thank all the people who made the PAPP workshop possible: the organizers of the ICCS conference and the other members of the programme committee: Rob Bisseling (Univ. of Utrecht, The Netherlands), Matthieu Exbrayat (Univ. of Orléans, France), Sergei Gorlatch (Univ. of Muenster, Germany), Clemens Grelck (Univ. of Luebeck, Germany), Kevin Hammond (Univ. of St. Andrews, UK), Zhenjiang Hu (Univ. of Tokyo, Japan), Quentin Miller (Miller Research Ltd., UK), Susanna Pelagatti (Univ. of Pisa, Italy), and Alexander Tiskin (Univ. of Warwick, UK). I also thank the other referees for their efficient help: Martin Alt, Frédéric Gava and Sven-Bodo Scholz. Finally, I thank all the authors who submitted papers for their interest in the workshop and for the quality and variety of the research topics they proposed.
Laboratoire d'Informatique Fondamentale d'Orléans,
University of Orléans, rue Léonard de Vinci, B.P. 6759
F-45067 ORLEANS Cedex 2, France