Parallel Numerics and Applications


Roman Trobec
Peter Zinterhof
Marián Vajtersic
Andreas Uhl

Abstract

A further increase in computer performance and the pooling of existing computing resources are possible through the distribution and parallel execution of work. Current trends in the development of information technologies lean significantly toward the interconnection of computers in networks and clusters that already incorporate autonomous computing. This special issue of JSCPE, entitled Parallel Numerics and Applications, focuses on several important areas where a significant amount of computing power is needed. Theoretical results have been tested through various applications of parallel or distributed algorithms in computationally demanding computer simulations, e-business, medicine and multimedia.

Soon after the joint research activities programme PACT (Programming Environments, Algorithms, Applications, Compilers and Tools for Parallel Computation) was initiated in Vienna in 1995, bringing together researchers from the Central European region, Parallel Numerics began holding annual workshops to present new ideas and application results. On average, thirty contributions are presented annually and published in the Workshop Proceedings. The diversity of authors, ranging from university professors and professionals in various fields of industry to postgraduate students from Italy, Austria, Slovenia, the Czech Republic, Poland, Slovakia and Hungary, ensures wide coverage of relevant theoretical problems and practical applications. Invited speakers of world renown contribute their experience to broaden the scope of Parallel Numerics.

The 5th Workshop was organized in Slovenia in October 2002. The time was ripe to present the accumulated results of these workshops to a wider audience in the form of a special issue. We selected the ten best contributions and invited the authors to update their work with their latest results and to further improve the technical content of their contributions. The special issue is divided into four chapters that cover specialized areas of Scientific Computing: Numerical Integration, Matrix Algebra, Computer Simulations, and Image and Video Coding.

Numerical Integration arises in various fields of numerical and applied mathematics. With the advent of parallel computers, the numerical computation of integrals can be carried out in higher dimensions and for large sets of lattice points in reasonable time. The collection of papers in this area ranges from theoretical norm estimation for parallelizable integration over the multidimensional unit cube, through integration routines based on adaptive subdivision strategies, to the practical level, where some software aspects of constructing parallel integration routines are presented.

Zinterhof introduces in his paper entitled Numerical Integration and Hilbert-Schmidt-Norms the approximation of inner products by finite sums in the setting of Hilbert spaces with reproducing kernel. The Hilbert-Schmidt norms of certain operators lead to new error estimates for the numerical process. It is shown that classical cubature rules are special cases of this approach.
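
As a hedged illustration of the setting (standard reproducing-kernel theory, not a result reproduced from the paper): in a Hilbert space H with reproducing kernel K, point evaluation is an inner product, f(x) = <f, K(., x)>_H, so every cubature rule is itself an inner product with a fixed element of H, and its worst-case error over the unit ball of H is the norm of the difference of two representers; in LaTeX notation:

    Q_N(f) = \sum_{n=1}^{N} w_n f(x_n)
           = \Big\langle f,\; \sum_{n=1}^{N} w_n K(\cdot, x_n) \Big\rangle_H,

    \sup_{\|f\|_H \le 1} \Big| \int_{[0,1]^s} f(x)\,dx - Q_N(f) \Big|
      = \Big\| \int_{[0,1]^s} K(\cdot, x)\,dx - \sum_{n=1}^{N} w_n K(\cdot, x_n) \Big\|_H .

Norm estimates of this kind are the general mechanism by which errors of such numerical integration processes are bounded; the Hilbert-Schmidt norms of the paper provide new estimates in this spirit.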

Schurer proposes in the paper Optimal Communication Interval for Parallel Adaptive Integration new and efficient methods for parallel adaptive integration routines. The key concept is the application of cubature rules to subdomains of the original integration domain. Important findings concerning the crucial communication frequency, together with a series of experimental results, are presented.
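
As a minimal, hedged sketch of the underlying idea (adaptive subdivision driven by local error estimates; the quadrature rule, tolerance and bisection strategy below are illustrative assumptions, not the paper's routine), the following Python fragment repeatedly bisects the subdomain with the largest estimated error; a parallel version would distribute the priority queue of subdomains among workers and exchange global error information only at the chosen communication interval:

    import heapq
    import math

    def adaptive_integrate(f, a, b, tol=1e-8, max_subdivisions=10_000):
        """Globally adaptive 1-D quadrature: repeatedly bisect the subinterval
        with the largest estimated local error (illustrative sketch only)."""

        def simpson(lo, hi):
            mid = 0.5 * (lo + hi)
            return (hi - lo) / 6.0 * (f(lo) + 4.0 * f(mid) + f(hi))

        def make_entry(lo, hi):
            mid = 0.5 * (lo + hi)
            coarse = simpson(lo, hi)
            fine = simpson(lo, mid) + simpson(mid, hi)
            # heapq is a min-heap, so store the negated error to pop the worst panel first
            return (-abs(fine - coarse), lo, hi, fine)

        heap = [make_entry(a, b)]
        for _ in range(max_subdivisions):
            if sum(-entry[0] for entry in heap) < tol:
                break
            _, lo, hi, _ = heapq.heappop(heap)
            mid = 0.5 * (lo + hi)
            heapq.heappush(heap, make_entry(lo, mid))
            heapq.heappush(heap, make_entry(mid, hi))
        return sum(entry[3] for entry in heap)

    # Example: the integral of exp over [0, 2] is exp(2) - 1.
    print(adaptive_integrate(math.exp, 0.0, 2.0))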

D'Ambra, di Serafino, and Lapegna present in their paper Parallel Components for Multidimensional Quadrature: Some Experiences interesting experimental results of an implementation of a parallel multidimensional quadrature routine based on the BLACS message-passing library, and of its integration into the programming environment ASSIST (A Software development System based upon Integrated Skeleton Technology). Their approach allows, for example, a quadrature routine to be reused with all its features preserved, by means of a general methodology for encapsulating existing MPI-based parallel software in other programming environments.

Matrix Algebra is a field well suited to parallel computation. Matrix problems possess a high degree of parallelism, which can be exploited at both fine- and coarse-grained levels. Matrix algebra covers a broad range of application fields; this issue brings selected contributions related to two of them.

The paper Parallel Computation with Structured Matrices in Linear Modeling of Multidimensional Signals by Oksa, Becka, and Vajtersic concerns parallel matrix multiplication. The matrix-matrix product forming the Gramian of special Toeplitz-block matrices is required here for the linear prediction of multidimensional signals. A new parallel algorithm is developed for this problem, in which Fast Fourier Transform computations on large data blocks are replaced by shorter ones. In this way, significant speedup values have been achieved when running it on the IBM Power4 Regatta parallel computer system.
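
As a hedged aside (a textbook technique, not the algorithm of the paper): the reason FFTs enter such structured-matrix computations at all is that a product with a Toeplitz matrix can be computed in O(n log n) time by embedding it in a circulant matrix and applying FFTs, as the following minimal numpy sketch shows:

    import numpy as np

    def toeplitz_matvec(first_col, first_row, x):
        """Multiply a real Toeplitz matrix, given by its first column and first row,
        with a vector via circulant embedding and the FFT (illustrative sketch)."""
        n = len(first_col)
        # First column of a 2n x 2n circulant matrix whose leading n x n block is Toeplitz.
        c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
        y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
        return y[:n].real

    # Quick check against an explicit dense product.
    col, row, x = [4.0, 1.0, 0.5], [4.0, 2.0, 3.0], np.array([1.0, -1.0, 2.0])
    T = np.array([[4.0, 2.0, 3.0],
                  [1.0, 4.0, 2.0],
                  [0.5, 1.0, 4.0]])
    print(toeplitz_matvec(col, row, x))   # should match T @ x
    print(T @ x)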

The solution of linear algebraic systems is addressed in the paper Parallel Birge and Qi Factorization of Three-Stage Stochastic Linear Programs by Halada, Benkner and Lucka. A special linear system resulting from the formulation of two- and three-level stochastic programs must be solved repeatedly. The algorithm for its solution is based on the Birge and Qi factorization method. This method is parallelized efficiently, and the experiments show that the scalability of the solution can be preserved.
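
For orientation (a standard formulation, stated here only as background and not taken from the paper): the extensive form of a two-stage stochastic LP with N scenarios has a dual block-angular constraint matrix, and structures of this kind give rise to the special linear systems that must be solved repeatedly; the three-stage case nests the pattern once more. In LaTeX notation:

    \min\ c^{T}x + \sum_{i=1}^{N} q_i^{T} y_i
    \quad\text{subject to}\quad
    \begin{pmatrix}
      A      &     &        &     \\
      T_1    & W_1 &        &     \\
      \vdots &     & \ddots &     \\
      T_N    &     &        & W_N
    \end{pmatrix}
    \begin{pmatrix} x \\ y_1 \\ \vdots \\ y_N \end{pmatrix}
    =
    \begin{pmatrix} b \\ h_1 \\ \vdots \\ h_N \end{pmatrix},
    \qquad x \ge 0,\ y_i \ge 0 .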

Computer Simulations are applied to allow the analysis of natural phenomena whose study would be too expensive or dangerous to perform and measure in reality. Three papers from this area show the variety of possibilities for computer experiments.

Korosec, Silc, and Robic describe in their paper An Experimental Study of an Ant-colony Algorithm for the Mesh-partitioning Problem an interesting application of a metaheuristic search technique for solving general optimization problems, which can be applied in different areas of science, for example for even load partitioning in finite element applications. The results show that the proposed method has the potential to be used effectively for mesh-partitioning problems.

Sterk, Trobec, and Praprotnik, in their paper Numerical Schemes for Fluid Flow and Heat Transfer in Medical Simulations, give a detailed implementation of a numerical algorithm for the calculation of fluid flow, based on finite differences and applicable, among other areas, in medicine and heart surgery.
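
To make the finite-difference ingredient concrete (a minimal textbook sketch with assumed parameters, not the authors' scheme for fluid flow), an explicit update for 2-D heat diffusion replaces the Laplacian by the usual five-point stencil; in a parallel implementation the grid is decomposed into subdomains that exchange their boundary rows at every time step:

    import numpy as np

    def heat_step(T, alpha, dx, dt):
        """One explicit Euler step of dT/dt = alpha * Laplacian(T) on a uniform grid,
        using the five-point stencil; boundary values are held fixed (Dirichlet)."""
        lap = (T[:-2, 1:-1] + T[2:, 1:-1] + T[1:-1, :-2] + T[1:-1, 2:]
               - 4.0 * T[1:-1, 1:-1]) / dx**2
        T_new = T.copy()
        T_new[1:-1, 1:-1] += dt * alpha * lap
        return T_new

    # Toy run: a warm square diffusing on a 64 x 64 grid (hypothetical parameters).
    n, alpha, dx = 64, 1.0e-4, 1.0e-2
    dt = 0.2 * dx**2 / alpha          # below the explicit stability limit dx**2 / (4 * alpha)
    T = np.zeros((n, n))
    T[24:40, 24:40] = 37.0
    for _ in range(500):
        T = heat_step(T, alpha, dx, dt)
    print(T.max(), T.mean())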

Avbelj, Sterk, and Trobec, in the paper A Grid Computing Application: Optimal Electrode Positioning in Body Surface Electrocardiography, describe a method for the optimal selection of electrode positions on a simple 2-D torso model using computer experiments and an exhaustive search. More realistic models will require much more computation, while communication in a parallel implementation will remain trivial.

Image and Video Coding is a comparatively new application of parallel computing. Numerical methods have to be used where advanced techniques are needed, e.g. compression based on motion compensation. Most video compression algorithms can be viewed as an abstract procedure that processes a set of primitive tasks in a certain order. Hence, there exists a natural approach to data-driven parallelization that distributes the source data, together with the primitive tasks corresponding to the data distribution, among multiple processors.
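
A hedged sketch of this data-driven pattern (the block size and the per-block operation below are placeholders, not any of the papers' actual kernels): the frame is cut into blocks, each block becomes a primitive task, and the tasks are mapped onto a pool of worker processes so that the work follows the data distribution:

    import numpy as np
    from multiprocessing import Pool

    BLOCK = 16  # assumed block size

    def split_into_blocks(frame):
        """Cut a frame into BLOCK x BLOCK tiles, keeping each tile's grid position."""
        h, w = frame.shape
        return [((by, bx), frame[by:by + BLOCK, bx:bx + BLOCK])
                for by in range(0, h, BLOCK)
                for bx in range(0, w, BLOCK)]

    def process_block(task):
        """Primitive task acting on a single block (placeholder for motion search etc.)."""
        pos, block = task
        return pos, float(block.mean())

    if __name__ == "__main__":
        frame = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
        tasks = split_into_blocks(frame)
        with Pool(processes=4) as pool:       # data and tasks are distributed together
            results = pool.map(process_block, tasks)
        print(len(results), results[0])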

Pschernig and Uhl describe in their paper Parallel Algorithms for Overlapped Block-Matching Motion Compensation (OBMC) a method that enhances the prediction results and improves video compression significantly, however at the cost of considerable additional computation. The authors investigate different possibilities for parallelizing several variants of OBMC algorithms. The results show that the best method is based on the distribution of complete video-frame blocks, resulting in high scalability when the number of blocks is of the same order as the number of processors.

Multimedia compression algorithms usually take discrete cosine or wavelet transformed input data and produce a stream of bits as output. Kutil investigates in his paper Optimization of Bitstream Assembly in Parallel Multimedia Compression how to optimally collect and assemble the sequences of bitstream parts that correspond to the partitioned and distributed input video data. Sequential optimizations and parallelization of the assembly process are used to solve this problem. Four different methods based on the well-known SPIHT algorithm were tested and compared with respect to communication overhead and computational complexity.
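
As a minimal illustration of the assembly problem itself (a generic sketch, not Kutil's optimized scheme): each worker emits the bitstream fragment for its own partition of the data together with the partition's index, and the assembler concatenates the fragments in the order the decoder expects, here prefixing each with a length field so that the boundaries remain recoverable:

    import struct

    def assemble_bitstream(fragments):
        """Concatenate (index, bytes) fragments in decoding order, writing a 4-byte
        big-endian length header before each fragment (illustrative sketch)."""
        out = bytearray()
        for _, payload in sorted(fragments, key=lambda item: item[0]):
            out += struct.pack(">I", len(payload)) + payload
        return bytes(out)

    # Fragments typically arrive out of order from the workers.
    parts = [(2, b"\x1f\x2e"), (0, b"\xab\xcd\xef"), (1, b"\x00")]
    print(assemble_bitstream(parts).hex())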

We are convinced that the contents of this special issue will be of great interest to scientists working on parallel computing, as well as to doctoral students, teachers, engineers and mathematicians specializing in numerical applications and computer simulations of natural phenomena. We are grateful to the authors of the articles for their valuable contributions, to the Programme Committee members for their timely review of the articles, and to Marjan Sterk for his excellent technical and editorial assistance.

Roman Trobec, Jozef Stefan Institute, Ljubljana
Peter Zinterhof, University of Salzburg
Marian Vajtersic, University of Bratislava
Andreas Uhl, University of Salzburg

