Introduction to the Special Issue on Distributed Computing with Applications in Bioengineering


Karolj Skala
Roman Trobec
Enis Afgan

Abstract

Biomedical Engineering, often known as Bioengineering, is among the fastest developing and most important interdisciplinary fields today. It connects the natural and technical sciences, in which biological and medical phenomena, computation, and data management all play important roles in science and industry. Distributed computing and parallel algorithms have proved effective in solving problems of high computational complexity across a wide range of domains, including computational bioengineering. This special issue collects high-quality research papers on the application of distributed computing systems in bioengineering, reflecting recent developments in modern computing and biomedical technology. Most bioengineering and bioinformatics methods need to access and analyze large amounts of data, which calls for effective ways of turning Big Data into knowledge and useful applications. The nature of such scientific data, information, and knowledge demands powerful computing and intelligent database management systems. The papers in this special issue cover advanced topics in these areas of current and future research.

The paper by Juhasz et al. presents a novel, GPU-based streaming architecture that has the potential to drastically reduce execution times while providing simultaneous 2D and 3D visualization facilities. The system uses a highly optimized, re-configurable pipeline of CPU and GPU cores that exploits the available computing power wherever possible. It can process live data arriving from an EEG device as well as data stored in EEG files. The computer drives a large display wall of four 46" monitors, providing a 4K-resolution drawing surface for visualizing raw EEG data, potential maps, and various 3D views of the patient's head.
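To illustrate the staged-pipeline idea, the following is a minimal sketch using Python threads and queues; the stage names, channel counts, and the smoothing filter are illustrative assumptions, and NumPy stands in for the GPU kernels of the actual system.

```python
# Minimal sketch of a staged streaming pipeline for EEG blocks.
# NumPy stands in for the GPU kernels of the real system; stage
# names, shapes, and the filter are illustrative assumptions.
import queue
import threading
import numpy as np

N_CHANNELS = 64          # assumed EEG channel count
CHUNK = 256              # samples per block

def acquire(out_q, n_blocks=10):
    """Producer: emits raw EEG blocks (live device or file in the real system)."""
    for _ in range(n_blocks):
        out_q.put(np.random.randn(N_CHANNELS, CHUNK))
    out_q.put(None)  # end-of-stream marker

def bandpass(in_q, out_q):
    """Filter stage: placeholder smoothing where a GPU filter would run."""
    kernel = np.ones(5) / 5.0
    while (block := in_q.get()) is not None:
        filtered = np.apply_along_axis(
            lambda ch: np.convolve(ch, kernel, mode="same"), 1, block)
        out_q.put(filtered)
    out_q.put(None)

def render(in_q):
    """Sink stage: the real system draws raw traces, potential maps, 3D views."""
    while (block := in_q.get()) is not None:
        print("block ready for display, mean amplitude %.3f" % block.mean())

q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
threads = [threading.Thread(target=acquire, args=(q1,)),
           threading.Thread(target=bandpass, args=(q1, q2)),
           threading.Thread(target=render, args=(q2,))]
for t in threads: t.start()
for t in threads: t.join()
```

Bounded queues between the stages provide back-pressure, so a slow visualization stage cannot be flooded by the acquisition stage.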

Authors from the group of M. Gusev present a scalable load balancer for Cloud computing, extended to the Dew computing level. They describe a successful implementation of a scalable low-level load balancer operating on the network layer. Its scalability is demonstrated with a series of experiments, which show that the balancer adds a small latency of several milliseconds and thus slightly reduces performance when the distributed system is underutilized.
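For flavor, here is a minimal round-robin TCP forwarder in Python; the backend addresses and port are placeholders, and the paper's balancer operates at a lower network layer than this socket-level sketch.

```python
# Minimal round-robin TCP load balancer sketch. Backend addresses are
# placeholders; the paper's balancer works at a lower network layer.
import itertools
import socket
import threading

BACKENDS = [("10.0.0.1", 8080), ("10.0.0.2", 8080)]  # assumed workers
backend_cycle = itertools.cycle(BACKENDS)

def pipe(src, dst):
    """Copy bytes one way until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    finally:
        src.close()
        dst.close()

def handle(client):
    """Pick the next backend and splice the two connections together."""
    backend = socket.create_connection(next(backend_cycle))
    threading.Thread(target=pipe, args=(client, backend), daemon=True).start()
    threading.Thread(target=pipe, args=(backend, client), daemon=True).start()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(("0.0.0.0", 9000))
listener.listen()
while True:
    conn, _ = listener.accept()
    handle(conn)
```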

Rybicki et al. demonstrate how to realize a Software-as-a-Service solution for a variety of scientific software using container technologies. The presented solution utilizes the DARIAH-DE research infrastructure, based on OpenStack and the UNICORE grid, to deliver an extensible solution for the digital humanities domain.
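The pattern of wrapping an existing command-line tool in a container and invoking it per request can be sketched as follows; the image name, tool arguments, and paths are hypothetical, and plain Docker stands in for the OpenStack/UNICORE deployment described in the paper.

```python
# Sketch of exposing a containerized command-line tool as a service call.
# Image, arguments, and paths are hypothetical; the paper's deployment
# runs on DARIAH-DE infrastructure (OpenStack + UNICORE), not plain Docker.
import subprocess

def run_tool(image, args, input_dir, output_dir):
    """Run one containerized analysis, mounting input/output directories."""
    cmd = ["docker", "run", "--rm",
           "-v", f"{input_dir}:/data/in:ro",
           "-v", f"{output_dir}:/data/out",
           image] + args
    return subprocess.run(cmd, capture_output=True, text=True, check=True)

# Hypothetical usage: a text-analysis tool from the digital humanities domain.
result = run_tool("example/dh-tool:latest",
                  ["--input", "/data/in/corpus.txt", "--output", "/data/out"],
                  "/srv/jobs/42/in", "/srv/jobs/42/out")
print(result.stdout)
```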

The paper by Forer et al. describes the extension of Cloudflow to support Apache Spark without any adaptations to already implemented pipelines. The reported performance evaluation demonstrates that Spark can bring an additional boost to analyzing next-generation sequencing (NGS) data in the field of genetics. The Cloudflow framework is open source and freely available at https://github.com/genepi/cloudflow.
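As an illustration of the kind of NGS computation that benefits from Spark, here is a minimal PySpark sketch computing mean GC content over a FASTQ file; the input path is a placeholder, and this is not Cloudflow's API, just the style of map/reduce job such frameworks generate.

```python
# Minimal PySpark sketch of an NGS-style computation: mean GC content
# over the reads of a FASTQ file. The input path is a placeholder and
# this is not Cloudflow's API.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("gc-content").getOrCreate()
lines = spark.sparkContext.textFile("hdfs:///data/sample.fastq")

# In FASTQ, every record spans 4 lines; the sequence is line index 1 mod 4.
seqs = (lines.zipWithIndex()
             .filter(lambda li: li[1] % 4 == 1)
             .map(lambda li: li[0]))

def gc_fraction(seq):
    """Fraction of G and C bases in one read."""
    return (seq.count("G") + seq.count("C")) / max(len(seq), 1)

mean_gc = seqs.map(gc_fraction).mean()
print(f"reads: {seqs.count()}, mean GC content: {mean_gc:.3f}")
spark.stop()
```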

The paper by Memon et al. describes an approach for executing computational jobs across HPC and HTC resources while operating on geographically dispersed data via a global federated file system. The solution is realized as a new framework that integrates UNICORE and GFFS to provide a standards-based environment supporting the large-scale, data-intensive computations frequent in today's biomedical analyses.
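Schematically, the workflow looks like the sketch below: data is staged once into a globally mounted federated namespace, then a job referring to those shared paths is submitted. The mount point, job description, and the invocation of UNICORE's command-line client (ucc) are assumptions for illustration, not the paper's actual framework.

```python
# Schematic sketch: stage data via a globally mounted federated file
# system, then submit a job with UNICORE's command-line client (ucc).
# Mount point, job file contents, and directory layout are assumptions.
import json
import shutil
import subprocess
from pathlib import Path

GFFS_MOUNT = Path("/mnt/gffs/project42")   # hypothetical global namespace
local_input = Path("reads.fastq")

# 1. Stage input once into the federated namespace (assumed to exist);
#    any participating HPC or HTC site can then read it.
shutil.copy(local_input, GFFS_MOUNT / "in" / local_input.name)

# 2. Describe the job; all paths refer to the shared namespace.
job = {
    "Executable": "/usr/bin/analyse",
    "Arguments": [str(GFFS_MOUNT / "in" / "reads.fastq"),
                  str(GFFS_MOUNT / "out")],
}
Path("job.u").write_text(json.dumps(job, indent=2))

# 3. Submit via the UNICORE command-line client (assumed configured).
subprocess.run(["ucc", "run", "job.u"], check=True)
```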

The paper "A Parallel algorithm for the state space exploration", authored by L. Allal et al., proposes a new automatic verification technique based on model checking that determines whether a given system satisfies its specification. Such a technique suffers from the state explosion problem when traversing all possible states of systems. A new synchronized parallel algorithm (SPA) of exploration is proposed based on a fixed number of threads. Exhaustive comparative studies between the standard parallel exploration algorithm in SPIN and the new SPA show that the SPA performs slightly better regarding the execution time and memory complexity.

Smart systems in telemedicine frequently use intelligent sensor devices at large scale, letting practitioners monitor the vital parameters of hundreds of patients continuously and in real time. The most important pillars of remote patient-monitoring services are communication and data processing, and large-scale data processing is done mainly using workflows. In their paper, Eszter et al. give a brief overview of different checkpointing techniques and propose two new provenance-based checkpointing algorithms that use the information stored in the workflow structure to dynamically change the checkpointing frequency; these can be efficiently used in dynamic health-care smart systems.
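The adaptive idea can be illustrated with a toy heuristic: checkpoint more often when recent task runtimes vary a lot (more work would be lost on failure) and return toward the base interval when execution is stable. The heuristic and the runtime trace below are invented for illustration and are not the paper's provenance-based algorithms.

```python
# Toy sketch of adaptive checkpointing: the interval shrinks relative to
# a base value when recent task runtimes are variable and returns toward
# the base when execution is stable. The heuristic and trace are invented
# for illustration; this is not the paper's provenance algorithm.
import statistics

def next_interval(base, recent_runtimes, lo=0.25):
    """Scale the base checkpoint interval by observed runtime variability."""
    if len(recent_runtimes) < 2:
        return base
    spread = statistics.stdev(recent_runtimes) / statistics.mean(recent_runtimes)
    scale = max(lo, 1.0 / (1.0 + spread))   # high spread -> shorter interval
    return base * scale

runtimes = []            # provenance: runtimes of completed workflow tasks
for t in [10, 11, 10, 55, 9, 12]:           # invented runtime trace (seconds)
    runtimes.append(t)
    interval = next_interval(60.0, runtimes[-5:])
    print(f"after task of {t}s -> checkpoint every {interval:.1f}s")
```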
