
2006 | Book

Computational Science – ICCS 2006

6th International Conference, Reading, UK, May 28-31, 2006. Proceedings, Part II

Edited by: Vassil N. Alexandrov, Geert Dick van Albada, Peter M. A. Sloot, Jack Dongarra

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Third International Workshop on Simulation of Multiphysics Multiscale Systems

Numerical Modeling of Plasma – Flow Interaction

Within the framework of the internal project PUMA (Plasma Used to Master Aerodynamics), ONERA is conducting fundamental studies of plasma-flow interactions. In this paper, the ionic wind created by corona discharges is studied in the case of a subsonic flow over a flat plate. The proposed mechanism of the ionic wind is the addition of momentum through collisions between charged and neutral particles. In order to evaluate the effect of plasma on aerodynamics, a kinetic modeling of the discharge is coupled with a Fluid Dynamics code.

Jean-Charles Matéo-Vélez, Francois Rogier, Frédéric Thivet, Pierre Degond
Numerical Methods for Reacting Gas Flow Simulations

In this study, various numerical schemes for simulating 2D laminar reacting gas flows, as typically found in Chemical Vapor Deposition (CVD) reactors, are proposed and compared. These systems are generally modeled by means of many stiffly coupled elementary gas phase reactions between a large number of reactants and intermediate species. The purpose of this study is to develop robust and efficient solvers for the stiff heat-reaction system. The velocities are assumed to be given. For non-stationary CVD simulation, an optimal combination of time integration, nonlinear solvers and linear solvers, in terms of efficiency and robustness, has to be found. Besides stability, which is important due to the stiffness of the problem, preserving the non-negativity of the species is crucial. It appears that this extra condition on time integration methods is much more restrictive on the time step than stability.

S. van Veldhuizen, C. Vuik, C. R. Kleijn
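To illustrate the abstract's closing point that non-negativity can restrict the time step more than stability does, here is a minimal sketch with a hypothetical fast/slow two-species system (made-up rate constants, not the paper's CVD chemistry): explicit Euler on u' = Au is stable for dt < 2/k1 but produces negative species already for dt > 1/k1.

```python
import numpy as np

# Hypothetical stiff two-species exchange (not the paper's CVD chemistry):
# u' = A u with a fast forward rate k1 and a slow backward rate k2.
k1, k2 = 1.0e4, 1.0
A = np.array([[-k1,  k2],
              [ k1, -k2]])

def euler_run(dt, steps=400):
    """Explicit Euler; report whether positivity and stability were lost."""
    u = np.array([1.0, 0.0])
    went_negative = False
    for _ in range(steps):
        u = u + dt * (A @ u)
        went_negative |= (u < 0).any()
        if np.abs(u).max() > 1e6:            # divergence => unstable
            return went_negative, "unstable"
    return went_negative, "stable"

for dt in (0.5 / k1, 1.5 / k1, 2.5 / k1):
    neg, stab = euler_run(dt)
    print(f"dt*k1 = {dt * k1:.1f}: negative species = {neg}, {stab}")
# dt*k1 = 0.5: negative species = False, stable
# dt*k1 = 1.5: negative species = True,  stable   <- positivity lost first
# dt*k1 = 2.5: negative species = True,  unstable
```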
Reduced Flame Kinetics Via Rate-Controlled Constrained Equilibrium

The dynamical evolution of a chemically reacting system is governed by the equations of chemical kinetics, which exhibit a wide range of time scales, thus giving rise to stiff equations. In the rate-controlled constrained equilibrium method (RCCE), the dynamical evolution of the system is governed by the kinetics of the species associated with the slower timescales (kinetically controlled), while the remaining species are calculated via a constrained minimisation of the Gibbs free energy of the mixture. This permits the derivation of a general set of differential-algebraic equations, which apply to any reduced system given a particular selection of kinetically controlled species. In this paper, it is shown how the differential-algebraic formulation of RCCE can be derived from first principles, in the form of an extension of the computation of chemical equilibrium via minimisation of the free energy. Subsequently, RCCE is employed to reduce a comprehensive combustion mechanism and to calculate the burning velocity of premixed H2-O2 and CH4-air flames under a range of pressures and equivalence ratios.

Stelios Rigopoulos
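The constrained-equilibrium step that RCCE builds on can be stated compactly. The following is a generic formulation in standard notation (not the paper's own): the mixture Gibbs energy is minimised subject to element conservation and extra kinetically controlled constraints, whose values evolve by chemical kinetics, which is what yields the differential-algebraic structure mentioned in the abstract.

```latex
% Generic RCCE-style constrained equilibrium (standard notation, not the
% paper's): minimise the Gibbs energy over species mole numbers n_j,
% subject to element conservation and kinetically controlled constraints;
% the constraint values C_k evolve by kinetics, closing the DAE system.
\begin{aligned}
\min_{n_1,\dots,n_{N_s}} \quad & G(T,p,n) = \sum_{j=1}^{N_s} n_j\,\mu_j(T,p,n) \\
\text{subject to} \quad & \sum_{j=1}^{N_s} a_{ij}\, n_j = b_i, \qquad i=1,\dots,N_e \quad \text{(element conservation)} \\
& \sum_{j=1}^{N_s} c_{kj}\, n_j = C_k, \qquad k=1,\dots,N_c \quad \text{(kinetic constraints)} \\
& \frac{dC_k}{dt} = \sum_{j=1}^{N_s} c_{kj}\, \dot\omega_j \qquad \text{(kinetics closing the DAE system)}
\end{aligned}
```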
Flow Patterns in the Vicinity of Triple Line Dynamics Arising from a Local Surface Tension Model

We model and numerically simulate a droplet impact onto a solid substrate. The triple line dynamics modelling is implicit (as opposed to classical explicit mobility relations); it is based on the Shikhmurzaev equations. These equations include generalized Navier slip type boundary conditions with extra local surface tension gradient terms. Numerical results for spreading and recoiling are presented. Particular attention is paid to flow patterns near the contact line.

J. Monnier, I. Cotoi
A Multilevel-Multigrid Approach to Multiscale Electromagnetic Simulation

The time-dependent Maxwell’s equations are solved for mobile device applications using a multilevel-multigrid finite-difference time-domain (FDTD) method. For three-dimensional models that simulate system level details of mobile devices, the smallest features are in the nanometre ($10^{-9}$ m) range, leading to a time-step size in the attosecond ($10^{-18}$ s) range. The feature sizes of mobile devices are in the centimetre ($10^{-2}$ m) range, while for health and safety studies that include human models, features are in the metre range.

Peter Chow, Tetsuyuki Kubota, Takefumi Namiki
Scalable Simulation of Electromagnetic Hybrid Codes

New discrete-event formulations of physics simulation models are emerging that can outperform models based on traditional time-stepped techniques. Detailed simulation of the Earth’s magnetosphere, for example, requires execution of sub-models that operate at widely differing timescales. In contrast to time-stepped simulation, which requires tightly coupled updates to the entire system state at regular time intervals, the new discrete event simulation (DES) approaches evolve the states of sub-models on relatively independent timescales. However, parallel execution of DES-based models raises challenges with respect to their scalability and performance. One of the key challenges is to improve the computation granularity to offset synchronization and communication overheads within and across processors. Our previous work was limited in scalability and runtime performance due to these parallelization challenges. Here we report on optimizations we performed on DES-based plasma simulation models to improve parallel performance. The mapping of model to simulation processes is optimized via aggregation techniques, and the parallel runtime engine is optimized for communication and memory efficiency. The net result is the capability to simulate hybrid particle-in-cell (PIC) models with over 2 billion ion particles using 512 processors on supercomputing platforms.

Kalyan Perumalla, Richard Fujimoto, Homa Karimabadi
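The difference between time-stepped and discrete-event execution that this abstract builds on can be shown with a minimal event kernel (a generic sketch, not the authors' plasma code): each sub-model schedules its own next update, so components with very different timescales advance independently.

```python
import heapq

# Generic discrete-event kernel (not the authors' code): a priority queue
# of (time, sub-model) events replaces the global lock-step time loop.
events = []

def schedule(t, name):
    heapq.heappush(events, (t, name))

# two sub-models with hypothetical, very different natural timescales
dt = {"fast_ions": 1e-3, "slow_field": 1.0}
for name, step in dt.items():
    schedule(step, name)

t_end = 2.0
counts = {name: 0 for name in dt}
while events and events[0][0] <= t_end:
    t, name = heapq.heappop(events)      # earliest pending event first
    counts[name] += 1                    # ... update that sub-model here ...
    schedule(t + dt[name], name)         # reschedule at its own pace

print(counts)   # {'fast_ions': 2000, 'slow_field': 2}
```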
Numerical Modelling of Poroviscoelastic Grounds in the Time Domain Using a Parallel Approach

In this paper, we present a parallelized finite element code developed to study wave propagation phenomena, specifically in porous soil problems, which usually require millions of degrees of freedom. The parallelization technique uses an algebraic grid partitioning managed by a Single Program Multiple Data (SPMD) programming model. The Message Passing Interface (MPI) standard is used to exchange data between processors. The architecture of the code is explained, and numerical results show its performance.

Arnaud Mesgouez, Gaëlle Lefeuve-Mesgouez, André Chambarel, Dominique Fougère
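The SPMD/MPI pattern the abstract refers to boils down to each process owning a sub-grid and exchanging interface values with its neighbours. Below is a generic one-dimensional halo exchange (assuming the mpi4py package; this is not the authors' finite element code):

```python
from mpi4py import MPI
import numpy as np

# Generic SPMD halo exchange (not the authors' code): every rank runs
# this same program on its own slab of the grid, per the SPMD model.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_local = 100
u = np.full(n_local + 2, float(rank))    # local cells plus 2 ghost cells
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# swap boundary cells with both neighbours (Sendrecv avoids deadlock)
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# ghost cells now hold the neighbours' boundary values
print(f"rank {rank}: ghost cells = ({u[0]}, {u[-1]})")
```

Run with, e.g., `mpirun -np 4 python halo.py`.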
Numerical Modeling of Tidal Effects and Hydrodynamics in the Po River Estuary

The present work contributes to the numerical simulation of complex turbulent multiphase fluid flows encountered in estuarine channels. A numerical solution is based on the Reynolds-averaged Navier-Stokes equations, using a mass-preserving model based on the so-called Raviart-Thomas finite element on an unstructured mesh in the horizontal plane. In the vertical, the computational domain is divided into a number of layers at predefined heights, and the method uses a conventional conforming finite element scheme, with the advantage that the variable heights of the lowermost and uppermost layers allow a faithful representation of the time-varying bed and free surface, respectively. A robust up-to-date algorithm is used for computing the eddy viscosity from the efficient k-ε turbulence model for variable density fluid flows. Finally, the capability and the predictive performance of the model are successfully evaluated by applying it to the simulation of the Po River Estuary (PRE) in Italy.

Célestin Leupi, Michel Deville, Mustafa Siddik Altinakar
Adaptive Mesh Refinement and Domain Decomposition: A Framework to Study Multi-physical and Multi-scale Phenomena. First Application to Reacting Gas Flows

The aim of this work is to extend the Adaptive Mesh Refinement (AMR) technique. AMR is an ideal platform for coupling different discretisations and models, as each grid level can use a different solver. Compact conditions have to be defined, and these problems are at the core of Domain Decomposition methods. An application to a planar shock interacting with a circular diffusion H2-air flame is presented.

J. Ryan
Time Splitting and Grid Refinement Methods in the Lattice Boltzmann Framework for Solving a Reaction-Diffusion Process

The paper shows how to combine the lattice Boltzmann method with time splitting and grid refinement techniques in order to solve reaction-diffusion processes that include very fast reaction dynamics, i.e. with time and length scales that vary over a wide range of values. The method is applied to the reaction prototype problem M$^0$ ← M + L $\rightleftharpoons$ ML with semi-infinite diffusion conditions and in the presence of an electrode, where Nernst plus flux balance conditions are considered. Two important geometries are considered, planar and spherical, and off-lattice boundary conditions are set up for general irregular and curved boundaries. We discuss the need, in some cases, to apply the time splitting and grid refinement approaches in order to obtain a numerical scheme that is more easily handled and to substantially reduce the computational time. Furthermore, we point out the physico-chemical conditions under which to apply time splitting and grid refinement so as to optimise accuracy and performance. In particular, we stress: a) the range of values of the BGK relaxation parameter that gives the best performance in solving the pure diffusive scheme and b) the best values of the grid refinement factor that preserve good accuracy while drastically reducing the computation time and memory usage.

Davide Alemani, Bastien Chopard, Josep Galceran, Jacques Buffle
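The structure of the splitting described above, a lattice Boltzmann transport step followed by a separate reaction sub-step, can be sketched in one dimension (a toy D1Q2 model with first-order decay; illustrative only, not the authors' electrode scheme or boundary conditions):

```python
import numpy as np

# Toy 1D (D1Q2) BGK lattice Boltzmann diffusion with a split reaction
# step -- illustrative of the time-splitting structure only.
nx, omega, k_decay, steps = 200, 1.0, 0.05, 500

f = np.zeros((2, nx))          # two discrete velocities: +1 and -1
f[:, nx // 2] = 0.5            # initial concentration spike

for _ in range(steps):
    rho = f.sum(axis=0)
    feq = 0.5 * rho            # equal-weight local equilibrium
    f += omega * (feq - f)     # BGK collision (relaxation step)
    f[0] = np.roll(f[0], 1)    # stream right-moving populations
    f[1] = np.roll(f[1], -1)   # stream left-moving populations
    f *= np.exp(-k_decay)      # split reaction sub-step: rho' = -k*rho

print(f.sum())                 # total mass decayed by exp(-k_decay*steps)
```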
Mesoscopic Simulations of Unsteady Shear-Thinning Flows

The capability of the lattice Boltzmann method as an accurate mesoscopic solver for unsteady non-Newtonian flows is shown by investigating pulsatile shear-thinning blood flow in a three-dimensional idealised vessel. The non-Newtonian behaviour of blood flow is modelled by the Carreau-Yasuda viscosity model. Higher velocity and shear stress magnitudes, relative to Newtonian cases, are observed for the shear-thinning simulations in response to changes in the shear-rate dependent Womersley parameter. We also investigate the flexibility of the method through the shear-thinning behaviour of the lattice Boltzmann relaxation parameter at different Deborah numbers.

Abdel Monim Artoli, Adélia Sequeira
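The coupling the abstract describes, a local Carreau-Yasuda viscosity fed into the lattice Boltzmann relaxation parameter, can be sketched as follows (lattice units with c_s² = 1/3; the parameter values are common literature placeholders, not necessarily the authors'):

```python
# Carreau-Yasuda viscosity at a local shear rate, converted to a local
# BGK relaxation time tau via nu = c_s^2 (tau - 1/2) in lattice units.
# Parameter values are placeholders, not necessarily the authors' ones.
def carreau_yasuda(gdot, nu0=0.16, nu_inf=0.0035, lam=8.2, n=0.2128, a=0.64):
    return nu_inf + (nu0 - nu_inf) * (1.0 + (lam * gdot) ** a) ** ((n - 1.0) / a)

def bgk_tau(gdot):
    return 3.0 * carreau_yasuda(gdot) + 0.5      # c_s^2 = 1/3, dt = 1

for gdot in (0.0, 0.1, 1.0, 10.0):
    print(f"shear rate {gdot:6.2f}: nu = {carreau_yasuda(gdot):.4f}, "
          f"tau = {bgk_tau(gdot):.4f}")   # tau falls as shear rate grows
```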
A Multiphysics Model of Capillary Growth and Remodeling

We report on an enhanced computational framework for simulating flow-tissue interactions that significantly expands the capabilities of our previous model [1]. We adhere to the basic structural concept of so-called intussusceptive growth and remodeling, which not only generates capillaries and terminal vessels but also rebuilds them into a highly perfused system [2]. Present enhancements comprise calculation and visualization in three dimensions, refined tissue and fluid mechanics, and the transport of molecules that act as biochemical growth or signaling factors. Our present model explains the formation of capillary meshes and bifurcations, and the emergence of feeding and draining microvessels in an interdigitating pattern that avoids arterio-venous shunts. In addition, it predicts detailed hydrodynamic properties and transport characteristics for oxygen, metabolites or signaling molecules. In comparison to the previous work, the complexity of our approach is dramatically increased by using a multiphysics modeling environment, where many independent computational components are combined and the data structure is unified.

Dominik Szczerba, Gábor Székely, Haymo Kurz
Liquid Computations and Large Simulations of the Mammalian Visual Cortex

Large artificial Hodgkin-Huxley neural networks are examined. The structures discussed in this article simulate the cortex of the mammalian visual system. We use a modular architecture of the cortex divided into sub-regions. Results of parallel simulations based on liquid computing theory are presented in some detail. The separation ability of groups of neural microcircuits is observed. We show that such a property may be useful for explaining some edge or contrast detection phenomena.

Grzegorz M. Wojcik, Wieslaw A. Kaminski
Which Meshes Are Better Conditioned: Adaptive, Uniform, Locally Refined or Locally Adjusted?

Adaptive, locally refined and locally adjusted meshes are preferred over uniform meshes for capturing singular or localised solutions. Roughly speaking, for a given number of degrees of freedom, a solution associated with adaptive, locally refined or locally adjusted meshes is more accurate than the solution given by uniform meshes. In this work, we answer the question of which meshes are better conditioned. We find that, for approximately the same number of degrees of freedom (the same matrix size), it is easier to solve a system of equations associated with an adaptive mesh.

Sanjay Kumar Khattri, Gunnar Fladmark
Parallel Simulation of Three-Dimensional Bursting with MPI and OpenMP

This work presents a mathematical model and its parallel implementation via two parallel paradigms for the simulation of three-dimensional bursting phenomena. The mathematical model consists of four nonlinearly coupled partial differential equations and includes fast and slow subsystems. The differential equations have been discretized by means of a linearly-implicit finite difference method on equally spaced grids. The resulting system of equations at each time level has been solved by means of an optimized Preconditioned Conjugate Gradient (PCG) method. The proposed mathematical model has been implemented via: (1) a message passing paradigm based on the standard MPI and (2) a shared address space paradigm based on SPMD OpenMP. The two implementations have been evaluated on two current parallel architectures, i.e., a cluster of Xeon biprocessors and an SGI Altix 3700 Bx2 based on Itanium. It is shown that better performance and scalability are obtained on the second platform.

S. Tabik, L. F. Romero, E. M. Garzón, J. I. Ramos
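For reference, the Preconditioned Conjugate Gradient iteration at the core of the abstract above, in its textbook serial form with a Jacobi preconditioner (dense NumPy for brevity; the paper's optimized, parallel variant is not reproduced here):

```python
import numpy as np

# Textbook PCG with a Jacobi (diagonal) preconditioner -- a compact
# serial reference, not the paper's optimized parallel implementation.
def pcg(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner M^-1
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv * r
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x

n = 100                              # SPD test problem: 1D Laplacian
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = pcg(A, np.ones(n))
print(np.linalg.norm(A @ x - np.ones(n)))    # residual ~ 1e-10
```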
Numerical Simulation of Phase Transformations in Shape Memory Alloy Thin Films

A unified variational framework and finite element simulations of phase transformation dynamics in shape memory alloy thin films are reported in this paper. The computational model is based on an approach which combines the lattice based kinetics involving the order variables and non-equilibrium thermodynamics. Algorithmic and computational issues are discussed. Numerical results on phase nucleation under mechanical loading are reported.

Debiprosad Roy Mahapatra, Roderick V. N. Melnik
A Virtual Test Facility for Simulating Detonation-Induced Fracture of Thin Flexible Shells

The fluid-structure interaction simulation of detonation- and shock-wave-loaded fracturing thin-walled structures requires numerical methods that can cope with large deformations as well as topology changes. We present a robust level-set-based approach that integrates a Lagrangian thin shell finite element solver with fracture and fragmentation capabilities with an Eulerian Cartesian detonation solver with optional dynamic mesh adaptation. As an application example, the rupture of a thin aluminum tube due to the passage of an ethylene-oxygen detonation wave is presented.

Ralf Deiterding, Fehmi Cirak, Sean P. Mauch, Daniel I. Meiron
Data-Driven Inverse Modelling of Ionic Polymer Conductive Composite Plates

Analytical solutions of the partial differential equations (PDEs) governing the behavior of ionic polymer plates have not yet been obtained, and therefore only time-consuming discrete numerical methods can be used instead. To avoid the computational cost of numerical solutions, this paper introduces a solution construction method that exploits analytical approximation basis functions borrowed from solutions of single-physics formulations associated with rectangular ionic polymer plates for artificial muscle applications. This is achieved by utilizing an inverse approach that exploits global optimization. An objective function is constructed to express the error between the experimental and analytical values of the selected state variables. Minimization of this objective function yields an efficient determination of the unknown free coefficients. Comparisons between the determined approximations and the experimental data, along with computational efficiency improvements, conclude this paper.

John G. Michopoulos, Mohsen Shahinpoor

Innovations in Computational Science Education

Exploiting Real-Time 3D Visualisation to Enthuse Students: A Case Study of Using Visual Python in Engineering

We describe our experience teaching programming and numerical methods to engineering students using Visual Python to exploit three-dimensional real-time visualisation. We describe the structure and content of this teaching module and evaluate the module after its delivery. We find that the students enjoy being able to visualise physical processes (even if these have effectively only 1 or 2 spatial degrees of freedom), and this improves the learning experience.

Hans Fangohr
Involving Undergraduates in Computational Science and Engineering Research: Successes and Challenges

The undergraduate years of one’s educational career are among the most formative in terms of education and research outlook. A university education should broaden the student’s perspective while simultaneously sharpening critical thinking skills. Though classroom experience is beneficial, it is incomplete. The purpose of this paper is to address some of the issues associated with providing computational science and engineering education at the undergraduate level by surveying efforts made at the University of Utah. Specifically, we describe a program that allows students to become involved in research during their undergraduate years, and provide two success stories demonstrating its efficacy. We also discuss ongoing efforts at associated undergraduate curriculum reform.

R. M. Kirby, C. R. Johnson, M. Berzins
A Project Based Approach to Teaching Parallel Systems

For several years we have delivered advanced undergraduate courses related to computational science using a traditional approach of lectures, laboratory exercises and assignments. In recent years, however, we have moved away from this towards project-based approaches. In this paper we discuss our attempts to structure a course in parallel systems around a group project that required the students to design, build and evaluate their own message passing environment.

Alistair P. Rendell
Learning by Doing: Software Projects in CSE Education

Software development is one of the main routine activities in Computational Science and Engineering (CSE). Nevertheless, there is a huge gap between the software engineering techniques available and established today in most fields where mainstream software is developed, and the typical extent of their application in a CSE context. CSE curricula often reflect this tendency by not covering software engineering topics adequately. This contribution reports experiences with a new course format called “student project” in the CSE master’s program at TU München. There, for about half a year, a group of 4-8 students cooperates on a software development project – this time dealing with molecular dynamics. Although one objective is to get a well-performing code, the project’s focus is on the consistent application of software engineering and project management practices.

Martin Bernreuther, Hans-Joachim Bungartz
Computational Math, Science, and Technology (CMST): A Strategy to Improve STEM Workforce and Pedagogy to Improve Math and Science Education

An integrated approach to education in science, technology, engineering, and mathematics (STEM) has recently become common practice. Technologies such as wireless graphing calculators, electronic whiteboards, and computers help engage students in classroom activities. Modeling and simulation software allows experimentation without the usual prerequisites while enabling a pedagogical approach that displays the interconnectedness of math and science. Controlled simulations and a layered approach to learning seem to offer long-sought support for inquiry-based curricula in public schools. Today’s jobs require multiple skills, necessitating a broader college education. While students with multiple majors can be seen on college campuses, the overall number of degree holders in technical fields has gone down steadily. A handful of institutions have established multidisciplinary programs to address these concerns and to better serve the diverse interests of students.

O. Yaşar, L. Little, R. Tuzun, K. Rajasethupathy, J. Maliekal, M. Tahar
The School of Computational Science at Florida State University

Florida State University has recently launched a new program in Computational Science. In this paper, we describe the need for computational science, how we defined computational science, and the rationale behind our computational science curriculum.

Gordon Erlebacher, Janet Peterson
Teaching the Foundations of Computational Science on the Undergraduate Level

This paper describes a new course introduced at the University of Erlangen-Nürnberg to teach the foundations of computational science to undergraduates at the end of their second year. The course is a mandatory part of the curriculum in three different computer science-related degree programs. The paper discusses the motivation for introducing the course, the topics chosen for it, and the teaching scheme developed for this unique course.

C. Freundl, H. Köstler, U. Rüde
Computational Science: An Intermingling of Science, Mathematics, and Computer Science

The development of a full undergraduate program in computational science, with both a broad-swath academic plan and four specialization plans (one each in biochemistry, chemistry, earth sciences, and physics), is described. The reasons behind the establishment of the program are presented, and the evolution of the program over the past five years is described. Some of the initial difficulties encountered in setting up such a cross-disciplinary and relatively flexible program are also pointed out.

Frederick R. W. McCourt
A Framework for Conceptually Modelling the Domain Knowledge of an Instructional System

This paper presents a solution for conceptually modelling the teaching domain knowledge of computer-assisted instructional systems. The model consists of a theoretical framework and a knowledge representation approach. This model can be adopted to build a well-defined conceptual structure for the domain knowledge of a CAI system. The paper also presents an authoring system which implements the features of the modelling methods. This approach can offer a better solution to the problem of structuring and organising knowledge for computer-assisted instructional systems.

Emilia Pecheanu, Luminita Dumitriu, Diana Stefanescu, Cristina Segal
Platyhelminthes Are [Re]constructed Recursively

In this paper we propose a progressive technique for teaching recursion, with the aim of making this concept easier to understand for students from areas other than computer science. Since knowledge is intended to be actively constructed by the students and not passively absorbed from textbooks and lecturers, the adopted teaching technique is derived from constructivism. This paper presents a study of the results obtained in 2005 by two groups of students from Mathematics and Statistics degrees.

Alberto de la Encina, Mercedes Hidalgo-Herrero, Olga Marroquín-Alonso

Fifth International Workshop on Computer Graphics and Geometric Modeling (CGGM 2006)

Extensions for 3D Graphics Rendering Engine Used for Direct Tessellation of Spline Surfaces

In current 3D graphics architectures, the bus between the triangle server and the rendering engine GPU is clogged with triangle vertices and their many attributes (normal vectors, colors, texture coordinates).

We have developed a new 3D graphics architecture using data compression to unclog the bus between the triangle server and the rendering engine. This new architecture has been described in [1]. In the present paper we describe further developments of the newly proposed architecture.

The current paper shows several interesting extensions of our architecture, such as back-surface rejection, real-time NURBS tessellation, and a description of a surface-based API. We also show how the implementation of our architecture operates on top of the pixel shaders.

Dr. Adrian Sfarti, Prof. Brian A. Barsky, Todd J. Kosloff, Egon Pasztor, Alex Kozlowski, Eric Roman, Alex Perelman
An Evolution Computation Based Approach to Synthesize Video Texture

Texture synthesis is one of the hottest areas in the computer graphics, computer vision and image processing fields, and video texture synthesis is one subset of it. We put forward a new method for video texture synthesis in which an evolutionary computing technique is introduced into the synthesis process. By analyzing and processing a finite source video clip, the method produces infinite video sequences that play smoothly. Compared with many existing video texture synthesis algorithms, this method not only yields high-quality video results without complicated pre-processing of the source video, but also improves the efficiency of synthesis.

Yu Meng, Wen-hui Li, Yan Wang, Wu Guo, Wei Pang
Deformation of Dynamic Surfaces

Based on dynamic fourth-order partial differential equations, we present an iterative finite difference algorithm. Using the C++ language and the OpenGL graphics library, we implement the finite difference algorithm in a user interface and turn shape control parameters, density, damping coefficient, boundary tangents and external forces into user handles for dynamic manipulation of surface deformations. Using the developed user interface, we investigate how these user handles influence the deformation of dynamic surfaces.

L. H. You, Jian J. Zhang
A New Smoothing Algorithm for Quadrilateral and Hexahedral Meshes

Mesh smoothing (or r-refinement) is used in computer aided design, interpolation, numerical solution of partial differential equations, etc. We derive a new smoothing technique called parallelogram smoothing. The new smoothing tries to fit a given domain with parallelograms. We present several numerical examples and compare our results against the traditional Laplacian smoothing. The presented numerical work shows that the new approach is superior to Laplacian smoothing.

Sanjay Kumar Khattri
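The baseline the paper compares against, Laplacian smoothing, simply moves each interior node to the centroid of its edge-connected neighbours. A minimal generic sketch (not the paper's parallelogram smoothing):

```python
import numpy as np

# Baseline Laplacian smoothing (the comparison method, not the paper's
# parallelogram smoothing): move interior nodes to their neighbours'
# centroid, keeping boundary nodes fixed.
def laplacian_smooth(coords, neighbours, interior, iterations=10):
    coords = coords.copy()
    for _ in range(iterations):
        new = coords.copy()
        for v in interior:
            new[v] = coords[neighbours[v]].mean(axis=0)
        coords = new
    return coords

# 3x3 structured quad grid with the single interior node perturbed
xs, ys = np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 3))
coords = np.column_stack([xs.ravel(), ys.ravel()])
coords[4] += [0.2, 0.15]                       # distort the centre node
neighbours = {4: np.array([1, 3, 5, 7])}       # edge-connected neighbours
print(laplacian_smooth(coords, neighbours, interior=[4])[4])  # -> [0.5 0.5]
```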
The Calculation of Parametric NURBS Surface Interval Values Using Neural Networks

Three-dimensional coordinate values of parametric NURBS (Non-Uniform Rational B-Splines) surfaces are obtained from the two-dimensional parameters u and v. One approach to generating surfaces produces a model by applying a fixed increment to the u and v values. However, the rate of change of the three-dimensional coordinates and the fixed increment of the u and v values are not always in step; this mismatch produces breaks of undesired size. In this study, an artificial neural network method for simulating a NURBS surface is proposed. Free-form NURBS surfaces and various three-dimensional object simulations with different patches can be produced using a method conceived as network training with respect to coordinates obtained from interval-scaled parameters. Experimental results show that this method can be used as a simulator for imaging the modeled surface.

Erkan Ülker, Ahmet Arslan
Characterizing and Covering Some Subclasses of Orthogonal Polygons

A grid n-ogon is an n-vertex orthogonal polygon that may be placed in a $\frac{n}{2}\times \frac{n}{2}$ unit square grid and that does not have collinear edges. Given a grid n-ogon P, let |Π(P)| be the number of rectangles that results when we partition P by extending the edges incident to reflex vertices towards its interior. P is called Fat if |Π(P)| is maximal for all grid n-ogons; P is called Thin if |Π(P)| is minimal for all grid n-ogons. Thins with area 2r+1 are called Min-Area. We will show that $\lceil\frac{n}{6}\rceil$ vertex guards are necessary to guard a Min-Area grid n-ogon and present some problems related to Thins.

Ana Mafalda Martins, António Leslie Bajuelos
Techniques for Computing Viewpoint Entropy of a 3D Scene

Viewpoint entropy is a metric that measures the quality of the view of a scene from a given camera position. In this work, we analyze different software and hardware assisted techniques to compute the viewpoint entropy. The main objective of this study is to identify which of these techniques can be used in real time for 3D scenes of high complexity. Our results show that interactivity can be obtained with the occlusion query technique and that for real time we need a hybrid software and hardware technique.

Pascual Castelló, Mateu Sbert, Miguel Chover, Miquel Feixas
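For reference, the quantity being computed has a compact standard definition (generic notation; the paper's contribution lies in how the projected face areas are obtained in software or on the GPU):

```latex
% Standard viewpoint entropy (generic notation): a_i is the projected
% area of face i seen from viewpoint p, a_0 the background area, and
% a_t the total area of the projection; N_f is the number of faces.
H(S,p) \;=\; -\sum_{i=0}^{N_f} \frac{a_i}{a_t}\,\log_2\frac{a_i}{a_t}
```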
3D Object Repair Using 2D Algorithms

A number of three-dimensional algorithms have been proposed to solve the problem of patching surfaces in order to rectify and extrapolate information missing due to model problems or bad geometry visibility during data capture. On the other hand, a number of similar yet simpler and more robust techniques apply to 2D image data and are used for texture restoration. In this paper we make an attempt to bring these two-dimensional techniques to the 3D domain, due to their obvious advantages of simplicity and controllability. Creating a depth image with the help of a voxelisation algorithm allows us to apply a variety of image repair algorithms in order to mend a 3D object. The use of three variations of the texture synthesis algorithm is investigated. Constrained texture synthesis and its variations using the Haar wavelet and image decomposition methods are also proposed, in order to preserve patterns appearing on the object while trying to keep its geometry intact.

Pavlos Stavrou, Pavlos Mavridis, Georgios Papaioannou, Georgios Passalis, Theoharis Theoharis
Extraction of Ridges-Valleys for Feature-Preserving Simplification of Polygonal Models

We propose a new method for simplifying polygonal models while retaining salient features. We extract ridge and valley features defined by means of curvature derivatives on the model surface. We combine this extraction with a simplification technique so as to construct coarse models with features preserved. Experimental results have better quality and smaller geometric error than those of previous methods.

Soo-Kyun Kim, Sun-Jeong Kim, Chang-Hun Kim
Multiresolution 3D Rendering on Mobile Devices

We present a client/server system that is able to display 3D scenes on handheld devices. At the server we extract the geometry that is visible for each client and send it. We also extract texture and material information. The clients, running on mobile devices, use that information to render realistic images. Our geometry extraction algorithm employs multiresolution and view-dependent simplification. We present results of our system running on PocketPC 2003 PDAs.

Javier Lluch, Rafa Gaitán, Miguel Escrivá, Emilio Camahort
Multiresolution Remeshing Using Weighted Centroidal Voronoi Diagram

We present a novel method for multiresolution remeshing of irregular meshes. The original mesh (a two-manifold of any genus) is decomposed into several patches, each homeomorphic to a 2D triangle; the goal of this decomposition process is that the patches be of nearly equal size. First, the mesh is manually cut into a few disk-like patches. With the help of a weighted centroidal Voronoi diagram (WCVD), each patch is then automatically partitioned into more triangular patches of nearly equal size. Recursively subdividing these triangular patches, we finally obtain a semi-regular mesh.

Chao-Hung Lin, Chung-Ren Yan, Ji-Hsen Hsu, Tong-Yee Lee
Metric 3D Surface Mesh Generation Using Delaunay Criteria

This paper presents a technique for incorporating an anisotropic metric into the Delaunay triangulation algorithm for unstructured mesh generation on 3D parametric surfaces. Both the empty circumcircle and the inner angles criteria of Delaunay retriangulation can be successfully used with the developed method of coordinate transformation, with minor adjustments. We investigate the efficiency of the mesh generation process for the different criteria and the quality of the obtained meshes.

Tomasz Jurczyk, Barbara Głut
A Multiresolution Model for Non-photorealistic Rendering of Trees

Modeling and rendering of trees has recently received a lot of attention. Models have been developed that allow photorealistic rendering of trees at interactive frame rates. However, little attention has been devoted to expressive rendering of these models. In this work we present a multiresolution model designed specifically to speed up painterly rendering of trees. Our method proposes a novel clustering technique based on the computation of nested convex hulls. We employ variable multiresolution to obtain efficient models that contain higher resolution representations for the outside of the tree and lower resolution representations for the inner parts. This variable multiresolution method mimics the techniques used by traditional artists to paint trees.

Celso Campos, Ricardo Quirós, Joaquin Huerta, Emilio Camahort, Roberto Vivó, Javier Lluch
Model Creation by Velocity Controlled Surface Deformation

We present a scheme for the semiautomatic creation of 3D models through velocity-controlled surface deformations. Our surface representation consists of oriented points with volatile inter-neighboring point linkage. The surface is resampled in order to maintain an even distribution of points. Points created during resampling inherit their characteristics from their progenitors. Our surface representation and resampling behavior support detailed irregular surfaces with smooth transitions such as those of organic entities. Surface velocities are set by the combination of two types of operators, selection and velocity assignment, with operator application managed by a finite state machine. We demonstrate our scheme with the creation of some branched, fruit-like and mushroom-like models.

Risto Rangel-Kuoppa, David Mould
Rendering of Unorganized Points with Octagonal Splats

Point sets from range scanners are noisy and do not contain additional data such as normals, which makes it difficult to create a mesh for rendering. We use the moving least-squares (MLS) approximation to get normals and differential information, and approximate the surface near each point by an MLS surface. From this we obtain a splat of a new octagonal shape which approximates the local curvature at a point and allows fast and high-quality rendering. We show several results which demonstrate the practicality and benefits of our algorithm.

Sun-Jeong Kim, Chang-Geun Song
Statistical Based Vectorization for Standard Vector Graphics

In this paper a novel algorithm for raster-to-vector conversion is presented. The technique is mainly devoted to vectorizing digital pictures while maintaining a high degree of photorealistic appearance, specifically addressed to the human visual system. The algorithm makes use of an advanced segmentation strategy based on statistical region analysis, together with a few ad-hoc heuristics devoted to tracking the boundaries of segmented regions. The final output is rendered in Standard Vector Graphics. Experimental results confirm the effectiveness of the proposed approach in terms of both perceived and measured quality. Moreover, the final overall size of the vectorized images outperforms that of existing methods.

Sebastiano Battiato, Giovanni Maria Farinella, Giovanni Puglisi
Robustly Computing Intersection Curves of Two Canal Surfaces with Quadric Decomposition

This paper revisits the intersection problem for two canal surfaces using a new quadric decomposition we proposed for canal surfaces. It reduces computing the intersection curves of two canal surfaces to computing the intersection curves of two revolute quadrics. Furthermore, Bounding Cylinder Clipping is proposed for efficient intersection determination. Compared to the existing method, our method can (i) run more robustly and efficiently; (ii) represent the final intersection curves as piecewise closed-form RQIC; and (iii) give a simple shape analysis.

Jinyuan Jia, Ajay Joneja, Kai Tang
Triangle Strip Multiresolution Modelling Using Sorted Edges

This paper presents a new multiresolution model based exclusively on the triangle strip primitive. The model is independent of the initial set of triangle strips and the simplification method used to obtain the lower resolution versions of the original model. The information stored in the data structures is sorted to minimise the time needed to recover a Level of Detail (LoD). The orientation of triangles in each strip is maintained as the model is simplified, so back-face culling can be used. The main result is a dramatic improvement in the rendering time.

Ó. Belmonte Fernández, S. Aguado González, S. Sancho Chust
Improvement Construction for Planar G2 Transition Curve Between Two Separated Circles

In this paper, we use the undetermined coefficient method to find a desirable pair of cubic Bezier spirals and a desirable pair of quintic PH spirals to generate a planar G2 transition curve between two separated circles. The G2 transition curve can be obtained by a root-finding formula, which simplifies the computation, and the ratio of the two radii is unrestricted, which extends the area of application.

Zhong Li, Lizhuang Ma, Mingxi Zhao, Zhihong Mao
B-Spline Curve Fitting Using Dominant Points

Motivated by the insight that properly selected points, called dominant points, can play an important role in producing better curve approximations, we propose a new approach to B-spline curve fitting of a point sequence. The approach differs substantially from conventional ones in knot placement and dominant point selection. It can generate a B-spline curve of good quality with less deviation. Adopted in error-bounded curve approximation, it can play an important role in generating B-spline curves with far fewer control points.

Hyungjun Park, Joo-Haeng Lee
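For contrast with the dominant-point approach described above, the conventional baseline is a least-squares B-spline fit with uniformly spaced knots. A minimal sketch using SciPy (generic; the paper's knot placement and dominant point selection are not reproduced here):

```python
import numpy as np
from scipy.interpolate import make_lsq_spline

# Conventional baseline: least-squares cubic B-spline fit with uniform
# interior knots (the paper instead places knots via dominant points).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(x) + 0.01 * rng.normal(size=x.size)    # noisy samples

k = 3                                             # cubic B-spline
interior = np.linspace(x[0], x[-1], 10)[1:-1]     # uniform interior knots
t = np.r_[[x[0]] * (k + 1), interior, [x[-1]] * (k + 1)]  # clamped knots
spl = make_lsq_spline(x, y, t, k=k)

print(np.max(np.abs(spl(x) - np.sin(x))))         # max fit deviation
```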
Quality and Performance Evaluation of Ray-Space Interpolation for Free Viewpoint Video Systems

Ray-space is the main technology for realizing Free Viewpoint Video (FVV) systems with complicated scenes. One of the key technologies needed to make ray-space feasible is interpolation. In this paper, two fast interpolation methods based on feature points and directionality detection are compared with a block-based matching method; the experimental results show that the two methods greatly improve the visual quality as well as the PSNRs of rendered virtual viewpoint images, while the computational burden is also reduced significantly.

Fan Liangzhong, Yu Mei, Yu Zhou, Jiang Gangyi
Framework for Adaptive Sampling of Point-Based Surfaces Using Geometry and Color Attributes

Point-based rendering has offered a powerful alternative to triangle meshes for rendering highly complex objects consisting of densely sampled point clouds, due to its flexibility and simplicity. The technological advance of 3D scanners has made it possible to acquire color as well as geometry data of highly complex objects. However, scanning and acquisition systems often produce surfaces that are much denser than actually required for the intended application. Mobile devices, computer games and distributed virtual environments must often operate on systems where rendering and transmission capacity is highly constrained and therefore require strict control over the level of detail used in models. In this research, we present a framework for adaptive sampling of point-based surfaces using both geometry and color information.

Duck Bong Kim, Eui Chul Kang, Kwan H. Lee, Renato B. Pajarola

Fourth International Workshop on Computer Algebra Systems and Applications (CASA 2006)

Normalizing Relational Database Schemas Using Mathematica

In this paper, basic relational database (DB) normalization algorithms are implemented efficiently as Mathematica modules. It was observed that Mathematica provides a straightforward platform, as opposed to previous, mainly Prolog-based tools, which required complex data structures such as linked list representations with pointers. A Java user interface called JMath-Norm was designed to execute the Mathematica modules in a systematic way. For this purpose, Mathematica’s Java link facility (JLink) is utilized to drive the Mathematica kernel. JMath-Norm provides an effective interactive tool in an educational setting for teaching DB normalization theory.

Ali Yazici, Ziya Karakaya
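The workhorse inside most normalization algorithms of the kind the paper implements is the closure of an attribute set under a set of functional dependencies. A generic Python rendering (the paper's own modules are written in Mathematica):

```python
# Attribute-set closure under functional dependencies -- the routine at
# the heart of normalization algorithms (generic sketch, not JMath-Norm).
def closure(attrs, fds):
    result = set(attrs)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if set(lhs) <= result and not set(rhs) <= result:
                result |= set(rhs)      # dependency fires: add its RHS
                changed = True
    return result

# R(A, B, C, D) with A -> B and B -> C:
fds = [("A", "B"), ("B", "C")]
print(sorted(closure({"A"}, fds)))      # ['A', 'B', 'C']: D is missing,
                                        # so A is not a key of R
```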
Extending Maple Capabilities for Solving and Displaying Inequalities

Solving inequalities is a very important topic in computational algebra. This paper presents a new Maple package, IneqGraphics, for displaying the two-dimensional solution sets of several inequalities of real variables. The package also deals with inequalities involving complex variables by displaying the corresponding solutions on the complex plane. The package provides graphical solutions to many inequalities (such as those involving polynomial, rational, logarithmic, exponential and trigonometric functions) that cannot be solved with the standard Maple commands, which are basically limited to linear inequalities. In addition, our outputs are consistent with Maple’s notation and results, in the sense that the package provides a similar output for those cases that can also be solved with Maple. To show the performance of the package, several illustrative and interesting examples are described.

A. Iglesias, R. Ipanaqué
Phase Response Curves, Delays and Synchronization in Matlab

MatCont

W. Govaerts, B. Sautois
A Hybrid Approach for Normal Factorization of Polynomials

The problem considered here is an integral part of computations for algebraic control problems. The paper introduces the notion of normal factorization of polynomials and then presents a new hybrid algorithm for the computation of this factorization. The advantage of such a factorization is that it handles the determination of multiplicities and produces factors of lower degree and with distinct roots. The presented algorithm has the ability to specify the roots of the polynomials without computing them explicitly. Also it may be used for investigating the clustering of the roots of the polynomials. The developed procedure is based on the use of algorithms determining the greatest common divisor of polynomials. The algorithm can be implemented symbolically for the specification of well separated roots and numerically for the specification of roots belonging in approximate clusters.

Nicos Karcanias, Marilena Mitrouli, Dimitrios Triantafyllou
Computer Algebra for the Formation of Structural Matrices of Piezoceramic Finite Elements

This paper describes the theoretical background of systematic computer algebra methods for forming the structural matrices of piezoceramic finite elements. The efficiency of the computer algebra application was compared with that of numerical integration methods for forming the structural matrices of the finite elements. To this end, the computer algebra system VIBRAN was used. Two popular finite elements for modelling piezoceramic actuators, a sector-type one and a triangular one, are discussed. All structural matrices of the elements were derived using the computer algebra technique, followed by automatic program code generation.

Algimantas Čepulkauskas, Regina Kulvietienė, Genadijus Kulvietis
Symbolic Analysis of Economical Models with Mathematica

Functional equations are a very powerful technique for obtaining consistent models in Economics. Unfortunately, there are only a few computer tools for solving functional equations. In this paper a Mathematica package, FSolve, is applied to the symbolic analysis of some economic models for price and advertising policies. The procedure is as follows: firstly, we make some assumptions about the functional structure of the functions describing the models. Such assumptions are given in terms of functional equations that account for some economic properties. Then, the package is applied to compute the solutions of these equations and check for inconsistencies. The interesting cases of the monopoly and duopoly models are discussed.

A. Gálvez, A. Iglesias
Polarizable Theta-Stable Parabolic Subalgebras and Kℂ-Saturation in the Non-compact Real Forms of G2 and F4

A general method for finding theta-stable parabolic subalgebras was given in 2004. In this paper, we develop LiE subroutines to find representatives of conjugacy classes of polarizable theta-stable parabolic subalgebras. Using a theorem of Tauvel, we implement algorithms for computing the Kℂ-saturation of the nilradical of such parabolic subalgebras. Furthermore, we provide a tool for testing a long-standing conjecture that relates the wave front set of a representation of a real or p-adic group to special nilpotent orbits. Incidentally, the conjecture is true for classical groups.

Steven Glenn Jackson, Alfred G. Noël
Dynamic Load Balancing with MatlabMPI

A number of packages have recently been developed to enable the execution of MATLAB programs on parallel processors. For many applications, an integrated load balancing functionality is also necessary to realize the full benefits of parallelization. This paper describes a toolkit based on MatlabMPI to ease the parallelization and integration of load balancing into MATLAB applications that contain computationally-intensive loops with independent iterations. Modifications to existing code to incorporate the toolkit are minimal. Performance tests of two nontrivial MATLAB programs with the toolkit on a general-purpose Linux cluster indicate that significant speedups are achievable.

Ricolindo L. Cariño, Ioana Banicescu, Wenzhong Gao
Maple Implementation of the Chor-Rivest Cryptosystem

A Maple implementation of the Chor-Rivest cryptosystem is presented. The main problems in this implementation are: to determine discrete logarithms over $GF(q^h)$, $q \approx 200$ and $h \approx 25$, and to use the arithmetic of the finite fields in order to encrypt and decrypt messages.

L. Hernández Encinas, J. Muñoz Masqué, A. Queiruga Dios
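As a toy illustration of the discrete-logarithm problem mentioned above, here is generic baby-step giant-step over a small prime field (the cryptosystem itself needs logarithms in GF(q^h) with q ≈ 200 and h ≈ 25, which the paper computes in Maple; this sketch is not that implementation):

```python
from math import isqrt

# Generic baby-step giant-step discrete logarithm in a prime field --
# a toy stand-in for the GF(q^h) computations the paper does in Maple.
def bsgs(g, h, p):
    """Return x with g**x == h (mod p), or None (p prime, Python 3.8+)."""
    m = isqrt(p) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    g_inv_m = pow(g, -m, p)                      # giant-step factor g^(-m)
    gamma = h
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = (gamma * g_inv_m) % p
    return None

p, g = 1019, 2         # 2 generates the multiplicative group mod 1019
print(bsgs(g, pow(g, 345, p), p))   # -> 345
```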
Development of TRIP: Fast Sparse Multivariate Polynomial Multiplication Using Burst Tries

A flat vector representation of sparse multivariate polynomials is introduced in the computer algebra system TRIP, with specific care for the cache memory. Burst tries are considered as an intermediate storage during sparse multivariate polynomial multiplication, paying attention to memory allocations. Timing and memory consumption are examined and compared with other recursive representations and other computer algebra systems.

Mickaël Gastineau, Jacques Laskar
A Symbolic Approach to Quantum Computation Simulation

This paper presents SQCS: a Mathematica package for the symbolic simulation of quantum computation. It can be a useful tool in the development and testing of quantum algorithms as well as in the teaching and learning of quantum computation.

The strength of this symbolic approach is here illustrated with Grover’s quantum algorithm for database search.

António Pereira, Rosália Rodrigues
Development of SyNRAC
–Real Quantifier Elimination Based on Cylindrical Algebraic Decomposition and Visualization–

We present newly implemented functions in SyNRAC, a Maple package for solving real algebraic constraints derived from various engineering problems. The current version of SyNRAC has added quantifier elimination (QE) by cylindrical algebraic decomposition (CAD), a general QE procedure. We also show a visualization tool for representing the possible region of an output quantifier-free formula in the two-dimensional case.

Hitoshi Yanami, Hirokazu Anai
Automated Discovery in Elementary Extrema Problems

This paper describes an enhancement of GDI, a dynamic geometry environment that uses symbolic techniques for performing automatic proof and discovery in elementary geometry. We report a successful add-on that automatically discovers critical points in elementary extrema problems specified via diagrams. Based on classical one-variable calculus, the new module, Optimus, is a step ahead in the cooperation between dynamic geometry and computer algebra systems. It can be used in technology-rich environments for mathematics education, adding new calculus abilities to dynamic geometry software.

Francisco Botana, José L. Valcarce
Stabilizing Second-Order Linear Dynamic Systems Via Hybrid Output Feedback Controls

This paper considers the open problem of whether there exists a finite-state hybrid output feedback control to asymptotically stabilize a second-order linear dynamic system. More precisely, for second-order linear time-invariant systems which are not stabilizable via a single static output feedback, we find two different output feedback gains and a switching law orchestrating the feedback gains such that the closed-loop system is asymptotically stable.

Liguo Zhang, Yangzhou Chen, Pingyuan Cui
Computation of the Adjoint Matrix

The best method for computing the adjoint matrix of an order $n$ matrix in an arbitrary commutative ring requires $O(n^{\beta+1/3}\log n\,\log\log n)$ operations, provided that the complexity of the algorithm for multiplying two matrices is $\gamma n^{\beta}+o(n^{\beta})$. For a commutative domain – and under the same assumptions – the complexity of the best method is $6\gamma n^{\beta}/(2^{\beta}-2)+o(n^{\beta})$. In the present work a new method is presented for the computation of the adjoint matrix in a commutative domain. Despite the fact that the number of operations required is now 1.5 times more than that of the best method, this new method permits a better parallelization of the computational process and may be successfully employed for computations in parallel computational systems.

Alkiviadis Akritas, Gennadi Malaschonok
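For reference, the object being computed: the adjoint (adjugate) matrix satisfies A·adj(A) = det(A)·I. A direct cofactor construction, purely illustrative and far slower than the methods the paper discusses:

```python
import numpy as np

# Direct cofactor construction of the adjugate, so A @ adjugate(A) equals
# det(A) * I -- illustrative only, nothing like the paper's fast method.
def adjugate(A):
    n = A.shape[0]
    adj = np.empty_like(A)
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            cof = (-1) ** (i + j) * round(np.linalg.det(minor))
            adj[j, i] = cof              # transposed: adj = cofactor^T
    return adj

A = np.array([[2, 0, 1],
              [1, 3, 0],
              [0, 1, 4]], dtype=np.int64)
print(A @ adjugate(A))                   # det(A) * I with det(A) = 25
```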
MathBlackBoard as Effective Tool in Classroom

MathBlackBoard is a Java program that has a unique GUI (Graphical User Interface). Using the GUI, MathBlackBoard users can input, edit and manipulate mathematical expressions easily. MathBlackBoard has its own kernel, but it also has functions to use other computer algebra systems as its computational engine.

In this paper, we describe an example of using MathBlackBoard. We have added a new function which is designed to be used in junior high school classrooms, and we have used it in classrooms.

Deguchi Hiroaki, Hashiba Hirokazu

Tools for Program Development and Analysis in Computational Science

Finding Inefficiencies in OpenMP Applications Automatically with Periscope

Performance optimization of parallel programs can be a time-consuming and difficult task. Therefore, tools are desirable that help application developers by automatically locating inefficiencies. We present Periscope, a system for automated performance analysis based on the notion of performance properties.

We present the overall architecture of Periscope, which consists of a set of analysis agents, and show how properties of OpenMP applications are detected. We describe the set of OpenMP properties we have defined so far and the data model used in the specification of these properties. Practical tests on the feasibility of our approach are performed with a number of OpenMP applications.

Karl Fürlinger, Michael Gerndt
Analysis of the Spatial and Temporal Locality in Data Accesses

Cache optimization is becoming increasingly important for achieving high computing performance, especially on current and future chip-multiprocessor (CMP) systems, which usually show a rather higher cache miss ratio than uniprocessors. For such optimization, information about access locality is needed in order to help the user in the tasks of data allocation, data transformation, and code transformation, which are often used to enhance the utilization of cached data towards a better cache hit rate.

In this paper we demonstrate an analysis tool capable of detecting the spatial and temporal relationships between memory accesses and providing information, such as access pattern and access stride, that is required for applying optimization techniques like address grouping, software prefetching, and code transformation. Based on the memory access trace generated by a code instrumentor, the analysis tool uses appropriate algorithms to detect repeated address sequences and the constant distance between accesses to the different elements of a data structure. This allows users to pack data with spatial locality into the same cache block so that needed data can be loaded into the cache at the same time. In addition, the analysis tool computes the push-back distance, which shows how a cache miss can be avoided by reusing the data before replacement. This helps to reduce cache misses, thereby increasing the temporal reusability of the working set.

Jie Tao, Siegfried Schloissnig, Wolfgang Karl
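In its simplest form, the access-stride detection described above amounts to finding the dominant constant difference in an address trace. A toy sketch (the tool in the paper works on real instrumentation-generated traces):

```python
from collections import Counter

# Toy stride detection: the dominant constant difference between
# successive addresses reveals the regular access pattern, e.g. 8 bytes
# for a sequential scan over an array of doubles.
def dominant_stride(trace):
    diffs = Counter(b - a for a, b in zip(trace, trace[1:]))
    stride, count = diffs.most_common(1)[0]
    return stride, count / (len(trace) - 1)

trace = [0x1000 + 8 * i for i in range(100)]   # sequential double scan
trace[40], trace[75] = 0x9000, 0xA000          # a few irregular accesses
stride, frac = dominant_stride(trace)
print(f"dominant stride: {stride} bytes ({frac:.0%} of accesses)")
```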
A Performance Profile and Test Tool for Development of Embedded Software Using Various Report Views

In this paper, we suggest a performance profiling and testing tool that offers developers a convenient environment to profile or test an embedded software’s performance and to analyze the results through various graphical report views. Because the suggested tool does not need any additional hardware, it is efficient in terms of cost and management for profiling or testing an embedded software’s performance. The tool consists of a code analyzer, a test suite generator, and a report generator. The code analyzer expresses a software’s structure as a parse tree and decides the positions where additional code for profiling must be inserted. The test suite generator offers a test script wizard for users to easily create a test driver. The report generator converts a string-type result to an XML-based class instance in order to increase the reusability of the result. To offer various report views, we divide the instance into two sections, one for result data and one for visual views. Therefore, users can get various report views by associating the two sections according to their intention.

Yongyun Cho, Chae-Woo Yoo
SCE Toolboxes for the Development of High-Level Parallel Applications

Users of Scientific Computing Environments (SCE) benefit from faster high-level software development at the cost of larger run times due to the interpreted environment. For time-consuming SCE applications, dividing the workload among several computers can be a cost-effective acceleration technique. Using our PVM and MPI toolboxes, Matlab® and Octave users in a computer cluster can parallelize their interpreted applications using the native cluster programming paradigm – message-passing. Our toolboxes are complete interfaces to the corresponding libraries, support all the compatible datatypes in the base SCE, and have been designed with performance and maintainability in mind. Although in this paper we focus on our new toolbox, MPITB for Octave, we describe the general design of these toolboxes and of the development aids offered to end users, mention some related work, mention speedup results obtained by some of our users, and introduce speedup results for the NPB-EP benchmark for MPITB in both SCEs.

J. Fernández, M. Anguita, E. Ros, J. L. Bernier
Introducing the Open Trace Format (OTF)

This paper introduces the new Open Trace Format. The first part provides a brief overview of trace format libraries in general, existing formats/libraries, and their features. After that, the important requirements are discussed, in particular efficient parallel and selective access to trace data. The following part presents the design decisions and features of OTF comprehensively. Finally, there is an early evaluation of OTF, featuring a comparison of storage sizes for several examples as well as sequential and parallel I/O benchmarks. At the end, a conclusion summarizes the results and gives an outlook.

Andreas Knüpfer, Ronny Brendel, Holger Brunst, Hartmut Mix, Wolfgang E. Nagel
Dynamic Instrumentation of Distributed Java Applications Using Bytecode Modifications

Java’s features such as system platform independence, dynamic and network-oriented architecture, robustness, and a growing number of common standards make it a language of choice for many projects. However, the increasing complexity of created software and the requirements for high stability and high quality of applications make it desirable for a developer to inspect, monitor, debug or in any way alter Java program behaviour on-the-fly. The main goal of this paper is to present the design of a system for instrumenting Java classes at runtime. This system aids the developer in modifying a program by adding fragments of code at specific locations that implement some new functionality. This allows the programmer to enhance classes with logging, monitoring, caching or any other capabilities that are required at run-time.

Włodzimierz Funika, Pawel Swierszcz
Fine-Grained Instrumentation and Monitoring of Legacy Applications in a Service-Oriented Environment

Legacy applications are still heavily used in Grid systems. In modern service-oriented Grid environments such applications are adapted to work as services. Monitoring of applications is often necessary for various purposes, such as debugging, performance analysis, fault-tolerance, etc. In this paper we present a framework for monitoring of legacy applications wrapped as services. We adapt the OCM-G system for monitoring of distributed legacy applications (such as MPI) to work as part of a broader service-oriented monitoring infrastructure GEMINI. We address monitoring and instrumentation at both service-level and legacy-level. Our instrumentation is fine-grained, i.e. we support instrumentation and monitoring at the level of individual code regions.

Bartosz Baliś, Marian Bubak, Krzysztof Guzy
Monitoring of WS-Based Applications

The paper presents a Java-related monitoring platform for distributed applications which are oriented towards the use of Web Services (WS). We focus on building a monitoring platform based on a well-defined interface (OMIS) between a monitoring system organized as middleware and tools that use the facilities provided by the monitoring middleware for WS-based applications. We aim at the observation and analysis of SOAP messages (data) exchanged with a Web Service (requests and responses). Our system is intended to monitor the characteristics of the remote call, especially request time, transport time, cache access time, and response time.

Lechoslaw Trebacz, Piotr Handzlik, Wlodzimierz Funika, Marcin Smetek
Using Sequential Debugging Techniques with Massively Parallel Programs

Debugging is a crucial part of the software development process. Massively parallel programs in particular pose huge difficulties for program analysis and debugging due to their higher complexity compared to sequential programs. For debugging and analysing parallel programs there are several tools available, but many of these fail in the case of massively parallel programs with potentially thousands of processes.

In this work we introduce the single process debugging strategy, a scalable debugging strategy for massively parallel programs. The goal of this strategy is to make debugging large-scale programs as simple and straightforward as debugging sequential programs. This is achieved by adapting and combining several techniques which are well known from sequential debugging. In combination, these techniques give the user the possibility to execute and investigate small fractions of a possibly huge parallel program, without having to (re-)execute the entire program.

Christian Schaubschläger, Dieter Kranzlmüller, Jens Volkert

Collaborative and Cooperative Environments

Workflow for Integrated Object Detection in Collaborative Video Annotation Environments

Annotation tools for media are becoming more and more important, and their applications are manifold. This paper describes ongoing work on a video annotation environment supporting the media production and post-production process. Based on a former approach, new developments such as integrated audio conferencing with recording facilities for audio-based work instructions and an automated video segmentation module are presented. Besides these technical improvements, an object-based approach for the video annotation process is presented.

Lars Grunewaldt, Kim Möller, Karsten Morisse
RMIX: A Dynamic, Heterogeneous, Reconfigurable Communication Framework

RMIX is a dynamic, heterogeneous, reconfigurable communication framework that allows software components to communicate using various RMI/RPC protocols, such as ONC RPC, Java RMI and SOAP, by facilitating dynamically loadable provider plug-ins to supply different protocol stacks. With this paper, we present a native (C-based), flexible, adaptable, multi-protocol RMI/RPC communication framework that complements the Java-based RMIX variant previously developed by our partner team at Emory University. Our approach offers the same multi-protocol RMI/RPC services and advanced invocation semantics via a C-based interface that does not require an object-oriented programming language. This paper provides a detailed description of our RMIX framework architecture and some of its features. It describes the general use case of the RMIX framework and its integration into the Harness metacomputing environment in form of a plug-in.

Christian Engelmann, Al Geist
Developing Collaborative Social Software

The Internet has an increasing role in facilitating communication between people and groups of people. As access to the Internet and World Wide Web is widely available, collaborative services enabled over the Internet are also burgeoning. In this paper, we present the current issues and our techniques for developing collaborative social software. We discuss online communities in the context of social collaborative systems. We then describe our approach to the development of supporting software for online communities and collaboration.

Ismail Bhana, David Johnson
An Efficient and Reflective Event Filtering for Context-Awareness in Ubiquitous Computing

In ubiquitous computing systems, fast and reliable event delivery is important to properly adapt to the dynamically changing environment. Such a requirement can be satisfied by message-based middleware based on the publish/subscribe model. In this paper we propose a scheme that allows reflective filtering through a reasoning process. The objective of the reflective filtering is to prevent waste in the space of the event queue of the event channel due to useless events and to reduce the network load. Performance evaluation reveals that the proposed scheme displays a significant improvement over the existing event services, Java Message Service (JMS) and the omnievent service, in terms of event delivery time.

Kyu Bong Cho, Sung Keun Song, Hee Yong Youn, Gyung Leen Park
Creation and Control of Interactive Virtual Environments

Within the confines of a Virtual Environment (VE) almost anything is possible. It is easy to establish the benefits such an application could provide throughout the many walks of life, and yet current VE development remains within the domain of Virtual Reality application programmers. We describe methods that enhance VE development, first by providing scene creation for non-programmers, and second through a scene management entity that controls interaction within the environment. We explore methods for interacting through the scene to enable multiuser collaboration, and detail sample applications making use of this approach.

Adrian Haffegee, Priscilla Ramsamy, Ronan Jamieson, Vassil Alexandrov
Using Haptics to Improve Immersion in Virtual Environments

Current immersive Virtual Reality (VR) system strategies do not fully support dynamic Human Computer Interaction (HCI), and since there is a growing need for better immersion, due consideration should be given to integrating additional modalities for improved HCI. While feedback in Virtual Environments (VE) is predominantly provided to the user through the visual and auditory channels, additional modalities such as haptics can increase the sense of presence and efficiency in VE simulations. Haptic interfaces can enhance VE interaction by enabling users to “touch” and “feel” virtual objects that are simulated in the environment. This paper examines the reasons behind their integration based on the limitations of present immersive projection systems.

Priscilla Ramsamy, Adrian Haffegee, Ronan Jamieson, Vassil Alexandrov
A Novel Navigation Algorithm for Locomotion Interfaces with Programmable Platforms

This paper describes a novel navigation algorithm using a locomotion interface with two 6-DOF parallel robotic manipulators. The suggested navigation system can induce the user's real walking and generate realistic visual feedback during navigation, using robotic manipulators. For realistic visual feedback, the virtual environment is designed with three components: a 3D object modeler for buildings and terrains, a scene manager, and a communication manager component. The walking velocity of the user is directly translated into VR actions for navigation. Finally, the functions of the RPC interface are utilized for each interaction mode. The suggested navigation system allows a user to explore various virtual terrains with real walking and realistic visual feedback.

Jungwon Yoon, Jeha Ryu

Second International Workshop on Bioinformatics Research and Applications (IWBRA06)

Efficient and Practical Algorithms for Deducing the History of Recombination in Populations

A phylogenetic network (or Ancestral Recombination Graph) is a generalization of a tree, allowing structural properties that are not tree-like. With the growth of genomic and population data, much of which does not fit ideal tree models, and the increasing appreciation of the genomic role of such phenomena as recombination (crossing-over and gene-conversion), recurrent and back mutation, horizontal gene transfer, and mobile genetic elements, there is greater need to understand the algorithmics and combinatorics of phylogenetic networks.

Dan Gusfield
Chordal Graphs in Computational Biology – New Insights and Applications

Recent advances in experimental techniques have resulted in the accumulation of vast amounts of information, which is often represented by various types of biological networks. It is therefore not surprising that increasingly complex graph-theoretical tools are being developed to analyze such biological networks and extract biologically meaningful patterns. In this talk I will describe the research done in my group directed towards computational analysis of biological networks.

Teresa M. Przytycka
Exemplar Longest Common Subsequence

In the paper we investigate the computational and approximation complexity of the Exemplar Longest Common Subsequence of a set of sequences (the ELCS problem), a generalization of the Longest Common Subsequence problem, where the input sequences are over the union of two disjoint sets of symbols, a set of mandatory symbols and a set of optional symbols. We show that different versions of the problem are APX-hard even for instances with two sequences. Moreover, we show that the related problem of determining the existence of a feasible solution of the Exemplar Longest Common Subsequence of two sequences is NP-hard. On the positive side, efficient algorithms are given for the ELCS problem over instances of two sequences where each mandatory symbol appears in total at most three times or where the number of mandatory symbols is bounded by a constant.

Paola Bonizzoni, Gianluca Della Vedova, Riccardo Dondi, Guillaume Fertin, Stéphane Vialette
Synonymous Codon Substitution Matrices

Observing differences between DNA or protein sequences and estimating the true amount of substitutions from them is a prominent problem in molecular evolution as many analyses are based on distance measures between biological sequences. Since the relationship between the observed and the actual amount of mutations is very complex, more than four decades of research have been spent to improve molecular distance measures. In this article we present a method called SynPAM which can be used to estimate the amount of synonymous change between sequences of coding DNA. The method is novel in that it is based on an empirical model of codon evolution and that it uses a maximum-likelihood formalism to measure synonymous change in terms of codon substitutions, while reducing the need for assumptions about DNA evolution to an absolute minimum. We compared the SynPAM method with two established methods for measuring synonymous sequence divergence. Our results suggest that this new method not only shows less variance, but is also able to capture weaker phylogenetic signals than the other methods.

Adrian Schneider, Gaston H. Gonnet, Gina M. Cannarozzi
SEPA: Approximate Non-subjective Empirical p-Value Estimation for Nucleotide Sequence Alignment

In the bioinformatics literature, pairwise sequence alignment methods appear with many variations and diverse applications. With this abundance comes not only an emphasis on speed and memory efficiency, but also a need for assigning confidence to the computed alignments through p-value estimation, especially for important segment pairs within an alignment. This paper examines an empirical technique, called SEPA, for approximate p-value estimation based on a statistically large number of observations over randomly generated sequences. Our empirical studies show that the technique remains effective in identifying biological correlations even in sequences of low similarity and large expected gaps, and the experimental results shown here point to many interesting insights and features.
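
The essence of empirical p-value estimation over random sequences, of which SEPA is a refinement, can be sketched as follows (the per-position scoring function below is a deliberately simplistic stand-in, not SEPA's alignment scoring):

# Sketch: the empirical p-value of an observed score is approximated by the
# fraction of randomly generated sequence pairs scoring at least as high.
import random

def score(a, b):
    # Placeholder scorer: counts matching positions; a real study would use
    # the alignment/segment score under consideration.
    return sum(x == y for x, y in zip(a, b))

def empirical_p_value(observed_score, seq_len, trials=10000):
    alphabet = "ACGT"
    hits = 0
    for _ in range(trials):
        a = "".join(random.choice(alphabet) for _ in range(seq_len))
        b = "".join(random.choice(alphabet) for _ in range(seq_len))
        if score(a, b) >= observed_score:
            hits += 1
    return (hits + 1) / (trials + 1)   # pseudocount avoids reporting p = 0

print(empirical_p_value(observed_score=60, seq_len=200))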

Ofer Gill, Bud Mishra
Multiple Sequence Alignment by Ant Colony Optimization and Divide-and-Conquer

Multiple sequence alignment is a common task in molecular biology and bioinformatics. Obtaining an accurate alignment of protein sequences is a difficult computational problem because many heuristic techniques cannot achieve optimality in a reasonable running time. A novel multiple sequence alignment algorithm based on ant colony optimization and a divide-and-conquer technique is proposed. The algorithm divides a set of sequences into several subsections vertically by bisecting the sequences recursively using the ant colony optimization method. We also present two methods that adaptively adjust the parameters and update the pheromones to avoid local optimal traps. Experimental results show that the algorithm can achieve high-quality solutions and significantly reduce the running time.

Yixin Chen, Yi Pan, Juan Chen, Wei Liu, Ling Chen
COMBAT: Search Rapidly for Highly Similar Protein-Coding Sequences Using Bipartite Graph Matching

Comparing vertebrate genomes requires efficient cross-species sequence alignment programs. We describe COMBAT, a new mer-based method which can search rapidly for highly similar translated genomic sequences, using the stable-marriage algorithm with incomplete lists (SMI) as a filter scheme. We apply the COMBAT program to the comparative analysis of the human and the most recent bovine genome assemblies, and 84%–95% of the homologous blocks identified by this program are confirmed by BLASTZ.

Bing Sun, Jacob T. Schwartz, Ofer H. Gill, Bud Mishra
Missing Values Estimation in Microarray Data with Partial Least Squares Regression

Microarray data usually contain missing values; estimating these missing values is thus an important preprocessing step. This paper proposes a method for estimating missing values based on Partial Least Squares (PLS) regression. The method is well suited to microarray data because of the characteristics of PLS regression. We compared our method with three other methods, ROWaverage, KNNimpute and LLSimpute, on different data sets and various missing probabilities. The experimental results show that the proposed method is accurate and robust for estimating missing values.
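
A minimal sketch of the regression step, using scikit-learn's PLSRegression (the paper's gene-similarity selection and other details are omitted, and the demo matrix is invented):

# Sketch: estimate a missing entry X[row, col] by PLS regression of the
# target gene on fully observed genes.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def pls_impute(X, row, col, n_components=2):
    complete = ~np.isnan(X).any(axis=1)          # fully observed gene rows
    obs = ~np.isnan(X[row])                      # columns observed for target
    A = X[complete][:, obs].T                    # (observed cols) x (genes)
    y = X[row, obs]                              # target gene at observed cols
    pls = PLSRegression(n_components=n_components).fit(A, y)
    x_new = X[complete][:, col].reshape(1, -1)   # predictor genes at col
    return float(pls.predict(x_new)[0, 0])

# Tiny demo with one missing value.
X = np.array([[1.0, 2.0, 3.0, 4.0],
              [1.1, 2.1, 3.0, 4.2],
              [0.9, 1.9, np.nan, 3.9]])
print(pls_impute(X, row=2, col=2))   # roughly 2.9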

Kun Yang, Jianzhong Li, Chaokun Wang
Boost Feature Subset Selection: A New Gene Selection Algorithm for Microarray Dataset

Gene selection is usually the crucial first step in microarray data analysis. One class of typical approaches is to calculate a discriminative score using data associated with a single gene; such discriminative scores are then sorted and the top-ranked genes are selected for further analysis. However, such an approach results in a redundant gene set, since it ignores the complex relationships between genes. Recent research in feature subset selection has begun to tackle this problem by limiting the correlations of the selected feature set. In this paper, we propose a novel general framework, BFSS: Boost Feature Subset Selection, to improve the performance of single-gene based discriminative scores using bootstrapping techniques. Features are selected from dynamically adjusted bootstraps of the training dataset. We tested our algorithm on three well-known publicly available microarray data sets from the bioinformatics community. Encouraging results are reported.

Xian Xu, Aidong Zhang
A Hybrid Feature Selection Approach for Microarray Gene Expression Data

Due to the huge number of genes and comparatively small number of samples from microarray gene expression data, accurate classification of diseases becomes challenging. Feature selection techniques can improve the classification accuracy by removing irrelevant and redundant genes. However, the performance of different feature selection algorithms based on different theoretic arguments varies even when they are applied to the same data set. In this paper, we propose a hybrid approach to combine useful outcomes from different feature selection methods through a genetic algorithm. The experimental results demonstrate that our approach can achieve better classification accuracy with a smaller gene subset than each individual feature selection algorithm does.

Feng Tan, Xuezheng Fu, Hao Wang, Yanqing Zhang, Anu Bourgeois
A Self-supervised Learning Framework for Classifying Microarray Gene Expression Data

It is important to develop computational methods that can effectively resolve two intrinsic problems in microarray data: high dimensionality and small sample size. In this paper, we propose a self-supervised learning framework for classifying microarray gene expression data using the Kernel Discriminant-EM (KDEM) algorithm. This framework applies self-supervised learning techniques in an optimal nonlinear discriminating subspace. It efficiently utilizes a large set of unlabeled data to compensate for the insufficiency of a small set of labeled data, and it extends the linear algorithm in DEM to a kernel algorithm to handle nonlinearly separable data in a lower dimensional space. Extensive experiments on Plasmodium falciparum expression profiles show the promising performance of the approach.

Yijuan Lu, Qi Tian, Feng Liu, Maribel Sanchez, Yufeng Wang
Pooling Evidence to Identify Cell Cycle–Regulated Genes

Most biological studies embrace statistical approaches to make inferences. It is common to have several independent experiments testing the same null hypothesis. The goal of research on pooling evidence is to combine the results of these tests to ask whether there is evidence from the collection of studies to reject the null hypothesis. In this study, we evaluated four different pooling techniques (Fisher, Logit, Stouffer and Liptak) for combining the evidence from independent microarray experiments in order to identify cell cycle-regulated genes. We were able to identify a better set of cell cycle-regulated genes using the pooling techniques, based on our benchmark study on the budding yeast (Saccharomyces cerevisiae). Our gene ontology study on time series data of both the budding yeast and the fission yeast (Schizosaccharomyces pombe) showed that GO terms related to the cell cycle are significantly enriched in the cell cycle-regulated genes identified using the pooling techniques.

Gaolin Zheng, Tom Milledge, E. Olusegun George, Giri Narasimhan
Discovering Sequence-Structure Patterns in Proteins with Variable Secondary Structure

Proteins that share a similar function often exhibit conserved sequence patterns. Sequence patterns help to classify proteins into families where the exact function may or may not be known. Research has shown that these domain signatures often exhibit specific three-dimensional structures. We have previously shown that sequence patterns combined with structural information generally have superior discrimination ability compared to those derived without structural information. However, in some cases, divergent backbone configurations and/or variable secondary structure in otherwise well-aligned proteins make the identification of conserved regions of sequence and structure problematic. In this paper, we describe improvements in our method of designing biologically meaningful sequence-structure patterns (SSPs) starting from a seed sequence pattern from any of the existing sequence pattern databases. Improved pattern precision is achieved by including conserved residues from coil regions that are not readily apparent from examination of multiple sequence alignments alone. Pattern recall is improved by systematically comparing the structures of all known true family members and including all the allowable variations in the pattern residues.

Tom Milledge, Gaolin Zheng, Giri Narasimhan
Clustering Support Vector Machines and Its Application to Local Protein Tertiary Structure Prediction

Support Vector Machines (SVMs) are a new generation of machine learning techniques and have shown strong generalization capability for many data mining tasks. SVMs can handle nonlinear classification by implicitly mapping input samples from the input feature space into a high-dimensional feature space with a nonlinear kernel function. However, SVMs are not well suited to huge datasets with millions of samples. Granular computing decomposes information into aggregates and solves the targeted problems within each granule. We therefore propose a novel computational model called Clustering Support Vector Machines (CSVMs) to deal with complex classification problems on huge datasets. Taking advantage of both the theory of granular computing and advanced statistical learning methodology, CSVMs are built specifically for each information granule partitioned intelligently by a clustering algorithm. This makes the learning task for each CSVM more specific and simpler. Moreover, the CSVMs built for the individual granules can easily be parallelized, so that CSVMs can handle huge datasets efficiently. The CSVM model is used for predicting local protein tertiary structure. Compared with the conventional clustering method, the prediction accuracy for local protein tertiary structure is improved noticeably when the new CSVM model is used. The encouraging experimental results indicate that our new computational model opens a new way to solve complex classification problems for huge datasets.
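
The cluster-then-classify idea can be sketched with off-the-shelf components (a toy illustration using k-means and RBF SVMs from scikit-learn, not the paper's granular computing machinery):

# Sketch: partition the data with k-means, then train one SVM per cluster;
# at prediction time, route each sample to its nearest cluster's SVM.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

class ClusteredSVM:
    def __init__(self, n_clusters=4):
        self.km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
        self.svms = {}

    def fit(self, X, y):
        labels = self.km.fit_predict(X)
        for c in np.unique(labels):
            idx = labels == c
            # One small, specialized classifier per information granule;
            # a single-class granule just remembers its class label.
            if len(np.unique(y[idx])) == 1:
                self.svms[c] = int(y[idx][0])
            else:
                self.svms[c] = SVC(kernel="rbf").fit(X[idx], y[idx])
        return self

    def predict(self, X):
        out = np.empty(len(X), dtype=int)
        for i, c in enumerate(self.km.predict(X)):
            m = self.svms[c]
            out[i] = m if isinstance(m, int) else int(m.predict(X[i:i+1])[0])
        return out

# Demo on a toy two-blob dataset.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 2)), rng.normal(5, 1, (40, 2))])
y = np.array([0] * 40 + [1] * 40)
print(ClusteredSVM(n_clusters=2).fit(X, y).predict(X[:3]))

Since the per-granule models train independently, they parallelize trivially, which is the property the abstract exploits.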

Jieyue He, Wei Zhong, Robert Harrison, Phang C. Tai, Yi Pan
Extracting Protein-Protein Interactions from the Literature Using the Hidden Vector State Model

In bioinformatics, a huge amount of the knowledge needed to solve biological problems is locked in textual documents such as scientific publications. Hence there is an increasing focus on extracting information from this vast amount of scientific literature. In this paper, we present an information extraction system which employs a semantic parser using the Hidden Vector State (HVS) model for protein-protein interactions. Unlike other hierarchical parsing models which require fully annotated treebank data for training, the HVS model can be trained using only lightly annotated data whilst simultaneously retaining sufficient ability to capture the hierarchical structure needed to robustly extract task domain semantics. When applied to extracting protein-protein interaction information from the medical literature, it performed better than other established statistical methods, achieving 47.9% recall and 72.8% precision.

Deyu Zhou, Yulan He, Chee Keong Kwoh
A Multilevel Approach to Identify Functional Modules in a Yeast Protein-Protein Interaction Network

Identifying functional modules is believed to reveal most cellular processes. There have been many computational approaches to investigate the underlying biological structures [9, 4, 10, 6]. Spectral clustering methods play a critical role in identifying functional modules in yeast protein-protein networks [6, 4]. We present an unweighted-graph version of a multilevel spectral algorithm which identifies protein complexes more accurately with less computational time.

S. Oliveira, S. C. Seok
Towards Detecting Protein Complexes from Protein Interaction Data

High-throughput methods for detecting protein-protein interactions (PPI) have given researchers an initial global picture of protein interactions on a genomic scale. These interactions connect proteins into a large protein interaction network (PIN). However, both the size of the data sets and the noise in the data pose big challenges in effectively analyzing the data. In this paper, we investigate the problem of protein complex detection, i.e., finding biologically meaningful subsets of proteins, from the noisy protein interaction data. We identify the difficulties and propose a “seed-refine” approach, including a novel subgraph quality measure, an appropriate heuristics for finding good seeds and a novel subgraph refinement method. Our method considers the properties of protein complexes and the noisy interaction data. Experiments show the effectiveness of our method.

Pengjun Pei, Aidong Zhang
High-Throughput SNP Genotyping by SBE/SBH

Despite much progress over the past decade, current Single Nucleotide Polymorphism (SNP) genotyping technologies still offer an insufficient degree of multiplexing when required to handle user-selected sets of SNPs. In this paper we propose a new genotyping assay architecture combining multiplexed solution-phase single-base extension (SBE) reactions with sequencing by hybridization (SBH) using universal DNA arrays such as all k-mer arrays. Our contributions include a study of multiplexing algorithms for SBE/SBH genotyping assays and preliminary experimental results showing the achievable multiplexing rates. Simulation results on datasets both randomly generated and extracted from the NCBI dbSNP database suggest that the SBE/SBH architecture provides a flexible and cost-effective alternative to genotyping assays currently used in the industry, enabling genotyping of up to hundreds of thousands of user-specified SNPs per assay.

Ion I. Măndoiu, Claudia Prăjescu
Tag SNP Selection Based on Multivariate Linear Regression

The search for associations between complex diseases and single nucleotide polymorphisms (SNPs) or haplotypes has recently received great attention. For these studies, it is essential to use a small subset of informative SNPs (tag SNPs) accurately representing the rest of the SNPs. Tagging can achieve budget savings by genotyping only a limited number of SNPs and computationally inferring all other SNPs, and it allows compaction of extremely long SNP sequences (obtained, e.g., from the Affymetrix Map Array) for further fine genotype analysis. Tagging should first choose tags from the SNPs under consideration and then, knowing the values of the chosen tag SNPs, predict (or statistically cover) the non-tag SNPs. In this paper we propose a new SNP prediction method based on rounding of multivariate linear regression (MLR) analysis in sigma-restricted coding. When predicting a non-tag SNP, the MLR method accumulates information about all tag SNPs, resulting in significantly higher prediction accuracy with the same number of tags than the previously known tagging methods. We also show that tag selection strongly depends on how the chosen tags will be used: the advantage of one tag set over another can only be considered with respect to a certain prediction method. Two simple universal tag selection methods have been applied: a (faster) stepwise and a (slower) local-minimization tag selection algorithm. An extensive experimental study on various datasets including 6 regions from HapMap shows that MLR prediction combined with stepwise tag selection uses significantly fewer tags (e.g., up to two times fewer tags to reach 90% prediction accuracy) than the state-of-the-art methods of Halperin et al. [8] for genotypes and Halldorsson et al. [7] for haplotypes, respectively. Our stepwise tagging matches the quality of STAMPA [8] while being faster. The code is publicly available at http://alla.cs.gsu.edu/~software.
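
The core "MLR plus rounding" prediction step can be illustrated as follows (a sketch only; the -1/0/1 genotype coding stands in for the paper's sigma-restricted coding, and the tag-selection loop is omitted):

# Sketch: predict a non-tag SNP as a rounded multivariate linear regression
# on the tag SNPs. Genotypes are coded -1/0/1.
import numpy as np

def mlr_predict(tags_train, target_train, tags_new):
    """tags_train: (samples x tags), target_train: (samples,)."""
    A = np.hstack([tags_train, np.ones((len(tags_train), 1))])  # + intercept
    coef, *_ = np.linalg.lstsq(A, target_train, rcond=None)
    pred = np.hstack([tags_new, np.ones((len(tags_new), 1))]) @ coef
    # Round each real-valued prediction to the nearest valid genotype value.
    values = np.array([-1.0, 0.0, 1.0])
    return values[np.abs(pred[:, None] - values[None, :]).argmin(axis=1)]

# Toy example: the target SNP nearly equals the first tag SNP.
tags = np.array([[1, -1], [0, 1], [-1, -1], [1, 1], [0, 0]], float)
target = np.array([1, 0, -1, 1, 0], float)
print(mlr_predict(tags, target, np.array([[-1.0, 1.0]])))  # [-1.]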

Jingwu He, Alex Zelikovsky
Minimum Multicolored Subgraph Problem in Multiplex PCR Primer Set Selection and Population Haplotyping

In this paper we consider the minimum weight multicolored subgraph problem (MWMCSP), which is a common generalization of minimum cost multiplex PCR primer set selection and maximum likelihood population haplotyping. In this problem one is given an undirected graph G with non-negative vertex weights and a color function that assigns to each edge one or more of n given colors, and the goal is to find a minimum weight set of vertices inducing edges of all n colors. We obtain improved approximation algorithms and hardness results for MWMCSP and for its variant in which the goal is to find a minimum number of vertices inducing edges of at least k colors for a given integer k ≤ n.

M. T. Hajiaghayi, K. Jain, L. C. Lau, I. I. Măndoiu, A. Russell, V. V. Vazirani
Phasing of 2-SNP Genotypes Based on Non-random Mating Model

Emerging microarray technologies allow genotyping of long genome sequences, resulting in huge amounts of data. A key challenge is to provide an accurate phasing of very long single nucleotide polymorphism (SNP) sequences. In this paper we explore phasing of genotypes with 2 SNPs adjusted to the non-random mating model and then apply it to the haplotype inference of complete genotypes using maximum spanning trees. The runtime of the algorithm is O(nm(n+m)), where n and m are the number of genotypes and SNPs, respectively. The proposed phasing algorithm (2SNP) can be used for comparatively accurate phasing of large numbers of very long genome sequences. On datasets across 79 regions from HapMap [7], 2SNP is several orders of magnitude faster than GERBIL and PHASE while matching them in quality, measured by the number of correctly phased genotypes and by single-site and switching errors. For example, 2SNP requires 41 s on a 2 GHz Pentium 4 processor to phase 30 genotypes with 1381 SNPs (the ENm010.7p15:2 data from HapMap), versus GERBIL and PHASE requiring more than a week of runtime and admitting no fewer errors than 2SNP. The 2SNP software is publicly available at http://alla.cs.gsu.edu/~software/2SNP.

Dumitru Brinza, Alexander Zelikovsky
Event Models for Tumor Classification with SAGE Gene Expression Data

Serial Analysis of Gene Expression (SAGE) is a relatively new method for monitoring gene expression levels and is expected to contribute significantly to the progress in cancer treatment by enabling a precise and early diagnosis. A promising application of SAGE gene expression data is classification of tumors. In this paper, we build three event models (the multivariate Bernoulli model, the multinomial model and the normalized multinomial model) for SAGE data classification. Both binary classification and multicategory classification are investigated. Experiments on two SAGE datasets show that the multivariate Bernoulli model performs well with small feature sizes, but the multinomial performs better at large feature sizes, while the normalized multinomial performs well with medium feature sizes. The multinomial achieves the highest overall accuracy.
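
The multivariate Bernoulli and multinomial event models correspond to the two classic naive Bayes variants; a hedged sketch with scikit-learn on an invented tag-count matrix (the paper's normalized multinomial variant is omitted):

# Sketch: compare Bernoulli vs. multinomial event models on tag-count data.
# Rows are SAGE libraries, columns are tag counts; labels are tumor classes.
import numpy as np
from sklearn.naive_bayes import BernoulliNB, MultinomialNB

X = np.array([[5, 0, 2, 9],      # toy tag-count matrix (4 tags)
              [4, 1, 0, 7],
              [0, 6, 3, 1],
              [1, 8, 2, 0]])
y = np.array([0, 0, 1, 1])       # tumor class labels

# Multinomial model: uses the counts themselves.
print(MultinomialNB().fit(X, y).predict([[3, 1, 1, 6]]))   # -> [0]
# Bernoulli model: only presence/absence of each tag (counts binarized).
print(BernoulliNB(binarize=0.5).fit(X, y).predict([[3, 1, 1, 6]]))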

Xin Jin, Anbang Xu, Guoxing Zhao, Jixin Ma, Rongfang Bie
Genomes Containing Duplicates Are Hard to Compare

In this paper, we are interested in the algorithmic complexity of computing (dis)similarity measures between two genomes when they contain duplicated genes. In that case, there are usually two main ways to compute a given (dis)similarity measure M between two genomes G1 and G2: the first model, which we call the matching model, consists in computing a one-to-one correspondence between genes of G1 and genes of G2, in such a way that M is optimized in the resulting permutation. The second model, called the exemplar model, consists in keeping in G1 (resp. G2) exactly one copy of each gene, thus deleting all the other copies, in such a way that M is optimized in the resulting permutation. We present here different results concerning the algorithmic complexity of computing three different similarity measures (number of common intervals, MAD number and SAD number) in those two models, basically showing that the problem becomes NP-complete for each of them as soon as genomes contain duplicates. In the case of MAD and SAD, we actually prove that, under both models, both problems are APX-hard.

Cedric Chauve, Guillaume Fertin, Romeo Rizzi, Stéphane Vialette
Rearrangement of Noisy Genomes

Measures of distance between genomic maps are inflated by high levels of noise due to incorrectly resolved paralogy and error at the mapping, sequencing and alignment levels. Comparison is also hampered by lack of information on gene orientation and gene order. We suggest a suite of algorithms for genome rearrangement analysis in the presence of noise and incomplete information, and test its robustness as noise levels increase.

Chunfang Zheng, David Sankoff
Simple Reconstruction of Binary Near-Perfect Phylogenetic Trees

We consider the problem of reconstructing near-perfect phylogenetic trees using binary character states (referred to as BNPP). A perfect phylogeny assumes that every character mutates at most once in the evolutionary tree, yielding an algorithm for binary character states that is computationally efficient but not robust to imperfections in real data. A near-perfect phylogeny relaxes the perfect phylogeny assumption by allowing at most a constant number q of additional mutations. In this paper, we develop an algorithm for constructing optimal phylogenies and provide empirical evidence of its performance. The algorithm runs in time O((72κ)^q nm + nm²), where n is the number of taxa, m is the number of characters and κ is the number of characters that share four gametes with some other character. This is fixed-parameter tractable when q and κ are constants and significantly improves on the previous asymptotic bounds by reducing the exponent to q. Furthermore, the complexity of the previous work makes it impractical, and in fact no known implementation of it exists. We implement our algorithm and demonstrate it on a selection of real data sets, showing that it substantially outperforms its worst-case bounds and yields far superior results to a commonly used heuristic method in at least one case. Our results therefore describe the first practical phylogenetic tree reconstruction algorithm that finds guaranteed optimal solutions while being easily implemented and computationally feasible for data sets of biologically meaningful size and complexity.
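
The four-gamete condition behind κ is easy to state in code (a sketch; characters are columns of a 0/1 taxa-by-characters matrix):

# Sketch: two binary characters "share four gametes" when all four
# combinations 00, 01, 10, 11 occur among the taxa; such pairs are what
# force mutations beyond a perfect phylogeny.
def four_gamete_conflict(col_a, col_b):
    return {(a, b) for a, b in zip(col_a, col_b)} == {(0, 0), (0, 1), (1, 0), (1, 1)}

matrix = [[0, 0],
          [0, 1],
          [1, 0],
          [1, 1]]   # these two characters conflict
cols = list(zip(*matrix))
print(four_gamete_conflict(cols[0], cols[1]))   # True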

Srinath Sridhar, Kedar Dhamdhere, Guy E. Blelloch, Eran Halperin, R. Ravi, Russell Schwartz
Reconstructing Ancestor-Descendant Lineages from Serially-Sampled Data: A Comparison Study

The recent accumulation of serially-sampled viral sequences in public databases attests to the need for development of algorithms that infer phylogenetic relationships among such data with the goal of elucidating patterns and processes of viral evolution. Phylogenetic methods are typically applied to contemporaneous taxa, and result in the taxa being placed at the tips or leaves of the tree. In a serial sampling scenario an evolutionary framework may offer a more meaningful alternative in which the rise, persistence, and extinction of different viral lineages is readily observable. Recently, algorithms have been developed to study such data. We evaluate the performance of 5 different methods in correctly inferring ancestor-descendant relationships by using empirical and simulated sequence data. Our results suggest that for inferring ancestor-descendant relationships among serially-sampled taxa, the MinPD program is an accurate and efficient method, and that traditional ML methods, while marginally more accurate, are far less efficient.

Patricia Buendia, Timothy M. Collins, Giri Narasimhan
Robustness of Greedy Type Minimum Evolution Algorithms

For the phylogeny reconstruction problem, Desper and Gascuel [2] proposed the Greedy Minimum Evolution algorithm (GME) and the Balanced Minimum Evolution algorithm (BME). Both are faster than the current major algorithm, Neighbor Joining (NJ), but less accurate when the input distance matrix has errors. In this paper, we prove that BME has the same optimal robustness to such errors as NJ but GME does not. Precisely, we prove that if the maximum distance error is less than half of the minimum edge length of the target tree, then BME reconstructs it correctly. On the other hand, there is a distance matrix whose maximum distance error is less than 2/√n of the minimum edge length of the target tree, for which GME fails to reconstruct the target tree.

Takeya Shigezumi
Space and Time Efficient Algorithms for Planted Motif Search

We consider the (l, d) Planted Motif Search Problem, a problem that arises from the need to find transcription factor binding sites in genomic information. We propose the algorithms PMSi and PMSP, which are based on ideas from PMS1 [10]. These algorithms are exact, use less space than known exact algorithms such as PMS, and are able to tackle instances with large values of d. In particular, algorithm PMSP is able to solve the challenging instance (17,6), which has not previously been reported as solved in the literature.
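
For orientation, an (l, d) planted motif is an l-mer that occurs in every input sequence within Hamming distance d; a naive checker (just the problem definition in code, nothing like PMSP's search strategy) looks like:

# Sketch: verify whether a candidate l-mer is an (l, d) planted motif,
# i.e. occurs in every sequence within Hamming distance d.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def occurs_within_d(motif, seq, d):
    l = len(motif)
    return any(hamming(motif, seq[i:i + l]) <= d for i in range(len(seq) - l + 1))

def is_planted_motif(motif, sequences, d):
    return all(occurs_within_d(motif, s, d) for s in sequences)

seqs = ["ACGTACGTACGT", "TTACGAACGTGG", "GGACGTTCGTAA"]
print(is_planted_motif("ACGTACGT", seqs, d=2))   # True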

Jaime Davila, Sudha Balla, Sanguthevar Rajasekaran
Predictability of Rules in HIV-1 Protease Cleavage Site Analysis

Symbolic rules play an important role in HIV-1 protease cleavage site prediction. Recently, some studies on extracting prediction rules have been carried out with some success. In this paper, we demonstrate a decompositional approach for rule extraction from nonlinear neural networks. We also compare the extracted prediction rules to those obtained by other approaches and methods. Empirical experiments are also presented.

Hyeoncheol Kim, Tae-Sun Yoon, Yiying Zhang, Anupam Dikshit, Su-Shing Chen
Statistical Feature Selection from Chaos Game Representation for Promoter Recognition

The accuracy of promoter recognition depends not only upon an appropriate representation of the promoter sequence but also upon the essential features of the sequence. These two important issues are addressed in this paper. First, a promoter sequence is captured in the form of a Chaos Game Representation (CGR). Then, based on the concept of Mahalanobis distance, a new statistical feature extraction is introduced to select a set of the most significant pixels from the CGR. The recognition is performed by a supervised neural network. The proposed technique achieved 100% accuracy when tested on the E. coli promoter sequences using a leave-one-out method. Our approach also outperforms other techniques.
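
The CGR construction itself is simple (a sketch; the corner assignment below is one common convention):

# Sketch: Chaos Game Representation of a DNA sequence. Each nucleotide is a
# corner of the unit square; each point is the midpoint between the previous
# point and the current nucleotide's corner.
CORNERS = {"A": (0.0, 0.0), "C": (0.0, 1.0), "G": (1.0, 1.0), "T": (1.0, 0.0)}

def cgr_points(seq):
    x, y = 0.5, 0.5                      # start at the center
    pts = []
    for base in seq:
        cx, cy = CORNERS[base]
        x, y = (x + cx) / 2, (y + cy) / 2
        pts.append((x, y))
    return pts

print(cgr_points("ACGT")[:2])   # [(0.25, 0.25), (0.125, 0.625)]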

Orawan Tinnungwattana, Chidchanok Lursinsap
Blue Matter: Strong Scaling of Molecular Dynamics on Blue Gene/L

This paper presents strong scaling performance data for the Blue Matter molecular dynamics framework using a novel n-body spatial decomposition and a collective communications technique implemented on both MPI and low level hardware interfaces. Using Blue Matter on Blue Gene/L, we have measured scalability through 16,384 nodes with measured time per time-step of under 2.3 milliseconds for a 43,222 atom protein/lipid system. This is equivalent to a simulation rate of over 76 nanoseconds per day and represents an unprecedented time-to-solution for biomolecular simulation as well as continued speed-up to fewer than three atoms per node. On a smaller, solvated lipid system with 13,758 atoms, we have achieved continued speedups through fewer than one atom per node and less than 2 milliseconds/time-step. On a 92,224 atom system, we have achieved floating point performance of over 1.8 TeraFlops/second on 16,384 nodes. Strong scaling of fixed-size classical molecular dynamics of biological systems to large numbers of nodes is necessary to extend the simulation time to the scale required to make contact with experimental data and derive biologically relevant insights.

Blake G. Fitch, Aleksandr Rayshubskiy, Maria Eleftheriou, T. J. Christopher Ward, Mark Giampapa, Yuri Zhestkov, Michael C. Pitman, Frank Suits, Alan Grossfield, Jed Pitera, William Swope, Ruhong Zhou, Scott Feller, Robert S. Germain
DigitalTree: A Tool for Displaying Biological Data in Tree Structure

We developed a computational tool, DigitalTree, for displaying a two-dimensional weighted tree of biological data. Existing methods (Radial Layout, Hyperbolic Layout, Phyllotactic Placement, and Force-Directing) were integrated into DigitalTree. DigitalTree preserves the distances between nodes in the tree and the tree's topology. The tool has been applied in analyzing microarray data and phylogenetic trees. DigitalTree is available for download at http://digbio.missouri.edu/~digtree.

Robin Kramer, Victor Olman, Ying Xu, Dong Xu
HiSP: A Probabilistic Data Mining Technique for Protein Classification

In this work, we propose a new computational technique to solve the protein classification problem. The goal is to predict the functional family of novel protein sequences based on their motif composition. In order to improve the results obtained with other known approaches, we propose a new data mining technique for protein classification based on Bayes’ theorem, called Highest Subset Probability (HiSP). To evaluate our proposal, datasets extracted from Prosite, a curated protein family database, are used as experimental datasets. The computational results have shown that the proposed method outperforms other known methods for all tested datasets and looks very promising for problems with characteristics similar to the problem addressed here.

Luiz Merschmann, Alexandre Plastino
Cross-Ontological Analytics: Combining Associative and Hierarchical Relations in the Gene Ontologies to Assess Gene Product Similarity

Gene and gene product similarity is a fundamental diagnostic measure in analyzing biological data and constructing predictive models for functional genomics. With the rising influence of the gene ontologies, two complementary approaches have emerged where the similarity between two genes/gene products is obtained by comparing gene ontology (GO) annotations associated with the gene/gene products. One approach captures GO-based similarity in terms of hierarchical relations within each gene ontology. The other approach identifies GO-based similarity in terms of associative relations across the three gene ontologies. We propose a novel methodology where the two approaches can be merged with ensuing benefits in coverage and accuracy.

C. Posse, A. Sanfilippo, B. Gopalan, R. Riensche, N. Beagley, B. Baddeley
A GO-Based Method for Assessing the Biological Plausibility of Regulatory Hypotheses

Many algorithms have been proposed for deriving regulatory networks from microarray gene expression data. The performance of such algorithms is often measured by how well the resulting network can recreate the gene expression data that it was derived from. However, this kind of performance does not necessarily mean that the regulatory hypotheses in the network are biologically plausible. We therefore propose a method for assessing the biological plausibility of regulatory hypotheses using prior knowledge in the form of regulatory pathway databases and Gene Ontology-based annotation of gene products. A set of templates is derived by generalising from known interactions to typical properties of interacting gene product pairs. By searching for matches in this set of templates, the plausibility of regulatory hypotheses can be assessed. We evaluate to what degree the collection of templates can separate true from false positive interactions, and we illustrate the practical use of the method by applying it to an example network reconstruction problem.

Jonas Gamalielsson, Patric Nilsson, Björn Olsson
Delays in Biological Regulatory Networks (BRN)

In this article, we propose a refinement of the modeling of genetic regulatory networks based on the approach of René Thomas. The notion of delays of activation/inhibition is added in order to specify which variable is affected first by a change of its regulators. The formalism of linear hybrid automata is well suited to such a refinement. We then use HyTech for two purposes: (1) to find automatically all paths from a specified initial state to another one, and (2) to synthesize constraints on the delay parameters in order to follow any specific path.

Jamil Ahmad, Adrien Richard, Gilles Bernot, Jean-Paul Comet, Olivier Roux
Phase Transitions in Gene Knockdown Networks of Transitive RNAi

Since gene silencing by RNA interference (RNAi) has been observed when inexact matches exist in siRNA-mRNA binding, the number of mismatched nucleotides allowed by nature becomes an important quantity in characterizing RNAi specificity. We use scale-free graphs to model the knockdown interactions among different genes and estimate the allowable flexibility by examining transitive RNAi, which amplifies siRNA and cyclically silences targets. Simulation results in S. pombe indicate that continually increasing the number of mismatches risks transcriptome-wide knockdown and eventually turns RNAi from defensive to self-destructive. At the phase transition, the number of mismatches indicates a critical value beyond which tRNAi would render an organism unstable. This critical value suggests an upper limit of no more than 6 nt mismatches in the binding.

Shibin Qiu, Terran Lane

Third International Workshop on Practical Aspects of High-Level Parallel Programming (PAPP 2006)

Compile-Time Energy Optimization for Parallel Applications in On-Chip Multiprocessors

Energy consumption is becoming one of the key optimization objectives in on-chip multiprocessors. We are concerned with minimizing energy consumption without loss of parallel performance. In this paper, we focus on a DVFS-enabled on-chip multiprocessor architecture, which allows dynamically adjusting each processor's voltage/frequency or shutting down unused processors to obtain energy savings. A detailed analytical model is provided and validated by experiments. Experimental results show that the energy saving can be up to 10.34% without performance loss.
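
The intuition behind such DVFS models can be sketched with a back-of-the-envelope calculation (illustrative constants and a linear voltage-frequency relation are assumed; this is not the paper's analytical model):

# Sketch: dynamic CMOS energy is roughly E = C * V^2 * f * t, with f scaling
# roughly linearly with V. A processor with slack can run slower and still
# meet the parallel deadline; energy then falls quadratically with frequency.
def dynamic_energy(work_cycles, freq_hz, volt_per_hz=1e-9, cap=1e-9):
    t = work_cycles / freq_hz           # execution time at this frequency
    v = volt_per_hz * freq_hz           # assumed linear V-f relation
    return cap * v**2 * freq_hz * t     # equals cap * v^2 * work_cycles

f_max = 1e9
work = 5e8                              # this processor is only 50% loaded
deadline = work / (0.5 * f_max)         # slack lets it run at half speed
e_fast = dynamic_energy(work, f_max)
e_slow = dynamic_energy(work, work / deadline)
print(e_fast, e_slow, e_slow / e_fast)  # ratio 0.25: V^2 quadratic saving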

Juan Chen, Huizhan Yi, Xuejun Yang, Liang Qian
Using SBASCO to Solve Reaction-Diffusion Equations in Two-Dimensional Irregular Domains

The SBASCO programming environment provides the developer of parallel and distributed applications with high-level programming capabilities. This is achieved as a result of the combination of two technologies: algorithmic skeletons and software components. This paper is a case study on the use of SBASCO. Specifically, we present a scientific application to study the propagation of reaction waves in two-dimensional irregular domains which can be divided into overlapping rectangular regions. Domain decomposition techniques are used to solve a system of two non-linear reaction-diffusion equations. The structure of the application is established by means of a high-level skeleton, which captures all the communication and synchronization details that take place in parallel component interaction, thereby releasing the programmer from coding them. In addition, the use of software components facilitates the development process and allows the creation of more flexible and adaptable software.

Manuel Díaz, Sergio Romero, Bartolomé Rubio, Enrique Soler, José M. Troya
Higher Order Flattening

We extend the flattening transformation, which turns nested into flat data parallelism, to the full higher-order case, including lambda abstractions and data-parallel arrays of functions. Our central observation is that flattening needs to transform the closures used to represent functional values. Thus, we use closure conversion before flattening and introduce array closures to represent arrays of functional values.

Roman Leshchinskiy, Manuel M. T. Chakravarty, Gabriele Keller
Combining Measurement and Stochastic Modelling to Enhance Scheduling Decisions for a Parallel Mean Value Analysis Algorithm

In this paper we apply the high-level modelling language PEPA to the performance analysis of a parallel program with a pipeline skeleton which computes the Mean Value Analysis (MVA) algorithm for queueing networks.

Gagarine Yaikhom, Murray Cole, Stephen Gilmore
Joint Structured/Unstructured Parallelism Exploitation in muskel

Structured parallel programming promises to raise the level of abstraction perceived by programmers when implementing parallel applications. At the same time, however, it restricts the freedom of programmers to implement arbitrary parallelism exploitation patterns. In this work we discuss a data-flow implementation methodology for skeleton-based structured parallel programming environments that easily integrates arbitrary, user-defined parallelism exploitation patterns while preserving most of the benefits typical of structured parallel programming models.

M. Danelutto, P. Dazzi
Co-Array Collectives: Refined Semantics for Co-Array Fortran

Co-array notation provides a compact syntax for writing parallel programs. Co-array Fortran (CAF) introduced and implements this notation, and CAF is currently proposed as an extension to the Fortran language standard. We believe that co-array notation requires a revised semantic definition beyond that specified by CAF, both for pragmatic reasons within Fortran and to make the notation attractive for incorporation into other programming languages. The revised semantics make the language model easier to understand and reduce the potential for programmer error. Furthermore, these revised semantics allow CAF to be extended to capture collective operations in co-array notation.

Matthew J. Sottile, Craig E Rasmussen, Richard L. Graham
An Approach to Buffer Management in Java HPC Messaging

One of the most challenging aspects to designing a Java messaging system for HPC is the intermediate buffering layer. The lower and higher levels of the messaging software use this buffering layer to write and read messages. The Java New I/O package adds the concept of direct buffers, which—coupled with a memory management algorithm—opens the possibility of efficiently implementing this buffering layer. In this paper, we present our buffering strategy, which is developed to support efficient communications and derived datatypes in MPJ Express—our implementation of the Java MPI API. We evaluate the performance of our buffering layer and demonstrate the usefulness of direct byte buffers.

Mark Baker, Bryan Carpenter, Aamir Shafi

Wireless and Mobile Systems

A Low Complexity and Robust Frequency Offset Estimation Algorithm for OFDM-Based WLAN Systems

This paper addresses the frequency offset estimation problem in the presence of timing error for OFDM-based WLAN systems. When a timing error exists, the correlation value used for the frequency offset estimation can be reduced significantly, resulting in considerable degradation in estimation performance. In this paper, using the coherence phase bandwidth (CPB) and a threshold, a novel frequency offset estimation algorithm is proposed, based on which an efficient timing error estimation algorithm is also proposed for the re-estimation of the frequency offset.

Sanghun Kim, Seokho Yoon, Hyoung-Kee Choi, Sun Yong Kim
Simplified Signal Detection for BLAST Architecture with ML and DFE Detectors

In this paper, a simplified signal detection technique is proposed for V-BLAST systems. In the proposed scheme, since the performance of a V-BLAST system depends on the detection capability for the first sub-stream, V probable streams are detected according to the first detected sub-stream of the DFE detector, and the most probable stream is selected by a likelihood test. To reduce the computational complexity of the DFE detector, a simple DFE detection scheme is also proposed. It is shown that the proposed technique can detect the transmitted data more accurately than the conventional DFE decoding scheme, and has much lower complexity than the ML detector.

Myung-Sun Baek, Byung-Su Kang, So-Young Yeo, Young-Hwan You, Hyoung-Kyu Song
Scenario Decomposition Based Analysis of Next Generation Mobile Services

Next-generation mobile services have not yet been studied actively because there is no clear definition of what next-generation mobile communication and its services will be. In this paper, we propose a service analysis framework for eliciting key service functionalities and core technologies from various future mobile service scenarios. The proposed analysis framework, called the scenario based service analysis process (2SAP), consists of 4 sequential analysis steps: a scenario transformation, a scenario decomposition, an analysis of functions for each situation, and an elicitation of service functionalities.

Dongchun Shin, Jinbae Kim, Seungwan Ryu, Donsung Oh, Joowan Lee, Minhyung Kang
A Power Saving Scheme for Integrated WLAN and Cellular Networks

For the integration of WLAN and cellular networks, there have recently been several efforts to provide an optimized system design. However, in the existing schemes, the WLAN interface in idle state must periodically be turned on to receive beacons from access points, while the cellular interface is turned on at the same time for paging from the base station, resulting in significant power consumption. Therefore, in order to save power, we propose to turn the WLAN interface off when it goes to idle state, without any periodic wake-up, while the existing paging of the cellular network is utilized. It is confirmed via both analytical modeling and simulation that the proposed scheme achieves better performance in terms of power consumption than techniques currently adopted in WLAN.

SuKyoung Lee
The Optimal Subchannel and Bit Allocation Problem for OFDM

The advantages of orthogonal frequency division multiplexing (OFDM) are high spectral efficiency, resiliency to RF interference, and lower multi-path distortion. To further utilize the vast channel capacity of multiuser OFDM, one has to find an efficient adaptive subchannel and bit allocation among users. In this paper, we propose a 0-1 integer programming model formulating the optimal subchannel and bit allocation problem for multiuser OFDM. We provide an efficient Lagrangian dual formulation with a fast solution approach. Simulation results are provided to show the effectiveness of the 0-1 integer programming model. A MATLAB simulation of a system employing M-ary quadrature amplitude modulation (MQAM), assuming a frequency-selective channel consisting of five independent Rayleigh multipaths, is carried out with the optimal subchannel and bit allocation solution generated by the 0-1 integer programming model.
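
The flavor of the 0-1 formulation can be shown on a toy instance (a brute-force sketch for intuition only; the invented power table stands in for values derived from channel gains and the MQAM BER target, and real instances are solved via integer programming with the Lagrangian dual):

# Sketch: the 0-1 assignment in miniature. Each subchannel gets exactly one
# (user, bits) pair; minimize total transmit power subject to each user's
# rate requirement.
from itertools import product

users, bits_options = [0, 1], [0, 2, 4]        # bits per subchannel symbol
rate_req = {0: 4, 1: 2}                        # bits required per user
# power[(u, s, b)]: power for user u to send b bits on subchannel s.
power = {(u, s, b): (2**b - 1) / gain
         for (u, s), gain in {(0, 0): 2.0, (0, 1): 0.5,
                              (1, 0): 1.0, (1, 1): 1.5}.items()
         for b in bits_options}

best = None
for assign in product([(u, b) for u in users for b in bits_options], repeat=2):
    rate = {u: 0 for u in users}
    cost = 0.0
    for s, (u, b) in enumerate(assign):
        rate[u] += b
        cost += power[(u, s, b)]
    if all(rate[u] >= rate_req[u] for u in users):
        if best is None or cost < best[0]:
            best = (cost, assign)
print(best)   # (9.5, ((0, 4), (1, 2))): user 0 on subchannel 0 at 4 bits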

Taehyung Park, Sungbin Im
Bluetooth Broadcasting Performance: Reliability and Throughput

This paper studies the performance of the Bluetooth broadcasting scheme. The transmission of a Bluetooth broadcast packet is repeated several times to increase the reliability of the broadcast. We analyze the effects of baseband ACL packet type preference on the broadcast performance, in terms of reliability and effective throughput, over different channel characteristics (i.e., bit error rate). As a result of our analysis, we determine the optimal packet type and retransmission count combinations that provide the highest effective throughput for various practical BER ranges. These results can be used at the Bluetooth baseband layer to dynamically adapt to varying channel conditions and achieve good broadcast performance.
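
The reliability/throughput trade-off can be sketched numerically (illustrative formulas assuming independent bit errors and an unprotected DH1-style 27-byte payload; the paper's analysis also covers other packet types):

# Sketch: reliability vs. throughput when a broadcast packet is repeated N
# times. A copy survives if all its payload bits do; the broadcast is
# received if any of the N copies survives.
def packet_success(ber, payload_bits):
    return (1.0 - ber) ** payload_bits

def broadcast_reliability(ber, payload_bits, repeats):
    return 1.0 - (1.0 - packet_success(ber, payload_bits)) ** repeats

def effective_throughput(ber, payload_bits, repeats, slot_s=625e-6):
    # Expected useful payload bits delivered per second of air time spent.
    airtime = repeats * slot_s                 # one slot per DH1-style copy
    return broadcast_reliability(ber, payload_bits, repeats) * payload_bits / airtime

for n in (1, 2, 4):
    print(n, broadcast_reliability(1e-4, 216, n),
          effective_throughput(1e-4, 216, n))

More repetitions raise reliability but divide the usable air time, which is why an optimal repetition count exists for each BER range.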

Kaan Dogan, Guray Gurel, A. Kerim Kamci, Ibrahim Korpeoglu
An Optimized Two Factor Authenticated Key Exchange Protocol in PWLANs

Recently, Park and Park proposed a new two-factor authenticated key exchange (PP-TAKE) protocol that can be applied to low-power PDAs in Public Wireless LANs (PWLANs). The current paper proposes an efficient TAKE protocol based on the PP-TAKE protocol. The computational cost of the proposed protocol is lower than that of PP-TAKE, and it needs one communication step fewer, requiring only three steps.

Eun-Jun Yoon, Kee-Young Yoo
Adaptive Clustering with Virtual Subnets Support in Ad Hoc Networks

This paper concerns how the virtual subnet mechanism can be combined with a hierarchical architecture in ad hoc networks. Virtual subnets, implemented at the data link layer, can prevent broadcast storms and further improve the efficiency of traffic between members of a specific group in ad hoc networks. We propose a convenient method, termed CVA (Clustering with Virtual subnets support Algorithm), for selecting the crucial backbone nodes that filter frames within a virtual subnet against unnecessary flooding. The chosen nodes are of two types, station nodes and port nodes, which act functionally as a switch and as the ports of a switch, respectively, to filter frames. We evaluate CVA with the metrics of the average number of backbone nodes and the duration of the protocols. The simulation results show proper performance while supporting virtual subnets in ad hoc networks.

Tzu-Chiang Chiang, Ming-Hui Tsai, Yueh-Min Huang
A Service Management Architecture for NEMO in IPv4 and IPv6 Networks

The current designs of the Mobile IPv4 and Mobile IPv6 protocols do not provide compatibility between networks of different IP versions, and the Network Mobility (NEMO) protocol likewise provides mobility only within IPv6 networks. However, we can easily expect current IPv4 networks to coexist with IPv6 networks in the near-future Internet environment. In this paper, we therefore propose an architecture for mobility between networks of different versions that supports the NEMO basic protocol. The new mechanism manages mobility through a Tunnel Agent (TA), and the TA maintains a NEMO tunnel when a Mobile Router (MR) moves from an IPv6 to an IPv4 network.

Jin Ho Kim, Choong Seon Hong, Dae Sun Kim
Overlapped Detection Via Approximate Entropy Estimation Against Flooding Attack in Mobile Sensor Networks

To achieve security in sensor networks, it is important to be able to defend against flooding attacks, which have recently come to be considered extremely threatening. In this paper, we propose a flooding attack detection method, as a first line of defense, based on approximate entropy estimation that reflects the resource constraints of sensors. To cope with the mobility of attack nodes, each detector performs both a basic estimation for its own region and an overlapped estimation covering its own and neighboring regions. In addition, to keep detection accurate under various deployments of attack agents, detectors are deployed hierarchically according to the network topology. The entropy estimator is simplified to use only multiplication instead of logarithms, while providing higher estimation precision than conventional entropy estimation. Our simulation results indicate that this hierarchical defense is feasible and is especially promising for accurate decisions through overlapped detection, even under frequent handoffs of mobile attack agents.
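The abstract does not spell out the multiplication-only estimator, so as an illustrative stand-in the sketch below uses the quadratic (Tsallis order-2, or Gini) index, one well-known logarithm-free entropy surrogate: it is maximal for balanced traffic and drops toward zero when a single source dominates, as in a flooding attack. The detection threshold is likewise an assumption.

```python
# Hedged sketch: a logarithm-free entropy surrogate for lightweight
# flooding detection. This is NOT the paper's estimator; it illustrates
# the general idea of detecting a dominant traffic source with only
# multiplications, using the quadratic index sum p_i * (1 - p_i).

from collections import Counter

def quadratic_entropy(events):
    """events: iterable of packet source IDs observed in a time window."""
    counts = Counter(events)
    total = sum(counts.values())
    return sum((c / total) * (1 - c / total) for c in counts.values())

def is_flooding(events, threshold=0.2):
    # A value near zero means one source dominates the window;
    # the threshold value here is an illustrative assumption.
    return quadratic_entropy(events) < threshold

if __name__ == "__main__":
    normal = ["n1", "n2", "n3", "n4"] * 25          # balanced traffic
    attack = ["attacker"] * 95 + ["n1", "n2"] * 2   # one dominant source
    print(is_flooding(normal), is_flooding(attack))  # False True
```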

Mihui Kim, Kijoon Chae
Implementation of Next Generation Mobile Service: The Context-Aware Follow-Me Service

With the advent of mobile communication technologies, next-generation mobile services are evolving toward user-centric services, in which each user gets a personalized mobile service based on his or her needs. In this paper, we propose the context-aware follow-me (CAF) service as a prototype next-generation user-centric mobile service. The CAF service is designed to provide an optimized service based on various kinds of context information, such as the user's location and preferences, the capabilities of devices located around the user, and the characteristics of the services in use. We also present an example CAF service scenario, an overall system architecture, a test-bed system, and sample CAF service flows.

Jungsook Bae, Seungwan Ryu, JaeYong Lee, ByungChul Kim
Multi-piconet Formation to Increase Channel Utilization in IEEE 802.15.3 High-Rate WPAN

IEEE 802.15.3 high-rate Wireless Personal Area Network (WPAN) is a short-range communication technology in which the devices (DEVs) of a piconet communicate with a Piconet Coordinator (PNC). Extending the communication area of a WPAN by forming a multi-piconet is one of the most challenging issues: although the concept of a multi-piconet is defined in the IEEE 802.15.3 WPAN standard, a detailed formation mechanism is not presented. In this paper, we propose efficient multi-piconet formation algorithms, Piconet Coordinator To Device (PCTD) and Device To Piconet Coordinator (DTPC), and evaluate their performance via simulations. The proposed algorithms are shown to extend the communication area by one to three levels and to provide sufficient Channel Time Allocations (CTAs) for subsidiary piconets in IEEE 802.15.3 WPANs.

Ssang-Bong Jung, Soon-Bin Yim, Tae-Jin Lee, Sun-Do June, Hyeon-Seok Lee, Tai-Gil Kwon, Jin-Woong Cho
Unencapsulated Mobile Multicast Routing for Next Generation Video Networks

Mobile multimedia communication will rely in particular on IP multicasting, as users commonly share frequency bands of limited capacity. This paper is dedicated to the problem of mobile Source Specific Multicast (SSM) senders. We propose extensions to multicast routing that transform (morph) previous delivery trees into optimal trees rooted at a relocated source. This extension scheme requires only basic signaling mechanisms: explicit joins and prunes. First evaluations based on real-world Internet topologies indicate network performance superior to traditional distribution schemes. As a corresponding application, we further introduce videoconferencing and communication software of distributed architecture, built as a simple, ready-to-use scheme for distributed presenting, recording, and streaming of multimedia content over next generation unicast or multicast networks.

Thomas C. Schmidt, Matthias Wählisch, Hans L. Cycon, Mark Palkow, Henrik Regensburg
Channel Estimation of High Rate WPAN System with Diversity Technique

A simple channel estimation technique for the IEEE 802.15.3 HR WPAN system is proposed and assessed, giving improved performance over multi-path channels. HR WPAN systems can be designed to support multimedia traffic in the wireless home network, and the demand for high data rates and reliable technology will continue to increase. To satisfy this demand, we design a new architecture that applies a diversity technique to the HR WPAN system. The proposed scheme can estimate the multi-path channel easily through simple operations on the preamble, and can be used with either one or two transmit antennas.

Byung-Su Kang, Myung-Sun Baek, Dong-Jun Cho, Young-Hwan You, Hyoung-Kyu Song
A Timestamp Tree-Based Cache Invalidation Report Scheme in Mobile Computing Environments

In mobile computing environments, cache invalidation techniques are widely accepted. However, these techniques require a large invalidation report and show low cache utilization under high server update rates. We propose a new cache invalidation technique called TTCI (Timestamp Tree-based Cache Invalidation) to overcome these problems. TTCI also supports selective tuning for cache-level invalidation. We verify the effectiveness of the TTCI scheme through in-depth simulation experiments.

Hakjoo Lee, Jonghyun Suh, Sungwon Jung, Sooyoung Lee, Junguck Lee
Clustering Versus Evenly Distributing Energy Dissipation in Wireless Sensor Routing for Prolonging Network Lifetime

A novel cluster head (CH) selection algorithm named MSTME, based on both a minimal spanning tree and the maximum energy resources of sensors, is proposed for prolonging the lifetime of wireless sensor networks. MSTME satisfies three principles of optimal CH selection: CHs have the most energy resources among the sensors in their local clusters; clusters group approximately equal numbers of nearby sensors; and CHs are distributed evenly across the network in terms of location. Simulation shows that the network lifetime under MSTME exceeds that of its counterparts in two-hop and multi-hop wireless sensor networks.
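The abstract leaves the algorithmic details open; one plausible reading, sketched below purely as an illustration, is to build a minimum spanning tree over sensor positions, cut the k-1 longest edges to obtain k balanced clusters, and pick the node with the most residual energy in each cluster as its head. The exact MSTME procedure may differ.

```python
# Hedged sketch of MST-based clustering with max-energy head selection,
# in the spirit of (but not necessarily identical to) MSTME.

import math

def mst_edges(positions):
    """Prim's algorithm; positions: {node_id: (x, y)}. Returns edge list."""
    nodes = list(positions)
    in_tree, edges = {nodes[0]}, []
    while len(in_tree) < len(nodes):
        u, v = min(((a, b) for a in in_tree for b in nodes if b not in in_tree),
                   key=lambda e: math.dist(positions[e[0]], positions[e[1]]))
        in_tree.add(v)
        edges.append((u, v))
    return edges

def cluster_heads(positions, energy, k):
    """Cut the k-1 longest MST edges; return one max-energy head per cluster."""
    edges = sorted(mst_edges(positions),
                   key=lambda e: math.dist(positions[e[0]], positions[e[1]]))
    kept = edges[:len(edges) - (k - 1)]          # drop the k-1 longest edges
    parent = {n: n for n in positions}           # union-find over kept edges
    def find(n):
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n
    for u, v in kept:
        parent[find(u)] = find(v)
    clusters = {}
    for n in positions:
        clusters.setdefault(find(n), []).append(n)
    return [max(members, key=energy.get) for members in clusters.values()]

if __name__ == "__main__":
    pos = {i: (i % 5, i // 5) for i in range(15)}   # 5x3 grid of sensors
    eng = {i: 100 - i for i in pos}                 # toy residual energies
    print(cluster_heads(pos, eng, k=3))
```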

Guangyan Huang, Xiaowei Li, Jing He
On the Effect of Heterogeneous Traffic Sources on the Network Availability for Wireless Sensor Grids

Wireless sensor grids are poised for applications where pre-deployment information is readily available to optimize communication and computation costs. Using two widely known grid traversal schemes, we examine the effect of multiple services being rendered by a sensor network on its availability. We derive an interesting result using a Markovian model to capture the effect of load heterogeneity on node and network lifetimes. Our mathematical model also analyzes synchronous sleep for its effect on energy conservation. We further propose an epicenter-regulated, asynchronous sleep scheme that allows nodes to conserve energy yet relay data towards the sink effectively. The performance results show that the proposed scheme prolongs network availability in sensor grids.

Ali Hammad Akbar, Ki-Hyung Kim, Shaokai Yu, Won-Sik Yoon
Selective Handover Technique on Multihomed Mobile Network Environment

Mobile networks can have multiple wireless network interfaces owing to the various wireless technologies offered by many ISPs. Recently, the resulting issues of multiple connections, known as multihomed mobile networks, have been highlighted in the IETF NEMO WG. However, maintaining multiple connections involves a trade-off between efficiency and fault tolerance, since the network must handle the status of all connections. Moreover, the current NEMO basic support architecture has no functions for handling these multiple connections when it encounters multiple new networks at a new location. This article proposes an RA message selection technique to support efficient multihomed mobile networks on the NEMO basic support architecture, even in nested NEMO environments.

Kiyong Park, Sunyong Han, Jungwook Song
Collaborative Trust-Based Shortest Secure Path Discovery in Mobile Ad Hoc Networks

Since routes in a MANET are composed only of mobile terminals based on a multi-hop mechanism, a MANET may encounter severe situations in which intermediate nodes adversely disturb packet delivery. Accordingly, the stability of routing protocols is very important for operating ad hoc networks efficiently and securely. However, finding a secure end-to-end path is not easy, since nodes in a MANET have uncertain trust relationships with one another. In this paper, a comprehensive mechanism for discovering the most secure and shortest paths is proposed. The mechanism is based on the Dijkstra algorithm and gives high regard to both a distance weight and a trust weight: the metric value that determines the routing path is derived from the distance value and the trust level. According to the simulation results, even though the communication cost increases by up to about 8%, the security level of the shortest path is improved significantly.

Seungtak Oh, Chilgee Lee, Hyunseung Choo
An Efficient Neighbor Knowledge Based Broadcasting for Mobile Ad Hoc Networks

Although flooding is the most commonly used method for one-to-all communication, it often generates huge traffic in mobile ad hoc networks. Several optimization schemes have been proposed to alleviate this problem, but none offers a complete solution, and each has its own limitations, such as lack of reliability, high latency, or excessive control packet overhead. In this paper, we propose the FONIAH (Flooding based on One-hop Neighbor Information and Adaptive Holding) scheme to overcome these limitations.

Sung-Hee Lee, Young-Bae Ko
Maximum Lifetime Paths for the High Packet Delivery Ratio Using Fast Recovery in a Mobile Ad Hoc Network

Both the movement of mobile nodes and the exhaustion of path lifetimes change the network topology in a mobile ad hoc network, making it difficult to achieve a high packet delivery ratio. This paper proposes Maximum Lifetime Paths (MLP), which transmits large amounts of data over nodes with long lifetimes and uses fast recovery. MLP establishes routes based on the shortest path and dynamically recovers routes using warning messages. The simulation results show that MLP increases the packet delivery ratio and reduces the hop distance and the frequency of route establishment.

HyoJin Kim, SeungJae Han, JooSeok Song

Erratum

Synonymous Codon Substitution Matrices
Adrian Schneider, Gaston H. Gonnet, Gina M. Cannarozzi
Compile-Time Energy Optimization for Parallel Applications in On-Chip Multiprocessors
Juan Chen, Huizhan Yi, Xuejun Yang, Liang Qian
Backmatter
Metadata
Title
Computational Science – ICCS 2006
Edited by
Vassil N. Alexandrov
Geert Dick van Albada
Peter M. A. Sloot
Jack Dongarra
Copyright year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-34382-0
Print ISBN
978-3-540-34381-3
DOI
https://doi.org/10.1007/11758525