
2006 | Book

Computational Science – ICCS 2006

6th International Conference, Reading, UK, May 28-31, 2006, Proceedings, Part I

Edited by: Vassil N. Alexandrov, Geert Dick van Albada, Peter M. A. Sloot, Jack Dongarra

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


Table of Contents

Frontmatter

Keynote Abstracts

Metacomputing Revisited: Alternative Paradigms for Distributed Resource Sharing

Conventional distributed computing paradigms such as PVM and MPI(CH) have had mixed success when translated to computing environments that span multiple administrative and ownership domains. We analyze fundamental issues in distributed resource sharing particularly from the viewpoint of different forms of heterogeneity – i.e. not just in the computing platforms, but also in storage, network characteristics, availability, access protocols, robustness, and dynamicity. We argue that effective multidomain resource sharing in the face of such variability is critically dependent upon minimizing global state between providers, and between providers and clients. The H2O framework has made one step in this direction, by decoupling provider concerns from client requirements, and enabling clients to (re)configure resources as required. H2O is based on a “pluggable” software architecture to enable flexible and reconfigurable distributed computing. A key feature is the provisioning of customization capabilities that permit clients to tailor provider resources as appropriate to the given application, without compromising control or security. Through the use of uploadable “pluglets”, users can exploit specialized features of the underlying resource, application libraries, or optimized message passing subsystems on demand. The next generation of this framework takes the second step, to virtualize and homogenize resource aggregates at the client side, thereby further reducing the degree of coupling between providers, and between providers and clients. The system also supports dynamic environment preconditioning to automate many of the tasks required in multidomain resource sharing. The architecture and design philosophies of these software frameworks, their implementation, recent experiences, and planned enhancements are described, in the context of new paradigms and directions for metacomputing.

Vaidy Sunderam
Achieving Breakthrough Science with the Blue Gene/L Supercomputer

The Blue Gene project started in the final months of 1999. Five years later, during the final months of 2004, the first Blue Gene/L machines were being installed at customer sites. By then, Blue Gene/L had already established itself as the fastest computer on the planet, topping the TOP500 list with a breathtaking speed of over 70 Teraflops. Since the beginning of 2005, many other systems have been installed at customer sites, the flagship machine at Lawrence Livermore National Laboratory has greatly increased in size, and Blue Gene/L has established itself as a machine capable of breakthrough science. Here we examine why Blue Gene/L is such an effective science machine, and report on some of the more significant research being performed by scientists using Blue Gene/L systems today. We also explain why this is just the beginning, and why there is more excitement ahead of us than behind us in the Blue Gene project.

José E. Moreira
Visualizing the Future

Computers are now extensively used throughout science, engineering, and medicine. Advances in computational geometric modeling, imaging, and simulation allow researchers to build and test models of increasingly complex phenomena and thus to generate unprecedented amounts of data. These advances have created the need to make corresponding progress in our ability to understand large amounts of data and information arising from multiple sources. In fact, to effectively understand and make use of the vast amounts of information being produced is one of the greatest scientific challenges of the 21st Century.

Visual computing, which relies on, and takes advantage of, the interplay among techniques of visualization, computer graphics, virtual reality, and imaging and vision, is fundamental to understanding models of complex phenomena, which are often multi-disciplinary in nature. In this talk, I will first provide several examples of ongoing visual computing research at the Scientific Computing and Imaging (SCI) Institute as applied to problems in computational science, engineering, and medicine, then discuss future research opportunities and challenges.

Chris Johnson
IT Innovation: A New Era

This presentation will discuss the emerging discipline of IT Innovation, a discipline where unique value can be created at the intersection of Information Technology and Innovation. The presentation also shares the structure of an emerging Innovation Capability Maturity Framework and Model while discussing how computing trends such as multicore technology and virtualization will re-ignite Moore’s law and provide a platform for Innovation that is unparalleled in history.

Martin Curley
The AWE HPC Benchmark

Near the end of 2005, AWE Aldermaston placed an order for a Cray XT3 system with 3936 dual-core nodes (over 40 Tflops peak) to replace its existing HPC system. This paper describes the design of the benchmark used during the preceding competitive procurement, including separate capacity and capability requirements analysis. Details are presented of the evaluation process and the relative results of the six HPC systems evaluated. These include the more than 2-times speedup obtained by Cray by tuning the source code of the most important application.

Ron Bell

Modelling and Simulation in Economics and Finance

Newton’s Method for the Ellipsoidal l_p Norm Facility Location Problem

We give the minisum ellipsoidal l_p norm facility location model. Our model includes both the p ≥ 2 and p < 2 cases. We derive the optimality conditions for our model and transform the optimality conditions into a system of equations. We then solve the system by perturbed Newton’s method. Some numerical examples are presented.

Yu Xia
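
As an illustrative aside (not taken from the paper), the sketch below applies a plain damped Newton iteration to a smoothed, perturbed l_p minisum objective in the plane. The demand points, the diagonal ellipsoidal weighting, the choice p = 3, the perturbation eps, and the finite-difference derivatives are all assumptions made for this example; the paper derives its optimality system analytically.

```python
import numpy as np

# Hypothetical problem data: 10 demand points in the plane, a diagonal
# "ellipsoidal" weighting W, and p = 3 (the p >= 2 case). The perturbation eps
# smooths the objective near coincidence with a demand point.
rng = np.random.default_rng(0)
demands = rng.normal(size=(10, 2))
W = np.diag([1.0, 2.0])
p, eps = 3.0, 1e-6

def f(x):
    # perturbed minisum objective: sum_i ( sum_j |W (x - a_i)|_j^p + eps )^(1/p)
    d = (x - demands) @ W.T
    return np.sum((np.sum(np.abs(d) ** p, axis=1) + eps) ** (1.0 / p))

def grad(x, h=1e-6):
    # central finite-difference gradient (an assumption for this sketch)
    return np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                     for e in np.eye(x.size)])

def hess(x, h=1e-4):
    # finite-difference Hessian, symmetrized
    H = np.column_stack([(grad(x + h * e) - grad(x - h * e)) / (2 * h)
                         for e in np.eye(x.size)])
    return 0.5 * (H + H.T)

x = np.zeros(2)
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        break
    x = x + np.linalg.solve(hess(x) + 1e-8 * np.eye(2), -g)  # regularized Newton step
print("approximate minisum facility location:", x)
```
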
Financial Influences and Scale-Free Networks

We consider the problem of analyzing influences in financial networks by studying correlations in stock price movements of companies in the S&P 500 index and measures of influence that can be attributed to each company. We demonstrate that under a novel and natural measure of influence involving cross-correlations of stock market returns and market capitalization, the resulting network of financial influences is scale-free. This is further corroborated by the existence of an intuitive set of highly influential hub nodes in the network. Finally, it is also shown that companies that have been deleted from the S&P 500 index had low values of influence.

Nitin Arora, Babu Narayanan, Samit Paul
Comparison of Simulation and Optimization Possibilities for Languages: DYNAMO and COSMIC & COSMOS – on a Base of the Chosen Models

On the basis of the chosen models, a comparison of the simulation and optimization possibilities of the languages DYNAMO and COSMIC & COSMOS is presented. The computational techniques, for example integration facilities and optimization opportunities, are the main points of interest for this comparison.

Elżbieta Kasperska, Elwira Mateja-Losa, Damian Słota
Bond Pricing with Jumps and Monte Carlo Simulation

We derive a general form of the term structure of interest rates with jumps. The one-state models of Vasicek, CIR (Cox, Ingersoll, and Ross), and the extended Hull-White model are introduced, and the jump-diffusion models of Ahn & Thompson and Baz & Das are also investigated using Monte Carlo simulation, one of the most widely used methods in financial engineering for evaluating financial derivatives. We perform the Monte Carlo simulation under several scenarios, even though the brute-force method takes a long time to achieve highly precise estimates in terms of the mean standard error, which is one measure of the sharpness of the point estimates.

Kisoeb Park, Moonseong Kim, Seki Kim
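
As a hedged illustration of the general approach described above (not the authors' exact models, scenarios, or parameters), the following sketch prices a zero-coupon bond by Monte Carlo under a Vasicek-type short rate with compound Poisson jumps; every parameter value below is an arbitrary assumption.

```python
import numpy as np

# Short-rate dynamics assumed for illustration:
#   dr = a (b - r) dt + sigma dW + dJ,
# where J is a compound Poisson process with intensity lam and
# Normal(mu_j, sig_j) jump sizes.
a, b, sigma = 0.5, 0.05, 0.01
lam, mu_j, sig_j = 2.0, 0.0, 0.005
r0, T, n_steps, n_paths = 0.03, 1.0, 250, 20000
dt = T / n_steps

rng = np.random.default_rng(1)
r = np.full(n_paths, r0)
int_r = np.zeros(n_paths)                 # accumulates the integral of r over [0, T]

for _ in range(n_steps):
    int_r += r * dt
    dW = rng.normal(0.0, np.sqrt(dt), n_paths)
    n_jumps = rng.poisson(lam * dt, n_paths)
    jumps = rng.normal(mu_j * n_jumps, sig_j * np.sqrt(n_jumps))  # summed jump sizes
    r = r + a * (b - r) * dt + sigma * dW + jumps

discount = np.exp(-int_r)
price = discount.mean()
stderr = discount.std(ddof=1) / np.sqrt(n_paths)   # mean standard error of the estimate
print(f"zero-coupon bond price P(0, {T}) ~ {price:.5f} +/- {stderr:.5f}")
```
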
On Monte Carlo Simulation for the HJM Model Based on Jump

We derive a form of the HJM model with jumps. The Heath-Jarrow-Morton (HJM) model is widely accepted as the most general framework for term structure of interest rate models. We represent the HJM model with jumps and give an analytic proof for it. We perform Monte Carlo simulation under several scenarios to achieve highly precise estimates with the brute-force method in terms of the mean standard error, which is one measure of the sharpness of the point estimates. We show that bond prices in HJM jump-diffusion versions of the extended Vasicek and CIR models obtained by Monte Carlo simulation agree with the closed-form values.

Kisoeb Park, Moonseong Kim, Seki Kim

Modelling and Simulation of Complex Systems and in the Natural Sciences

Scalable Execution of Legacy Scientific Codes

This paper presents Weaves, a language neutral framework for scalable execution of legacy parallel scientific codes. Weaves supports scalable threads of control and multiple namespaces with selective sharing of state within a single address space. We resort to two examples for illustration of different aspects of the framework and to stress the diversity of its application domains. The more expressive collaborating partial differential equation (PDE) solvers are used to exemplify developmental aspects, while freely available Sweep3D is used for performance results. We outline the framework in the context of shared memory systems, where its benefits are apparent. We also contrast Weaves against existing programming paradigms, present use cases, and outline its implementation. Preliminary performance tests show significant scalability over process-based implementations of Sweep3D.

Joy Mukherjee, Srinidhi Varadarajan, Naren Ramakrishnan
Parallel Solvers for Flexible Approximation Schemes in Multiparticle Simulation

New finite difference schemes with flexible local approximation are applied to screened electrostatic interactions of spherical colloidal particles governed by the Poisson-Boltzmann equation. Local analytical approximations of the solution are incorporated directly into the scheme and yield high approximation accuracy even on simple and relatively coarse Cartesian grids. Several parallel iterative solution techniques have been tested with an emphasis on suitable parallel preconditioning for the nonsymmetric system matrix. In particular, flexible GMRES preconditioned with the distributed Schur Complement exhibits good solution time and scales well when the number of particles, grid nodes or processors increases.

Masha Sosonkina, Igor Tsukerman
Alternate Learning Algorithm on Multilayer Perceptrons

Multilayer perceptrons have been applied successfully to solve difficult and diverse problems with the backpropagation learning algorithm. However, the algorithm is known to suffer from slow and false convergence arising from flat regions and local minima of the cost function. Many algorithms proposed so far to accelerate convergence and avoid local minima appear to trade off convergence speed against stability. Here, a new algorithm is proposed which gives a novel learning strategy for avoiding local minima as well as providing relatively stable and fast convergence with low storage requirements. This is the alternate learning algorithm, in which the upper connections (hidden-to-output) and the lower connections (input-to-hidden) are trained alternately. The algorithm requires less computational time for learning than backpropagation with momentum and is shown, on a parity-check problem, to be relatively reliable in overall performance.

Bumghi Choi, Ju-Hong Lee, Tae-Su Park
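
The abstract above describes alternating the training of the two weight layers; the toy sketch below illustrates that idea on the XOR (2-bit parity) problem. The alternation schedule, learning rate, hidden-layer size, and iteration count are assumptions for illustration, not the authors' algorithm.

```python
import numpy as np

# Alternating layer-wise training of a 2-layer perceptron on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4))    # input-to-hidden ("lower") weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))    # hidden-to-output ("upper") weights
b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for epoch in range(4000):
    train_upper = (epoch // 50) % 2 == 0      # swap the trained layer every 50 epochs
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)       # squared-error gradient at the output
    if train_upper:                           # update hidden-to-output connections only
        W2 -= lr * h.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
    else:                                     # update input-to-hidden connections only
        d_h = (d_out @ W2.T) * h * (1 - h)
        W1 -= lr * X.T @ d_h
        b1 -= lr * d_h.sum(axis=0)

# Outputs should approach [0, 1, 1, 0] for the XOR targets.
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 3))
```
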
A Transformation Tool for ODE Based Models

This paper presents a tool for prototyping ODE (Ordinary Differential Equations) based systems in the area of computational modeling. The models, tailored during the project step of the system development, are recorded in MathML, a markup language built upon XML. This design choice improves interoperability with other tools used for mathematical modeling, mainly considering that it is based on Web architecture. The resulting work is a Web portal that transforms an ODE model documented in MathML to a C++ API that offers numerical solutions for that model.

Ciro B. Barbosa, Rodrigo W. dos Santos, Ronan M. Amorim, Leandro N. Ciuffo, Fairus Manfroi, Rafael S. Oliveira, Fernando O. Campos
Performance Comparison of Parallel Geometric and Algebraic Multigrid Preconditioners for the Bidomain Equations

The purpose of this paper is to discuss parallel preconditioning techniques to solve the elliptic portion (since it dominates computation) of the bidomain model, a non-linear system of partial differential equations that is widely used for describing electrical activity in the heart. Specifically, we assess the performance of parallel multigrid preconditioners for a conjugate gradient solver. We compare two different approaches: the geometric and algebraic multigrid methods. The implementation is based on the PETSc library and we report results for a 6-node Athlon 64 cluster. The results suggest that the algebraic multigrid preconditioner performs better than the geometric multigrid method for the cardiac bidomain equations.

Fernando Otaviano Campos, Rafael Sachetto Oliveira, Rodrigo Weber dos Santos
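
For readers who want to experiment with the kind of comparison discussed above, the sketch below contrasts conjugate gradients preconditioned with a simple Jacobi preconditioner against an algebraic multigrid preconditioner, assuming the pyamg and SciPy packages are available. The model problem is a 2D Poisson matrix standing in for a generic elliptic operator; it is not the bidomain operator or the paper's PETSc setup.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg
import pyamg   # assumed available; provides algebraic multigrid solvers

# Model problem: 2D Poisson matrix (SPD), used here only as a test operator.
A = pyamg.gallery.poisson((200, 200), format='csr')
b = np.ones(A.shape[0])

iters = {'jacobi': 0, 'amg': 0}
def counter(key):
    def cb(xk):
        iters[key] += 1
    return cb

# CG with a simple Jacobi (diagonal) preconditioner.
M_jac = sp.diags(1.0 / A.diagonal())
x1, info1 = cg(A, b, M=M_jac, maxiter=2000, callback=counter('jacobi'))

# CG preconditioned with smoothed-aggregation algebraic multigrid.
ml = pyamg.smoothed_aggregation_solver(A)
x2, info2 = cg(A, b, M=ml.aspreconditioner(), maxiter=2000, callback=counter('amg'))

print("Jacobi-PCG iterations:", iters['jacobi'], "converged:", info1 == 0)
print("AMG-PCG iterations:   ", iters['amg'], "converged:", info2 == 0)
```
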

Modelling and Simulation of Complex Systems

Simulating and Modeling Secure Web Applications

Characterizing application server performance becomes hard work when the system is unavailable or when a great amount of time and resources are required to generate the results. In this paper we propose the modeling and simulation of complex systems, such as application servers, in order to alleviate this limitation. Using simulations, and specifically the coarse-grain simulations we propose here, allows us to avoid using the real system while taking only one tenth of the time the real system would need to generate the results. Our simulation proposal can be used to obtain server performance measurements, to evaluate server behavior with different configuration parameters, or to evaluate the impact of incorporating additional mechanisms into the servers to improve their performance, all without the necessity of using the real system.

Ramon Nou, Jordi Guitart, Jordi Torres
A Treecode for Accurate Force Calculations

A novel algorithm for computing forces in N-body simulations is presented. This algorithm constructs a hierarchical tree data structure and uses multipole expansions for the inverse square law. It can be considered as a multipole-based variant of the Barnes-Hut algorithm [2]. The key idea here is the representation of forces (gravitational or electrostatic) through a multipole series by using the relationship between ultraspherical and Legendre polynomials. The primary advantage of this algorithm is the accuracy it offers along with the ease of coding. This method can be used in the simulation of star clusters in astrophysical applications in which higher order moments are expensive and difficult to construct. This method can also be used in molecular dynamics simulations as an alternative to particle-mesh and Ewald summation methods, which suffer from large errors due to various differentiation schemes. Our algorithm has O(p^3 N log N) complexity, where p is typically a small constant.

Kasthuri Srinivasan, Vivek Sarin
An Approximate Algorithm for the Minimal Cost Gateways Location, Capacity and Flow Assignment in Two-Level Hierarchical Wide Area Networks

In this paper the problem of designing a two-level hierarchical structure of a wide area network is considered. The goal is to select gateway locations, channel capacities, and flow routes in order to minimize the total cost of leasing the channel capacities of the 2nd-level network, subject to a delay constraint. An approximate algorithm is proposed. Some results from computational experiments are reported.

Przemyslaw Ryba, Andrzej Kasprzak
Image-Based Robust Control of Robot Manipulators with Integral Actions

In this paper, we propose a robust visual feedback controller with integral action for tracking control of n-link robot manipulators in the presence of constant bounded parametric uncertainties. The proposed control input is robust to the parametric uncertainty and reduces the tracking error in the steady state. The stability of the closed-loop system is shown by the Lyapunov method. The effectiveness of the proposed method is shown by simulation and experimental results on a 5-link robot manipulator with two degrees of freedom.

Min Seok Jie, Kang Woong Lee

Advanced Numerical Algorithms

Symmetric Runge-Kutta Methods with Higher Derivatives and Quadratic Extrapolation

In this paper we study the symmetry of Runge-Kutta methods with higher derivatives. We find conditions which provide this property for the above numerical methods. We prove that the family of E-methods constructed earlier consists of symmetric methods only, which leads to a quadratic extrapolation technique in practice.

Gennady Yu. Kulikov, Ekaterina Yu. Khrustaleva, Arkadi I. Merkulov
A Note on the Simplex Method for 2-Dimensional Second-Order Cone Programming

We transform a 2-dimensional second-order cone program to a standard linear program in order to pivot a vector based on its three states. Related properties of the transformation are given. Based on the transformation, we interpret the simplex method and its sensitivity analysis for the 2-dimensional second-order cone programming, especially the state changes. Finally, we give some applications of the 2-dimensional second-order cone programming.

Yu Xia
Local Linearization-Runge Kutta (LLRK) Methods for Solving Ordinary Differential Equations

A new class of stable methods for solving ordinary differential equations (ODEs) is introduced. It is based on combining the Local Linearization (LL) integrator with other extant discretization methods. For this, an auxiliary ODE is solved to determine a correction term that is added to the LL approximation. In particular, combining the LL method with (explicit) Runge-Kutta integrators yields what we call LLRK methods. This makes it possible to improve the order of convergence of the LL method without loss of its stability properties. The performance of the proposed integrators is illustrated through computer simulations.

H. De la Cruz, R. J. Biscay, F. Carbonell, J. C. Jimenez, T. Ozaki
The Study on the sEMG Signal Characteristics of Muscular Fatigue Based on the Hilbert-Huang Transform

Muscular fatigue refers to a temporary decline in the maximal power or contractile ability of the muscle movement system. The surface electromyographic (sEMG) signal can reflect changes in muscular fatigue to a certain extent. For many years, the application of sEMG signals to the evaluation of muscular fatigue has focused mainly on two aspects, time and frequency, treated separately. The Hilbert-Huang Transform (HHT) is a new method with a powerful ability to analyze nonlinear and non-stationary data in time and frequency together. The method uses a self-adaptive basis and is better suited for feature extraction, as it yields the local, instantaneous frequency of the signals. In this paper, we analyze data from an experiment on the static biceps of twelve adult subjects at 80% of maximal voluntary contraction (MVC). The experimental results show that this method, as a new way of thinking, has obvious potential for biomedical signal analysis.

Bo Peng, Xiaogang Jin, Yong Min, Xianchuang Su
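
As a small companion sketch (not from the paper), the code below shows only the Hilbert spectral step of the HHT, extracting instantaneous amplitude and frequency from a synthetic mono-component chirp; the empirical mode decomposition step that would precede it on real sEMG data is omitted, and the test signal and sampling rate are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

# Hilbert step of the Hilbert-Huang Transform: instantaneous amplitude and
# frequency of a (mono-component) signal. In a full HHT the signal would first
# be split into intrinsic mode functions by empirical mode decomposition.
fs = 1000.0                                   # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * (20 * t + 10 * t**2))  # test chirp sweeping upward from 20 Hz

analytic = hilbert(x)
amplitude = np.abs(analytic)                      # instantaneous amplitude envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs     # instantaneous frequency, Hz

print("mean instantaneous frequency over the record: %.1f Hz" % inst_freq.mean())
```
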
A New Approach for Solving Evolution Problems in Time-Parallel Way

With the advent of massively parallel computers with thousands of processors, a large amount of work has been done during the last decades to enable a more effective use of a higher number of processors by superposing parallelism in the time domain, even though time-integration is known to be inherently sequential, on parallelism in the space domain [8]. Consequently, many families of predictor-corrector methods have been proposed, allowing computation on several time-steps concurrently [5], [6]. The aim of our present work is to develop a new parallel-in-time algorithm for solving evolution problems, based on particularities of a rescaling method that has been developed for solving different types of partial and ordinary differential equations whose solutions have a finite existence time [9]. This method leads to a sliced-time computing technique used to solve independently rescaled models of the differential equation. The determining factor for convergence of the iterative process is the predicted values at the start of each time slice, which are obtained using “ratio-based” formulae. In this paper we successfully extend this method to reaction-diffusion problems of the form u_t = Δu^m + au^p, whose solutions have a global existence time when p ≤ m ≤ 1. The resulting algorithm, RaPTI, provides perfect parallelism, with convergence being reached after a few iterations.

Nabil R. Nassif, Noha Makhoul Karam, Yeran Soukiassian

Data Driven Computing

CGO: A Sound Genetic Optimizer for Cyclic Query Graphs

The increasing number of applications requiring the use of large join queries reinforces the search for good methods to determine the best execution plan. This is especially true when the large number of joins occurring in a query prevents traditional optimizers from using dynamic programming.

In this paper we present the Carquinyoli Genetic Optimizer (CGO). CGO is a sound optimizer based on genetic programming that uses a subset of the cost model of IBM DB2 Universal Database (DB2 UDB) for selection in order to produce new generations of query plans. Our study shows that CGO is very competitive either as a standalone optimizer or as a fast post-optimizer. In addition, CGO takes into account the inherent characteristics of query plans like their cyclic nature.

Victor Muntés-Mulero, Josep Aguilar-Saborit, Calisto Zuzarte, Josep-L. Larriba-Pey
Multiscale Characteristics of Human Sleep EEG Time Series

We investigated the complexity of fluctuations in human sleep electroencephalograms (EEGs) by multifractals. We used human sleep EEG time series taken from normal, healthy subjects during the four stages of sleep and rapid eye movement (REM) sleep. Our findings showed that the fluctuation dynamics in human sleep EEGs could be adequately described by a set of scales and characterized by multifractals. Multifractal formalism, based on the wavelet transform modulus maxima, appears to be a good method for characterizing EEG dynamics.

In-Ho Song, In-Young Kim, Doo-Soo Lee, Sun I. Kim
A Hybrid Feature Selection Algorithm for the QSAR Problem

In this paper we discuss a hybrid feature selection algorithm for Quantitative Structure Activity Relationship (QSAR) modelling. This is one of the goals in the Predictive Toxicology domain, which aims to describe the relations between the chemical structure of a molecule and its biological or toxicological effects in order to predict the behaviour of new, unknown chemical compounds. We propose a hybridization of the ReliefF algorithm based on a simple fuzzy extension of the value difference metric. The experimental results, on both benchmark and real-world applications, suggest greater stability in dealing with noisy data, and our preliminary tests provide a promising starting point for future research.

Marian Viorel Crăciun, Adina Cocu, Luminiţa Dumitriu, Cristina Segal
Sequential Probability Ratio Test (SPRT) for Dynamic Radiation Level Determination – Preliminary Assessment

A Sequential Probability Ratio Test (SPRT) algorithm for reliable and fast determination of a relative radiation level in a field environment has been developed. The background and the radioactive anomaly are assumed to follow the normal and Poisson distributions, respectively. The SPRT formulation has been derived and simplified based on these assumptions. The preliminary evaluation suggests that the algorithm, while offering confident estimations for the log-scaled radiation level, promises the additional advantage of reduction in sampling sizes, particularly in areas with a high radiation level.

Ding Yuan, Warnick Kernan
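
As an illustration of the general SPRT mechanics (not the paper's normal-background/Poisson-anomaly formulation), the sketch below runs a textbook Wald SPRT between two simple Poisson hypotheses; the rates, error targets, and simulated data are assumed.

```python
import math
import numpy as np

# Generic Wald SPRT between two simple hypotheses on observed counts:
#   H0: counts ~ Poisson(lambda0)   (background level)
#   H1: counts ~ Poisson(lambda1)   (elevated radiation level)
lambda0, lambda1 = 5.0, 8.0        # assumed mean counts per sampling interval
alpha, beta = 0.01, 0.01           # target false-alarm and miss probabilities
upper = math.log((1 - beta) / alpha)   # accept H1 once the log-LR crosses this
lower = math.log(beta / (1 - alpha))   # accept H0 once the log-LR crosses this

def sprt(counts):
    """Process counts sequentially; return (decision, samples used)."""
    llr = 0.0
    n = 0
    for n, k in enumerate(counts, start=1):
        # log-likelihood ratio increment for one Poisson observation k
        llr += k * math.log(lambda1 / lambda0) - (lambda1 - lambda0)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

rng = np.random.default_rng(42)
decision, used = sprt(rng.poisson(lambda1, size=1000))   # data drawn from H1 here
print(decision, "after", used, "samples")                # typically far fewer than 1000
```
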
Knowledge-Based Multiclass Support Vector Machines Applied to Vertical Two-Phase Flow

We present a knowledge-based linear multi-classification model for vertical two-phase flow regimes in pipes with the transition equations of McQuillan & Whalley [1] used as prior knowledge. Using published experimental data for gas-liquid vertical two-phase flows, and expert domain knowledge of the two-phase flow regime transitions, the goal of the model is to identify the transition region between different flow regimes. The prior knowledge is in the form of polyhedral sets belonging to one or more classes. The resulting formulation leads to a Tikhonov regularization problem that can be solved using matrix or iterative methods.

Olutayo O. Oladunni, Theodore B. Trafalis, Dimitrios V. Papavassiliou

Advanced Numerical Algorithms and New Algorithmic Approaches to Computational Kernels and Applications

Performance Improvement of Sparse Matrix Vector Product on Vector Machines

Many applications based on finite element and finite difference methods include the solution of large sparse linear systems using preconditioned iterative methods. Matrix vector multiplication is one of the key operations that has a significant impact on the performance of any iterative solver. In this paper, recent developments in sparse storage formats on vector machines are reviewed. Then, several improvements to memory access in the sparse matrix vector product are suggested. Particularly, algorithms based on dense blocks are discussed and reasons for their superior performance are explained. Finally, the performance gain by the presented modifications is demonstrated.

Sunil R. Tiyyagura, Uwe Küster, Stefan Borowski
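
To make the storage-format idea concrete, the sketch below times the sparse matrix-vector product in CSR form against a dense-block (BSR) layout using SciPy on an ordinary CPU; the block-structured test matrix is synthetic, and this only illustrates the format idea, not the vector-machine kernels studied in the paper.

```python
import time
import numpy as np
import scipy.sparse as sp

# Build a block-structured sparse matrix (4 x 4 dense blocks on a random block
# pattern), then compare SpMV in CSR versus BSR storage.
rng = np.random.default_rng(0)
nb, bs = 4000, 4
pattern = sp.random(nb, nb, density=0.002, format='csr', random_state=0)
A_csr = sp.kron(pattern, rng.normal(size=(bs, bs)), format='csr')
A_bsr = A_csr.tobsr(blocksize=(bs, bs))
x = rng.normal(size=nb * bs)

def bench(A, reps=50):
    t0 = time.perf_counter()
    for _ in range(reps):
        y = A @ x
    return (time.perf_counter() - t0) / reps, y

t_csr, y_csr = bench(A_csr)
t_bsr, y_bsr = bench(A_bsr)
assert np.allclose(y_csr, y_bsr)
print(f"CSR SpMV: {t_csr * 1e3:.2f} ms/iter   BSR SpMV: {t_bsr * 1e3:.2f} ms/iter")
```
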
A New Reconstruction Algorithm in Spline Signal Spaces

In this research letter, we introduce a reconstruction formula in spline signal spaces which is a generalization of former results in [11]. A general improved A-P iterative algorithm is presented. We use the algorithm for the reconstruction of signals from weighted samples and show that the new algorithm converges faster than the old one. The explicit convergence rate of the algorithm is obtained.

Chen Zhao, Yueting Zhuang, Honghua Gan
An Implicit Riemannian Trust-Region Method for the Symmetric Generalized Eigenproblem

The recently proposed Riemannian Trust-Region method can be applied to the problem of computing extreme eigenpairs of a matrix pencil, with strong global convergence and local convergence properties. This paper addresses inherent inefficiencies of an explicit trust-region mechanism. We propose a new algorithm, the Implicit Riemannian Trust-Region method for extreme eigenpair computation, which seeks to overcome these inefficiencies while still retaining the favorable convergence properties.

C. G. Baker, P. -A. Absil, K. A. Gallivan
Interval Arithmetic and Computational Science: Performance Considerations

Interval analysis is an alternative to conventional floating-point computation that offers guaranteed error bounds. Despite this advantage, interval methods have not gained widespread use in large-scale computational science applications. This paper addresses this issue from a performance perspective, comparing the performance of floating-point and interval operations for some small computational kernels. Particular attention is given to the Sun Fortran interval implementation, although the strategies introduced here to enhance performance are applicable to other interval implementations. Fundamental differences in the operation counts and memory reference requirements of interval and floating-point codes are discussed.

Alistair P. Rendell, Bill Clarke, Josh Milthorpe
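
To give a feel for why interval operations cost more than their floating-point counterparts, here is a toy interval type with outward rounding via math.nextafter (Python 3.9+); production interval libraries such as the Sun Fortran implementation instead control the hardware rounding mode, so this is only a conceptual sketch.

```python
from dataclasses import dataclass
import math

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        # one floating-point add per bound, plus outward rounding of each bound
        return Interval(math.nextafter(self.lo + other.lo, -math.inf),
                        math.nextafter(self.hi + other.hi, math.inf))

    def __mul__(self, other):
        # four products plus min/max selection, versus one multiply for plain floats
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(math.nextafter(min(products), -math.inf),
                        math.nextafter(max(products), math.inf))

x = Interval(1.0, 1.0 + 1e-9)
y = Interval(2.0, 2.0000001)
print(x + y, x * y)   # the enclosures are guaranteed to contain the exact results
```
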
Floating-Point Computation with Just Enough Accuracy

Most mathematical formulae are defined in terms of operations on real numbers, but computers can only operate on numeric values with finite precision and range. Using floating-point values as real numbers does not clearly identify the precision with which each value must be represented. Too little precision yields inaccurate results; too much wastes computational resources.

The popularity of multimedia applications has made fast hardware support for low-precision floating-point arithmetic common in Digital Signal Processors (DSPs), SIMD Within A Register (SWAR) instruction set extensions for general purpose processors, and in Graphics Processing Units (GPUs). In this paper, we describe a simple approach by which the speed of these low-precision operations can be speculatively employed to meet user-specified accuracy constraints. Where the native precision(s) yield insufficient accuracy, a simple technique is used to efficiently synthesize enhanced precision using pairs of native values.

Hank Dietz, Bill Dieter, Randy Fisher, Kungyen Chang
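
The "pairs of native values" idea mentioned above is usually built on error-free transformations; the sketch below shows the classical TwoSum building block and a pair addition in Python, purely as a generic illustration (it is not the authors' speculation mechanism or accuracy test).

```python
def two_sum(a, b):
    """Knuth's TwoSum: s = fl(a + b) and e such that a + b = s + e exactly."""
    s = a + b
    bb = s - a
    e = (a - (s - bb)) + (b - bb)
    return s, e

def pair_add(x, y):
    """Add two (hi, lo) pairs and renormalize into a new (hi, lo) pair."""
    s, e = two_sum(x[0], y[0])
    e += x[1] + y[1]
    return two_sum(s, e)

# Summing a million terms of 1e-16 into 1.0: plain double addition loses them all,
# while the paired representation keeps the information in the low-order word.
total_float = 1.0
total_pair = (1.0, 0.0)
for _ in range(1_000_000):
    total_float += 1e-16
    total_pair = pair_add(total_pair, (1e-16, 0.0))

print(total_float)                         # 1.0 -- the small terms vanished
print(total_pair[0] + total_pair[1])       # ~1.0000000001, as expected
```
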

Modelling and Simulation in the Natural Sciences

Independent Component Analysis Applied to Voice Activity Detection

In this paper we present the first application of Independent Component Analysis (ICA) to Voice Activity Detection (VAD). The accuracy of a multiple observation-likelihood ratio test (MO-LRT) VAD is improved by transforming the set of observations to a new set of independent components. Clear improvements in speech/non-speech discrimination accuracy for low false alarm rate demonstrate the effectiveness of the proposed VAD. It is shown that the use of this new set leads to a better separation of the speech and noise distributions, thus allowing a more effective discrimination and a tradeoff between complexity and performance. The algorithm is optimum in those scenarios where the loss of speech frames could be unacceptable, causing a system failure. The experimental analysis carried out on the AURORA 3 databases and tasks provides an extensive performance evaluation together with an exhaustive comparison to the standard VADs such as ITU G.729, GSM AMR and ETSI AFE for distributed speech recognition (DSR), and other recently reported VADs.

J. M. Górriz, J. Ramírez, C. G. Puntonet, E. W. Lang, K. Stadlthanner
Characterizing the Performance and Energy Attributes of Scientific Simulations

We characterize the performance and energy attributes of scientific applications based on nonlinear partial differential equations (PDEs), where the dominant cost is that of sparse linear system solution. We obtain performance and energy metrics using cycle-accurate emulations on a processor and memory system derived from the PowerPC RISC architecture with extensions to resemble the processor in the BlueGene/L. These results indicate that low-power modes of CPUs such as Dynamic Voltage Scaling (DVS) can indeed result in energy savings at the expense of performance degradation. We then consider the impact of certain memory subsystem optimizations to demonstrate that these optimizations, in conjunction with DVS, can provide faster execution time and lower energy consumption. For example, on the optimized architecture, if DVS is used to scale down the processor to 600MHz, execution times are faster by 45% with energy reductions of 75% compared to the original architecture at 1GHz. The insights gained from this study can help scientific applications better utilize the low-power modes of processors as well as guide the selection of hardware optimizations in future power-aware, high-performance computers.

Sayaka Akioka, Konrad Malkowski, Padma Raghavan, Mary Jane Irwin, Lois Curfman McInnes, Boyana Norris
Computation of Si Nanowire Bandstructures on Parallel Machines Through Domain Decomposition

This paper presents a methodology for calculating silicon nanowire (SiNW) bandstructures on parallel machines. A partition scheme is developed through domain decomposition, and load balance is considered when scheduling jobs on machines with different efficiencies. Using an sp^3 d^5 s^* tight-binding model, the Hamiltonian matrix of the SiNW is constructed and its eigenvalues are extracted with the Implicitly Restarted Arnoldi Method. Finally, parallel performance is tested on identical machines and a linear speedup is obtained as the number of nodes increases.

Tao Li, Ximeng Guan, Zhiping Yu, Wei Xue
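
For readers who want to reproduce the eigenvalue-extraction step in a scripting environment, the sketch below calls SciPy's ARPACK wrapper (implicit restarting) on a random sparse symmetric matrix; the matrix is only a stand-in and is not an sp^3 d^5 s^* tight-binding Hamiltonian.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Random sparse symmetric matrix standing in for a tight-binding Hamiltonian.
n = 5000
rng = np.random.default_rng(0)
B = sp.random(n, n, density=1e-3, random_state=0, data_rvs=rng.standard_normal)
H = (B + B.T) * 0.5 + sp.diags(rng.normal(size=n))

# ARPACK extracts a few eigenvalues without forming the full spectrum;
# here the six algebraically smallest ones.
vals = eigsh(H, k=6, which='SA', return_eigenvectors=False)
print("six lowest eigenvalues:", np.sort(vals))
```
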
Semi-Lagrangian Scale Selective Two-Time-Level Scheme for Hydrostatic Atmospheric Model

A semi-Lagrangian scale selective finite difference scheme for hydrostatic atmospheric model is developed. The principal characteristics of the scheme are solution of the trajectory equations for advection, explicit first order approximation of physically insignificant adjustment terms and implicit time splitting discretization of the principal physical modes. This approach allows the use of large time steps, keeps practically the second order of accuracy and requires at each time step the amount of calculations proportional to the number of spatial grid points. The performed numerical experiments show computational efficiency of the proposed scheme and accuracy of the predicted atmospheric fields.

Andrei Bourchtein, Ludmila Bourchtein, Maxim Naumov
Identifying Cost-Effective Common Subexpressions to Reduce Operation Count in Tensor Contraction Evaluations

Complex tensor contraction expressions arise in accurate electronic structure models in quantum chemistry, such as the coupled cluster method. Transformations using algebraic properties of commutativity and associativity can be used to significantly decrease the number of arithmetic operations required for evaluation of these expressions. Operation minimization is an important optimization step for the Tensor Contraction Engine, a tool being developed for the automatic transformation of high-level tensor contraction expressions into efficient programs. The identification of common subexpressions among a set of tensor contraction expressions can result in a reduction of the total number of operations required to evaluate the tensor contractions. In this paper, we develop an effective algorithm for common subexpression identification and demonstrate its effectiveness on tensor contraction expressions for coupled cluster equations.

Albert Hartono, Qingda Lu, Xiaoyang Gao, Sriram Krishnamoorthy, Marcel Nooijen, Gerald Baumgartner, David E. Bernholdt, Venkatesh Choppella, Russell M. Pitzer, J. Ramanujam, Atanas Rountev, P. Sadayappan
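
The operation-count payoff of factoring out a shared contraction can be seen in a toy NumPy example (this is not the Tensor Contraction Engine algorithm, and the tensor names, index labels, and sizes are invented):

```python
import numpy as np

# Both target expressions contain the shared contraction T[a,c] = A[a,b] B[b,c];
# computing it once and reusing it saves one full matrix-matrix product
# (roughly 3 n^3 multiply-adds instead of 4 n^3 for this pair of expressions).
n = 300
rng = np.random.default_rng(0)
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, n))
C = rng.normal(size=(n, n))
D = rng.normal(size=(n, n))

# Naive evaluation: the A-B contraction is formed twice.
R1_naive = np.einsum('ab,bc,cd->ad', A, B, C, optimize=True)
R2_naive = np.einsum('ab,bc,cd->ad', A, B, D, optimize=True)

# With the common subexpression factored out and reused.
T = np.einsum('ab,bc->ac', A, B)
R1 = T @ C
R2 = T @ D

assert np.allclose(R1, R1_naive) and np.allclose(R2, R2_naive)
print("results match; the shared intermediate T was computed once instead of twice")
```
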

Modelling and Simulation in the Natural Sciences

Prediction of Readthroughs Based on the Statistical Analysis of Nucleotides Around Stop Codons

Readthrough is an unusual translational event in which a stop codon is skipped or misread as a sense codon. Translation then continues past the stop codon and results in an extended protein product. Reliable prediction of readthroughs is not easy since readthrough is in competition with standard decoding and readthroughs occur only at a tiny fraction of stop codons in the genome. We developed a program that predicts readthrough sites directly from statistical analysis of nucleotides surrounding all stop codons in genomic sequences. Experimental results of the program on 86 genome sequences showed that 80% and 100% of the actual readthrough sites were found in the top 3% and 10% prediction scores, respectively.

Sanghoon Moon, Yanga Byun, Kyungsook Han
Web Service for Finding Ribosomal Frameshifts

Recent advances in biomedical research have generated a huge amount of data and software to deal with the data. Many biomedical systems use heterogeneous data structures and incompatible software components, so integration of software components into a system can be hindered by incompatibilities between the components of the system. This paper presents an XML web service and web application program for predicting ribosomal frameshifts from genomic sequences. Experimental results show that the web service of the program is useful for resolving many problems with incompatible software components as well as for predicting frameshifts of diverse types. The web service is available at http://wilab.inha.ac.kr/fsfinder2.

Yanga Byun, Sanghoon Moon, Kyungsook Han
A Remote Sensing Application Workflow and Its Implementation in Remote Sensing Service Grid Node

In this article we describe a remote sensing application workflow used in building a Remote Sensing Information Analysis and Service Grid Node at the Institute of Remote Sensing Applications, based on the Condor platform. The goal of the Node is to make good use of physically distributed resources in the field of remote sensing science, such as data, models, algorithms, and computing resources left unused on the Internet. To implement it, we use workflow technology to manage the node, control resources, and expose traditional algorithms as Grid services. We use web service technology to communicate with the Spatial Information Grid (SIG) and other Grid systems, and JSP technology to provide an independent portal. Finally, the current status of this ongoing work is described.

Ying Luo, Yong Xue, Chaolin Wu, Yincui Hu, Jianping Guo, Wei Wan, Lei Zheng, Guoyin Cai, Shaobo Zhong, Zhengfang Wang
Predictive Analysis of Blood Gasometry Parameters Related to the Infants Respiration Insufficiency

The paper presents an application of an artificial immune system to time series prediction of medical data. The prediction mechanism used in this work is based on the paradigm that, during the immune response, there exist not only antigen-antibody connections but also antigen-antigen connections, whose role is to control antibody activity. The learning mechanism of the artificial immune network and the results of the tests carried out are also presented.

Wieslaw Wajs, Mariusz Swiecicki, Piotr Wais, Hubert Wojtowicz, Pawel Janik, Leszek Nowak
Protein Simulation Using Fast Volume Preservation

Since empirical force field computation carries a heavy computational cost, the simulation of complex protein structures is a time-consuming process for predicting their configuration. To achieve fast but plausible global deformations of a protein, we present an efficient and robust global-shape-based protein dynamics model using an implicit volume preservation method. A triangulated surface of the protein is generated using a marching cubes algorithm in a pre-processing step. Normal mode analysis based on motion data is used as a reference deformation of the protein to estimate the forces necessary for protein movements. Our protein simulator provides a useful test-bed for initial screening of behavioral analysis to simulate various types of protein complexes.

Min Hong, David Osguthorpe, Min-Hyung Choi

Advanced Numerical Algorithms

Third-Order Spectral Characterization of Termite’s Emission Track

A higher-order frequency-domain characterization of termite activity (feeding and excavating) has been performed by means of analyzing diagonal slices of the bi-spectrum. Five sets of signals of different qualities were acquired using a high-sensitivity probe-accelerometer. We conclude that it is possible to establish a third-order pattern (spectral track) associated with the termite emissions, resulting from the impulsive response of the sensor and the body or substratum through which the emitted waves propagate.

Juan-José González de-la-Rosa, I. Lloret, Carlos G. Puntonet, A. Moreno, J. M. Górriz
Parallel Optimization Methods Based on Direct Search

This paper is focused on the parallelization of Direct Search Optimization methods, which are part of the family of derivative-free methods. These methods are known to be quite slow, but are easily parallelizable, and have the advantage of achieving global convergence in some problems where standard Newton-like methods (based on derivatives) fail. These methods have been tested with the Inverse Additive Singular Value Problem, which is a difficult, highly nonlinear problem. The results obtained have been compared with those obtained with derivative methods, and the efficiency of the parallel versions has been studied.

Rafael A. Trujillo Rasúa, Antonio M. Vidal, Víctor M. García
On the Selection of a Transversal to Solve Nonlinear Systems with Interval Arithmetic

This paper investigates the impact of the selection of a transversal on the speed of convergence of interval methods based on the nonlinear Gauss-Seidel scheme to solve nonlinear systems of equations. It is shown that, in marked contrast with the linear case, such a selection does not speed up the computation in the general case; directions for research on more flexible methods to select projections are then discussed.

Frédéric Goualard, Christophe Jermann
Landscape Properties and Hybrid Evolutionary Algorithm for Optimum Multiuser Detection Problem

Optimum multiuser detection (OMD) for CDMA systems is an NP-complete problem. The fitness landscape has been proven to be very useful for understanding the behavior of combinatorial optimization algorithms and can help in predicting their performance. This paper analyzes the statistical properties of the fitness landscape of the OMD problem by performing autocorrelation analysis, a fitness distance correlation test and an epistasis measure. The analysis results, including epistasis variance, correlation length and fitness distance correlation coefficient for different instances, explain why some random search algorithms are effective methods for the OMD problem and give hints on how to design more efficient randomized search heuristics for it. Based on these results, a multi-start greedy algorithm is proposed for multiuser detection, and simulation results show that it provides rather good performance in cases where other suboptimum algorithms perform poorly.

Shaowei Wang, Qiuping Zhu, Lishan Kang
A Parallel Solution of Hermitian Toeplitz Linear Systems

A parallel algorithm for solving complex Hermitian Toeplitz linear systems is presented. The parallel algorithm exploits the special structure of Toeplitz matrices to obtain the solution at a quadratic asymptotic cost. Our parallel algorithm transforms the Toeplitz matrix into a Cauchy-like matrix; working on a Cauchy-like system allows us to work with real arithmetic. The parallel algorithm for the solution of a Cauchy-like system has a lower communication cost than other parallel algorithms that work directly on the Toeplitz system. We use a message-passing programming model. The experimental tests were carried out on a cluster of personal computers.

Pedro Alonso, Miguel O. Bernabeu, Antonio M. Vidal

Applications of Computing as a Scientific Paradigm

Speech Event Detection Using Support Vector Machines

An effective speech event detector is presented in this work for improving the performance of speech processing systems working in noisy environments. The proposed method is based on a trained support vector machine (SVM) that defines an optimized non-linear decision rule involving the subband SNRs of the input speech. The classification rule in the input space and the ability of the SVM model to learn how the signal is masked by the background noise are analyzed. The algorithm also incorporates a noise reduction block working in tandem with the voice activity detector (VAD) that has been shown to be very effective in high-noise environments. The experimental analysis carried out on the Spanish SpeechDat-Car database shows clear improvements over standard VADs including ITU G.729, ETSI AMR and ETSI AFE for distributed speech recognition (DSR), and over other recently reported VADs.

P. Yélamos, J. Ramírez, J. M. Górriz, C. G. Puntonet, J. C. Segura
BRUST: An Efficient Buffer Replacement for Spatial Databases

This paper presents a novel buffer management technique for spatial database management systems. Much research has been performed on various buffer management techniques in order to reduce disk I/O. However, many of the proposed techniques utilize only the temporal locality of access patterns. In spatial database environments, there exists not only temporal locality, where a recently accessed object will be accessed again in the near future, but also spatial locality, where the objects in recently accessed regions will be accessed again in the near future. Thus, in this paper, we present a buffer management technique, called BRUST, which utilizes both temporal locality and spatial locality in spatial database environments.

Jun-Ki Min
Effects of O3 Adsorption on the Emission Properties of Single-Wall Carbon Nanotubes: A Density Functional Theory Study

In this study, we report density functional theory calculations to examine the effects of O3 adsorption on the field emission properties of capped C(5,5) single-wall carbon nanotubes. Structural changes, adsorption energies, and the first ionization potential for possible adsorption sites are discussed, including an applied field in the calculations. The results suggest a suppression of the emission upon O3 adsorption, explained by the charge transfer, while the favored adsorption for the etched structures rationalizes enhancement due to sharper tips upon opening of the carbon nanotube when ozonized, consistent with experimental observations.

B. Akdim, T. Kar, D. A. Shiffler, X. Duan, R. Pachter
Developing Metadata Services for Grid Enabling Scientific Applications

In web-based scientific computing, the creation of parameter studies of scientific applications is required to conduct large numbers of experiments through a dynamic graphical user interface, without paying the price of great difficulty of use. The generated parameter spaces, which include various problems, are incorporated into the computation of the application in grid computational environments. At the same time, for grid-based computing, scientific applications are geographically distributed as computing resources. In order to run a particular application on a certain site, we need a meaningful metadata model for applications, used as the adaptive application metadata service by the job submission service. In this paper, we present how a general XML approach and our design for the generation of input parameters are deployed on a particular scientific application as an example, and how application metadata is incorporated with the job submission service in the SYNSEIS (SYNthetic SEISmogram generation) tool.

Choonhan Youn, Tim Kaiser, Cindy Santini, Dogan Seber

Applications of Computing as a Scientific Paradigm

In Silico Three Dimensional Pharmacophore Models to Aid the Discovery and Design of New Antimalarial Agents

Malaria is one of the most dangerous diseases, affecting primarily poor people of tropical and subtropical regions. The search for novel drugs against specific parasites is an important goal for antimalarial drug discovery. This study describes how 3D pharmacophores for antimalarial activity could be developed from known antimalarials and be used as screening tools for virtual compound libraries to identify new antimalarial candidates, with examples of indolo[2,1-b]quinazoline-6,12-diones (tryptanthrins) that exhibited in vitro activity below 100 ng/mL. These models mapped onto the potent analogues and also onto other well-known antimalarial drugs of different chemical classes, including quinolines, chalcones, rhodamine dyes, Pfmrk CDK inhibitors, malarial KASIII inhibitors, and plasmepsin inhibitors. The pharmacophores allowed search and identification of new antimalarials from an in-house multi-conformer 3D CIS database and enabled custom-designed synthesis of new potent analogues that are found to be potent in vitro against the W2, D6, and TM91C235 strains of P. falciparum.

Apurba K. Bhattacharjee, Mark G. Hartell, Daniel A. Nichols, Rickey P. Hicks, John E. van Hamont, Wilbur K. Milhous
Fuzzy Logic Speech/Non-speech Discrimination for Noise Robust Speech Processing

This paper presents a fuzzy logic speech/non-speech discrimination method for improving the performance of speech processing systems working in noisy environments. The fuzzy system is based on a Sugeno inference engine with membership functions defined as combinations of two Gaussian functions. The rule base consists of ten fuzzy if-then statements defined in terms of the denoised subband signal-to-noise ratios (SNRs) and the zero crossing rates (ZCRs). Its operation is optimized by means of a hybrid training algorithm combining the least-squares method and the backpropagation gradient descent method for training the membership function parameters. The experiments conducted on the Spanish SpeechDat-Car database show that the proposed method yields clear improvements over a set of standardized VADs for discontinuous transmission (DTX) and distributed speech recognition (DSR), and also over recently published VAD methods.

R. Culebras, J. Ramírez, J. M. Górriz, J. C. Segura
Classification of Surimi Gel Strength Patterns Using Backpropagation Neural Network and Principal Component Analysis

This paper proposes practical and efficient supervised and unsupervised classifications for surimi gel strength patterns. The supervised learning method is a backpropagation neural network with three layers of 17-34-4 neurons per layer. The unsupervised classification method consists of a data dimensionality reduction step via the PCA algorithm and a classification step using a correlation-coefficient similarity measure. In the similarity measure step, each surimi gel strength pattern is compared with the surimi eigen-gel patterns produced by the PCA step. In this paper, we treat a datum pattern as a data dimension. The training data sets (12 patterns, or 12 data dimensions) of surimi gel strength are collected from 4 experiments with fixed setting temperatures of 35 °C, 40 °C, 45 °C, and 50 °C, respectively. The testing data sets (48 patterns) include the original training set and versions with added Gaussian noise of 1, 3 and 5 points, respectively. In the experiments, the two proposed methods classify all testing data sets into their proper classes.

Krisana Chinnasarn, David Leo Pyle, Sirima Chinnasarn
Optimal Matching of Images Using Combined Color Feature and Spatial Feature

In this paper we develop a new image retrieval method based on combined color feature and spatial feature. We introduce an ant colony clustering algorithm, which helps us develop a perceptually dominant color descriptor. The similarity between any two images is measured by combining the dominant color feature with its spatial feature. The optimal matching theory is employed to search the optimal matching pair of dominant color sets of any two images, and the similarity between the query image and the target image is computed by summing up all the distances of every matched pair of dominant colors. The algorithm introduced in this paper is well suited for creating small spatial color descriptors and is efficient. It is also suitable for image representation, matching and retrieval.

Xin Huang, Shijia Zhang, Guoping Wang, Heng Wang
A Novel Network Intrusion Attempts Prediction Model Based on Fuzzy Neural Network

Identifying the intrusion attempts on monitored systems is extremely vital for the next generation of intrusion detection systems. In this paper, a novel network intrusion attempt prediction model (FNNIP) is developed, which is based on the observation of network packet sequences. A new fuzzy neural network based on a novel BP learning algorithm is designed and then applied to the network intrusion attempt prediction scheme. After an analysis of the features of the experimental data sets, the experimental process is detailed. The experimental results show that the proposed scheme predicts network intrusion attempts accurately from observed network packet sequences.

Guiling Zhang, Jizhou Sun

Modelling and Simulation in Engineering

On the Random Sampling Amplitude Error

The main purpose of this paper is to examine the distribution of the random amplitude error for the sampling problem in diverse situations, and specific formulas are given, which reveal the connection between the random errors of the sampled values and the amplitude error caused by them. The information loss error is also included as a special case.

Shouyuan Yang, Zhanjie Song, Xingwei Zhou
Enhancing 3D Face Recognition by Combination of Voiceprint

This paper investigates the enhancement of identification performance when using a voice classifier to help 3D face recognition. 3D face recognition is well known to be superior to 2D due to its invariance to illumination, make-up and pose. However, it is still challenged by expression variation. The partial ICP method we use for 3D face recognition can implicitly and dynamically extract the rigid parts of the facial surface and achieves much better performance than other 3D face recognition methods under expression changes. This work further improves recognition performance by combining a voiceprint classifier with the partial ICP method. We implement 9 combination schemes, and experiments on a database of 360 models with 40 subjects, each with 9 3D face scans covering four different kinds of expression and 9 sessions of utterance, show that the improvement in performance is very promising.

Yueming Wang, Gang Pan, Yingchun Yang, Dongdong Li, Zhaohui Wu
Physical Modeling of Laser-Induced Breakdown of Glass

We develop a physical model for the investigation of laser-induced breakdown of glass. To estimate the laser energy absorption through electron heating, we derive a power transfer rate equation and an electron number density equation for a steady state as functions of temperature and electric field. Numerical simulations using the derived equations show the dependence of the laser power absorption of glass on temperature and electric field strength.

Jaemyoung Lee, Michael F. Becker, Taikyeong T. Jeong
An Enhanced Speech Emotion Recognition System Based on Discourse Information

There are certain correlations between two persons’ emotional states in communication, but no previous work has focused on them. In this paper, a novel conversation database in Chinese was collected and an emotion interaction matrix was proposed to embody the discourse information in conversation. Based on the discourse information, an enhanced speech emotion recognition system is presented to improve recognition accuracy. Some modifications were made to traditional KNN classification to reduce the interference of noise. Experimental results show that our system achieves a 3% to 5% relative improvement compared with the traditional method.

Chun Chen, Mingyu You, Mingli Song, Jiajun Bu, Jia Liu
Simulation of Time-Multiplexing Cellular Neural Networks with Numerical Integration Algorithms

A novel approach to simulate Cellular Neural Networks (CNN) is presented in this paper. The approach, time-multiplexing simulation, is prompted by the need to simulate hardware models and test hardware implementations of CNN. For practical applications, due to hardware limitations, it is impossible to have a one-to-one mapping between the CNN hardware processors and all the pixels of the image. This simulator provides a solution by processing the input image block by block, with the number of pixels in a block being the same as the number of CNN processors in the hardware. The algorithm for implementing this simulator is presented along with popular numerical integration algorithms. Some simulation results and comparisons are also presented.

V. Murugesh, K. Murugesan

Modelling and Simulation in Engineering

Dynamics of POD Modes in Wall Bounded Turbulent Flow

The dynamic properties of POD modes of the fluctuating velocity field developing in the wall region of turbulent channel flow are investigated. The flow of viscous incompressible fluid in a channel is simulated numerically by means of a parallel computational code based on a mixed spectral-finite difference algorithm for the numerical integration of the Navier-Stokes equations. The DNS approach (Direct Numerical Simulation of turbulence) is followed in the calculations, performed at friction Reynolds number Re_τ = 180. A database representing the turbulent statistically steady state of the flow through 10 viscous time units is assembled and the Proper Orthogonal Decomposition technique (POD) is applied to the fluctuating portion of the velocity field. The dynamic properties of the most energetic POD modes are investigated, showing a clear interaction between streamwise-independent modes and quasi-streamwise modes in the temporal development of the turbulent flow field.

Giancarlo Alfonsi, Leonardo Primavera
Score Evaluation Within the Extended Square-Root Information Filter

A newly developed algorithm for evaluating the Log Likelihood Gradient (score) of linear discrete-time dynamic systems is presented, based on the extended Square-Root Information Filter (eSRIF). The new result can be used for efficient calculations in gradient-search algorithms for maximum likelihood estimation of the unknown system parameters. The theoretical results are accompanied by examples showing the superior performance of this computational approach over the conventional one.

Maria V. Kulikova, Innokenti V. Semoushin
An Improved Algorithm for Sequence Pair Generation

Sequence Pair is an elegant representation for block placement in IC design, and a procedure to generate the SP from an existing placement is necessary in most cases. An improved generation algorithm is proposed to replace existing methods that are either difficult or inefficient to implement. The algorithm simplifies the definition of the relation between blocks and avoids employing complicated graph operations. The time complexity of the algorithm is O(n^2) and can be reduced to O(n log n), where n is the number of blocks. The experimental results of the algorithm show its superiority in running time.

Mingxu Huo, Koubao Ding
Implicit Constraint Enforcement for Rigid Body Dynamic Simulation

The paper presents a simple, robust, and effective constraint enforcement scheme for rigid body dynamic simulation. The constraint enforcement scheme treats the constraint equations implicitly providing stability as well as accuracy in constrained dynamic problems. The method does not require ad-hoc problem dependent parameters. We describe the formulation of implicit constraint enforcement for both holonomic and non-holonomic cases in rigid body simulation. A first order version of the method is compared to a first order version of the well-known Baumgarte stabilization.

Min Hong, Samuel Welch, John Trapp, Min-Hyung Choi
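For context, the Baumgarte stabilization mentioned above is commonly written as the following feedback law applied to a holonomic constraint C(q) = 0; this is the textbook form, not necessarily the exact first-order variant the authors compare against, and the gains α and β are exactly the kind of ad-hoc, problem-dependent parameters the implicit scheme avoids.

```latex
% Baumgarte stabilization of a holonomic constraint C(q) = 0:
% the constraint is replaced by a damped second-order feedback law
\ddot{C}(q) + 2\alpha\,\dot{C}(q) + \beta^{2} C(q) = 0,
\qquad \alpha,\ \beta > 0 \ \text{(user-chosen gains)}
```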
Heat Diffusion – Searching for the Accurate Modeling

The authors study three approaches that allow one to model steady-state heat conduction in a 2D multiphase composite. The subject under investigation is the thermal conductivity of a c-BN composite thick film with possible inclusions of air bubbles. To determine the thermal conductivity we have utilized (i) the commercial program ANSYS, for which a random structure has been externally generated, (ii) a cellular automata (CA) based model and (iii) a modified cellular automata based model in which we take into account the thermal contact resistance between adjacent grains of c-BN.

Malgorzata Langer, Janusz Wozny, Malgorzata Jakubowska, Zbigniew Lisik

Parallel and Distributed Algorithms

Parallel Exact and Approximate Arrow-Type Inverses on Symmetric Multiprocessor Systems

In this paper we present new parallel inverse arrow-type matrix algorithms based on the concept of sparse factorization procedures, for computing explicitly exact and approximate inverses, on symmetric multiprocessor systems. The parallel implementation of the new inversion algorithms is discussed and numerical results are presented, using the simulation tool of Multi-Pascal.

George A. Gravvanis, Konstantinos M. Giannoutakis
A Permutation-Based Differential Evolution Algorithm Incorporating Simulated Annealing for Multiprocessor Scheduling with Communication Delays

Employing a differential evolution (DE) algorithm, we present a novel permutation-based search technique for list scheduling of parallel programs. By encoding a vector as a scheduling list and the differential variation as a swap operator, the DE algorithm can generate high-quality solutions in a short time. In the standard differential evolution algorithm, a greedy strategy is used while constructing the next generation, which may lead to convergence to a local optimum. To avoid this problem, we combine the differential evolution algorithm with a simulated annealing algorithm, which relaxes the criterion for selecting the next generation. We also use a stochastic topological sorting algorithm (STS) to generate an initial scheduling list. The results demonstrate that the hybrid differential evolution generates better, even optimal, solutions in most cases while remaining scalable.

Xiaohong Kong, Wenbo Xu, Jing Liu
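As a rough illustration of the hybrid described above, the sketch below combines permutation individuals (scheduling lists), a swap-based differential variation, and a simulated-annealing acceptance rule that relaxes the greedy DE selection. The fitness function, parameter values, and names are illustrative stand-ins, not the authors' implementation, which also handles precedence constraints, communication delays, and the STS initialization.

```python
import math
import random

def schedule_cost(order, task_cost):
    """Illustrative fitness only: weighted completion cost of a task order.
    A real list scheduler would map tasks to processors and honour
    precedence and communication delays."""
    return sum(task_cost[t] * (i + 1) for i, t in enumerate(order))

def swap_variation(order, n_swaps=1):
    """'Differential variation as a swap operator': perturb a permutation."""
    child = order[:]
    for _ in range(n_swaps):
        i, j = random.sample(range(len(child)), 2)
        child[i], child[j] = child[j], child[i]
    return child

def de_sa_schedule(task_cost, pop_size=20, generations=200, t0=10.0, cooling=0.98):
    tasks = list(task_cost)
    pop = [random.sample(tasks, len(tasks)) for _ in range(pop_size)]
    temp = t0
    for _ in range(generations):
        next_pop = []
        for ind in pop:
            trial = swap_variation(ind)
            delta = schedule_cost(trial, task_cost) - schedule_cost(ind, task_cost)
            # SA-style acceptance instead of purely greedy DE selection
            accept = delta <= 0 or random.random() < math.exp(-delta / temp)
            next_pop.append(trial if accept else ind)
        pop = next_pop
        temp *= cooling
    return min(pop, key=lambda o: schedule_cost(o, task_cost))

if __name__ == "__main__":
    costs = {f"t{i}": random.randint(1, 9) for i in range(8)}
    print(de_sa_schedule(costs))
```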
Accelerating the Viterbi Algorithm for Profile Hidden Markov Models Using Reconfigurable Hardware

Profile Hidden Markov Models (PHMMs) are used as a popular tool in bioinformatics for probabilistic sequence database searching. The search operation consists of computing the Viterbi score for each sequence in the database with respect to a given query PHMM. Because of the rapid growth of biological sequence databases, finding fast solutions is of highest importance to research in this area. Unfortunately, the required scan times of currently available sequential software implementations are very high. In this paper we show how reconfigurable hardware can be used as a computational platform to accelerate this application by two orders of magnitude.

Timothy F. Oliver, Bertil Schmidt, Yanto Jakop, Douglas L. Maskell
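For readers unfamiliar with the kernel being accelerated, the sketch below shows the generic log-space Viterbi recurrence; a profile HMM specializes the state set to match/insert/delete states per model position, which the hardware evaluates in parallel. The toy model and its probabilities are made up for illustration.

```python
import math

def viterbi_score(obs, states, log_start, log_trans, log_emit):
    """Log-space Viterbi: best log-probability of any state path emitting obs.
    Inputs are dictionaries of log-probabilities (illustrative API)."""
    v = {s: log_start[s] + log_emit[s][obs[0]] for s in states}
    for sym in obs[1:]:
        v = {s: max(v[p] + log_trans[p][s] for p in states) + log_emit[s][sym]
             for s in states}
    return max(v.values())

if __name__ == "__main__":
    lp = math.log
    states = ["M", "I"]                      # stand-ins for match/insert states
    log_start = {"M": lp(0.7), "I": lp(0.3)}
    log_trans = {"M": {"M": lp(0.8), "I": lp(0.2)},
                 "I": {"M": lp(0.5), "I": lp(0.5)}}
    log_emit = {"M": {"A": lp(0.6), "C": lp(0.4)},
                "I": {"A": lp(0.25), "C": lp(0.75)}}
    print(viterbi_score("ACCA", states, log_start, log_trans, log_emit))
```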
Benchmarking and Adaptive Load Balancing of the Virtual Reactor Application on the Russian-Dutch Grid

This paper addresses a problem of porting a distributed parallel application to the Grid. As a case study we use the Virtual Reactor application on the Russian-Dutch Grid testbed. We sketch the Grid testbed infrastructure and application modular architecture, and concentrate on performance issues of one of the core parallel solvers on the Grid. We compare the performance achieved on homogeneous resources with that observed on heterogeneous computing and networking infrastructure. To increase the parallel efficiency of the solver on heterogeneous resources we developed an adaptive load balancing algorithm. We demonstrate the speedup achieved with this technique and indicate the ways to further enhance the algorithm and develop an automated procedure for optimal utilization of Grid resources for parallel computing.

Vladimir V. Korkhov, Valeria V. Krzhizhanovskaya
Improved Prediction Methods for Wildfires Using High Performance Computing: A Comparison

Recently, dry and hot seasons have seriously increased the risk of forest fire in the Mediterranean area. Wildland simulators, used to predict fire behavior, can give erroneous forecasts due to a lack of precision in certain dynamic input parameters. Developing methods that avoid such parameter problems can significantly improve fire behavior prediction. In this paper, two methods involving statistical and uncertainty schemes are evaluated. In each one, the number of simulations that must be carried out is enormous, and high-performance computing techniques are necessary to make the methodology feasible. These techniques have been implemented in parallel schemes and tested on a Linux cluster using MPI.

Germán Bianchini, Ana Cortés, Tomàs Margalef, Emilio Luque

Other Aspects of Computational Science

Support Vector Machine Regression Algorithm Based on Chunking Incremental Learning

On the basis of least squares support vector machine regression (LSSVR), an adaptive and iterative support vector machine regression algorithm based on chunking incremental learning (CISVR) is presented in this paper. CISVR is an iterative algorithm in which the samples are added to the working set in batches. The inverse of the coefficient matrix from the previous iteration is used to calculate the regression parameters. The proposed approach therefore avoids computing the inverse of a large-scale matrix and improves the learning speed of the algorithm. Support vectors are selected adaptively in the iteration to maintain sparseness. Experimental results show that the learning speed of CISVR is greatly improved compared with LSSVR at similar training accuracy. At the same time, the number of support vectors obtained by the presented algorithm is much smaller than that obtained by LSSVR.

Jiang Jingqing, Song Chuyi, Wu Chunguo, Marchese Maurizio, Liang Yangchun
Factorization with Missing and Noisy Data

Several factorization techniques have been proposed for tackling the Structure from Motion problem. Most of them provide a good solution as long as the amount of missing data is within an acceptable ratio. Focusing on this problem, we propose an incremental multiresolution scheme able to deal with a high rate of missing data, as well as noisy data. It is based on an iterative approach that applies a classical factorization technique in an incrementally reduced space. Information recovered following a coarse-to-fine strategy is used both for filling in the missing entries of the input matrix and for denoising the original data. A statistical study of the proposed scheme compared to a classical factorization technique is given. Experimental results obtained with synthetic data and real video sequences are presented to demonstrate the viability of the proposed approach.

Carme Julià, Angel Sappa, Felipe Lumbreras, Joan Serrat, Antonio López
An Edge-Based Approach to Motion Detection

This paper presents a simple technique for motion detection in steady-camera video sequences. It consists of three stages. Firstly, a coarse moving edge representation is computed by a set of arithmetic operations between a given frame and two equidistant ones (initially the nearest ones). Secondly, non-desired edges are removed by means of a filtering technique. The previous two stages are enough for detecting edges corresponding to objects moving in the image plane with a dynamics higher than the camera’s capture rate. However, in order to extract moving edges with a lower dynamics, a scheme that repeats the previous two stages at different time scales is performed. This temporal scheme is applied over couples of equidistant frames and stops when no new information about moving edges is obtained or a maximum number of iterations is reached. Although the proposed approach has been tested on human body motion detection it can be used for detecting moving objects in general. Experimental results with scenes containing movements at different speeds are presented.

Angel D. Sappa, Fadi Dornaika
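A minimal sketch of the first two stages described above is given below: a coarse moving-edge map from arithmetic operations between the current frame and two equidistant neighbours, followed by a simple threshold standing in for the filtering stage. The exact arithmetic combination, threshold, and frame sizes are illustrative assumptions, not the paper's operations.

```python
import numpy as np

def moving_edges(prev_f, cur_f, next_f, thresh=20.0):
    """Coarse moving-edge map: spatial edges of the current frame gated by
    temporal differences against both equidistant neighbour frames."""
    prev_f, cur_f, next_f = (f.astype(float) for f in (prev_f, cur_f, next_f))
    gy, gx = np.gradient(cur_f)               # simple spatial edge strength
    edges = np.hypot(gx, gy)
    # a pixel must differ from BOTH neighbours to count as moving
    motion = np.minimum(np.abs(cur_f - prev_f), np.abs(cur_f - next_f))
    return (edges > thresh) & (motion > thresh)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.integers(0, 255, size=(3, 120, 160)).astype(np.uint8)
    mask = moving_edges(*frames)
    print(mask.sum(), "candidate moving-edge pixels")
```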
A Dominating Set Based Clustering Algorithm for Mobile Ad Hoc Networks

We propose a new Connected Dominating Set (CDS) based algorithm for clustering in Mobile Ad hoc Networks (MANETs). Our algorithm is based on Wu and Li’s [14] algorithm, however we provide significant modifications by considering the degrees of the nodes during marking process and also provide further heuristics to determine the color of a node in the initial phase. We describe, analyze and measure performance of this new algorithm by simulation and show that it performs better than Wu and Li’s [14] algorithm especially in the case of dense networks.

Deniz Cokuslu, Kayhan Erciyes, Orhan Dagdeviren
MVRC Heuristic for Solving the Multi-Choice Multi-Constraint Knapsack Problem

This paper presents the heuristic algorithm Maximizing Value per Resources Consumption (MVRC), which solves the Multi-Choice Multi-Constraint Knapsack Problem, a variant of the well-known NP-hard Knapsack optimization problem. Starting with an initial solution, MVRC performs iterative improvements by exchanging already picked items in order to reach the optimal solution. Following a three-step procedure, it tries to pick the items with the maximum Value per Aggregate Resources Consumption. The proposed algorithm has been evaluated in terms of the quality of the final solution and its run-time performance.

Maria Chantzara, Miltiades Anagnostou

Computational Science Aspects of Data Mining and Information Retrieval

FACT: A New Fuzzy Adaptive Clustering Technique

Clustering belongs to the set of mathematical problems which aim at classification of data or objects into related sets or classes. Many different pattern clustering approaches based on the pattern membership model could be used to classify objects within various classes. Different models of Crisp, Hierarchical, Overlapping and Fuzzy clustering algorithms have been developed which serve different purposes. The main deficiency that most of the algorithms face is that the number of clusters for reaching the optimal arrangement is not automatically calculated and needs user intervention. In this paper we propose a fuzzy clustering technique (FACT) which determines the number of appropriate clusters based on the pattern essence. Different experiments for algorithm evaluation were performed which show a much better performance compared with the typical widely used K-means clustering algorithm.

Faezeh Ensan, Mohammad Hossien Yaghmaee, Ebrahim Bagheri
Algorithm for K Disjoint Maximum Subarrays

The maximum subarray problem is to find the array portion that maximizes the sum of the array elements in it. For K disjoint maximum subarrays, Ruzzo and Tompa gave an O(n) time solution for one dimension. This solution is, however, difficult to extend to two dimensions. While a trivial solution of O(Kn^3) time is easily obtainable for two dimensions, little study has been undertaken to better this. We first propose an O(n + K log K) time solution for one dimension. This is equivalent to Ruzzo and Tompa’s when order is considered. Based on this, we achieve O(n^3 + Kn^2 log n) time for two dimensions. This is cubic time when K ≤ n/log n.

Sung Eun Bae, Tadao Takaoka
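As background, the classical one-dimensional maximum subarray problem that the K-disjoint and two-dimensional variants above build on is solved in O(n) time by Kadane's algorithm, sketched below.

```python
def max_subarray(a):
    """Kadane's O(n) algorithm: returns (best_sum, start, end) of the
    contiguous portion of `a` with maximum sum (end is inclusive)."""
    best_sum, best_range = a[0], (0, 0)
    cur_sum, cur_start = a[0], 0
    for i in range(1, len(a)):
        if cur_sum < 0:                 # the running prefix only hurts: restart
            cur_sum, cur_start = a[i], i
        else:
            cur_sum += a[i]
        if cur_sum > best_sum:
            best_sum, best_range = cur_sum, (cur_start, i)
    return (best_sum, *best_range)

if __name__ == "__main__":
    print(max_subarray([3, -4, 5, -2, 6, -8, 4]))   # (9, 2, 4): 5 - 2 + 6 = 9
```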
An Evolutionary Approach in Information Retrieval

One critical step in information retrieval is the skimming of the returned documents, considered globally relevant by an information retrieval system as responses to a user’s query. This skimming generally has to be done in order to find the parts of the returned documents that contain the information satisfying the user’s information need. This task can be particularly heavy when only small parts of the returned documents are related to the requested topic. We therefore propose an automatic extraction and recomposition process that provides the user with synthetic documents, called composite documents, made of parts of documents extracted from the set of documents returned as responses to a query. The composite documents are built so that they summarize as concisely as possible the various aspects of information relevant to the query that are initially scattered among the returned documents. Due to the combinatorial cost of the recomposition process, we use a genetic algorithm whose individuals are texts and which aims at optimizing a similarity-based satisfaction criterion. We have implemented several variants of the algorithm and propose an analysis of the first experimental results, which seem promising for a preliminary work.

T. Amghar, B. Levrat, F. Saubion
An Index Data Structure for Searching in Metric Space Databases

This paper presents the Evolutionary Geometric Near-neighbor Access Tree (EGNAT) which is a new data structure devised for searching in metric space databases. The EGNAT is fully dynamic, i.e., it allows combinations of insert and delete operations, and has been optimized for secondary memory. Empirical results on different databases show that this tree achieves good performance for high-dimensional metric spaces. We also show that this data structure allows efficient parallelization on distributed memory parallel architectures. All this indicates that the EGNAT is suitable for conducting similarity searches on very large metric space databases.

Roberto Uribe, Gonzalo Navarro, Ricardo J. Barrientos, Mauricio Marín

Hybrid Computational Methods and New Algorithmic Approaches to Computational Kernels and Applications

Unsplittable Anycast Flow Problem: Formulation and Algorithms

Our discussion in this article centers on a new optimization problem called the unsplittable anycast flow problem (UAFP). We are given a directed network with arc capacities and a set of anycast requests. Anycast is a one-to-one-of-many delivery technique that allows a client to choose a content server from a set of replicated servers. In the context of unsplittable flows, an anycast request consists of two connections: upstream (from the client to the server) and downstream (in the opposite direction). The objective of UAFP is to find a subset of the requests of maximum total demand for which each upstream and downstream connection uses only one path and the capacity constraint is satisfied. To the best of our knowledge, this is the first study that addresses the UFP (unsplittable flow problem) in the context of anycast flows. After formulating the UAFP, we propose several heuristics to solve the problem. Next, we present results of a simulation evaluation of these algorithms.

Krzysztof Walkowiak
Lagrangean Heuristic for Anycast Flow Assignment in Connection-Oriented Networks

In this work we address the problem of anycast flow assignment. Anycast is a one-to-one-of-many delivery technique that allows a client to choose a content server from a set of replicated servers. We formulate an optimization problem of anycast flow assignment in a connection-oriented network, which is a 0/1, NP-complete problem. Thus, we propose a new effective heuristic algorithm based on the Lagrangean relaxation technique. To the best of our knowledge, this is the first study that applies Lagrangean relaxation to the anycast flow problem. We evaluate the performance of the proposed scheme by comparison with its counterpart, using a sample network topology and different scenarios of traffic demand patterns and replica location. The obtained results show the advantage of the Lagrangean heuristic over a previously proposed algorithm.

Krzysztof Walkowiak
Low Complexity Systolic Architecture for Modular Multiplication over GF(2^m)

Modular multiplication is known as an efficient basic operation for public key cryptosystems over GF(2^m). Various systolic architectures for performing modular multiplication have already been proposed based on a standard basis representation. However, they have high hardware complexity and long latency. Therefore, this paper presents a new algorithm and architecture for modular multiplication in GF(2^m). First, a new algorithm is proposed based on the LSB-first scheme using a standard basis representation. Then, a bit-serial systolic multiplier is derived with low hardware complexity and small latency. Since the proposed architecture incorporates simplicity, regularity, and modularity, it is well suited to VLSI implementation and can easily be applied to a modular exponentiation architecture. Furthermore, the architecture can be utilized as the basic architecture of a crypto-processor.

Hyun-Sung Kim, Sung-Woon Lee
A Generic Framework for Local Search: Application to the Sudoku Problem

Constraint Satisfaction Problems (CSP) provide a general framework for modeling many practical applications. CSPs can be solved with complete or incomplete methods. Although some frameworks have been designed to formalize constraint propagation, there are only a few studies of theoretical frameworks for local search. In this paper, we are concerned with the design of a generic framework that models local search as the computation of a fixed point of functions, and we apply it to solve the Sudoku problem. This work allows one to simulate standard strategies used for local search, and to easily design new strategies in a uniform framework.

T. Lambert, E. Monfroy, F. Saubion
C-Means Clustering Applied to Speech Discrimination

An effective voice activity detection (VAD) algorithm is proposed for improving speech recognition performance in noisy environments. The proposed speech/pause discrimination method is based on a hard-decision clustering approach built over a set of subband log-energies. Detecting the presence of speech frames (a new cluster) is achieved using a basic sequential algorithm scheme (BSAS) according to a given “distance” (in this case, geometrical distance) and a suitable threshold. The accuracy of the Cl-VAD algorithm lies in the use of a decision function defined over a multiple-observation (MO) window of averaged subband log-energies and the modeling of noise subspace into cluster prototypes. In addition, time efficiency is also reached due to the clustering approach which is fundamental in VAD real time applications, i.e. speech recognition. An exhaustive analysis on the Spanish SpeechDat-Car databases is conducted in order to assess the performance of the proposed method and to compare it to existing standard VAD methods. The results show improvements in detection accuracy over standard VADs and a representative set of recently reported VAD algorithms.

J. M. Górriz, J. Ramírez, I. Turias, C. G. Puntonet, J. González, E. W. Lang
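The basic sequential algorithmic scheme (BSAS) underlying the detector above can be sketched as follows: each feature vector joins the nearest existing cluster if it lies within a distance threshold, otherwise it starts a new cluster, and the appearance of a cluster far from the noise prototypes signals speech. The feature dimension, threshold, and prototype update rule here are illustrative assumptions.

```python
import numpy as np

def bsas(vectors, threshold, max_clusters=10):
    """Basic sequential algorithmic scheme over feature vectors
    (e.g. subband log-energies averaged over a multiple-observation window)."""
    prototypes = [np.asarray(vectors[0], dtype=float)]
    labels = [0]
    for v in vectors[1:]:
        v = np.asarray(v, dtype=float)
        dists = [np.linalg.norm(v - p) for p in prototypes]
        k = int(np.argmin(dists))
        if dists[k] > threshold and len(prototypes) < max_clusters:
            prototypes.append(v)          # new cluster: candidate speech onset
            labels.append(len(prototypes) - 1)
        else:
            prototypes[k] = 0.9 * prototypes[k] + 0.1 * v   # running-mean update
            labels.append(k)
    return labels, prototypes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    noise = rng.normal(0.0, 0.3, size=(50, 4))     # noise-like frames
    speech = rng.normal(3.0, 0.3, size=(20, 4))    # higher-energy frames
    labels, protos = bsas(np.vstack([noise, speech]), threshold=2.0)
    print("clusters found:", len(protos))
```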

Simulations and Systems

An Improved Particle Swarm Optimization Algorithm for Global Numerical Optimization

This paper presents an improved particle swarm optimization algorithm (IPSO) for global numerical optimization. The IPSO uses more particles’ information to control the mutation operation. A new adaptive strategy for choosing parameters is also proposed to assure convergence of the IPSO. Meanwhile, we execute the IPSO to solve eight benchmark problems. The results show that the IPSO is superior to some existing methods for finding the best solution, in terms of both solution quality and algorithm robustness.

Bo Zhao
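For reference, the baseline particle swarm update that the IPSO modifies is sketched below; the paper's mutation control based on multiple particles and its adaptive parameter strategy are not reproduced here, and the benchmark, bounds, and coefficients are the usual textbook choices.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bound=5.0):
    """Standard PSO: velocity/position updates driven by each particle's
    personal best and the global best."""
    pos = [[random.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

if __name__ == "__main__":
    sphere = lambda x: sum(v * v for v in x)   # a typical benchmark function
    print(pso(sphere, dim=5))
```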
Pores in a Two-Dimensional Network of DNA Strands – Computer Simulations

Formation of a random network of DNA strands is simulated on a two-dimensional triangular lattice. We investigate the size distribution of pores in the network. The results are interpreted within the theory of percolation on the Bethe lattice.

PACS Code: 05.10.Ln, 87.14.Gg, 87.15.Aa.

M. J. Krawczyk, K. Kułakowski
Efficient Storage and Processing of Adaptive Triangular Grids Using Sierpinski Curves

We present an algorithm to store and process fully adaptive computational grids requiring only a minimal amount of memory. The adaptive grid is specified by a recursive decomposition of triangular grid cells; the cells are stored and processed in an order that is given by Sierpinski’s space filling curve. A sophisticated system of stacks is used to ensure the efficient access to the unknowns. The resulting scheme makes it possible to process grids containing more than one hundred million cells on a common workstation, and is also inherently cache efficient.

M. Bader, Ch. Zenger
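A minimal sketch of the traversal idea is given below: right triangles are recursively bisected at the hypotenuse midpoint, and visiting the two children in a fixed order enumerates the leaf cells along a Sierpinski-type curve. The refinement criterion, the stack system for accessing unknowns, and all names are illustrative, not the authors' implementation.

```python
def traverse(tri, refine, depth=0, out=None):
    """Depth-first bisection of a right triangle tri = (a, b, c) whose
    hypotenuse is the edge a-b. Splitting at the hypotenuse midpoint and always
    recursing into the child at `a` first enumerates leaves in curve order.
    `refine(tri, depth)` decides whether a cell is subdivided (adaptivity)."""
    if out is None:
        out = []
    if not refine(tri, depth):
        out.append(tri)                         # leaf cell, processed in order
        return out
    a, b, c = tri
    m = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)   # hypotenuse midpoint
    traverse((a, c, m), refine, depth + 1, out)      # child containing vertex a
    traverse((c, b, m), refine, depth + 1, out)      # child containing vertex b
    return out

if __name__ == "__main__":
    # refine everywhere to depth 3, and more deeply near the origin
    def refine(tri, depth):
        cx = sum(p[0] for p in tri) / 3.0
        cy = sum(p[1] for p in tri) / 3.0
        return depth < 3 or (depth < 7 and cx * cx + cy * cy < 0.1)

    unit = ((0.0, 0.0), (1.0, 1.0), (1.0, 0.0))      # hypotenuse (0,0)-(1,1)
    cells = traverse(unit, refine)
    print(len(cells), "leaf cells in Sierpinski order")
```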
Integrating Legacy Authorization Systems into the Grid: A Case Study Leveraging AzMan and ADAM

While much of the Grid security community has focused on developing new authorization systems, the real challenge is often integrating legacy authorization systems with Grid software. The existing authorization system might not understand Grid authentication, might not scale to Grid-level usage, might not be able to understand the operations that are requested to be authorized, and might require an inordinate amount of "glue code" to integrate the native language of the legacy authorization system with the Grid software. In this paper, we discuss several challenges and the resulting successful mechanisms for integrating the Globus Toolkit and WSRF.NET with AzMan, a role-based authorization system that ships with Windows Server 2003. We leverage the OGSA GGF Authorization Interface and our own SAML implementation so that the enterprise can retain their existing AzMan mechanism while resulting in new, scalable mechanisms for Grid authorization.

Weide Zhang, David Del Vecchio, Glenn Wasson, Marty Humphrey

Advances in Parameter Estimation in Computational-Science: Strategies, Concepts, and Applications

Quasi-Gaussian Particle Filtering

The recently proposed Gaussian particle filtering (GPF) introduced the idea of Bayesian sampling into Gaussian filters. This note proposes to generalize the GPF by further relaxing the Gaussian restriction on the prior probability. By allowing non-Gaussianity of the prior probability, the generalized GPF is provably superior to the original one. Numerical results show that better performance is obtained with considerably reduced computational burden.

Yuanxin Wu, Dewen Hu, Meiping Wu, Xiaoping Hu
Improved Sensitivity Estimate for the H_2 Estimation Problem

The paper deals with the local sensitivity analysis of the discrete-time infinite-horizon H_2 estimation problem. An improved, nonlinear sensitivity estimate is derived which is less conservative than the existing, condition number based sensitivity estimates.

N. D. Christov, M. Najim, E. Grivel
Constrained Optimization of the Stress Function for Multidimensional Scaling

Multidimensional Scaling (MDS) requires optimization of the multimodal Stress function to estimate the model parameters, i.e. the coordinates of points in a lower-dimensional space. Therefore, finding the global optimum of the Stress function is very important for applications of MDS. The main idea of this paper is to replace the difficult multimodal problem by a simpler unimodal constrained optimization problem. A coplanarity measure of points is used as a constraint while the Stress function is minimized in the original high-dimensional space. Two coplanarity measures are proposed. A simple example is presented that illustrates and visualizes the optimization procedure. Experimental evaluation results with various data point sets demonstrate the potential to simplify MDS algorithms by avoiding multimodality.

Vydunas Saltenis
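For reference, the (raw) Stress function referred to above is commonly written as below, where the d_{ij} are the given dissimilarities and the x_i are the sought coordinates; weighted and normalized variants exist, and the paper's reformulation minimizes it in the original high-dimensional space subject to a coplanarity constraint.

```latex
% Raw Stress of an embedding x_1, \dots, x_n \in \mathbb{R}^p
S(x_1,\dots,x_n) \;=\; \sum_{i<j} \bigl( d_{ij} - \lVert x_i - x_j \rVert \bigr)^{2}
```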
Targeted Observations for Atmospheric Chemistry and Transport Models

The aim of this paper is to address computational aspects of the targeted observations problem for atmospheric chemical transport models. The fundamental question being asked is where to place the observations such that, after data assimilation, the uncertainty in the resulting state is minimized. Our approach is based on reducing the system along the subspace defined by the dominant singular vectors, and computing the locations of maximal influence on the verification area. Numerical results presented for a simulation of atmospheric pollution in East Asia in March 2001 show that the optimal location of observations depends on the pattern of the flow but is different for different chemical species. Targeted observations have been previously considered in the context of numerical weather prediction. This work is, to the best of our knowledge, the first effort to study targeted observations in the context of chemical transport modeling. The distinguishing feature of these models is the presence of stiff chemical interactions.

Adrian Sandu
Model Optimization and Parameter Estimation with Nimrod/O

Optimization problems where the evaluation step is computationally intensive are becoming increasingly common in both engineering design and model parameter estimation. We describe a tool, Nimrod/O, that expedites the solution of such problems by performing evaluations concurrently, utilizing a range of platforms from workstations to widely distributed parallel machines. Nimrod/O offers a range of optimization algorithms adapted to take advantage of parallel batches of evaluations. We describe a selection of case studies where Nimrod/O has been successfully applied, showing the parallelism achieved by this approach.

David Abramson, Tom Peachey, Andrew Lewis
The Criticality of Spare Parts Evaluating Model Using Artificial Neural Network Approach

This paper presents artificial neural networks (ANNs) for evaluating the criticality of spare parts in a power plant. Two learning methods were utilized in the ANNs, namely back propagation and genetic algorithms. The reliability of the models was tested by comparing their classification ability on a hold-out sample and an external data set. The results showed that both ANN models had high predictive accuracy, and that there was no significant difference between the two learning methods. The proposed ANNs were successful in significantly decreasing inventory holding costs by modifying unreasonable target service level settings, as confirmed by the corresponding criticality class in the organization.

Lin Wang, Yurong Zeng, Jinlong Zhang, Wei Huang, Yukun Bao

Efficient Fault Tolerance Techniques for Large Scale Systems

Solving Election Problem in Asynchronous Distributed Systems

So far, the weakest failure detectors have been studied extensively for several such fundamental problems. It has been stated that the Perfect Failure Detector P is the weakest failure detector to solve the Election problem with any number of faulty processes. In this paper, we introduce the Modal failure detector M and show that M is the weakest failure detector to solve Election when the number of faulty processes is less than ⌈n/2⌉. We also show that it is strictly weaker than P.

SeongHoon Park
A Performance Model of Fault-Tolerant Routing Algorithm in Interconnect Networks

Software-Based fault-tolerant routing [1] has been proposed as an efficient routing algorithm that preserves both performance and fault-tolerance demands in large-scale parallel computers and current multiprocessor systems-on-chip (Mp-SoCs). A number of analytical models for fault-free routing algorithms have been suggested in the literature. However, no similar analytical model of fault-tolerant routing in the presence of faulty components has been reported. This paper presents a new analytical model to capture the effects of failures in wormhole-switched k-ary n-cubes using the Software-Based fault-tolerant routing algorithm. The validity of the model is demonstrated by comparing analytical results with those obtained through simulation experiments.

F. Safaei, M. Fathy, A. Khonsari, M. Ould-Khaoua
Speculation Meets Checkpointing

This paper describes a checkpointing mechanism intended for Distributed Shared Memory (DSM) systems with speculative prefetching. Speculation is a general technique involving prediction of the future of a computation, namely accesses to shared objects unavailable on the accessing node (read faults). Thanks to such predictions, objects can be fetched before the actual access operation is performed, resulting, at least potentially, in considerable performance improvement. The proposed mechanism is based on independent incremental checkpointing integrated with a coherence protocol and introduces little overhead. It ensures the consistency of checkpoints, allowing fast recovery from failures.

Arkadiusz Danilecki, Michał Szychowiak
Design and Verification for Hierarchical Power Efficiency System (HPES) Design Techniques Using Low Power CMOS Digital Logic

This paper presents the design and implementation of digital circuits and a verification method for power efficiency systems, focused on static power consumption while CMOS logic is in standby mode. As complexity rises, it is necessary to study the effects of system energy at the circuit level and to develop accurate fault models to ensure system dependability. Our approach to designing reliable hardware involves techniques for hierarchical power efficiency system (HPES) design together with a judicious mixture of verification methods validated by formal refinement. The design methodology is validated on a low power adder with functional verification at the chip level after satisfying the design specification. The paper also describes a new HPES integration method combining low power circuits for special purpose computers. The use of the new circuits and their corresponding HPES design techniques leads to minimal system failure in terms of reliability, speed, low power and design complexity over a wide range of integrated circuit (IC) designs.

Taikyeong Jeong, Jaemyoung Lee
Dynamic Fault Tolerance in Distributed Simulation System

Distributed simulation systems are widely used for forecasting, decision-making and scientific computing. Multi-agent systems and the Grid have been used as platforms for simulation. In order to survive software or hardware failures and to guarantee a high success rate during agent migration, the system must solve the fault tolerance problem. Classic fault tolerance technologies such as checkpointing and redundancy can be used in distributed simulation systems, but they are not efficient. We present a novel fault tolerance protocol which combines the causal message logging method with primary-backup technology. The proposed protocol uses an iterative backup location scheme and an adaptive update interval to reduce overhead and balance the cost of fault tolerance against recovery time. The protocol produces no orphan states and does not require the surviving agents to roll back. Most importantly, the recovery scheme can tolerate concurrent failures, even the permanent failure of a single node. Correctness of the protocol is proved and experiments show the protocol is efficient.

Min Ma, Shiyao Jin, Chaoqun Ye, Xiaojian Liu

Poster Session I

A Novel Supervised Information Feature Compression Algorithm

In this paper, a novel supervised information feature compression algorithm is developed. Firstly, based on information theory, we analyze the concept and properties of cross entropy, put forward the new concept of symmetric cross entropy (SCE), and point out that the SCE is a distance measure that can be used to quantify the difference between two random variables. Secondly, we use the SCE as a class-separability criterion for information feature compression and design a novel algorithm for information feature compression. Finally, the experimental results demonstrate that the algorithm is valid and reliable, and provides a new research approach for feature compression, data mining and pattern recognition.

Shifei Ding, Zhongzhi Shi
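The paper's exact definition of the symmetric cross entropy is not reproduced in the abstract; one standard symmetric, cross-entropy-based distance of this kind is the symmetrized Kullback-Leibler divergence shown below, given here only as orientation and not as the authors' formula.

```latex
% Symmetrized Kullback-Leibler (J) divergence between distributions p and q
J(p,q) \;=\; \sum_{i} (p_i - q_i)\,\log\frac{p_i}{q_i}
\;=\; D_{\mathrm{KL}}(p \,\|\, q) + D_{\mathrm{KL}}(q \,\|\, p)
```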
On a Family of Cheap Symmetric One-Step Methods of Order Four

In the paper we present a new family of one-step methods. These methods are of Runge-Kutta type; however, they have only explicit internal stages, which leads to a cheap practical implementation. On the other hand, the new methods are of classical order 4 and stage order 2 or 3. They are A-stable and symmetric.

Gennady Yu. Kulikov, Sergey K. Shindin
Influence of the Mutation Operator on the Solution of an Inverse Stefan Problem by Genetic Algorithms

This paper presents the influence of choice of the mutation operator on the accuracy of a solution of a two-phase design inverse Stefan problem using genetic algorithms. In the problem to be solved, the coefficient of convective heat transfer on one boundary had to be so selected that the moving interface of the phase change (freezing front) would take the given position.

Damian Słota
A Novel Nonlinear Neural Network Ensemble Model for Financial Time Series Forecasting

In this study, a new nonlinear neural network ensemble model is proposed for financial time series forecasting. In this model, many different neural network models are first generated. Then the principal component analysis technique is used to select the appropriate ensemble members. Finally, the support vector machine regression method is used for neural network ensemble. For further illustration, two real financial time series are used for testing.

Kin Keung Lai, Lean Yu, Shouyang Wang, Huang Wei
Performance Analysis of Block Jacobi Preconditioning Technique Based on Block Broyden Method

The Block Jacobi preconditioning technique based on Block Broyden method is introduced to solve nonlinear equations. This paper theoretically analyzes the time complexity of this algorithm as well as the unpreconditioned one. Numerical experiments are used to show that Block Jacobi preconditioning method, compared with the unpreconditioned one, has faster solving speed and better performance under different dimensions and numbers of blocks.

Peng Jiang, Geng Yang, Chunming Rong
The Generic McMillan Degree: A New Method Using Integer Matrices

In this paper the problem of computing the generic McMillan degree of a Structured Transfer Function (STF) rational matrix is considered. The problem of determining the generic McMillan degree is tackled using genericity arguments and an optimisation procedure based on path properties of integer matrices is developed. This novel approach exploits the structure of integer matrices and leads to an efficient new algorithm for the computation of the generic value of the McMillan degree.

E. Sagianos, N. Karcanias
State Estimation of Congested TCP Traffic Networks

State Estimation is an intrinsic element of many network management systems, like Power Distribution Networks and Water Distribution Networks, where its implementation not only facilitates real-time online monitoring with better observability, but it also enables an advanced control with improved system security. This paper presents a new technique based on State Estimation to address some general shortcomings of the current Active Queue Management schemes such as RED and discusses potential issues in TCP networks in order to achieve better performance.

Atulya Nagar, Ghulam Abbas, Hissam Tawfik
Study of Electron Transport in Composite Films Below the Percolation Threshold

Composite and nanocomposite films consisting of metal objects embedded in a dielectric matrix are studied by computer experiment. The electron transport through the composites is calculated for the case where the basic conductivity mechanism is the tunnel effect. It was found that the conduction in the composite film is localized in tunneling clusters, which are strongly influenced by the arrangement of objects in the composite film.

Rudolf Hrach, Stanislav Novák, Martin Švec, Jiří Škvor
A Dynamic Partitioning Self-scheduling Scheme for Parallel Loops on Heterogeneous Clusters

Loop partitioning on parallel and distributed systems has been an important problem. Furthermore, it becomes more difficult to deal with on the emerging heterogeneous PC cluster environments. In this paper, we propose a performance-based scheme, which dynamically partitions loop iterations according to the performance ratio of cluster nodes. To verify our approach, a heterogeneous cluster is built, and two kinds of application programs are implemented to be executed in this testbed. Experimental results show that our approach performs better than traditional schemes.

Chao-Tung Yang, Wen-Chung Shih, Shian-Shyong Tseng
3-D Numerical Modelling of Coastal Currents and Suspended Sediment Transport

A three-dimensional hydrodynamic and suspended sediment transport model (HYDROTAM-3) has been developed and applied to Fethiye Bay. The model can simulate transport processes due to tidal or nontidal forcing, which may be barotropic or baroclinic. The Boussinesq approximation, i.e. that density differences are neglected unless they are multiplied by gravity, is the only simplifying assumption in the model. The model is also capable of computing suspended sediment distributions and the amount of eroded and deposited sediment. It is a composite finite difference, finite element model. At three stations in the Bay, continuous measurements of velocity throughout the water depth and of water level were taken for 27 days. Model predictions are in good agreement with the field data.

Lale Balas, Alp Küçükosmanoğlu, Umut Yegül
Ontology-Driven Resource Selecting in the Grid Environments

In Grid environments where many different implementations are available, the need for semantic matching based on a defined ontology becomes increasingly important, especially for service or resource discovery and selection. In this paper, we propose a flexible and extensible approach to resource discovery and selection in Grid environments using ontologies and Semantic Web and Grid technologies. We have designed and prototyped an ontology-driven resource discovery and selection framework that exploits ontologies and a domain knowledge base. We present results obtained when this framework is applied in the context of a drug discovery grid. These results demonstrate the effectiveness of our framework.

Wenju Zhang, Yin Li, Fei Liu, Fanyuan Ma
Error Estimate on Non-bandlimited Random Signals by Local Averages

We show that a non-bandlimited weak sense stationary stochastic process can be approximated by its local averages near the sampling points, and explicit error bounds are given.

Zhanjie Song, Xingwei Zhou, Gaiyun He
A Fast Pseudo Stochastic Sequence Quantification Algorithm Based on Chebyshev Map and Its Application in Data Encryption

Chaos theory has been widely used in cryptography in recent years, and the performance of the pseudo-stochastic sequence quantified from a chaos map has great influence on the efficiency and security of an encryption system. In this paper, an improved stochastic middle multi-bits quantification algorithm based on the Chebyshev map is proposed to strengthen the system against reconstruction by reverse iteration and to improve the performance of the generated sequence under precision-restricted conditions. The balance and correlation properties of the generated sequence are analyzed. The sequence is proved to be a binary Bernoulli sequence and the distribution of the differences between the amounts of 0 and 1 is analyzed. The side lobes of the auto-correlation and the values of the cross-correlation are proved to obey the normal distribution N(0, 1/N).

Chong Fu, Pei-rong Wang, Xi-min Ma, Zhe Xu, Wei-yong Zhu
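The chaotic generator behind the scheme above can be sketched as follows: iterate the Chebyshev map x_{n+1} = cos(k·arccos(x_n)) on [-1, 1] and quantify each value by keeping some middle bits of its fixed-point representation. The quantization shown is a naive stand-in, not the paper's improved algorithm, and all parameter values are illustrative.

```python
import math

def chebyshev_sequence(x0, k=4, n=1000):
    """Iterate the Chebyshev map x_{n+1} = cos(k * arccos(x_n)) on [-1, 1]."""
    x, seq = x0, []
    for _ in range(n):
        x = math.cos(k * math.acos(x))
        x = max(-1.0, min(1.0, x))      # guard against floating-point drift
        seq.append(x)
    return seq

def middle_bits(x, skip=8, take=4):
    """Naive middle-multi-bit quantization of a value in [-1, 1]: map to a
    32-bit integer, drop `skip` low-order bits, keep the next `take` bits."""
    u = int((x + 1.0) * 0.5 * (1 << 32))
    return [(u >> (skip + i)) & 1 for i in range(take)]

if __name__ == "__main__":
    bits = [b for x in chebyshev_sequence(0.3141) for b in middle_bits(x)]
    print("fraction of ones:", sum(bits) / len(bits))   # should be near 0.5
```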
Supporting Interactive Computational Science Applications Within the JGrid Infrastructure

Future computational grid systems must support interactive and collaborative applications to help computational scientists use the grid effectively. This paper briefly introduces how the JGrid system, a service-oriented dynamic grid framework, supports interactive grid applications. By providing a high-level service view of grid resources, JGrid enables easy access to grid services and provides an effective programming model that allows developers to create dynamic grid applications effortlessly.

Szabolcs Pota, Zoltan Juhasz
Application of Virtual Ant Algorithms in the Optimization of CFRP Shear Strengthened Precracked Structures

Many engineering applications often involve the minimization of objective functions. The optimization becomes very difficult when the objective functions are either unknown or do not have an explicit form. This is certainly the case in the strengthening of existing precracked reinforced concrete structures using external carbon fibre reinforced polymer (CFRP) reinforcement. For a given concrete structure, the identification of the optimum strengthening system is very important and difficult, and depends on many parameters including the extent and distribution of existing cracks, loading capacity, materials and environment. The choice of these parameters essentially forms a coupled problem of finite element analysis and parameter optimization with the aim of increasing the serviceability of the structure concerned. In this paper, virtual ant algorithms combined with nonlinear FE analysis are used in the optimization of the strengthening parameters. Simulations show that the location and orientation of the CFRP reinforcement has a significant influence on the behaviour of the strengthened structure. The orientation of the reinforcement with a fixed location becomes optimal if the reinforcing material is placed perpendicular to the existing crack direction. The implication for strengthening will also be presented.

Xin-She Yang, Janet M. Lees, Chris T. Morley
Massive Data Oriented Replication Algorithms for Consistency Maintenance in Data Grids

Based on the Grid Community (GC) for data usage, differentiated replication algorithms are proposed, in which replica nodes are selected differently for the key replica node (SHN) and the other nodes. First, replicas are placed on the SHN; second, if certain nodes in a GC often access the data resource, they or their nearby nodes become optional replica nodes, subject to storage and bandwidth. A consistency maintenance algorithm is presented to provide the coherence function in a differentiated manner, either pessimistic or optimistic, updating according to the context. The simulation results show that the differentiated mechanism can improve grid access performance.

Changqin Huang, Fuyin Xu, Xiaoyong Hu
A Model of the Role of Cholesterol in the Development of Alzheimer’s Disease

We present a mathematical-computational model of the development of Alzheimer’s disease, based on the assumption that cholesterol plays a key role in the formation of neuropathological lesions that characterize the disease: the senile amyloid plaques and neurofibrillary tangles. The final model, conceived as a system of equations, was simulated by a computer program.

Gizelle Kupac Vianna, Artur E. Reis, Luís Alfredo V. de Carvalho, Roseli S. Wedemann
Characterization of Cardiac Dynamics from Locally Topological Considerations

The evolution of cardiac activity is investigated by means of methods of nonlinear dynamics; in particular, a method of temporal localization on the attractor reconstructed from the electrocardiogram (ECG) signal is proposed for this purpose. Convergence of the topological instability function under changing dimensionality is proved both theoretically and numerically, in the latter case independently of the personal features of the subjects, which provides the opportunity to estimate the complexity of cardiac dynamics. In contrast, the instability function normalized by its average displays a different kind of behaviour that differs somewhat between persons and reflects their individual features.

Victor F. Dailyudenko
Sliding Free Lagrangian-Eulerian Finite Element Method

The arbitrary Lagrangian-Eulerian method is usually used to simulate multi-phase flow problems, but remapping may introduce numerical errors. In this paper, a sliding free Lagrangian-Eulerian finite element method (SFLEFEM) is developed. In SFLEFEM the compressible Eulerian equations for a moving mesh are discretized without a Lagrangian step, and numerical experiments show that SFLEFEM is convergent and stable.

Junbo Cheng, Guiping Zhao, Zupeng Jia, Yibing Chen, Junxia Cheng, Shuanghu Wang, Wanzhi Wen
Large-Scale Simulations of a Bi-dimensional n-Ary Fragmentation Model

A bi-dimensional n-ary fragmentation model is numerically studied by large-scale simulations. Its main assumptions are the existence of random point flaws and a fracture mechanism based on the larger net force. For the 4-ary fragment size distribution a power law with exponent 1.0 ≤ β ≤ 1.15 was obtained. Visualizations of the model resemble brittle material fragmentation.

Gonzalo Hernandez, Luis Salinas, Andres Avila
Computationally Efficient Technique for Nonlinear Poisson-Boltzmann Equation

Discretization of the non-linear Poisson-Boltzmann equation results in a system of non-linear equations with a symmetric Jacobian. The Newton algorithm is the most useful tool for solving non-linear equations and consists of solving a series of linear systems of equations (Jacobian systems). In this article, we adaptively define the tolerance of the Jacobian systems. Numerical experiments show that, compared to the traditional method, our approach can save a substantial amount of computational work. The presented algorithm can be easily incorporated in existing simulators.

Sanjay Kumar Khattri
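The adaptive-tolerance idea described above can be sketched as an inexact Newton iteration: each (symmetric) Jacobian system is solved by conjugate gradients only to a forcing tolerance tied to the current nonlinear residual. The toy 1D problem, the forcing rule, and all names below are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

def cg(A, b, tol, max_iter=500):
    """Plain conjugate gradients for an SPD system, stopped at relative
    residual `tol` (the adaptively chosen linear-solve tolerance)."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    b_norm = np.linalg.norm(b) + 1e-30
    for _ in range(max_iter):
        if np.sqrt(rs) / b_norm < tol:
            break
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def inexact_newton(residual, jacobian, u0, newton_tol=1e-8, max_newton=30):
    """Newton iteration with an adaptive (inexact) tolerance for each
    Jacobian system, proportional to the current nonlinear residual norm."""
    u = u0.copy()
    for _ in range(max_newton):
        f = residual(u)
        fnorm = np.linalg.norm(f)
        if fnorm < newton_tol:
            break
        eta = min(0.1, np.sqrt(fnorm))       # illustrative forcing term
        u += cg(jacobian(u), -f, tol=eta)
    return u

if __name__ == "__main__":
    # Toy 1D analogue of a nonlinear Poisson-Boltzmann problem:
    # -u'' + sinh(u) = 1 on a uniform grid with zero boundary values.
    n, h = 50, 1.0 / 51
    L = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    residual = lambda u: L @ u + np.sinh(u) - 1.0
    jacobian = lambda u: L + np.diag(np.cosh(u))   # symmetric Jacobian
    u = inexact_newton(residual, jacobian, np.zeros(n))
    print("final nonlinear residual:", np.linalg.norm(residual(u)))
```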
Geometric Calibration for Multi-projector Tiled Display Based on Vanishing Point Theory

In this paper, we present a new geometric calibration method for multi-projector tiled displays. Firstly, a circular feature template is attached to a planar screen and a single uncalibrated camera observes the circular feature template and the tiled arrangement for each projector alternately. Secondly, geometric coordinates on the planar screen for all features in each tile are computed according to the two vanishing points from the perspective template image. Thirdly, the inscribed quadrangle region in the planar screen space is calculated as the effective region of the tiled display. Finally, geometric alignment is achieved by the correspondence between the quadrangular mesh in the projector space and the one in the display space. Our algorithm removes the requirement of parallelism between the camera image plane and the planar screen plane and makes it more convenient to deploy a multi-projector display system. The experimental results provided demonstrate the validity and efficiency of our new method.

Yuan Guodong, Qin Kaihuai, Hu Wei
Immersive Open Surgery Simulation

Realistic medical simulation has great potential for augmenting or complementing traditional medical training or surgery planning, and Virtual Reality (VR) is a key enabling technology for delivering this goal. Although medical simulators are now widely used in medical institutions, the majority of them are still reliant on desktop monitor displays, and many are restricted in their modelling capability to minimally invasive or endoscopic surgery scenarios. Whilst useful, such models lack the realism and interaction of the operating theatre. In this paper, we describe how we are advancing the technology by simulating open surgery procedures in an immersive projection display CAVE environment, thereby enabling medical practitioners to interact with their virtual patients in a more realistic manner.

Ali Al-khalifah, Rachel McCrindle, Vassil Alexandrov
An XML Specification for Automatic Parallel Dynamic Programming

Dynamic Programming is an important problem-solving technique used for solving a wide variety of optimization problems. Dynamic programs are commonly designed as individual applications, and software tools are usually tailored for specific classes of recurrences and methodologies. We presented in [9] a methodological proposal that allowed us to develop a generic tool [DPSKEL] that supports a wide range of Dynamic Programming formalizations for different parallel paradigms. In this paper we extend this work by including a new layer between the end users and the tool in order to reduce development complexity. This new layer consists of an XML specification language for describing Dynamic Programming problems in an easy manner.

Ignacio Peláez, Francisco Almeida, Daniel González
Remote Sensing Information Processing Grid Node with Loose-Coupling Parallel Structure

To use traditional algorithms and software packages on a Grid system, they generally have to be modified. In this paper we focus on standards and methodologies for a Grid platform within the context of the Remote Sensing Data Processing Grid Node (RSDPGN), which implements a loose-coupling parallel structure for orchestrating traditional remote sensing algorithms and software packages on the Condor platform. We have implemented 17 remote sensing applications in one system using Web service and workflow technology without any change to the traditional codes. Some core algorithm codes come from a remote sensing software package for which we have neither source code nor APIs; others come from program codes accumulated by our group. The design and prototype implementation of RSDPGN are presented, and the advantages and shortcomings of the loose-coupling structure are analysed. Through a case study of land surface temperature calculation from MODIS data, we demonstrate in detail how to modify software packages. Moreover, we discuss problems and solutions based on our experience, such as system architecture, the kinds of functional modules, fast data transfer, and state monitoring.

Ying Luo, Yong Xue, Yincui Hu, Chaolin Wu, Guoyin Cai, Lei Zheng, Jianping Guo, Wei Wan, Shaobo Zhong
Preliminary Through-Out Research on Parallel-Based Remote Sensing Image Processing

As the most important and most complex geo-information, remote sensing data can be processed quickly with cluster-based parallel computation technologies. The through-out ability of every step of such processing heavily affects the application of remote sensing image parallel processing technologies. This paper discusses in more detail how to set up the through-out model of remote sensing image parallel processing and presents in-depth research on such through-out mechanisms. Numerous experiments have been carried out to support a quantitative analysis method for tuning the performance of a remote sensing parallel processing system.

Guoqing Li, Yan Ma, Jian Wang, Dingsheng Liu
A Shortest Path Searching Method with Area Limitation Heuristics

While heuristics based on geometric constructs of the networks would appear to improve performance of Dijkstra’s algorithm, the fallacy of depreciated accuracy has been an obstacle to the wider application of heuristics in the search for shortest paths. The authors presented a shortest path algorithm that employs limited area heuristics guided by spatial arrangement of networks. The algorithm was shown to outperform other theoretically optimal solutions to the shortest path problem and with only little accuracy lost. More importantly, the confidence and accuracy levels were both controllable and predictable.

Feng Lu, Poh-Chin Lai
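One simple way to realize the limited-area idea described above is to prune from Dijkstra's search any node whose detour through it would exceed a chosen stretch of the straight-line origin-destination distance, i.e. to restrict the search to an ellipse around the two endpoints. The graph format, coordinates, and stretch factor below are illustrative assumptions; the paper's heuristic and its confidence/accuracy control are more elaborate.

```python
import heapq
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def dijkstra_area_limited(graph, coords, src, dst, stretch=1.3):
    """Dijkstra restricted to nodes v with
    dist(src, v) + dist(v, dst) <= stretch * dist(src, dst),
    i.e. an elliptical search area around the origin-destination pair."""
    limit = stretch * dist(coords[src], coords[dst])
    inside = lambda v: (dist(coords[src], coords[v])
                        + dist(coords[v], coords[dst])) <= limit
    best = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > best.get(u, math.inf):
            continue                       # stale heap entry
        for v, w in graph.get(u, []):
            if not inside(v):
                continue                   # prune: outside the search ellipse
            nd = d + w
            if nd < best.get(v, math.inf):
                best[v] = nd
                heapq.heappush(heap, (nd, v))
    return math.inf

if __name__ == "__main__":
    coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1), "E": (0, 5)}
    graph = {"A": [("B", 1.0), ("E", 4.0)], "B": [("C", 1.0)],
             "C": [("D", 1.0)], "E": [("D", 4.0)]}
    print(dijkstra_area_limited(graph, coords, "A", "D"))   # 3.0 via B and C
```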
FPGA-Based Hyperspectral Data Compression Using Spectral Unmixing and the Pixel Purity Index Algorithm

Hyperspectral data compression is expected to play a crucial role in remote sensing applications. Most available approaches have largely overlooked the impact of mixed pixels and subpixel targets, which can be accurately modeled and uncovered by resorting to the wealth of spectral information provided by hyperspectral image data. In this paper, we develop an FPGA-based data compression technique based on the concept of spectral unmixing. It has been implemented on a Xilinx Virtex-II FPGA formed by several millions of gates, and with high computational power and compact size, which make this reconfigurable device very appealing for onboard, real-time data processing.

David Valencia, Antonio Plaza
Advertisement-Aided Search in a P2P Context Distribution System

We study a P2P proactive search mechanism based on the dissemination of advertisements for the new sources. The system design goal of limiting the state maintained by each peer and ensuring search efficiency is the driving reason for exploiting the hierarchical network model of the small-world idea. The results testify the theoretical bounds on search time and provide a view on the search time in relation to the directory capacity requirements of the peers.

Irene Sygkouna, Miltiades Anagnostou, Efstathios Sykas
A Reputation Management Framework Based on Global Trust Model for P2P Systems

A framework based on global trust model, called SRGTrust, is proposed for reputation management in P2P systems. SRGTrust assigns each peer a unique global trust value, which reflects the rating that the system as a whole gives to a peer. SRGTrust does not need any pre-trusted peers to ensure the convergence of the algorithm and invalidates one of the basic assumptions used in the previous models. Experiments show that SRGTrust converges quickly and significantly decreases the number of inauthentic files in the system.

Jingtao Li, Xueping Wang, Yongqiang Chen, Gendu Zhang
A Comparative Study of Memory Structures for DSM Systems on Wireless Environments

In recent years, wireless networking has become more popular than ever. Distributed shared memory (DSM) combines computer hardware resources in order to achieve the efficiency and high performance provided by parallel computing. Unfortunately, the major overhead of DSM software is the communication time, especially in wireless network environments. In this paper, we have implemented three memory structures over JIAJIA DSM system on wireless network, and then analyzed their performance. From experimental results, we could find out the relation between communication time and memory layouts. In addition, we have also discovered relationship between characteristics of application programs and memory structures. Experimental results of five well-known benchmark applications show that a suitable memory layout can effectively reduce communication overhead in wireless network. We have analyzed advantages and disadvantages of these memory structures, to improve future designs of wireless DSM systems.

Hsiao-Hsi Wang, Kuang-Jui Wang, Chien-Lung Chou, Ssu-Hsuan Lu, Kuan-Ching Li
Coordinated Exception Handling in J2EE Applications

In the paper, we present a method of exception handling in J2EE applications. It is proposed to create a dedicated component that is responsible for handling two types of exceptions: those concerning more than one object and those occurring commonly in an environment. The component, referred to as Remote Exception Handler, is an extension of the middleware layer of a computer system, which enables its use without modifications of application source code. Concerning highly distributed architectures, many cooperating Remote Exception Handlers placed on different nodes are created. The solution has been implemented in practice in JBoss Application Server as an additional service of the platform.

Paweł L. Kaczmarek, Bogdan Krefft, Henryk Krawczyk

Poster Session II

Efficient Unilateral Authentication Mechanism for MIPv6

We present a unilateral authentication protocol for protecting IPv6 networks against abuse of Mobile IPv6 primitives. The proposed protocol imposes minimal computational requirements on mobile nodes and uses as few messages as possible. It is also easy to implement, economical to deploy, and lightweight in use. We formally verify the correctness of the protocol using the finite-state analysis tool Murφ, which has been used previously to analyze hardware designs and security properties of several protocols.

Yoon-Su Jeong, Bong-Keun Lee, Keon-Myung Lee, Sang-Ho Lee
Optimal Constant Weight Codes

A new class of binary constant weight codes is presented. We establish new lower bounds and exact values for A(n, 2k, k + 1); in particular, A(30, 12, 7) = 9, A(48, 16, 9) = 11, A(51, 16, 9) = 12, and A(58, 18, 10) = 12.
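The abstract only states the new values. Purely for illustration, a greedy lexicographic search such as the sketch below produces a lower bound on A(n, d, w); it is not the paper's construction and will generally not reach the reported optimal values.

```python
from itertools import combinations

def greedy_constant_weight_code(n, d, w):
    """Greedily collect weight-w binary words of length n with pairwise
    Hamming distance >= d. The size of the result is a lower bound on
    A(n, d, w); illustrative only, usually not optimal."""
    code = []
    for support in combinations(range(n), w):
        word = frozenset(support)
        # distance between two weight-w words = 2 * (w - |intersection of supports|)
        if all(2 * (w - len(word & c)) >= d for c in code):
            code.append(word)
    return code

# Small example; for the paper's parameters (e.g. n=30, d=12, w=7) the
# enumeration is feasible but slow in pure Python.
print(len(greedy_constant_weight_code(10, 6, 4)))
```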

Igor Gashkov
Measure on Time Scales with Mathematica

In this paper we study the Lebesgue Δ-measure on time scales. We refer to [3, 4] for the main notions and facts from general measure theory and the Lebesgue Δ-integral. The objective of this paper is to show how the main concepts of Mathematica can be applied to the fundamentals of the Lebesgue Δ- and Lebesgue ∇-measures on an arbitrary time scale, and also on a discrete time scale whose rule is given by the reader. As the time scale theory is developed in two parts, by means of the σ and ρ operators, we name the measures on time scales by the set functions DMeasure and NMeasure, respectively, for arbitrary time scales.

Ünal Ufuktepe, Ahmet Yantır
Mackendrick: A Maple Package Oriented to Symbolic Computational Epidemiology

A Maple package named Mackendrick is presented. The package is oriented to symbolic computational epidemiology.

Juan Ospina, Doracelly Hincapie
The Effect of the Theorem Prover in Cognitive Science

Humans use strategies to solve problems. Strategies are used as knowledge to plan solutions and decide on procedures. A computer algebra system with a theorem prover is being developed. We must consider the theorem prover not only from the perspective of its effect on cognitive science, but also from the perspective of mathematical studies.

Tadashi Takahashi, Hidetsune Kobayashi
Designing Next-Generation Training and Testing Environment for Expression Manipulation

T-algebra is a project for creating an interactive learning environment for basic school algebra. Our main didactical principle has been that all the necessary decisions and calculations at each solution step should be made by the student, and the program should be able to understand the mistakes. This paper describes the design of our Action-Object-Input dialogue and different input modes as an instrument to communicate three natural attributes of the step: choice of conversion rule, operands and result.

Rein Prank, Marina Issakova, Dmitri Lepp, Vahur Vaiksaar
Automatic Node Configuration Protocol Using Modified CGA in Hierarchical MANETs

CGA is designed to prevent address spoofing and stealing and to provide digital signatures to users without any help from security infrastructures, but fake key generation and address collisions appear in flat-tiered networks. To mitigate these problems, CGA defines a security parameter (SEC), which is set to a high value when high security is required. Although a high SEC makes it difficult for attackers to find a fake key, it brings an alarming increase in the processing time needed to generate the CGA. Conversely, the probability of finding a fake key is high if a low SEC is applied. In this paper, MCGA, applicable to public networks as well as ad hoc networks, is proposed. Address collision problems are settled by employing hierarchy. With MCGA, no previous setup is required before communication, and automatic node configuration is feasible.

Hyewon K. Lee
Route Optimization in NEMO Environment with Limited Prefix Delegation Mechanism

The 4G network must be an all-IP network, and IPv6 is its destiny. IPv6 has already been extended to Mobile IPv6 for host mobility, and Mobile IPv6 is being extended to support the mobility of whole networks. There can be many mobile networks, and they may become nested as they change locations. The most serious problem in a nested mobile network is the complexity of the routing path of packets, and this complexity grows as the level of nesting increases. In this paper, we propose a 'limited prefix delegation mechanism', which delivers packets along an optimized path even when mobile networks are nested. We show the effectiveness of our mechanism through the problems it solves.

Jungwook Song, Sunyoung Han, Kiyong Park
A Target Tracking Method to Reduce the Energy Consumption in Wireless Sensor Networks

Reducing energy consumption is one of the most critical issues in wireless sensor networks, and accurate prediction is especially important in tracking. It is desirable that only the nodes surrounding the mobile target be responsible for observing it, so as to save energy and extend the network lifetime. In this paper, we propose an efficient tracking method based on prediction with error correction, which can minimize the number of sensor nodes participating in target tracking. We show that our tracking method performs well in terms of saving energy regardless of the mobility pattern of the target.

Hyunsook Kim, Kijun Han
Adaptive Space-Frequency Block Coded OFDM

This paper discusses an adaptive modulation technique combined with space-frequency block coded OFDM (SFBC OFDM) over frequency-selective channels and evaluates its performance in terms of outdated channel state information (CSI) in mobile environments. The paper employs Alamouti's diversity scheme in multiple-input multiple-output OFDM (MIMO OFDM) together with adaptive modulation for enhanced performance. Through various simulations, the performance of SFBC OFDM employing adaptive modulation is compared with that of fixed modulation. The effects of outdated CSI on the adaptive modulation scheme in mobile environments are also shown.

Tae Jin Hwang, Sang Soon Park, Ho Seon Hwang
On Modelling Reliability in RED Gateways

In this paper we investigate the reliability of Random Early Detection (RED) gateways. A new model of RED buffer behavior based on Markov chains is offered. The reliability of a RED gateway is defined as the average rate of accepted packets. We examine the impact of RED tuning on reliability by taking into account the stick-slip nature of traffic intensity. The goal of the proposed model is to improve RED buffer management by using a technique for detecting changes in traffic intensity.

Vladimir V. Shakhov, Jahwan Koo, Hyunseung Choo
Control Parameter Setting of IEEE 802.11e for Proportional Loss Rate Differentiation

The IEEE 802.11 DCF mechanism does not provide performance differentiation, because best-effort service is applied regardless of the importance of packets. The Enhanced Distributed Coordination Function (EDCF) mechanism of IEEE 802.11e supports QoS: according to the importance of packets, they are assigned priorities, and the control parameters are assigned different values. Through differentiation of these parameters, differentiated services can be provided to the various priority classes in terms of throughput, packet loss rate, and delay. In this paper, the parameters of the IEEE 802.11e EDCF mechanism for Proportional Loss Rate Differentiation Service (PLDS) between adjacent priority classes are investigated through mathematical analysis and network simulation.

Seung-Jun Lee, Chunsoo Ahn, Jitae Shin
Dynamic Handoff Threshold Algorithm Using Mobile Speed for WLAN Utilization Improvement in 3G-WLAN Integrated Networks

In 3G-WLAN integrated networks, the user wants to maintain the high data-rate WLAN (wireless LAN) connection as long as possible and only then switch to the overlaying 3G cellular data service. We therefore propose a new dynamic threshold for seamless vertical handoff, which maintains the WLAN connection longer than a fixed threshold and thus increases total WLAN utilization. We present the design architecture of the proposed method and evaluate its performance in a network environment.

JangSub Kim, HoJin Shin, DongRyeol Shin
Efficient Data Indexing System Based on OpenLDAP in Data Grid

Grid technologies enable the sharing of various types of large-scale data resources generated by many users for everyday jobs. Finding the required data in an easy manner is necessary but laborious. A monitoring and information service (MIS) is very important in huge distributed systems such as grids and is effective for data probing. Here we develop a data indexing system (DIS) based on OpenLDAP to provide efficient data access to users in a distributed environment. A comprehensive evaluation shows that DIS performs better in terms of response time and scalability for large numbers of users. It is also expected that the proposed system can be applied to the Globus system.

Hongseok Lee, Sung-Gon Mun, Eui-Nam Huh, Hyunseung Choo
A Home-Network Service System Based on User’s Situation Information in Ubiquitous Environment

In this paper, we propose a home-network service system that can support home services appropriate to the user's situation information in ubiquitous computing environments. The suggested system uses a uWDL workflow service scenario describing the user's situation information as service execution constraints to support context-aware home services. The system consists of a context handler and a context mapper. The context handler represents contexts described in a uWDL document as a context subtree, whose node fields express not only context data but also relation information among services. The context mapper uses a context comparison algorithm to compare context subtrees with the user's situation information. The algorithm can distinguish contexts that have the same values but different types from the user's contexts, and selects a context whose values and types are both entirely equal to those of the user's contexts.

Yongyun Cho, Joohyun Han, Jaeyoung Choi, Chae-Woo Yoo
Simple-Adaptive Link State Update Algorithm for QoS Routing

To guarantee Quality of Service, routers should automatically determine routing paths in order to satisfy service requirements efficiently, based on link state information as well as network topology. Link State Database (LSDB) in routers should be well managed in order to reflect the current state of all links effectively. However, there is a trade-off between the exact reflection of the current link status and its update cost. In this paper, a simple-adaptive LSU algorithm to adaptively control the generation of link state update messages is proposed and its performance is compared with those of four existing algorithms by intensive simulations.

Seung-Hyuk Choi, Myoung-Hee Jung, Min Young Chung, Mijeong Yang, Taeil Kim, Jaehyung Park
Throughput Analysis and Enhancement for CSMA Based Wireless Networks

We study the performance of contention-based medium access control (MAC) protocols, using a novel technique for estimating the throughput and other parameters of interest of such protocols. In this paper, a new assumption for the theoretical throughput limit in distributed CSMA-based MAC algorithms is introduced. Performance analysis and simulation studies show that the proposed algorithm achieves significant performance improvements in CSMA-based wireless networks.

Younggoo Kwon
Efficient Password-Authenticated Key Exchange for Three-Party Secure Against Undetectable On-Line Dictionary Attacks

A password-authenticated key exchange (PAKE) protocol in the three-party setting allows two users communicating over a public network to agree on a common session key with the help of a server. In this setting the users do not share a password between themselves, but only with the server. In this paper, we explore the possibility of designing a round-efficient three-party PAKE protocol that protects against undetectable on-line dictionary attacks without using the random oracle. The protocol matches the most efficient three-party PAKE protocol secure against undetectable on-line dictionary attacks found in the literature while providing the same level of security. Finally, we identify the relations between detectable and undetectable on-line dictionary attacks by providing counter-examples to support the observed relations.

Jeong Ok Kwon, Kouichi Sakurai, Dong Hoon Lee
A Publish-Subscribe Middleware for Real-Time Wireless Sensor Networks

The specific characteristics of Wireless Sensor Networks (WSNs) make Quality of Service (QoS) support in these networks a challenging task. In this paper, we propose a dispatcher as part of the message routing component in publish/subscribe WSNs. Some works consider link-based solutions to support real-time parameters in WSNs, but they do not take into account the dynamic behavior of WSNs, with possibly damaged nodes and links. The dispatcher can reduce the average message delay for both high-priority and low-priority messages. It uses a scheduler to support real-time parameters, such as delay, and selects messages from two separate queues: a QoS queue and a non-QoS queue. Simulation results show that our approach reduces the average delay and increases the delivery rate for both QoS and non-QoS messages.
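The abstract describes a scheduler that drains a QoS queue and a non-QoS queue; the sketch below shows one plausible policy (earliest-deadline-first for QoS messages, with a simple anti-starvation rule for non-QoS traffic). All names and the weighting are assumptions for illustration, not the paper's design.

```python
import heapq
from collections import deque

class Dispatcher:
    """Two-queue message dispatcher: QoS messages carry a deadline and are
    served earliest-deadline-first; non-QoS messages are FIFO. To avoid
    starving non-QoS traffic, after `qos_burst` QoS picks one non-QoS message
    is served if available. Illustrative sketch only."""
    def __init__(self, qos_burst=3):
        self._qos = []               # min-heap of (deadline, seq, message)
        self._best_effort = deque()  # FIFO queue for non-QoS messages
        self._seq = 0
        self._served_qos = 0
        self._qos_burst = qos_burst

    def publish(self, message, deadline=None):
        if deadline is None:
            self._best_effort.append(message)
        else:
            heapq.heappush(self._qos, (deadline, self._seq, message))
            self._seq += 1

    def next_message(self):
        serve_best_effort = (
            self._best_effort
            and (not self._qos or self._served_qos >= self._qos_burst)
        )
        if serve_best_effort:
            self._served_qos = 0
            return self._best_effort.popleft()
        if self._qos:
            self._served_qos += 1
            return heapq.heappop(self._qos)[2]
        return None

d = Dispatcher()
d.publish("temperature sample")              # non-QoS message
d.publish("intrusion alarm", deadline=0.05)  # QoS message, 50 ms deadline
print(d.next_message())                      # -> "intrusion alarm"
```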

Mohsen Sharifi, Majid Alkaee Taleghan, Amirhosein Taherkordi
Performance Evaluation of a Handover Scheme for Fast Moving Objects in Hierarchical Mobile Networks

Reducing the handover latency has been one of the most critical research issues in Mobile IPv6. The research includes the fast handover, the hierarchical handover, and variations of them. This paper proposes a variation of the hierarchical handover and develops analytical models to compare the performance of the proposed scheme with that used in the hierarchical mobile networks. The performance evaluation shows that the proposed scheme is very effective for applications like Telematics where mobile nodes move fast. The paper also gives readers the threshold values with which they can select an optimal handover scheme for the given applications.

In-Hye Shin, Gyung-Leen Park, Junghoon Lee
Longest Path First WDM Multicast Protection for Maximum Degree of Sharing

In this paper, we investigate efficient approaches and algorithms for protecting multicast sessions against any single link failure while establishing multicast sessions in WDM mesh networks. Since a single failure may affect all nodes in a multicast group and cause severe service disruption and substantial traffic loss, protecting critical multicast sessions against link failures such as fiber cuts is important in WDM optical networks. One of the most efficient algorithms is optimal path pair-shared disjoint paths (OPP-SDP), in which every source-destination (SD) pair has an optimal path pair (working and protection path) between the source and destination node. Since the degree of sharing among paths is essential to reducing total cost and blocking probability, we propose the longest path first-shared disjoint paths (LPF-SDP) algorithm, which decides the priority of selection among SD pairs in a resource-saving manner. LPF-SDP is shown to outperform OPP-SDP in terms of degree of sharing and blocking probability.

Hyun Gi Ahn, Tae-Jin Lee, Min Young Chung, Hyunseung Choo
Multi-scale CAFE Modelling for Hot Deformation of Aluminium Alloys

The multi-scale CAFE modelling system utilises Cellular Automata, Finite Elements and a hybrid modelling technique which combines neuro-fuzzy models and physical equations to simulate hot deformation of Al-1%Mg aluminium alloys using the commercial finite element software package ABAQUS™. This paper addresses the issue of capturing microstructural details and providing macro linkage by simulating two phenomena. The first defines a suitable length scale such that numerical models are sufficiently detailed yet appropriate in terms of computational time. The second is the feasibility of using Cellular Automata (CA) as an additional technique in conjunction with a conventional Finite Element (FE) representation to model material heterogeneity and related properties. This is done by identifying an abstract scale in between the micro and macro scales, termed the "mesoscale", to obtain a multi-scale CAFE modelling technique that uses CA to represent initial and evolving microstructural features at an appropriate length obtained using an overlying FE mesh.

M. F. Abbod, I. C. Howard, D. A. Linkens, M. Mahfouf
Construction of Small World Networks Based on K-Means Clustering Analysis

In this paper we present a new method for creating small-world networks based on K-means clustering analysis. Because of the close relationship between small-world networks and data with clustering characteristics, the networks built with the K-means method have many properties of small-world networks, including small average distance, a right-skewed degree distribution, and a clustering effect. Moreover, the construction process also exhibits behaviors seen in the formation and evolution of small-world networks.
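The paper's exact construction is not spelled out in the abstract. One way to realize the idea (dense intra-cluster links from the k-means clusters plus a few random inter-cluster shortcuts) is sketched below; the linking rules are assumptions for illustration only.

```python
import random
import numpy as np
from sklearn.cluster import KMeans

def kmeans_small_world(points, k=4, shortcuts_per_cluster=2, seed=0):
    """Cluster the points with k-means, fully connect each cluster, then add a
    few random inter-cluster shortcut edges. Dense clusters give a high
    clustering coefficient; shortcuts keep the average path length small.
    The linking rules are illustrative assumptions, not the paper's method."""
    rng = random.Random(seed)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(points)
    clusters = [np.where(labels == c)[0].tolist() for c in range(k)]
    edges = set()
    for members in clusters:                      # dense intra-cluster links
        for i, u in enumerate(members):
            for v in members[i + 1:]:
                edges.add((u, v))
    for c, members in enumerate(clusters):        # sparse inter-cluster shortcuts
        others = [i for i in range(len(points)) if labels[i] != c]
        for _ in range(shortcuts_per_cluster):
            if members and others:
                edges.add((rng.choice(members), rng.choice(others)))
    return edges

points = np.random.RandomState(1).rand(60, 2)     # synthetic 2-D data
print(len(kmeans_small_world(points)))
```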

Jianyu Li, Rui Lv, Zhanxin Yang, Shuzhong Yang, Hongwei Mo, Xianglin Huang
Spatiotemporal Data Mining with Cellular Automata

In this paper, we describe a cellular automata model for predicting biological spatiotemporal dynamics in an imagery data flow. A Bayesian probability-based algorithm is used to estimate algal formation in a two-dimensional space. The dynamics of the cellular artificial life is described with diffusion, transport, collision and deformation. We tested the model with historical data, including parameters such as time, position and temperature.

Karl Fu, Yang Cai
MicroCASim: An Automata Network Simulator Applied to the Competition Between Microparasites and Host Immune Response

In this work we have developed an alternative model framework to study the concept of immunological memory, proposing a simplified model for the interaction dynamics between a population of pathogens and the immune system. The model is based on probabilistic cellular automata, which allow relatively easy inclusion of features of real immune systems in a transparently biological manner. The resulting virtual laboratory is called the Microscopic Cellular Automata Simulator (MicroCASim), a software tool whose basic idea is to assist the visualization of the interaction dynamics of these entities in silico.

Luciana B. Furlan, Henrique F. Gagliardi, Fabrício A. B. da Silva, Ivan T. Pisa, Domingos Alves
Simulated Annealing: A Monte Carlo Method for GPS Surveying

This paper describes a simulated annealing technique, which is a Monte Carlo method, to analyze and improve the efficiency of the design of Global Positioning System (GPS) surveying networks. The paper proposes various local search procedures which can be coupled with the simulated annealing technique.
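The abstract names the technique without details. The sketch below is a generic simulated annealing loop over a permutation of observation sessions with a 2-swap neighbourhood; the cost matrix and cooling schedule are hypothetical placeholders, not the paper's procedures.

```python
import math
import random

def simulated_annealing(cost, initial, neighbour, t0=1.0, cooling=0.995,
                        steps=20000, seed=0):
    """Generic simulated annealing: accept worse solutions with probability
    exp(-delta / T) and cool T geometrically. `cost` and `neighbour` are
    problem-specific; here they stand in for a GPS session schedule."""
    rng = random.Random(seed)
    current, best = initial, initial
    c_cur = c_best = cost(initial)
    t = t0
    for _ in range(steps):
        cand = neighbour(current, rng)
        c_cand = cost(cand)
        if c_cand < c_cur or rng.random() < math.exp(-(c_cand - c_cur) / t):
            current, c_cur = cand, c_cand
            if c_cur < c_best:
                best, c_best = current, c_cur
        t *= cooling
    return best, c_best

# Toy stand-in for a GPS surveying schedule: order sessions to minimise a
# synthetic "moving cost" between consecutive sessions (hypothetical data).
move_cost = [[0, 4, 9, 3], [4, 0, 6, 7], [9, 6, 0, 2], [3, 7, 2, 0]]
cost = lambda order: sum(move_cost[a][b] for a, b in zip(order, order[1:]))

def swap_two(order, rng):
    i, j = rng.sample(range(len(order)), 2)
    new = list(order)
    new[i], new[j] = new[j], new[i]
    return new

print(simulated_annealing(cost, [0, 1, 2, 3], swap_two))
```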

Stefka Fidanova
Novel Congestion Control Scheme in Next-Generation Optical Networks

In this paper, to improve burst loss performance, we actively avoid contention by proposing a novel congestion control scheme that operates on the highest load (called the peak load) among all links on the path between each pair of ingress and egress nodes in an Optical Burst Switching (OBS) network.

LaeYoung Kim, SuKyoung Lee, JooSeok Song
A New Fairness Guaranteeing Scheme in Optical Burst Switched Networks

The optical burst switching (OBS) paradigm has been proposed to achieve packet-switching-level throughput in the all-optical domain. To meet recent OBS commercialization requirements, we study edge/core node combined (ECNC) OBS networks, where a core node also performs the edge node function. In this type of network, a fairness problem occurs because the remaining offset time decides the priority under JET-based OBS. To solve this problem, we propose a fairness-guaranteeing scheme that groups bursts based on destination and remaining offset time and assigns them to wavelength groups. We also propose a new burst generation and scheduling scheme for the ingress node and analyze the network throughput.

In-Yong Hwang, SeoungYoung Lee, Hong-Shik Park
Disclosing the Element Distribution of Bloom Filter

An algorithm named Reconstruction based on Semantically Enhanced Counting Bloom Filter (RSECBF) is proposed to disclose the distribution of the original elements from a semantically enhanced Counting Bloom Filter's hash space. The algorithm deploys DBSM, which directly selects some bits from the original string as a reversible hash function. The overlapping of hash bit strings makes it possible to confirm homogenetic hash strings. The efficiency of RSECBF for principal component analysis was verified by DDoS detection in a published trace.
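As a rough illustration of the "direct bit selection" idea (not the paper's RSECBF algorithm), the sketch below builds a counting Bloom filter whose index for each hash function is simply a chosen slice of the key's bits, so a heavy bucket's index directly reveals those original key bits; the class names and selections are assumptions.

```python
def select_bits(value, positions):
    """Reversible 'hash': concatenate the chosen bits of the key (LSB first).
    Because the mapping just copies bits, the bucket index reveals exactly
    those bits of the original key."""
    index = 0
    for i, p in enumerate(positions):
        index |= ((value >> p) & 1) << i
    return index

class CountingBloomFilter:
    """Minimal counting Bloom filter over integer keys, illustrative only.
    Each 'hash function' is a bit selection (standing in for the paper's
    DBSM); overlapping selections make it possible to cross-check which key
    bits produced a heavy bucket."""
    def __init__(self, bit_positions_per_hash):
        self.selections = bit_positions_per_hash
        self.counters = [[0] * (1 << len(p)) for p in bit_positions_per_hash]

    def add(self, key):
        for sel, table in zip(self.selections, self.counters):
            table[select_bits(key, sel)] += 1

    def heavy_buckets(self, threshold):
        return [
            {"selection": sel, "index": i, "count": c}
            for sel, table in zip(self.selections, self.counters)
            for i, c in enumerate(table) if c >= threshold
        ]

# two overlapping 8-bit selections over a 16-bit key space
cbf = CountingBloomFilter([(0, 1, 2, 3, 4, 5, 6, 7), (4, 5, 6, 7, 8, 9, 10, 11)])
for _ in range(100):
    cbf.add(0xBEEF)            # one dominant source, e.g. a suspected DDoS flow
print(cbf.heavy_buckets(threshold=50))
```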

Yanbing Peng, Jian Gong, Wang Yang, Weijiang Liu
An Efficient Key-Update Scheme for Wireless Sensor Networks

A novel key-update scheme is proposed for wireless sensor networks. The center server in a wireless sensor network first broadcasts a series of randomly generated code slices to sensor nodes. Upon receiving all the code slices, the sensor nodes find their neighboring coordinators to generate a permutation of slice numbers and send this permutation back to the center server. The center server and the sensor nodes can thus assemble a common program based on the permutation to derive their common key. Subsequent key-updates can then be done by this program based on the previous keys. The proposed scheme is simple, efficient, and is secure if the sensor nodes cannot be compromised within a short bound of time.

Chien-Lung Wang, Gwoboa Horng, Yu-Sheng Chen, Tzung-Pei Hong
Estimating Average Flow Delay in AQM Router

Delay measurement plays an important role in network QoS control. Much work has been done on measuring end-to-end delay; this paper describes a mechanism for estimating the number of flows and the average flow delay in an AQM router in the Internet. The estimate is obtained without collecting or analyzing state information on individual flows.

Ming Liu, Wen-hua Dou, Rui Xiao
Stock Trading System: Framework for Development and Evaluation of Stock Trading Strategies

Intelligent stock trading models are becoming valuable advisors in stock markets. When developing such models, great importance is given to their evaluation and comparison with other working models. The paper introduces a trading system for stock markets, which is designed as a framework for the development and evaluation of intelligent decision-making models.

Jovita Nenortaitė, Alminas Čivilis
A Quantum Hydrodynamic Simulation of Strained Nanoscale VLSI Device

Strained silicon field effect transistors (FETs) are known to enhance carrier mobility. The strained Si channel thickness, the Si$_{1-x}$Ge$_x$ composition fraction, and the Si$_{1-x}$Ge$_x$ layer thickness are three crucial parameters for designing a strained Si/SiGe MOSFET. Designs based only on mobility enhancement and device reliability may be unnecessarily conservative. In this paper, the drain current, gate leakage, and threshold voltage of strained Si/SiGe MOSFETs are investigated numerically under different device profiles. According to our results, the optimal combination of parameters is as follows: a strained Si channel thickness of 7 nm, a Ge content of 20%, and a Si$_{1-x}$Ge$_x$ layer thickness chosen between 20 and 50 nm.

Shih-Ching Lo, Shao-Ming Yu
Implementing Predictable Scheduling in RTSJ-Based Java Processor

Due to the preeminent work on the RTSJ, Java is increasingly expected to become a leading programming language in embedded real-time systems. To provide an efficient real-time Java platform, a Real-Time Java Processor (HRTEJ) based on the RTSJ was designed. This Java processor efficiently implements the scheduling mechanism proposed in the RTSJ and offers a simpler programming model by improving the scoped memory. Special hardware is provided in the processor to guarantee the Worst Case Execution Time (WCET) of scheduling. In this paper, the scheduling implementation of this Java processor is discussed, and its WCET is analyzed.

Zhilei Chai, Wenbo Xu, Shiliang Tu, Zhanglong Chen
The Improvement of NetSolve System

NetSolve is grid middleware used for high performance computing. In this article, the architecture and operating principle of NetSolve are first analyzed and the limitations of the NetSolve system are pointed out; a Web Service Server and a Server Proxy are then put forward as improvements to the NetSolve system that address these limitations.

Haiying Cheng, Wu Zhang, Wenchao Dai
Grid Information Service Based on Network Hops

The grid information service directly influences the outcome of applications on grid platforms. In this paper, network coordinates are introduced into the grid information service mechanism to locate each grid node. With the hop-count generation algorithm, the number of network hops between a user and resource providers can be forecast and the results submitted to the grid information service, which returns a list of resource providers ordered by increasing network hops so that the scheduler can work more efficiently. Performance evaluation shows that the approach is suitable for time-sensitive applications or applications with special restrictions on network hops.

Xia Xie, Hai Jin, Jin Huang, Qin Zhang
Security and Performance in Network Component Architecture

In the Internet computing environment, clients usually invoke remote components to carry out computation, which leads to many security problems. Traditional network component architecture models focus on business logic and give little consideration to security. This paper proposes the Secure Network Component Architecture Model (SNCAM), which classifies clients according to their credibility levels on the component side. It then presents the main idea of this paper, the security domain. To reduce the overhead introduced by the security mechanism, a method called the security agent is also proposed.

Dongjin Yu, Ying Li, Yi Zhou, Zhaohui Wu
Design and Implementation of a Resource Management System Using On-Demand Software Streaming on Distributed Computing Environment

As distributed computing has grown into large-scale environments such as grid computing, software resource management has become a key issue. In this paper, we propose a new resource management system, which manages software resources effectively, and describe its prototype implementation. The system uses existing on-demand software streaming technology to manage software resources, greatly reducing traditional software deployment costs and the associated installation problems. In this system, a newly added node can execute a requested job without installing the applications.

Jongbae Moon, Sangkeon Lee, Jaeyoung Choi, Myungho Kim, Jysoo Lee
Pipelining Network Storage I/O

In this paper, pipelining technology is introduced into network storage (e.g. network-attached storage and storage area networks), and the division into reasonable pipeline stages is studied, which can significantly help to enhance the performance of network storage systems. Experiments on pipelined I/O were carried out in the Heter-RAID system. The results show that pipelined I/O scheduling can take advantage of pipelining to achieve high-performance I/O over a network storage system.

Lingfang Zeng, Dan Feng, Fang Wang
Modeling Efficient XOR-Based Hash Functions for Cache Memories

In this paper, we design new XOR-based hash functions, which compute each set index bit as the XOR of a subset of the bits in the address. These are conflict-free hash functions of different types according to whether m is even or odd.
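The setting described in the abstract (each set-index bit computed as the XOR of a subset of address bits) can be written down directly; the particular bit subsets below are arbitrary assumptions for illustration, not the conflict-free families constructed in the paper.

```python
def xor_set_index(address, bit_subsets):
    """Compute a cache set index whose i-th bit is the XOR (parity) of the
    address bits listed in bit_subsets[i]. The subsets here are arbitrary
    examples; the paper constructs specific conflict-free families."""
    index = 0
    for i, subset in enumerate(bit_subsets):
        parity = 0
        for b in subset:
            parity ^= (address >> b) & 1
        index |= parity << i
    return index

# a toy 3-bit set index (8 sets) drawn from address bits 4..12 of a byte address
BIT_SUBSETS = [(4, 7, 10), (5, 8, 11), (6, 9, 12)]
for addr in (0x1234, 0x1244, 0xBEEF):
    print(hex(addr), "-> set", xor_set_index(addr, BIT_SUBSETS))
```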

Sung-Jin Cho, Un-Sook Choi, Yoon-Hee Hwang, Han-Doo Kim
Maintaining Gaussian Mixture Models of Data Streams Under Block Evolution

A new method for maintaining a Gaussian mixture model of a data stream that arrives in blocks is presented. The method constructs local Gaussian mixtures for each block of data and iteratively merges pairs of closest components. Time and space complexity analysis of the presented approach demonstrates that it is 1-2 orders of magnitude more efficient than the standard EM algorithm, both in terms of required memory and runtime.
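The abstract gives the high-level recipe (fit a local mixture per block, then iteratively merge the closest pair of components). The sketch below shows a moment-matching merge of 1-D Gaussian components; the closeness measure used here (distance between means) is an assumed placeholder, not the paper's criterion.

```python
def merge_components(w1, m1, v1, w2, m2, v2):
    """Moment-matching merge of two 1-D Gaussian components given as
    (weight, mean, variance): the merged component preserves the total
    weight and the first two moments of the pair."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + (m1 - m) ** 2) + w2 * (v2 + (m2 - m) ** 2)) / w
    return w, m, v

def reduce_mixture(components, target_size):
    """Iteratively merge the closest pair of components (closeness here is
    plain distance between means, an assumed placeholder) until only
    `target_size` components remain."""
    comps = list(components)
    while len(comps) > target_size:
        pairs = [(abs(comps[i][1] - comps[j][1]), i, j)
                 for i in range(len(comps)) for j in range(i + 1, len(comps))]
        _, i, j = min(pairs)
        merged = merge_components(*comps[i], *comps[j])
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps

# two "blocks", each summarized by local components (weight, mean, variance)
block_a = [(0.5, 0.0, 1.0), (0.5, 0.2, 1.1)]
block_b = [(0.6, 5.0, 0.8), (0.4, 5.3, 0.9)]
print(reduce_mixture(block_a + block_b, target_size=2))
```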

J. P. Patist, W. Kowalczyk, E. Marchiori
An Adaptive Data Retrieval Scheme for Reducing Energy Consumption in Mirrored Video Servers

We propose a new energy-aware data retrieval scheme (EDR) for mirrored video servers. We start by analytically determining the retrieval period that balances bandwidth and buffer size. We then propose a data retrieval scheme in which the period can be dynamically changed to reflect server utilization, with the aim of giving disks more chance to enter low-power mode. Simulation results show that it saves up to 36% energy compared with conventional video server operation.

Minseok Song
Backmatter
Metadata
Title
Computational Science – ICCS 2006
Edited by
Vassil N. Alexandrov
Geert Dick van Albada
Peter M. A. Sloot
Jack Dongarra
Copyright Year
2006
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-34380-6
Print ISBN
978-3-540-34379-0
DOI
https://doi.org/10.1007/11758501