
2009 | Book

Computational Science – ICCS 2009

9th International Conference Baton Rouge, LA, USA, May 25-27, 2009 Proceedings, Part II

Edited by: Gabrielle Allen, Jarosław Nabrzyski, Edward Seidel, Geert Dick van Albada, Jack Dongarra, Peter M. A. Sloot

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science

About this book

“There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.” (Mark Twain, Life on the Mississippi) The challenges in succeeding with computational science are numerous and deeply affect all disciplines. NSF’s 2006 Blue Ribbon Panel on Simulation-Based Engineering Science (SBES) states: ‘researchers and educators [agree]: computational and simulation engineering sciences are fundamental to the security and welfare of the United States... We must overcome difficulties inherent in multiscale modeling, the development of next-generation algorithms, and the design... of dynamic data-driven application systems... We must determine better ways to integrate data-intensive computing, visualization, and simulation. Importantly, we must overhaul our educational system to foster the interdisciplinary study... The payoffs for meeting these challenges are profound.’ The International Conference on Computational Science 2009 (ICCS 2009) explored how computational sciences are not only advancing the traditional hard science disciplines, but also stretching beyond, with applications in the arts, humanities, media and all aspects of research. This interdisciplinary conference drew academic and industry leaders from a variety of fields, including physics, astronomy, mathematics, music, digital media, biology and engineering. The conference also hosted computer and computational scientists who are designing and building the cyber infrastructure necessary for next-generation computing. Discussions focused on innovative ways to collaborate and how computational science is changing the future of research. ICCS 2009: ‘Compute. Discover. Innovate.’ was hosted by the Center for Computation and Technology at Louisiana State University in Baton Rouge.

Table of contents

Frontmatter

Third Workshop on Teaching Computational Science (WTCS 2009)

Frontmatter
Third Workshop on Teaching Computational Science (WTCS 2009)

The Third Workshop on Teaching Computational Science, within the International Conference on Computational Science, provides a platform for discussing innovations in teaching computational sciences at all levels and contexts of higher education. This editorial provides an introduction to the work presented during the sessions.

Alfredo Tirado-Ramos, Angela Shiflet
Combination of Bayesian Network and Overlay Model in User Modeling

The core of an adaptive system is the user model, which contains personal information such as knowledge, learning styles, and goals that is requisite for personalizing the learning process. There are many modeling approaches, for example stereotype, overlay, and plan recognition, but they do not provide a solid method for reasoning from the user model. This paper introduces a statistical method that combines a Bayesian network with overlay modeling so that it is able to infer the user's knowledge from evidence collected during the user's learning process.

Loc Nguyen, Phung Do
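
The abstract above does not give the network structure or parameters; as a hedged illustration (Python, with invented conditional probabilities), the sketch below shows the basic inference step such a combined Bayesian/overlay model performs: updating the belief that a concept in the overlay model is known, given observed quiz evidence, via Bayes' rule.

```python
# Hypothetical sketch: Bayesian update of one overlay-model knowledge node.
# The probabilities below are invented for illustration only.

def update_knowledge(prior_known, p_correct_if_known=0.9, p_correct_if_unknown=0.2,
                     answered_correctly=True):
    """Return P(concept known | observed answer) using Bayes' rule."""
    if answered_correctly:
        likelihood_known, likelihood_unknown = p_correct_if_known, p_correct_if_unknown
    else:
        likelihood_known = 1.0 - p_correct_if_known
        likelihood_unknown = 1.0 - p_correct_if_unknown
    evidence = likelihood_known * prior_known + likelihood_unknown * (1.0 - prior_known)
    return likelihood_known * prior_known / evidence

# Overlay model: one belief per domain concept, refined as evidence arrives.
belief = {"recursion": 0.5}
for correct in (True, True, False):
    belief["recursion"] = update_knowledge(belief["recursion"], answered_correctly=correct)
    print(round(belief["recursion"], 3))
```
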
Building Excitement, Experience and Expertise in Computational Science among Middle and High School Students

Three of the most important skills for advancing modern mathematics and science are quantitative reasoning, computational thinking, and multi-scale modeling. The SUCCEED Apprenticeship program gives students the opportunity to explore all three of these areas, using innovative approaches to get students excited about computational science. The overall goal of the program is to provide middle and high school students with authentic experiences in the techniques and tools of information technology, with a particular focus on computational science. The program combines appropriate structure (classroom-style training and project-based work experience) with meaningful work content, giving students a wide variety of technical and communication skills, and serves middle and high school students from ethnically and economically diverse backgrounds.

Patricia Jacobs
Using R for Computer Simulation and Data Analysis in Biochemistry, Molecular Biology, and Biophysics

Modern biology has become a much more quantitative science, so there is a need to teach a quantitative approach to students. I have developed a course that teaches students some approaches to constructing computational models of biological mechanisms, both deterministic and with some elements of randomness; learning how concepts of probability can help to understand important features of DNA sequences; and applying a useful set of statistical methods to analysis of experimental data. The free, open-source, cross-platform program R serves well as the computer tool for the course, because of its high-level capabilities, excellent graphics, superb statistical capabilities, extensive contributed packages, and active development in bioinformatics.

Victor A. Bloomfield
Teaching Model for Computational Science and Engineering Programme

Computational Science and Engineering is an inherently multidisciplinary field and an increasingly important partner of theory and experimentation in the development of knowledge. The Computer Architecture and Operating Systems department of the Universitat Autònoma de Barcelona has created an innovative new master's degree programme with the aim of introducing students to core concepts in this field, such as large-scale simulation and high performance computing. An innovative course model allows students without a computational science background to enter this arena. Students from different fields have already completed the first edition of the new course, and positive feedback has been received from students and professors alike. The second edition is in development.

Hayden Stainsby, Ronal Muresano, Leonardo Fialho, Juan Carlos González, Dolores Rexachs, Emilio Luque
Spread-of-Disease Modeling in a Microbiology Course

Microbiology is the study of microorganisms. Most college courses in microbiology emphasize the biology of bacteria and viruses, including those that are human pathogens. One challenging aspect of the course is to introduce students to epidemiology, which considers the causes, dispersal, and control of disease. Although disease transmission models have helped develop successful strategies for managing epidemics, most science students are unaware of their advantages and complexities. To address this challenge, the microbiology course at Wofford College has incorporated a sequence of three or four laboratories on modeling the spread of disease. Students in the Emphasis in Computational Science who have studied modeling and simulation in depth serve as laboratory assistants and mentors. Evidence from test scores and self-assessment supports the hypothesis that the sequence of laboratories has improved student understanding of human disease dynamics and demonstrated the utility of computational models.

George W. Shiflet, Angela B. Shiflet
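
The abstract above does not specify which disease model the laboratories use; a common starting point for such exercises is the SIR compartmental model, sketched here in Python (forward Euler integration, parameter values invented for illustration rather than taken from the paper).

```python
# Minimal SIR (Susceptible-Infected-Recovered) model, integrated with forward Euler.
# beta and gamma are illustrative values, not taken from the paper.

def simulate_sir(s0=0.99, i0=0.01, r0=0.0, beta=0.4, gamma=0.1, dt=0.1, days=120):
    s, i, r = s0, i0, r0
    history = [(0.0, s, i, r)]
    for step in range(1, int(days / dt) + 1):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        history.append((step * dt, s, i, r))
    return history

peak_t, _, peak_i, _ = max(simulate_sir(), key=lambda row: row[2])
print(f"Peak infection fraction {peak_i:.3f} at day {peak_t:.1f}")
```
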
An Intelligent Tutoring System for Interactive Learning of Data Structures

The high level of abstraction necessary to teach data structures and algorithmic schemes has long been a hindrance to students. To properly approach this issue, we have developed and implemented over the last few years, at the Computer Science Department of the Complutense University of Madrid, an innovative intelligent tutoring system for the interactive learning of data structures according to the new guidelines of the European Higher Education Area. In this paper, we present the main contributions to the design of this intelligent tutoring system. In the first place, we describe the tool called Vedya for the visualization of data structures and algorithmic schemes. In the second place, we describe the Maude system, used to execute the algebraic specifications of abstract data types within the Eclipse system, by which it is possible to study a data type from the more abstract level of a software specification up to its specific implementation in Java, thereby allowing the students a self-learning process. Finally, we describe the Vedya Professor module, designed to allow teachers to monitor the whole educational process of the students.

Rafael del Vado Vírseda, Pablo Fernández, Salvador Muñoz, Antonio Murillo
A Tool for Automatic Code Generation from Schemas

Algorithm design is one of the more neglected aspects of introductory programming courses. By contrast, schemas focus on solution construction: since they gather common characteristics of algorithms, they can be considered algorithmic cognitive units. In this paper, we go beyond the benefits of teaching schemas and present a tool that incorporates their use. It automatically generates code from the application of schemas, allowing its integration into classes as a useful educational tool.

Antonio Gavilanes, Pedro J. Martín, Roberto Torres
The New Computational and Data Sciences Undergraduate Program at George Mason University

We describe the new undergraduate science degree program in Computational and Data Sciences (CDS) at George Mason University (Mason), which began offering courses for both major (B.S.) and minor degrees in Spring 2008. The overarching theme and goal of the program are to train the next-generation scientists in the tools and techniques of cyber-enabled science (e-Science) to prepare them to confront the emerging petascale challenges of data-intensive science. The Mason CDS program has a significantly stronger focus on data-oriented approaches to science than do most computational science and engineering programs. The program has been designed specifically to focus both on simulation (Computational Science) and on data-intensive applications (Data Science). New courses include Introduction to Computational & Data Sciences, Scientific Data and Databases, Scientific Data & Information Visualization, Scientific Data Mining, and Scientific Modeling & Simulation. This is an interdisciplinary science program, drawing examples, classroom materials, and student activities from a broad range of physical and biological sciences. We will describe some of the motivations and early results from the program. More information is available at http://cds.gmu.edu/.

Kirk Borne, John Wallin, Robert Weigel
Models as Arguments: An Approach to Computational Science Education

Hardware and software technology have outpaced our ability to develop models and simulations that can utilize them. Furthermore, as models move further into unfamiliar territory, the issue of correctness becomes more difficult to assess. We propose extending classical argumentation structures as the basis for computational science education.

D. E. Stevenson
A Mathematical Modeling Module with System Engineering Approach for Teaching Undergraduate Students to Conquer Complexity

This paper presents a mathematical modeling module for ODE courses. The module uses a lightweight systems engineering approach to promote the competency of undergraduates to overcome the complexity of applied mathematics problems. The mathematics training of undergraduates in most colleges is limited to solving applications with a couple of variables in a few computational steps. Once faced with problems beyond that level of complexity, they are challenged not only to plan a scheme for finding solutions, but also to provide justification for their answers. This module combines an iterative modeling process with the compartmental analysis methodology to address these challenges. Verification and validation techniques are introduced for assuring the soundness of answers. The query-based process forces the students to trace critical mathematical equations back to the corresponding phenomena of the problem under consideration. Examples within the module are arranged with incremental complexity. Stella is used as a modeling and simulation tool.

Hong Liu, Jayathi Raghavan
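
The module's own examples are not reproduced in the abstract; as a hedged sketch of the compartmental-analysis style it describes, the Python snippet below integrates a two-compartment model (first-order exchange between compartments, rate constants invented for illustration) and includes a simple mass-balance check, the kind of basic verification step the module advocates.

```python
# Two-compartment model: dx1/dt = -k12*x1 + k21*x2, dx2/dt = k12*x1 - k21*x2.
# Rate constants are illustrative, not from the paper.
import numpy as np

def two_compartment(x0=(10.0, 0.0), k12=0.3, k21=0.1, dt=0.01, t_end=20.0):
    x = np.array(x0, dtype=float)
    total0 = x.sum()
    for _ in range(int(t_end / dt)):
        dx1 = -k12 * x[0] + k21 * x[1]
        dx2 = k12 * x[0] - k21 * x[1]
        x += dt * np.array([dx1, dx2])
    # Verification: the closed system should conserve total mass.
    assert abs(x.sum() - total0) < 1e-8
    return x

print(two_compartment())  # approaches the k21:k12 steady-state ratio
```
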
Lessons Learned from a Structured Undergraduate Mentorship Program in Computational Mathematics at George Mason University

We present the results from the first two years of the Undergraduate Research for Computational Mathematics (URCM) program at George Mason University (Mason). In this program, students work on a year-long research project in Computational Science while being supervised by a mentor. We describe the structure and goals of this program along with some observations about the elements of its implementation that we have found challenging. Finally, we provide a summary of the outcomes of the first two years of this project.

John Wallin, Tim Sauer

Workshop on Computational Chemistry and Its Applications (4th CCA)

Frontmatter
First Principle Study of the Anti– and Syn–Conformers of Thiophene–2–Carbonyl Fluoride and Selenophene–2–Carbonyl Fluoride in the Gas and Solution Phases

The anti- and syn-conformers of thiophene-2-carbonyl fluoride (A) and selenophene-2-carbonyl fluoride (B) have been studied in the gas phase. The transition states have also been obtained for the interconversion of the anti- and syn-conformers. The methods used are MP2 and DFT/B3LYP, and the basis set used for all atoms is 6-311++G(d,p). The optimized geometries, dipole moments, moments of inertia, energies, energy differences and rotational barriers are reported. This study has been extended to include solvent effects. Some of the vibrational frequencies of the conformers are reported with appropriate assignments. The results indicate that in the gas phase the syn conformer is more stable; the CCSD(T)//MP2 energy differences are 2.97 kJ/mol (A) and 3.02 kJ/mol (B), and the barriers of rotation are 38.50 kJ/mol (A) and 36.89 kJ/mol (B). The structures and vibrational frequencies of (A) and (B) are not much affected by the solvents, but the more polar conformer is further stabilized. The major effect of the solvents is that the energy difference decreases while the rotational barrier increases. The peculiar characteristic of fluorine affecting conformational preference is not observed.

Hassan H. Abdallah, Ponnadurai Ramasami
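
As a rough back-of-envelope illustration (not taken from the paper), the reported gas-phase energy difference for A can be converted into an approximate room-temperature conformer population with a two-state Boltzmann factor:

```latex
% Two-state Boltzmann estimate at T = 298 K (illustrative only; ignores entropy and degeneracy)
\frac{N_{\mathrm{anti}}}{N_{\mathrm{syn}}}
  = \exp\!\left(-\frac{\Delta E}{RT}\right)
  = \exp\!\left(-\frac{2.97\ \mathrm{kJ\,mol^{-1}}}
                      {8.314\times 10^{-3}\ \mathrm{kJ\,mol^{-1}\,K^{-1}} \times 298\ \mathrm{K}}\right)
  \approx 0.30
```

That is, roughly three syn molecules for every anti molecule of A in the gas phase, consistent with the abstract's statement that the syn conformer is more stable.
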
Density Functional Calculation of the Structure and Electronic Properties of CunOn (n=1-4) Clusters

We have performed ab initio Monte Carlo simulated annealing simulations and density functional theory calculations to study the structures and stabilities of copper oxide clusters, CunOn (n=1-4). We determined the lowest energy structures of neutral, positively and negatively charged copper oxide clusters using the B3LYP/LANL2DZ model chemistry. The geometries are found to undergo a structural change from two to three dimensions when n = 4 in the neutral clusters. We have investigated the size dependence of selected electronic properties: binding energies, second differences of the energy, ionization potentials, electron affinities, and HOMO-LUMO gaps. We also have investigated fragmentation channels and charge distributions.

Gyun-Tack Bae, Randall W. Hall
Effects of Interface Interactions on Mechanical Properties in RDX-Based PBXs HTPB-DOA: Molecular Dynamics Simulations

Atomistic molecular dynamics simulation was carried out to study interface interactions between a crystal structure and a plastic bonded explosive (PBX) system. In this work, the polymer is hydroxyl-terminated polybutadiene (HTPB), the plasticizer is dioctyl adipate (DOA) and the crystal phase is hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX). Experimental RDX crystallographic data show that the (020), (200) and (210) crystal faces usually dominate, and therefore only these were studied. Interface models were built and interfacial bonding energies calculated to investigate HTPB/RDX adhesion properties in the (DOA+HTPB)/RDX system. Mechanical properties such as Poisson's ratio and the Young's, bulk and shear moduli were also predicted. The most favourable interactions occur between HTPB-DOA and the RDX (020) crystal face: obtaining crystals with prominent (020) faces may provide a more flexible mixture, with a lower Young's modulus and an increased ductility.

Mounir Jaidann, Louis-Simon Lussier, Amal Bouamoul, Hakima Abou-Rachid, Josée Brisson
Pairwise Spin-Contamination Correction Method and DFT Study of MnH and H2 Dissociation Curves

A clear advantage of broken symmetry (BS) unrestricted density functional theory (DFT) is its qualitatively correct description of the bond dissociation process, but its disadvantage is that the spin-polarized Slater determinant is no longer a pure spin state (a.k.a. spin contamination). We propose a new approach to eliminate the spin contamination, based on canonical Natural Orbitals (NO). We derive an expression to extract the energy of the pure singlet state in terms of the energy of the BS DFT solution, the occupation number of the bonding NO, and the energy of the higher state built on these bonding and antibonding NOs (as opposed to self-consistent Kohn-Sham orbitals). Thus, unlike the spin-contamination correction schemes by Noodleman and Yamaguchi, the spin correction is introduced for each correlated electron pair individually and is thus expected to give more accurate results. We validate this approach on two examples, the simple diatomic H2 and the transition metal hydride MnH.

Satyender Goel, Artëm E. Masunov
Prediction of Exchange Coupling Constant for Mn12 Molecular Magnet Using DFT+U

Single-molecule magnets are prospective materials for molecular spintronic applications. Prediction of the magnetic coupling in these systems has posed a long-standing problem, as calculations of this kind require a balanced description of static and dynamic electron correlation. The large size of these systems limits the choice of theoretical methods that can be used. Two methods feasible for predicting the exchange coupling parameters are broken symmetry Density Functional Theory (BSDFT) and DFT with an empirical Hubbard U parameter (DFT+U). In this contribution we apply DFT+U to study Mn-based molecular magnets using the Vanderbilt Ultrasoft Pseudopotential plane wave DFT method implemented in the Quantum ESPRESSO code. Unlike most previous studies, we adjust the U parameters for both metal and ligand atoms using two dinuclear molecular magnets, [Mn2O2(phen)4]2+ and [Mn2O2(OAc)(Me4dtne)]3+, as the benchmarks. Next, we apply this methodology to the Mn12 molecular wheel. Our study finds antiparallel spin alignment in weakly interacting fragments of Mn12, in agreement with experimental observations.

Shruba Gangopadhyay, Artëm E. Masunov, Eliza Poalelungi, Michael N. Leuenberger
A Cheminformatics Approach for Zeolite Framework Determination

Knowledge of the framework topology of zeolites is essential for multiple applications. Framework type determination relying on the combined information of coordination sequences and vertex symbols is appropriate for crystals with no defects. In this work we present an alternative machine learning model to classify zeolite crystals according to their framework types. The model is based on an eighteen-dimensional feature vector generated from the crystallographic data of zeolite crystals that contains topological, physical-chemical and statistical descriptors. Trained with sufficient known data, this model predicts the framework types of unknown zeolite crystals within 1-2% error and is shown to be better suited for dealing with real zeolite crystals, all of which have geometrical defects even when the structure is resolved by crystallography.

Shujiang Yang, Mohammed Lach-hab, Iosif I. Vaisman, Estela Blaisten-Barojas
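
The abstract above does not name the learning algorithm; purely as an illustrative sketch (Python with scikit-learn, random numbers standing in for the eighteen-dimensional descriptor vectors, and a random forest used only as a placeholder classifier), the snippet below shows the general shape of such a supervised framework-type classifier.

```python
# Illustrative only: a supervised classifier over 18-dimensional descriptor vectors.
# The real work derives descriptors from crystallographic data; here they are random.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_crystals, n_features, n_framework_types = 600, 18, 12
X = rng.normal(size=(n_crystals, n_features))             # stand-in descriptor vectors
y = rng.integers(0, n_framework_types, size=n_crystals)   # stand-in framework labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))    # near chance on random data
```
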
Theoretical Photochemistry of the Photochromic Molecules Based on Density Functional Theory Methods

The mechanism of photoswitching in diarylethenes involves a light-initiated, symmetry-allowed disrotatory electrocyclic reaction. Here we propose a computationally inexpensive Density Functional Theory (DFT) based method that is able to produce accurate potential surfaces for the excited states. The method includes constrained optimization of the geometry for the ground and two excited singlet states along the ring-closing reaction coordinate using the Slater Transition State method, followed by single-point energy evaluation. The ground state energy is calculated with the broken-symmetry unrestricted Kohn-Sham formalism (UDFT). The first excited state energy is obtained by adding the UDFT ground state energy to the excitation energy of the pure singlet obtained in the linear response Time-Dependent (TD) DFT restricted Kohn-Sham formalism. The excitation energy of the doubly excited state is calculated using a recently proposed a posteriori Tamm-Dancoff approximation to the second order response TD-DFT (Mikhailov, I. A.; Tafur, S.; Masunov, A. E. Phys. Rev. A 77, 012510, 2008).

Ivan A. Mikhailov, Artëm E. Masunov
Predictions of Two Photon Absorption Profiles Using Time-Dependent Density Functional Theory Combined with SOS and CEO Formalisms

Two-photon absorption (2PA) and subsequent processes may be localized in space with a tightly focused laser beam. This property is used in a wide range of applications, including three-dimensional data storage. We report theoretical studies of five conjugated chromophores experimentally shown to have large 2PA cross-sections. We use Time-Dependent Density Functional Theory (TD-DFT) to describe the electronic structure. The third order coupled electronic oscillator formalism is applied to calculate the frequency-dependent second order hyperpolarizability. Alternatively, the sum-over-states formalism is employed, using state-to-state transition dipoles provided by the a posteriori Tamm-Dancoff approximation. It provides new avenues for qualitative interpretation and rational design of 2PA chromophores.

Sergio Tafur, Ivan A. Mikhailov, Kevin D. Belfield, Artëm E. Masunov
The Kinetics of Charge Recombination in DNA Hairpins Controlled by Counterions

The charge recombination rate in DNA hairpins is investigated. The distance dependence of the charge recombination rate between stilbene donor (Sd+) and stilbene acceptor (Sa) linkers separated by an AT bridge has a double exponential form. We suggest that this dependence is associated with two tunneling channels distinguished by the presence or absence of the Cl- counterion bound to Sd+. Experiment-based estimates of the counterion binding parameters agree within reasonable expectations. A control experiment replacing the Cl- ion with other halide ions is suggested. Counterion substitution allows modification of the charge recombination rate in either direction by orders of magnitude.

Gail S. Blaustein, Frederick D. Lewis, Alexander L. Burin, Rajesh Shrestha
Quantum Oscillator in a Heat Bath

In the present article, we use the density matrix evolution method to study the effect of a model solvent on the vibrational spectrum of a diatomic solute particle. The effect of the solvent is considered as a perturbation on the Hamiltonian of the quantum subsystem, which consists of a harmonic oscillator. The bath particles are treated classically. The perturbation potential representing the interaction between the solute and the solvent is expressed in a bi-exponential form. This provides an effective way to evaluate the matrix elements needed to compute the evolution of the density matrix. The model calculations indicate that the repulsive parts of the potential dominate, causing blue shifts in the vibrational frequencies.

Pramodh Vallurpalli, Praveen K. Pandey, Bhalachandra L. Tembe
Density Functional Theory Study of Ag-Cluster/CO Interactions

The interactions between carbon monoxide and small clusters of silver atoms are examined. Optimal geometries of the cluster-molecule complexes, i.e. silver cluster plus carbon monoxide molecules, are obtained for different sizes of silver clusters and different numbers of carbon monoxide molecules. The analysis is performed in terms of the binding energies of these complexes and of the frontier orbitals of the complex compared to those of its constituents. The silver atom and the dimer (Ag2) bind up to three carbon monoxide molecules per Ag atom, while the larger clusters appear to saturate at two CO's per Ag atom. Analysis of the binding energy of each CO molecule to the cluster reveals that the general trend is a decrease with the number of CO molecules, with the exception of Ag, where the second CO molecule is the most strongly bound. A careful analysis of the frontier orbitals shows that the bent structures of AgCO and Ag2CO result from the interaction of the highest occupied orbital of Ag (5s) and Ag2 (σ) with the lowest unoccupied orbital of CO (π*). The same bent structure also appears in the bonding of CO to some of the atoms in the larger clusters. Another general trend is that the CO molecules have a tendency to bond atop an atom rather than on bridge or face sites. These results can help elucidate the catalytic properties of small silver clusters at the atomic level.

Paulo H. Acioli, Narin Ratanavade, Michael R. Cline, Sudha Srinivas
Time-Dependent Density Functional Theory Study of Structure-Property Relationships in Diarylethene Photochromic Compounds

Photochromic compounds exhibit a reversible transition between closed and open isomeric forms upon irradiation, accompanied by a change in their color. The two isomeric forms differ not only in absorption spectra, but also in various physical and chemical properties, and find applications as optical switching and data storage materials. In this contribution we apply Density Functional Theory (DFT) and Time-Dependent DFT (TD-DFT) to predict the equilibrium geometry and absorption spectra of a benchmark set of diarylethene-based photochromic compounds in open and closed forms (before and after photocyclization). Comparison of the calculated Bond Length Alternation parameters with those available from X-ray data indicates that the M05-2x functional is the best method for geometry optimization when the basis set includes polarization functions. We found that the M05 functional accurately predicts the maximum absorption wavelength when solvent is taken into account. We recommend the combined theory level TD-M05/6-31G*/PCM//M05-2x/6-31G*/PCM for prediction of geometrical and spectral parameters of diarylethene derivatives.

Pansy D. Patel, Artëm E. Masunov
Free Energy Correction to Rigid Body Docking: Application to the Colicin E7 and Im7 Complex

We performed a two-dimensional free energy calculation in the conformational space spanned by two structures obtained with ZDOCK, the best-RMSD (Root Mean Square Distance) and the worst-RMSD structures, for the Colicin E7 (protein) and Im7 (inhibitor) complex. The lowest free energy minimum structure is compared to the X-ray crystal structure and the best-RMSD docking structure. The free energy correction for the best-RMSD structure suggests an alternative prediction of a flexible loop position, which rigid body docking could not describe.

Sangwook Wu, Vasu Chandrasekaran, Lee G. Pedersen
The Design of Tris(o-phenylenedioxy)cyclo-trisphosphazene (TPP) Derivatives and Analogs toward Multifunctional Zeolite Use

Taking tris(o-phenylenedioxy)cyclotrisphosphazene (TPP) as a template, a series of derivatives and analogs was designed with the aim of investigating the structural features of organic zeolites (OZ) and their potential applications. On the basis of DFT-PBE0/6-31G** quantum calculations, the results show a tight dependence of the electron-donor (E-D) strength of the entire molecule on that of the side group and bridge. It was found that extending the side fragment with a phenyl ring and substituting CH/N or a tetrathiafulvalene (TTF)-like group, or substituting the side phenyl fragments by TTF and its derivatives, preserves the "paddle wheel" molecular shape, a key factor in the tunnel formation on which the OZ use of TPP is based. In comparison with commonly used organic superconductors, most of the designed molecules with TTF fragments were predicted to show comparable or better E-D strength.

Godefroid Gahungu, Wenliang Li, Jingping Zhang

Workshop on Atmospheric and Oceanic Computational Science

Frontmatter
Atmospheric and Oceanic Computational Science
First International Workshop

The first workshop on Atmospheric and Oceanic Computational Science brings together computational and domain scientists who develop computational tools for the study of the atmosphere and oceans. These tools are essential for understanding and predicting weather, air and water pollution, and the evolution of the planet's climate. The dynamics of the atmosphere and of the oceans is driven by a multitude of physical processes and is characterized by multiple spatial and temporal scales. Moreover, the computations are very large scale: present day models track the time evolution of tens of millions to tens of billions of variables. These factors make atmospheric and oceanic simulations a challenging, vibrant research field with a tremendous impact on society at large.

Topics covered in this symposium include new methods for spatial and temporal discretization, parallel and high performance computing, advances with existing models, and data assimilation and observation targeting algorithms.

Adrian Sandu, Amik St-Cyr, Katherine J. Evans
A Fully Implicit Jacobian-Free High-Order Discontinuous Galerkin Mesoscale Flow Solver

In this work it is shown how to discretize the compressible Euler equations around a vertically stratified base state using the discontinuous Galerkin approach on collocated Gauss type grids. A stiffly stable Rosenbrock W-method is combined with an approximate evaluation of the Jacobian to integrate in time the resulting system of ODEs. Simulations with fully compressible equations for a rising thermal bubble are performed. Also included are simulations of an inertia gravity wave in a periodic channel. The proposed time-stepping method accelerates the simulation times with respect to explicit Runge-Kutta time stepping procedures having the same number of stages.

Amik St-Cyr, David Neckels
Time Acceleration Methods for Advection on the Cubed Sphere

Climate simulation will not grow to the ultrascale without new algorithms to overcome the scalability barriers blocking existing implementations. Until recently, climate simulations concentrated on the question of whether the climate is changing. The emphasis is now shifting to impact assessments, mitigation and adaptation strategies, and regional details. Such studies will require significant increases in spatial resolution and model complexity while maintaining adequate throughput. The barrier to progress is the resulting decrease in time step without increasing single-thread performance. In this paper we demonstrate how to overcome this time barrier for the first standard test defined for the shallow-water equations on a sphere. This paper explains how combining a multiwavelet discontinuous Galerkin method with exact linear part time-evolution schemes can overcome the time barrier for advection equations on a sphere. The discontinuous Galerkin method is a high-order method that is conservative, flexible, and scalable. The addition of multiwavelets to discontinuous Galerkin provides a hierarchical scale structure that can be exploited to improve computational efficiency in both the spatial and temporal dimensions. Exact linear part time-evolution schemes are explicit schemes that remain stable for implicit-size time steps.

R. K. Archibald, Katherine J. Evans, J. B. Drake, J. B. White III
Comparison of Traditional and Novel Discretization Methods for Advection Models in Numerical Weather Prediction

Numerical Weather Prediction has been dominated by low order finite difference methodology over many years. The advent of high performance computers and the development of high order methods over the last two decades point to a need to investigate the use of more advanced numerical techniques in this field. Domain decomposable high order methods such as spectral element and discontinuous Galerkin, while generally more expensive (except perhaps in the context of high performance computing), exhibit faster convergence to high accuracy solutions and can locally resolve highly nonlinear phenomena. This paper presents comparisons of CPU time, number of degrees of freedom and overall behavior of solutions for finite difference, spectral difference and discontinuous Galerkin methods on two model advection problems. In particular, spectral differencing is investigated as an alternative to spectral-based methods which exhibit stringent explicit time step requirements.

Sean Crowell, Dustin Williams, Catherine Mavriplis, Louis Wicker
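
For readers unfamiliar with the baseline the comparison above starts from, the sketch below (Python, my own minimal example rather than the paper's test cases) advances the 1-D linear advection equation u_t + a u_x = 0 with a first-order upwind finite difference, the kind of low-order scheme against which spectral difference and discontinuous Galerkin methods are measured.

```python
# First-order upwind finite difference for u_t + a*u_x = 0 on a periodic domain.
# Illustrative baseline only; the paper's test problems and schemes are not reproduced.
import numpy as np

a, nx, cfl = 1.0, 200, 0.8
dx = 1.0 / nx
dt = cfl * dx / a
x = np.linspace(0.0, 1.0, nx, endpoint=False)
u = np.exp(-200.0 * (x - 0.5) ** 2)      # initial Gaussian pulse
u0 = u.copy()

t_final = 1.0                             # one full revolution of the periodic domain
for _ in range(int(round(t_final / dt))):
    u = u - a * dt / dx * (u - np.roll(u, 1))   # upwind difference for a > 0

print("L2 error after one period:", np.sqrt(dx * np.sum((u - u0) ** 2)))
```

The error printed at the end reflects the numerical diffusion of the low-order scheme, which is what motivates the higher-order methods compared in the paper.
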
A Non-oscillatory Advection Operator for the Compatible Spectral Element Method

The spectral element method is well known as an efficient way to obtain high-order numerical solutions on unstructured finite element grids. However, the oscillatory nature of the method’s advection operator makes it unsuitable for many applications. One popular way to address this problem is with high-order discontinuous-Galerkin methods. In this work, an alternative solution which fits within the continuous Galerkin formulation of the spectral element method is proposed. Making use of a compatible formulation of spectral elements, a natural way to implement conservative non-oscillatory reconstructions for spectral element advection is shown. The reconstructions are local to the element and thus preserve the parallel efficiency of the method. Numerical results from a low-order quasi-monotone reconstruction and a higher-order sign-preserving reconstruction are presented.

M. A. Taylor, A. St. Cyr, A. Fournier
Simulating Particulate Organic Advection along Bottom Slopes to Improve Simulation of Estuarine Hypoxia and Anoxia

In a coupled hydrodynamic and water quality model, the hydrodynamic model provides the forces for movement of simulated particles in the water quality model. A proper simulation of organic solid movement from shallow to deep waters is important for simulating summer hypoxia in the deepwater. It is necessary to have a full-blown particle transport model that focuses on the resuspension and transport of organic particulates. This paper presents an approach to moving volatile solids from the shoals to the channel by simulating movement of particulate organics due to bottom slopes, based on an example from the Chesapeake Bay eutrophication model. Implementations for the simulation of this behavior in parallel processing are discussed.

Ping Wang, Lewis C. Linker
Explicit Time Stepping Methods with High Stage Order and Monotonicity Properties

This paper introduces a third-order and a fourth-order explicit time stepping method. These methods have high stage order and favorable monotonicity properties. The proposed methods are based on multistage-multistep (MM) schemes that belong to the broader class of general linear methods, which are generalizations of both Runge-Kutta and linear multistep methods. Methods with high stage order alleviate the order reduction occurring in explicit multistage methods due to non-homogeneous boundary/source terms. Furthermore, the MM schemes presented in this paper can be expressed as convex combinations of Euler steps. Consequently, they have the same monotonicity properties as the forward Euler method. This property makes these schemes well suited for problems with discontinuous solutions.

Emil Constantinescu, Adrian Sandu
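
The paper's multistage-multistep schemes are not reproduced here; to illustrate the convex-combination property mentioned in the abstract, the sketch below (Python) writes the classical two-stage strong-stability-preserving Runge-Kutta method as a convex combination of forward Euler steps, which is exactly what transfers the Euler method's monotonicity to a higher-order scheme.

```python
# Two-stage strong-stability-preserving Runge-Kutta (Shu-Osher form), shown as a
# convex combination of forward Euler steps. Illustrative; not the paper's MM schemes.
import numpy as np

def euler_step(u, dt, f):
    return u + dt * f(u)

def ssp_rk2_step(u, dt, f):
    u1 = euler_step(u, dt, f)                      # first Euler step
    return 0.5 * u + 0.5 * euler_step(u1, dt, f)   # convex combination: 1/2, 1/2

# Example: decay equation u' = -u, exact solution exp(-t).
f = lambda u: -u
u, dt = np.array([1.0]), 0.1
for _ in range(10):
    u = ssp_rk2_step(u, dt, f)
print(float(u[0]), np.exp(-1.0))   # second-order accurate approximation of e^{-1}
```
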
Improving GEOS-Chem Model Tropospheric Ozone through Assimilation of Pseudo Tropospheric Emission Spectrometer Profile Retrievals

4D-variational or adjoint-based data assimilation provides a powerful means for integrating observations with models to estimate an optimal atmospheric state and to characterize the sensitivity of that state to the processes controlling it. In this paper we present the improvement of the 2006 summertime distribution of global tropospheric ozone through assimilation of pseudo profile retrievals from the Tropospheric Emission Spectrometer (TES) into the GEOS-Chem global chemical transport model, based on a recently developed adjoint model of GEOS-Chem v7. We are the first to construct an adjoint of the linearized ozone parameterization (Linoz) scheme, which is of high importance in quantifying the amount of tropospheric ozone due to upper boundary exchanges. Tests conducted at various geographical levels show that the mismatch between adjoint values and their finite difference approximations can be up to 87% if the Linoz module adjoint is not used, leading to divergence of the quasi-Newton optimization algorithm (L-BFGS) during data assimilation. We also present performance improvements in this adjoint model in terms of memory usage and speed. With the parallelization of each science process adjoint subroutine and a sub-optimal combination of checkpoints and recalculations, the improved adjoint model is as efficient as the forward GEOS-Chem model.

Kumaresh Singh, Paul Eller, Adrian Sandu, Kevin Bowman, Dylan Jones, Meemong Lee
Chemical Data Assimilation with CMAQ: Continuous vs. Discrete Advection Adjoints

The Community Multiscale Air Quality (CMAQ) system is the Environmental Protection Agency's main modeling tool for atmospheric pollution studies. CMAQ-ADJ, the adjoint model of CMAQ, offers new capabilities such as receptor-oriented sensitivity analysis and chemical data assimilation. This paper presents the construction of discrete advection adjoints in CMAQ. The new adjoints are thoroughly validated against finite differences. We assess the performance of discrete and continuous advection adjoints in CMAQ on sensitivity analysis and 4D-Var data assimilation applications. The results show that discrete adjoint sensitivities agree better with finite difference values than their continuous counterparts. However, continuous adjoints result in faster convergence of the numerical optimization in 4D-Var data assimilation. Similar conclusions apply to modified discrete adjoints.

Tianyi Gou, Kumaresh Singh, Adrian Sandu
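
The CMAQ adjoint itself is not shown here; as a hedged, generic illustration of the validation step the abstract describes (checking adjoint sensitivities against finite differences), the Python sketch below compares the adjoint gradient of a toy cost function, driven by a small stand-in linear "transport" operator, with its central-difference approximation.

```python
# Generic adjoint-vs-finite-difference check on a toy linear model (not CMAQ).
# Model: one step c1 = M @ c0; cost J = 0.5 * ||c1 - obs||^2.
# Adjoint sensitivity: dJ/dc0 = M.T @ (M @ c0 - obs).
import numpy as np

rng = np.random.default_rng(1)
n = 5
M = np.eye(n) + 0.1 * rng.normal(size=(n, n))   # stand-in transport operator
c0 = rng.normal(size=n)
obs = rng.normal(size=n)

def cost(c):
    r = M @ c - obs
    return 0.5 * float(r @ r)

adjoint_grad = M.T @ (M @ c0 - obs)

eps = 1e-6
fd_grad = np.array([
    (cost(c0 + eps * e) - cost(c0 - eps * e)) / (2 * eps)
    for e in np.eye(n)
])
print("adjoint gradient:    ", adjoint_grad)
print("finite differences:  ", fd_grad)
print("max abs difference:  ", np.max(np.abs(adjoint_grad - fd_grad)))
```
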
A Second Order Adjoint Method to Targeted Observations

The role of the second order adjoint in targeting strategies is studied and analyzed. Most targeting strategies use the first order adjoint to identify regions where additional information is of potential benefit to a data assimilation system. The first order adjoint imposes a restriction on the targeting time over which the linear approximation accurately tracks the evolution of perturbations. Using second order adjoint information it is possible to maintain some accuracy for longer time intervals, which can lead to an increase in the targeting time. We propose the use of the dominant eigenvectors of the Hessian matrix as an indicator of the directions of maximal error growth for a given targeting functional. These vectors are a natural choice to be included in targeting strategies given their mathematical properties.

Humberto C. Godinez, Dacian N. Daescu
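
The targeting functional and model are not specified in the abstract; as an illustrative sketch (Python, a small explicit matrix standing in for the Hessian), the snippet below extracts the dominant eigenvector with power iteration, the kind of computation that identifies a direction of maximal error growth when the Hessian is only available through Hessian-vector products.

```python
# Power iteration for the dominant eigenpair of a symmetric (Hessian-like) matrix.
# In practice the Hessian would only be available through Hessian-vector products
# from a second order adjoint model; here a small explicit matrix stands in for it.
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(6, 6))
H = A @ A.T                      # symmetric positive semi-definite stand-in Hessian

def hessian_vector_product(v):
    return H @ v                 # placeholder for the second order adjoint computation

v = rng.normal(size=6)
for _ in range(200):
    w = hessian_vector_product(v)
    v = w / np.linalg.norm(w)

dominant_eigenvalue = float(v @ hessian_vector_product(v))
print(dominant_eigenvalue, np.linalg.eigvalsh(H)[-1])   # the two should agree closely
```
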
A Scalable and Adaptable Solution Framework within Components of the Community Climate System Model

A framework for a fully implicit solution method is implemented into (1) the High Order Methods Modeling Environment (HOMME), which is a spectral element dynamical core option in the Community Atmosphere Model (CAM), and (2) the Parallel Ocean Program (POP) model of the global ocean. Both of these models are components of the Community Climate System Model (CCSM). HOMME is a development version of CAM and provides a scalable alternative when run with an explicit time integrator. However, it suffers from the typical time step size limit needed to maintain stability. POP uses a time-split semi-implicit time integrator that allows larger time steps but less accuracy when used with scale-interacting physics. A fully implicit solution framework allows larger time step sizes and additional climate analysis capabilities, such as model steady state and spin-up efficiency gains, without a loss in scalability. This framework is implemented into HOMME and POP using a new Fortran interface to the Trilinos solver library, ForTrilinos, which leverages several new capabilities in the current Fortran standard to maximize robustness and speed. The ForTrilinos solution template was also designed for interchangeability; other solution methods and capability improvements can be more easily implemented into the models as they are developed without severely interacting with the code structure. The utility of this approach is illustrated with a test case for each of the climate component models.

Katherine J. Evans, Damian W. I. Rouson, Andrew G. Salinger, Mark A. Taylor, Wilbert Weijer, James B. White III

Workshop on Geocomputation 2009

Frontmatter
GeoComputation 2009

The tremendous computing requirements of today's algorithms and the high costs of high-performance supercomputers drive us to share computing resources. The emerging computational Grid technologies are expected to make feasible the creation of a computational environment handling many PetaBytes of distributed data, tens of thousands of heterogeneous computing resources, and thousands of simultaneous users from multiple research institutions (Giovanni et al. 2003).

Yong Xue, Forrest M. Hoffman, Dingsheng Liu
Grid Workflow Modeling for Remote Sensing Retrieval Service with Tight Coupling

Grid computing technology offers a new way to process remotely sensed data. Tightly coupled remote sensing algorithms cannot be scheduled by a grid platform directly. Therefore, we need an interactive graphical tool to represent the execution relationships between algorithms and to automatically generate the corresponding job description files submitted to the grid platform. In this paper we discuss some application cases of Grid computing for the geosciences and the limits of Grid applications in remote sensing, and present a method of Grid workflow modeling for remote sensing. Based on this modeling, we then design a concrete example.

Jianwen Ai, Yong Xue, Jie Guang, Yingjie Li, Ying Wang, Linyan Bai
An Asynchronous Parallelized and Scalable Image Resampling Algorithm with Parallel I/O

Image resampling, which is frequently used in remote sensing processing chains, is a time-consuming task. Parallel computing is an effective way to speed it up. However, recent parallel image resampling algorithms with massively time-consuming global operations such as I/O tend to show low efficiency and non-linear speedup, especially when the number of computing nodes grows beyond a certain point. Moreover, the various geo-referencing schemes used by different processing applications pose a real problem for code reuse. To solve these problems, this paper proposes the Parallel Image Resampling Algorithm with Parallel I/O (PIRA-PIO), an asynchronously parallelized image resampling algorithm with parallel I/O. PIRA-PIO uses the parallel I/O of a parallel file system together with asynchronous parallelization, applying an I/O-hiding policy that overlaps computing time with I/O time, to enhance performance. In addition, reusable code in the style of design patterns is used to improve flexibility across different remote sensing image processing applications. Experimental and comparative analysis shows its excellent parallel efficiency and nearly linear speedup.

Yan Ma, Lingjun Zhao, Dingsheng Liu
Design and Implementation of a Scalable General High Performance Remote Sensing Satellite Ground Processing System on Performance and Function

This paper discusses the design and implementation of a remote sensing satellite ground processing system that is scalable in both performance and function, using a variety of advanced hardware and software technologies. These technologies include networking, a parallel file system, parallel programming, job scheduling, workflow management, design patterns, etc., which make the performance and function of the remote sensing satellite ground processing system scalable enough to fully meet the high performance processing requirements of multi-satellite, multi-tasking, massive remote sensing satellite data. The "Beijing-1" satellite remote sensing ground processing system is introduced as an instance.

Jingshan Li, Dingsheng Liu
Incremental Clustering Algorithm for Earth Science Data Mining

Remote sensing data plays a key role in understanding complex geographic phenomena. Clustering is a useful tool for discovering interesting patterns and structures within multivariate geospatial data. One of the key issues in clustering is the specification of the appropriate number of clusters, which is not obvious in many practical situations. In this paper we provide an extension of the G-means algorithm which automatically learns the number of clusters present in the data and avoids overestimation of the number of clusters. Experimental evaluation on simulated and remotely sensed image data shows the effectiveness of our algorithm.

Ranga Raju Vatsavai
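
The authors' extension is not reproduced here; as a rough sketch of the G-means idea it builds on (Python with scikit-learn and SciPy, thresholds and the simple recursion chosen for illustration), the snippet below splits a cluster whenever its points, projected onto the 2-means split direction, fail an Anderson-Darling normality test, so the number of clusters is learned rather than fixed in advance.

```python
# Rough sketch of the G-means idea (learn k by statistical testing), not the
# authors' extension. Thresholds and the plain recursion are illustrative.
import numpy as np
from scipy.stats import anderson
from sklearn.cluster import KMeans

def looks_gaussian(points, significance_index=2):
    """Anderson-Darling test on the 1-D projection along the 2-means split direction."""
    if len(points) < 8:
        return True
    km = KMeans(n_clusters=2, n_init=5, random_state=0).fit(points)
    direction = km.cluster_centers_[0] - km.cluster_centers_[1]
    norm = np.linalg.norm(direction)
    if norm == 0:
        return True
    projected = points @ (direction / norm)
    result = anderson(projected, dist="norm")
    # critical_values[2] corresponds to the 5% significance level
    return result.statistic < result.critical_values[significance_index]

def gmeans(points, max_depth=6):
    """Recursively split clusters that fail the Gaussianity test; return centers."""
    if max_depth == 0 or looks_gaussian(points):
        return [points.mean(axis=0)]
    km = KMeans(n_clusters=2, n_init=5, random_state=0).fit(points)
    centers = []
    for label in (0, 1):
        centers += gmeans(points[km.labels_ == label], max_depth - 1)
    return centers

rng = np.random.default_rng(3)
data = np.vstack([rng.normal(loc, 0.3, size=(150, 2)) for loc in ([0, 0], [3, 0], [0, 3])])
print("estimated number of clusters:", len(gmeans(data)))
```
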
Overcoming Geoinformatic Knowledge Fence: An Exploratory of Intelligent Geospatial Data Preparation within Spatial Analysis

The booming of earth observation provides decision-makers with more available geospatial data as well as more puzzles about how to understand, evaluate, search, process, and utilize these overwhelming resources. This paper introduces a concept termed the geoinformatic knowledge fence (GeoKF) to discuss the knowledge aspect of such puzzles and an approach to overcoming them. Based on an analysis of the gap between common geographic sense and geoinformatic professional knowledge, the approach combines analysis space modeling and spatial reasoning to match decision models to the online geospatial data sources they need. This approach enables automatic and intelligent searching of suitable geospatial data resources and calculation of their suitability for a given spatial decision and analysis. An experiment with geo-services, geo-ontology and rule-based reasoning (Jess) is developed to illustrate the feasibility of the approach in a data preparation scenario for bird flu control decisions.

Jian Wang, Chun-jiang Zhao, Fang-qu Niu, Zhi-qiang Wang
Spatial Relations Analysis by Using Fuzzy Operators

Spatial relations play an important role in computer vision, scene analysis, geographic information systems (GIS) and content-based image retrieval. Analyzing spatial relations with the force histogram was introduced by Miyajima et al. [1] and largely developed by Matsakis [2], who used a quantitative representation of the relative position between 2D objects. Fuzzy Allen relations are used to define the fuzzy topological relations between different objects and to detect object positions in images. A concept for the combined extraction of topological and directional relations using histograms was developed by J. Malki and E. Zahzah [3], and further improved by Matsakis [4]. This algorithm has high computational and temporal complexity due to its limitations on object approximations. In this paper, fuzzy aggregation operators are used for information integration, along with polygonal approximation of objects. This approach yields a new algorithm, with low temporal and computational complexity, for the extraction of topological and directional relations.

Nadeem Salamat, El-hadi Zahzah
A Parallel Nonnegative Tensor Factorization Algorithm for Mining Global Climate Data

Increasingly large datasets acquired by NASA for global climate studies demand larger computational memory and higher CPU speed to mine out useful and revealing information. While boosting the CPU frequency is getting harder, clustering multiple lower-performance computers becomes increasingly popular. This prompts a trend among mathematicians and computer scientists of parallelizing existing algorithms and methods. In this paper, we take on the task of parallelizing the Nonnegative Tensor Factorization (NTF) method, with the purposes of distributing large datasets across cluster nodes and thus reducing the demand on any single node, blocking and localizing the computation to the maximal degree, and finally minimizing the memory used for storing matrices or tensors by exploiting their structural relationships. Numerical experiments were performed on a NASA global sea surface temperature dataset, and the resulting factors were analyzed and discussed.

Qiang Zhang, Michael W. Berry, Brian T. Lamb, Tabitha Samuel
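
The parallel NTF algorithm itself is not reproduced; as a much simpler serial illustration of the underlying idea, the Python sketch below applies the classical multiplicative-update nonnegative matrix factorization (the two-way analogue of NTF) to a random nonnegative matrix standing in for climate data.

```python
# Serial, two-way analogue of nonnegative tensor factorization: classical
# multiplicative-update NMF (Lee-Seung) on a random nonnegative matrix.
# Illustrative only; the paper parallelizes the full tensor (NTF) case.
import numpy as np

rng = np.random.default_rng(4)
m, n, rank = 60, 40, 5
V = rng.random((m, n))                    # stand-in for a climate data matrix

W = rng.random((m, rank))
H = rng.random((rank, n))
eps = 1e-12                               # guard against division by zero

for _ in range(500):
    H *= (W.T @ V) / (W.T @ W @ H + eps)  # multiplicative updates keep factors nonnegative
    W *= (V @ H.T) / (W @ H @ H.T + eps)

reconstruction_error = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print("relative reconstruction error:", round(reconstruction_error, 3))
```
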
Querying for Feature Extraction and Visualization in Climate Modeling

The ultimate goal of data visualization is to clearly portray features relevant to the problem being studied. This goal can be realized only if users can effectively communicate to the visualization software what features are of interest. To this end, we describe in this paper two query languages used by scientists to locate and visually emphasize relevant data in both space and time. These languages offer descriptive feedback and interactive refinement of query parameters, which are essential in any framework supporting queries of arbitrary complexity. We apply these languages to extract features of interest from climate model results and describe how they support rapid feature extraction from large datasets.

C. Ryan Johnson, Markus Glatter, Wesley Kendall, Jian Huang, Forrest Hoffman
Applying Wavelet and Fourier Transform Analysis to Large Geophysical Datasets

The recurrence of periodic environmental states is important to many systems of study, and particularly to the life cycles of plants and animals. Periodicity in parameters that are important to life, such as precipitation, is important to understanding environmental impacts, and changes in its intensity and duration can have far-reaching effects. To keep pace with the rapid expansion of Earth science datasets, efficient data mining techniques are required. This paper presents an automated method for Discrete Fourier Transform (DFT) and wavelet analysis capable of rapidly identifying changes in the intensity of seasonal, annual, or interannual events. Spectral analysis is used to diagnose model behavior and to locate land surface cells that show shifting cycle intensity, which could be used as an indicator of climate shift. The strengths and limitations of DFT and wavelet spectral analysis are also explored. Example routines in Octave/Matlab and IDL are provided.

Bjørn-Gustaf J. Brooks
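
The paper provides routines in Octave/Matlab and IDL, which are not reproduced here. The Python sketch below (my own, on synthetic data) shows the core step: using the discrete Fourier transform to pick out the dominant period of a seasonal signal, the quantity whose changes in intensity the paper tracks across a land surface grid.

```python
# Find the dominant period of a (synthetic) monthly time series with the DFT.
# Synthetic data only; the paper's Octave/Matlab and IDL routines are separate.
import numpy as np

months = np.arange(240)                                   # 20 years of monthly values
signal = (10.0
          + 3.0 * np.sin(2 * np.pi * months / 12.0)       # annual cycle
          + 0.5 * np.random.default_rng(5).normal(size=months.size))

spectrum = np.fft.rfft(signal - signal.mean())
freqs = np.fft.rfftfreq(months.size, d=1.0)               # cycles per month
dominant = np.argmax(np.abs(spectrum))
print("dominant period (months):", 1.0 / freqs[dominant]) # ~12 for an annual cycle
```
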
Seismic Wave Field Modeling with Graphics Processing Units

GPGPU, general-purpose computing on graphics processing units, is a very effective and inexpensive way of dealing with time-consuming computations. In some cases even a low-end GPU can be dozens of times faster than a modern CPU. Utilization of GPGPU technology can make a typical desktop computer powerful enough to perform the necessary computations in a fast, effective and inexpensive way. Seismic wave field modeling is one problem of this kind. Sometimes a single modeled common shot-point gather or one wave field snapshot can reveal the nature of an analyzed wave phenomenon. On the other hand, these kinds of modelings are often part of complex and extremely time-consuming methods with almost unlimited needs for computational resources. This has always been a problem for academic centers, especially now that the times of generous support from oil and gas companies have ended.

Tomasz Danek

Workshop on Dynamic Data Driven Applications Systems – DDDAS 2009

Frontmatter
Dynamic Data Driven Applications Systems – DDDAS 2009

This workshop covers several aspects of the Dynamic Data Driven Applications Systems (DDDAS) concept, which is an established approach defining a symbiotic relation between an application and sensor-based measurement systems. Applications can accept and respond dynamically to new data injected into the executing application. In addition, applications can dynamically control the measurement processes. The synergistic feedback control loop between an application simulation and its measurements opens new capabilities in simulations, e.g., the creation of applications with new and enhanced analysis and prediction capabilities, greater accuracy, and longer simulations between restarts, as well as a new methodology for more efficient and effective measurements. DDDAS transforms the way science and engineering are done, with a major impact on the way many functions in our society are conducted, e.g., manufacturing, commerce, transportation, hazard prediction and management, and medicine. The workshop will present such new opportunities as well as the challenges and approaches in technology needed to enable DDDAS capabilities in applications, relevant algorithms, and software systems. The workshop will showcase ongoing research in these aspects with examples from several important application areas.

Craig C. Douglas
Characterizing Dynamic Data Driven Applications Systems (DDDAS) in Terms of a Computational Model

The DDDAS (Dynamic Data Driven Applications Systems) concept creates new capabilities in applications and measurements through a new computational paradigm where application simulations can dynamically incorporate and respond to online field data and measurements, and/or control such measurements. Such capabilities entail dynamic integration of the computational and measurement aspects of an application in a dynamic feedback loop, leading to unified SuperGrids of the computational and instrumentation platforms. Examples of advances in application capabilities enabled through DDDAS over traditional computational modeling methods, and advances in measurement methods, instrumentation, and sensor network systems, have appeared extensively in the literature. This paper concentrates on discussing a computational model representing the unified DDDAS computation-measurement environments, and asymptotic cases leading to traditional computational environments, data assimilation, and traditional control systems.

Frederica Darema
Enabling End-to-End Data-Driven Sensor-Based Scientific and Engineering Applications

Technical advances are leading to a pervasive computational infrastructure that integrates computational processes with embedded sensors and actuators, and are giving rise to a new paradigm for monitoring, understanding, and managing natural and engineered systems, one that is information/data-driven. However, developing and deploying these applications remains a challenge, primarily due to the lack of programming and runtime support. This paper addresses these challenges and presents a programming system for end-to-end sensor/actuator-based scientific and engineering applications. Specifically, the programming system provides semantically meaningful abstractions and runtime mechanisms for integrating sensor systems with computational models for scientific processes, and for in-network data processing such as aggregation, adaptive interpolation and assimilation. The overall architecture of the programming system and the design of its key components, as well as its prototype implementation, are described. An end-to-end dynamic data-driven oil reservoir application that combines reservoir simulation models with sensors/actuators in an instrumented oilfield is used as a case study to demonstrate the operation of the programming system, as well as to experimentally demonstrate its effectiveness and performance.

Nanyan Jiang, Manish Parashar
Feature Clustering for Data Steering in Dynamic Data Driven Application Systems

In this paper, we describe how feature clustering on real-world cell-phone data can be used to locate the impact area of emergency events. We first examine the effect of two emergency events on the call activity in the areas surrounding the events. We then investigate how the time series of the affected areas behave relative to the time series of their respective neighboring areas. Finally, we examine the differences in hierarchical clusterings of the time series of the affected areas and neighboring areas.

Alec Pawling, Greg Madey
An Ensemble Kalman-Particle Predictor-Corrector Filter for Non-Gaussian Data Assimilation

An Ensemble Kalman Filter (EnKF, the predictor) is used to make a large change in the state, followed by a Particle Filter (PF, the corrector), which assigns importance weights to describe a non-Gaussian distribution. The importance weights are obtained by nonparametric density estimation. It is demonstrated on several numerical examples that the new predictor-corrector filter combines the advantages of the EnKF and the PF and that it is suitable for high dimensional states which are discretizations of solutions of partial differential equations.

Jan Mandel, Jonathan D. Beezley
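As a rough, one-dimensional illustration of the corrector idea described above (not the authors' algorithm), the sketch below computes particle-filter importance weights for an EnKF-updated ensemble using Gaussian kernel density estimates; the toy observation model and all names are assumptions.

    # Sketch: importance weights ~ likelihood * prior / proposal, with prior and
    # proposal (post-EnKF) densities estimated nonparametrically (Gaussian KDE).
    import numpy as np

    def kde(points, centers, h):
        # Gaussian kernel density estimate built from 'centers', evaluated at 'points'.
        d = (points[:, None] - centers[None, :]) / h
        return np.exp(-0.5 * d ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

    def predictor_corrector_weights(prior_ens, enkf_ens, obs, obs_std, h=0.3):
        prior_pdf = kde(enkf_ens, prior_ens, h)        # density of the forecast ensemble
        prop_pdf = kde(enkf_ens, enkf_ens, h)          # density of the EnKF proposal
        lik = np.exp(-0.5 * ((obs - enkf_ens) / obs_std) ** 2)   # identity observation operator
        w = lik * prior_pdf / prop_pdf
        return w / w.sum()

    rng = np.random.default_rng(0)
    prior = rng.normal(0.0, 1.0, 100)                  # forecast ensemble (predictor input)
    enkf = prior + 0.4 * (1.2 - prior)                 # stand-in for an EnKF update toward obs=1.2
    w = predictor_corrector_weights(prior, enkf, obs=1.2, obs_std=0.5)
    resampled = rng.choice(enkf, size=enkf.size, p=w)  # corrector resampling
    print(resampled.mean())
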
Computational Steering Strategy to Calibrate Input Variables in a Dynamic Data Driven Genetic Algorithm for Forest Fire Spread Prediction

This work describes a Dynamic Data Driven Genetic Algorithm (DDDGA) for improving wildfire evolution prediction. We propose a universal computational steering strategy to automatically adjust certain input data values of forest fire simulators, one that works independently of the underlying propagation model. This method has been implemented in parallel, and the experiments performed demonstrate its ability to overcome input data uncertainty and to reduce the execution time of the whole prediction process.

Mónica Denham, Ana Cortés, Tomás Margalef
Injecting Dynamic Real-Time Data into a DDDAS for Forest Fire Behavior Prediction

This work presents a novel idea for forest fire prediction based on Dynamic Data Driven Application Systems. We developed a system capable of assimilating data at execution time and conducting the simulation according to those measurements. We used a conventional simulator and created a methodology capable of removing parameter uncertainty. To test this methodology, several experiments were performed based on southern California fires.

Roque Rodríguez, Ana Cortés, Tomás Margalef
Event Correlations in Sensor Networks

In this paper we present a novel method for mining the correlations of events in sensor networks, extracting correlation patterns of sensors' behaviors with an unsupervised algorithm based on a hash table. The goal is to discover anomalous events in a large sensor network whose structure is unknown. Our algorithm enables users to select the correlation confidence level and display only the significant event correlations. Our experimental results show that it can discover significant event correlations in both continuous and discrete signals from heterogeneous sensor networks. Applications include smart building design and large network data mining.

Ping Ni, Li Wan, Yang Cai

Workshop on Computational Finance and Business Intelligence

Frontmatter
Chairs’ Introduction to Workshop on Computational Finance and Business Intelligence

We have been organizing the Workshop on Computational Finance and Business Intelligence (CFBI) at the International Conference on Computational Science (ICCS) since 2003. This workshop at ICCS, Baton Rouge, Louisiana, U.S.A., May 25-27, 2009, focuses on computational science aspects of asset and derivatives pricing, financial risk management, and related business intelligence topics. Topics include, but are not limited to, modeling, numeric computation, algorithmic and complexity issues in arbitrage, asset pricing, futures and option pricing, risk management, credit assessment, interest rate determination, insurance, foreign exchange rate forecasting, online auctions, cooperative game theory, general equilibrium, information pricing, network bandwidth pricing, rational expectations, repeated games, etc.

Yong Shi, Shouyang Wang, Xiaotie Deng
Lag-Dependent Regularization for MLPs Applied to Financial Time Series Forecasting Tasks

The application of multilayer perceptrons to forecasting the future value of some time series based on past (or lagged) values of the time series usually requires very careful selection of the number of lags to be used as inputs, and this must usually be determined empirically. This paper proposes a regularization technique by which the influence that a lag has in determining the forecast value decreases exponentially with the lag, and is consistent with the intuitive notion that recent values should have more influence than less recent values in predicting future values. This means that in principle an infinite number of dimensions could be used. Empirical results show that the regularization technique yields superior performance on out-of-sample data compared with approaches that use a fixed number of inputs without lag-dependent regularization.

Andrew Skabar
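The abstract leaves the exact penalty unspecified; as an illustrative sketch (the decay form, variable names, and shapes are assumptions, not the paper's formulation), the snippet below applies a weight-decay term to an MLP's input weights that grows exponentially with the lag index, so distant lags are shrunk more strongly.

    # Sketch: lag-dependent weight decay for an MLP's input-to-hidden weights.
    import numpy as np

    def lag_penalty(W_in, base_lambda=1e-3, growth=1.5):
        # W_in has shape (n_lags, n_hidden); row k holds weights fed by lag k
        # (k = 0 is the most recent value, larger k is further in the past).
        n_lags = W_in.shape[0]
        lam = base_lambda * growth ** np.arange(n_lags)      # lambda_k grows with the lag
        return float((lam[:, None] * W_in ** 2).sum())

    W = np.random.default_rng(0).normal(size=(8, 16))        # 8 lags, 16 hidden units
    print(lag_penalty(W))
    # The total training loss would then be: data_error + lag_penalty(W_in).
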
Bias-Variance Analysis for Ensembling Regularized Multiple Criteria Linear Programming Models

Regularized Multiple Criteria Linear Programming (RMCLP) models have recently been shown to be effective for data classification. While these models are becoming increasingly important for the data mining community, very little work has been done on systematically investigating RMCLP models from a common machine learning perspective. The absence of such theoretical analysis leaves important questions, such as whether RMCLP is a strong and stable learner, unanswered in practice. In this paper, we carry out a systematic investigation of RMCLP by using a well-known statistical analysis approach, bias-variance decomposition. We decompose RMCLP's error into three parts: bias error, variance error and noise error. Our experiments and observations indicate that RMCLP's error mainly comes from its bias error, whereas its variance error remains relatively low. From this we conclude that RMCLP is stable but not strong. Consequently, employing a boosting-based ensembling mechanism will most likely further improve RMCLP models to a large extent.

Peng Zhang, Xingquan Zhu, Yong Shi
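The decomposition itself can be estimated empirically; the sketch below is a minimal bootstrap estimate of bias and variance shown for squared loss (the classification-error decomposition used in the paper is analogous but defined over predicted labels), with synthetic data and a stand-in learner as assumptions.

    # Sketch: estimating bias and variance of any learner by bootstrap resampling.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def bias_variance(fit_predict, X, y, X_test, y_test, n_rounds=50, seed=0):
        rng = np.random.default_rng(seed)
        preds = np.empty((n_rounds, len(X_test)))
        for r in range(n_rounds):
            idx = rng.integers(0, len(X), size=len(X))       # bootstrap replicate
            preds[r] = fit_predict(X[idx], y[idx], X_test)   # train and predict
        mean_pred = preds.mean(axis=0)
        bias2 = np.mean((mean_pred - y_test) ** 2)           # squared bias (plus noise)
        variance = np.mean(preds.var(axis=0))                # average prediction variance
        return bias2, variance

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1)); y = np.sin(X[:, 0]) + rng.normal(0, 0.3, 200)
    Xt = rng.uniform(-3, 3, size=(100, 1)); yt = np.sin(Xt[:, 0])
    fp = lambda Xtr, ytr, Xte: DecisionTreeRegressor(max_depth=3).fit(Xtr, ytr).predict(Xte)
    print(bias_variance(fp, X, y, Xt, yt))
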
Knowledge-Rich Data Mining in Financial Risk Detection

Financial risks refer to risks associated with financing, such as credit risk, business risk, debt risk and insurance risk, and these risks may put firms in distress. Early detection of financial risks can help credit grantors reduce risk and losses, establish appropriate policies for different credit products, and increase revenue. As the size of financial databases increases, large-scale data mining techniques that can process and analyze massive amounts of electronic data in a timely manner become a key component of many financial risk detection strategies and continue to be a subject of active research. However, the knowledge gap between the results that data mining methods can provide and the actions that can be taken based on them remains large in financial risk detection. The goal of this research is to bring the concept of chance discovery into financial risk detection in order to build a knowledge-rich data mining process and thereby increase the usefulness of data mining results. Using six financial risk related datasets, this research illustrates that the combination of data mining techniques and chance discovery can provide knowledge-rich data mining results to decision makers, promote the awareness of previously unnoticed chances, and increase the actionability of data mining results.

Yi Peng, Gang Kou, Yong Shi
Smoothing Newton Method for L1 Soft Margin Data Classification Problem

A smoothing Newton method is given for solving the dual of the l1 soft margin data classification problem. A new merit function is introduced to handle the high-dimensional variables arising in data mining problems. Preliminary numerical tests show that the algorithm is very promising.

Weibing Chen, Hongxia Yin, Yingjie Tian
Short-Term Capital Flows in China: Trend, Determinants and Policy Implications

The volatility of international capital flows has further increased, both in volume and speed, since the outbreak of the subprime crisis originating in America. The orientation of international capital flows has blurred because of downward expectations on growth rates in major countries. Since short-term capital flows have gradually become an important part of international capital flows in China over the past decade, their volatility may severely affect the development of the Chinese economy. A structural VECM model was built to explore the determinants of net short-term capital flows in China. The conclusions of this study are that net flows in China are largely determined by real estate prices, circulating stock value, exchange rate expectations and interest rates. On that basis, some policy suggestions are proposed.

Haizhen Yang, Yanping Zhao, Yujing Ze
Finding the Hidden Pattern of Credit Card Holder’s Churn: A Case of China

In this paper, we propose a framework for the whole process of credit card holder churn prediction. In order to make the knowledge extracted by data mining more executable, we take the execution of the model into account throughout the whole process, from variable design to model understanding. Using logistic regression, we build a model based on data from more than 5000 credit card holders. The model performs very well in tests.

Guangli Nie, Guoxun Wang, Peng Zhang, Yingjie Tian, Yong Shi
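For orientation, a minimal churn-scoring sketch in the spirit of the abstract is given below; the behaviour variables, sample sizes, and labels are synthetic assumptions, not the authors' data.

    # Sketch: logistic regression churn model on hypothetical card-holder variables.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(5000, 6))                 # e.g. spend trend, balance, tenure, ...
    churn = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=5000) > 1.0).astype(int)

    X_tr, X_te, y_tr, y_te = train_test_split(X, churn, test_size=0.3, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
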
Nearest Neighbor Convex Hull Classification Method for Face Recognition

In this paper, a nearest neighbor convex hull (NNCH) classification approach is used for face recognition. In the NNCH classifier, the convex hull of the training samples of a class is taken as the distribution estimate of that class, and the Euclidean distance from a test sample to the convex hull (called the convex hull distance) is taken as the similarity measure for classification. Experiments on face data show that the nearest neighbor convex hull approach leads to better results than those of the 1-nearest neighbor (1-NN) classifier and SVM classifiers.

Xiaofei Zhou, Yong Shi
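The convex hull distance can be phrased as a small constrained least-squares problem; the following sketch (an illustrative formulation, not the authors' implementation) computes it by optimizing over convex combinations of a class's training samples and assigns the test point to the nearest hull.

    # Sketch: distance from a test sample to the convex hull of one class's samples.
    import numpy as np
    from scipy.optimize import minimize

    def convex_hull_distance(X_class, x):
        n = len(X_class)
        obj = lambda a: np.sum((X_class.T @ a - x) ** 2)          # ||sum_i a_i x_i - x||^2
        cons = ({'type': 'eq', 'fun': lambda a: a.sum() - 1.0},)  # sum(a) = 1, a >= 0
        res = minimize(obj, np.full(n, 1.0 / n), bounds=[(0, 1)] * n,
                       constraints=cons, method='SLSQP')
        return np.sqrt(res.fun)

    def nnch_classify(x, class_samples):
        # Assign x to the class whose convex hull is nearest.
        return min(class_samples, key=lambda c: convex_hull_distance(class_samples[c], x))

    classes = {0: np.array([[0., 0.], [1., 0.], [0., 1.]]),
               1: np.array([[3., 3.], [4., 3.], [3., 4.]])}
    print(nnch_classify(np.array([0.4, 0.3]), classes))           # expected: 0
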
The Measurement of Distinguishing Ability of Classification in Data Mining Model and Its Statistical Significance

In order to test to what extent data mining models can distinguish between observation points of different types, we introduce indicators that measure the difference between the score distributions of positive and negative points. First, we use the overlapping area of the two score distributions, the overlapping degree, to describe this difference, and discuss its properties. Second, we put forward graphical and quantitative indicators of a model's distinguishing ability: the Lorenz curve, the Gini coefficient and AR, as well as the related ROC curve and AUC. We prove that AUC and AR are exactly linearly related. Finally, we construct a nonparametric statistic for AUC; in contrast to the K-S statistic, however, we cannot conclude that the null hypothesis becomes harder to reject when negative points make up a smaller proportion.

Lingling Zhang, Qingxi Wang, Jie Wei, Xiao Wang, Yong Shi
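The linear relation mentioned in the abstract is AR = 2·AUC − 1; the short sketch below computes AUC from rank statistics on a toy score vector (the scores and labels are illustrative) and prints the corresponding AR/Gini value.

    # Sketch: AUC via the Mann-Whitney U statistic, and AR (Gini) = 2*AUC - 1.
    import numpy as np

    def auc(scores, labels):
        # Probability that a random positive scores higher than a random negative
        # (ties ignored for simplicity in this sketch).
        order = np.argsort(scores)
        ranks = np.empty(len(scores)); ranks[order] = np.arange(1, len(scores) + 1)
        n_pos = labels.sum(); n_neg = len(labels) - n_pos
        u = ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2
        return u / (n_pos * n_neg)

    scores = np.array([0.9, 0.8, 0.35, 0.6, 0.2, 0.1])
    labels = np.array([1, 1, 0, 1, 0, 0])
    a = auc(scores, labels)
    print("AUC =", a, " AR (Gini) =", 2 * a - 1)
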
Maximum Expected Utility of Markovian Predicted Wealth

This paper proposes an ex-post comparison of portfolio selection strategies based on the assumption that portfolio returns evolve as Markov processes. We compare the ex-post final wealth obtained by maximizing the expected negative exponential utility and the expected power utility for different risk aversion parameters. In particular, we consider strategies where investors recalibrate their portfolios at a fixed temporal horizon, and we compare the wealth obtained either under the assumption that returns follow a Markov chain or under the assumption that the data are independent and identically distributed. Finally, we implement a heuristic algorithm for the global optimum in order to overcome the intrinsic computational complexity of the proposed Markovian models.

Enrico Angelelli, Sergio Ortobelli Lozza
Continuous Time Markov Chain Model of Asset Prices Distribution

The aim of this paper is to introduce a new model of the distribution of financial asset prices. It is known that the probability distribution of asset prices or returns is unknown in reality. A general model of asset prices based on continuous time Markov chains is proposed, in which the interarrival times between two price states are approximated by a mixture of exponential distributions. A numerical-analytic approach is used to obtain the probability distribution of asset prices. The developed software allows creating the space of asset prices, the matrix of transition rates among states, and a system of equations for the steady state probabilities of the price states, and solves the system of equations by the method of imbedded Markov chains.

Eimutis Valakevičius
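The steady-state step described above reduces to solving the balance equations; a minimal sketch is shown below for a hypothetical three-state rate matrix (the rates are illustrative, not taken from the paper).

    # Sketch: steady-state distribution of a small continuous-time Markov chain of
    # price states, obtained by solving pi Q = 0 together with sum(pi) = 1.
    import numpy as np

    Q = np.array([[-0.6,  0.4,  0.2],
                  [ 0.3, -0.5,  0.2],
                  [ 0.1,  0.5, -0.6]])   # off-diagonal entries: transition rates between price states

    # Replace one balance equation by the normalization constraint and solve.
    A = np.vstack([Q.T[:-1], np.ones(3)])
    b = np.array([0.0, 0.0, 1.0])
    pi = np.linalg.solve(A, b)
    print("steady-state distribution over price states:", pi)
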
Foreign Exchange Rates Forecasting with a C-Ascending Least Squares Support Vector Regression Model

In this paper, a modified least squares support vector regression (LSSVR) model, called C-ascending least squares support vector regression (C-ALSSVR), is proposed for foreign exchange rates forecasting. The generic idea of the proposed C-ALSSVR model is based on the prior knowledge that different data points often provide different information for modeling and that more weight should be given to those data points containing more information. The C-ALSSVR can be obtained by a simple modification of the regularization parameter in LSSVR, whereby more weight is given to recent least squares errors than to distant least squares errors while keeping the regularized term in its original form. For verification purposes, the performance of the C-ALSSVR model is evaluated on three typical foreign exchange rates. The experimental results obtained demonstrate that the C-ALSSVR model is a very promising tool for foreign exchange rates forecasting.

Lean Yu, Xun Zhang, Shouyang Wang
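To make the "ascending C" idea concrete, the sketch below fits a least squares SVR whose per-sample regularization parameter grows geometrically toward the most recent observations, so recent errors are penalized more; the kernel, weighting scheme, and synthetic return series are assumptions, not the paper's specification.

    # Sketch: LSSVR with a time-ascending regularization parameter C_i.
    import numpy as np

    def rbf_kernel(X, Z, gamma=0.5):
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def fit_ascending_lssvr(X, y, C0=10.0, rho=1.05):
        n = len(y)
        C = C0 * rho ** np.arange(n)                 # ascending weights: newest samples largest
        K = rbf_kernel(X, X)
        A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K + np.diag(1.0 / C)]])
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        b, alpha = sol[0], sol[1:]
        return lambda Xq: rbf_kernel(Xq, X) @ alpha + b

    # One-step-ahead forecast from lagged (hypothetical) FX returns.
    rng = np.random.default_rng(0)
    r = rng.normal(0, 0.01, 300)
    lags = 5
    X = np.column_stack([r[i:len(r) - lags + i] for i in range(lags)])
    y = r[lags:]
    predict = fit_ascending_lssvr(X, y)
    print(predict(X[-1:]))
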
Multiple Criteria Quadratic Programming for Financial Distress Prediction of the Listed Manufacturing Companies

Nowadays, how to effectively predict financial distress has become an important issue for companies, investors and many other user groups. The purpose of this paper is to apply the Multiple Criteria Quadratic Programming (MCQP) model to predict the financial distress of listed manufacturing companies. First, we introduce the formulation of the MCQP model. Then we use ten-fold cross validation to test the stability and accuracy of the MCQP model on a real-life dataset of listed companies' financial ratios. Finally, we compare the MCQP model with two other well-known models: Logistic Regression and SVM. The experimental results show that MCQP is accurate and stable for predicting the financial distress of listed manufacturing companies. Consequently, we can safely say that MCQP is capable of providing stable and credible results in predicting financial distress.

Ying Wang, Peng Zhang, Guangli Nie, Yong Shi
Kernel Based Regularized Multiple Criteria Linear Programming Model

Although the Regularized Multiple Criteria Linear Programming (RMCLP) model has shown its effectiveness in classification problems, its inherent linear formulation limits it to solving only linear classification problems. To extend RMCLP to non-linear problems, in this paper we propose a kernel based RMCLP model by using the form $w = \sum\limits^{N}_{i=1}\beta_{i}\phi(x_{i})$ to replace the original weight w in the RMCLP model. Empirical studies on synthetic and real-life datasets demonstrate that our new model is capable of classifying non-linear datasets. Moreover, comparisons with SVM and MCQP also show that our new model is superior to other non-linear models in classification problems.

Yuehua Zhang, Peng Zhang, Yong Shi
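The substitution above is the standard kernel trick: once w = Σᵢ βᵢ φ(xᵢ), the score w·φ(x) + b depends only on the Gram matrix. The sketch below illustrates that substitution; the RMCLP optimization itself is not reproduced, and β is fitted with a simple regularized least-squares stand-in for illustration only.

    # Sketch: kernelized score sum_i beta_i K(x_i, x) + b after w = sum_i beta_i phi(x_i).
    import numpy as np

    def rbf(X, Z, gamma=0.5):
        return np.exp(-gamma * ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1))

    def fit_beta(X, y, lam=1.0):
        K = rbf(X, X)
        return np.linalg.solve(K + lam * np.eye(len(y)), y)    # stand-in fit, not RMCLP

    def score(X_train, beta, X_new):
        return rbf(X_new, X_train) @ beta                       # kernelized w . phi(x)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 2)); y = np.sign(X[:, 0] * X[:, 1])   # a non-linear labeling
    beta = fit_beta(X, y)
    print(np.sign(score(X, beta, X[:5])), y[:5])
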
Retail Exposures Credit Scoring Models for Chinese Commercial Banks

This paper first discusses several credit scoring models and their development history, then designs a target system for individual credit scoring using individual housing loan data from a state-owned commercial bank and the logistic method, and establishes and tests an individual credit scoring model. Finally, the paper discusses the application of the individual credit scoring model in the consumer credit domain and presents corresponding conclusions and policy recommendations.

Yihan Yang, Guangli Nie, Lingling Zhang
The Impact of Financial Crisis of 2007-2008 on Crude Oil Price

For better estimation of the impact of extreme events on crude oil price volatility, an EMD-based event analysis approach is proposed. In this method, the time series to be analyzed is first decomposed into several intrinsic modes with different time scales, from fine to coarse, plus an average trend. The decomposed modes respectively capture the fluctuations caused by the extreme event or by other factors during the analyzed period. The total impact of an extreme event is contained in only one or a few dominant modes, while the other modes provide valuable information on the remaining factors. The effects of the financial crisis of 2007-2008 on crude oil prices are analyzed with this method, and the empirical results show that the EMD-based event analysis approach provides a feasible solution for estimating the impact of extreme events on crude oil prices.

Xun Zhang, Lean Yu, Shouyang Wang

Joint Workshop on Tools for Program Development and Analysis in Computational Science and Software Engineering for Large-Scale Computing

Frontmatter
Preface for the Joint Workshop on Tools for Program Development and Analysis in Computational Science and Software Engineering for Large-Scale Computing

Today, computers and computational methods are increasingly important and powerful tools for science and engineering. Yet, using them effectively and efficiently requires both expert knowledge of the respective application domain and solid experience in applying the technologies. Only this combination allows new and faster advancement in the area of application. The same is true for establishing new computational concepts as regular methods in the field of application. This applies to both quantitative improvement (e.g. through parallel scalability) and qualitative progress (e.g. through better algorithms).

Andreas Knüpfer, Arndt Bode, Dieter Kranzlmüller, Daniel Rodrìguez, Roberto Ruiz, Jie Tao, Roland Wismüller, Jens Volkert
Snapshot-Based Data Backup Scheme: Open ROW Snapshot

In this paper, we present the design and implementation details of Open ROW Snapshot, a data backup scheme based on the snapshot approach. As the amount of data stored in storage systems increases tremendously, data protection techniques have become more important for providing data availability and reliability. The snapshot is one such data protection technique and has been adopted in many file systems. However, in large-scale storage systems, adopting a snapshot technique to prevent data loss from intentional or accidental intrusion is not an easy task because the amount of data being backed up at a given time interval may be huge. In this paper, we present Open ROW Snapshot, which has been implemented based on the file system-based snapshot approach. Open ROW Snapshot provides a widely portable structure and causes less I/O processing overhead than the ROW (Redirect-on-Write) method does. Furthermore, Open ROW Snapshot provides the capability of maintaining the disk space assigned to snapshot images within a consistently sized disk portion. We present performance results for Open ROW Snapshot obtained on the Linux cluster located at Sejong University.

Jinsun Suk, Moonkyung Kim, Hyun Chul Eom, Jaechun No
Managing Provenance in iRODS

Provenance is an important issue nowadays. Provenance data not only gives a history of events; it also provides enough information to verify the authenticity of the data and to determine its quality. The data grid management system iRODS comes with metadata that can be used as provenance data. Currently, iRODS's metadata is not sufficient for tracking and reconstructing the procedures applied to data. In this paper, we describe the provenance needs of iRODS and briefly survey current provenance and provenance-enabled workflow systems. We describe an architecture that can be used to manage provenance in iRODS (and other systems) in a fault-tolerant way.

Andrea Weise, Adil Hasan, Mark Hedges, Jens Jensen
Instruction Hints for Super Efficient Data Caches

The data cache is a commodity in modern microprocessor systems. Although the size of data caches keeps growing, application working sets grow faster. As a result, it is usually not possible to store the complete working set in the cache memory.

This paper proposes an approach that allows the data accesses of some load/store instructions to bypass the cache memory. In this case, the cache space can be reserved for storing more frequently reused data. We implemented an analysis algorithm to identify the specific instructions and a simulator to model the novel cache architecture. The approach was verified using applications from the MediaBench/MiBench benchmark suites, and for all except one application we achieved large gains in performance.

Jie Tao, Dominic Hillenbrand, Holger Marten
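The benefit of such bypass hints can be seen even in a toy model; the sketch below (an illustrative direct-mapped cache, not the paper's simulator or architecture) keeps a small reused working set cached while a streaming region marked as "bypass" goes straight to memory instead of evicting it.

    # Sketch: direct-mapped cache where hinted accesses bypass the cache entirely.
    class DirectMappedCache:
        def __init__(self, n_lines=64, line_size=32):
            self.n_lines, self.line_size = n_lines, line_size
            self.tags = [None] * n_lines
            self.hits = self.misses = 0

        def access(self, addr, bypass=False):
            if bypass:                       # hinted instruction: go straight to memory
                self.misses += 1
                return
            line = (addr // self.line_size) % self.n_lines
            tag = addr // (self.line_size * self.n_lines)
            if self.tags[line] == tag:
                self.hits += 1
            else:
                self.misses += 1
                self.tags[line] = tag        # fill the line on a normal miss

    cache = DirectMappedCache()
    for rep in range(10):
        for a in range(0, 256, 4):                                   # reused working set
            cache.access(a)
        for a in range(4096 * (rep + 1), 4096 * (rep + 1) + 1024, 4):  # streaming, bypassed
            cache.access(a, bypass=True)
    print("hits:", cache.hits, "misses:", cache.misses)
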
A Holistic Approach for Performance Measurement and Analysis for Petascale Applications

Contemporary high-end Terascale and Petascale systems are composed of hundreds of thousands of commodity multi-core processors interconnected with high-speed custom networks. Performance characteristics of applications executing on these systems are a function of system hardware and software as well as workload parameters. Therefore, it has become increasingly challenging to measure, analyze and project performance using a single tool on these systems. In order to address these issues, we propose a methodology for performance measurement and analysis that is aware of applications and the underlying system hierarchies. On the application level, we measure cost distribution and runtime dependent values for different components of the underlying programming model. On the system front, we measure and analyze information gathered for unique system features, particularly shared components in the multi-core processors. We demonstrate our approach using a Petascale combustion application called S3D on two high-end Teraflops systems, Cray XT4 and IBM Blue Gene/P, using a combination of hardware performance monitoring, profiling and tracing tools.

Heike Jagode, Jack Dongarra, Sadaf Alam, Jeffrey Vetter, Wyatt Spear, Allen D. Malony
A Generic and Configurable Source-Code Instrumentation Component

A common prerequisite for a number of debugging and performance-analysis techniques is the injection of auxiliary program code into the application under investigation, a process called instrumentation. To accomplish this task, source-code preprocessors are often used. Unfortunately, existing preprocessing tools either focus only on a very specific aspect or use hard-coded commands for instrumentation. In this paper, we examine which basic constructs are required to specify a user-defined routine entry/exit instrumentation. This analysis serves as a basis for a generic instrumentation component working on the source-code level, where the instructions to be inserted can be flexibly configured. We evaluate the identified constructs with our prototypical implementation and show that they are sufficient to fulfill the needs of a number of today's performance-analysis tools.

Markus Geimer, Sameer S. Shende, Allen D. Malony, Felix Wolf
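To illustrate the routine-entry injection idea only (real instrumenters parse the source properly; the hook names and regex here are assumptions, not the tool described in the paper), a toy source-level instrumenter might look like this:

    # Sketch: naively inject a configurable entry hook at C function boundaries.
    import re

    ENTER = '  profile_enter("{name}");'       # hypothetical runtime hook

    def instrument(source):
        # crude match of "type name(args) {" at the start of a line
        pattern = re.compile(r'^(\w[\w\s\*]*?(\w+)\s*\([^)]*\)\s*)\{', re.M)
        def repl(m):
            return m.group(1) + '{\n' + ENTER.format(name=m.group(2))
        # Exit instrumentation would additionally require rewriting return statements.
        return pattern.sub(repl, source)

    code = "int add(int a, int b) {\n  return a + b;\n}\n"
    print(instrument(code))
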

Workshop on Collaborative and Cooperative Environments

Frontmatter
Dynamic VO Establishment in Distributed Heterogeneous Business Environments

As modern SOA and Grid infrastructures move from academic and research environments to more challenging business and commercial applications, issues such as control of resource sharing become crucially important. In order to manage and share resources within distributed environments, the idea of Virtual Organizations (VO) emerged, which enables sharing only subsets of resources among the partners of such a VO within potentially larger settings. This paper describes the Framework for Intelligent Virtual Organizations (FiVO), focusing on its functionality for enforcing security (authentication and authorization) in dynamically deployed Virtual Organizations. The paper presents the overall architecture of the framework along with the different security settings FiVO can support within one Virtual Organization.

Bartosz Kryza, Lukasz Dutka, Renata Slota, Jacek Kitowski
Interactive Control over a Programmable Computer Network Using a Multi-touch Surface

This article introduces the Interactive Network concept and describes the design and implementation of the first prototype. In an Interactive Network, humans become an integral part of the control system for managing programmable networks and grid networks. The implementation consists of a multi-touch table that allows multiple persons to manage and monitor a programmable network simultaneously. The degree of interactive control offered by the multi-touch interface is illustrated by the ability to create and manipulate paths that are end-to-end, multicast, or contain loops. First experiences with the multi-touch table show its potential for collaborative management of large-scale infrastructures.

Rudolf Strijkers, Laurence Muller, Mihai Cristea, Robert Belleman, Cees de Laat, Peter Sloot, Robert Meijer
Eye Tracking and Gaze Based Interaction within Immersive Virtual Environments

Our eyes are input sensors which provide our brains with streams of visual data. They have evolved to be extremely efficient, and they will constantly dart to-and-fro to rapidly build up a picture of the salient entities in a viewed scene. These actions are almost subconscious. However, they can provide telling signs of how the brain is decoding the visuals, and can indicate emotional responses, prior to the viewer becoming aware of them.

In this paper we discuss a method of tracking a user’s eye movements, and use these to calculate their gaze within an immersive virtual environment. We investigate how these gaze patterns can be captured and used to identify viewed virtual objects, and discuss how this can be used as a natural method of interacting with the Virtual Environment. We describe a flexible tool that has been developed to achieve this, and detail initial validating applications that prove the concept.

Adrian Haffegee, Russell Barrow
Collaborative and Parallelized Immersive Molecular Docking

During docking, protein molecules and other small molecules interact to form transient macromolecular complexes. Docking is an integral part of structure-based drug design, and various docking programs are used for in-silico docking. Although these programs have powerful docking algorithms, they have limitations in the three-dimensional visualization of molecules. An immersive environment would bring additional advantages in understanding the molecules being docked. It would enable scientists to fully visualize the molecules to be docked, manipulate their structures and manually dock them before sending the new conformations to a docking algorithm. This could greatly reduce docking time and resource consumption. Docking being an exhaustive process, its parallelization is of utmost importance for faster processing. This paper proposes the use of a collaborative and immersive environment for initially hand-docking molecules, which then uses powerful algorithms in existing parallelized docking programs to decrease computational docking time and resources.

Teeroumanee Nadan, Adrian Haffegee, Kimberly Watson
The gMenu User Interface for Virtual Reality Systems and Environments

Desktop computers are able to provide a user interface with many features that allow the user to perform tasks such as executing applications, loading files and editing data. The gMenu system proposed in this paper is a step closer to having these same facilities in virtual reality systems. The gMenu can currently be used to perform a selection of common tasks provided by a user interface, for example executing or closing virtual reality applications or scenes. It is fully customisable and can be used to create many different styles of menu by both programmers and users. It has also shown promising results in bringing some of the system-based commands into the virtual environment, as well as keeping the functionality and adaptations required by applications. The use cases presented demonstrate a collection of these abilities.

Andrew Dunk, Adrian Haffegee

Eighth International Workshop on Computer Graphics and Geometric Modeling, CGGM 2009

Frontmatter
VIII International Workshop on Computer Graphics and Geometric Modeling - CGGM’2009

This short paper is intended to give our readers a brief insight into the Eighth International Workshop on Computer Graphics and Geometric Modeling (CGGM'2009), held in Baton Rouge, Louisiana (USA), May 25-27, 2009, as part of the ICCS'2009 general conference.

Andrés Iglesias
Reconstruction of Branching Surface and Its Smoothness by Reversible Catmull-Clark Subdivision

In the current research, a new algorithm has been developed to obtain a surface from contours having branches, and a final smooth surface is obtained by reversible Catmull-Clark subdivision. In branching, a particular layer has more than one contour corresponding with the contour at the adjacent layer. The layer having more than one contour is converted into a 3D composite curve by inserting points between the layers. The points are inserted in such a way that the centers of the contours merge into the center of the contour at the adjacent layer. This process is repeated for all layers having branching problems. In the next step, the 3D composite curves are converted into different polyhedra with the help of the contours at the adjacent layers. The numbers of control points at different layers for contours and 3D curves may not be the same; for this case, a special polyhedron construction technique has been developed. The polyhedra are subdivided using reversible Catmull-Clark subdivision to give a smooth surface.

Kailash Jha
A New Algorithm for Image Resizing Based on Bivariate Rational Interpolation

A new method for image resizing by bivariate rational interpolation based on function values and partial derivative values is presented. When an original image is resized in an arbitrary ratio, the first step of the method is constructing the rational interpolant fitting the original surface from which the given image data points are sampled. The resized image can then be obtained simply by re-sampling the interpolation surface. The algorithm shows how to estimate the partial derivative values of the image data points needed for rational interpolation, and at the same time considers the adjustment of the tangent vectors of edge points to keep edges well defined. Various experiments are presented to show the efficiency of the proposed method and to demonstrate that the resized images preserve clear and sharp borders and hence offer more detailed information in real applications.

Shanshan Gao, Caiming Zhang, Yunfeng Zhang
Hardware-Accelerated Sumi-e Painting for 3D Objects

Brushwork and ink dispersion make it difficult to render common 3D objects in the style of sumi-e painting. We use sphere mapping with a brush texture and image processing techniques to simulate brushstrokes and ink dispersion. The whole process is implemented in shaders running on the Graphics Processing Unit (GPU), which allows fast, high-quality rendering of 3D polygonal models in the style of sumi-e painting. We show several results that demonstrate the practicality and benefits of our system.

Joo-Hyun Park, Sun-Jeong Kim, Chang-Geun Song, Shin-Jin Kang
A New Approach for Surface Reconstruction Using Slices

This paper describes a novel algorithm for surface reconstruction from slices. A number of slices are extracted from the given data, oriented along any of the principal axes. Slices are projected onto the XZ plane, and an equal number of traversals takes place for each slice by a cut plane oriented along the X axis. As the cut plane traverses each slice, cut points are extracted. To establish correspondence between two consecutive slices, domain mapping first takes place. Then a heuristic approach is taken, based on comparing the numbers of occurrences of particular cut points between slices. Optimization is performed on the basis of the minimal differences in the numbers of occurrences of particular cut points between consecutive slices. Although the heuristic approach is not flawless, the algorithm is able to reconstruct the surfaces of fairly complex objects. The algorithm is flexible, as the number of slices and the number of traversals can be adjusted depending on the complexity of the object.

Shamima Yasmin, Abdullah Zawawi Talib
Tools for Procedural Generation of Plants in Virtual Scenes

Creating interactive graphics applications that present realistic natural scenes to the user is very difficult. Natural phenomena are very complex and detailed to model, and using traditional modeling techniques takes huge amounts of time and requires skilled artists to obtain good results.

Procedural techniques allow generating complex objects by defining a set of rules and selecting certain parameters. This speeds up the process of content creation and also allows creating objects on the fly, when needed. On-demand generation of scenes enables authors to create potentially infinite worlds.

This survey identifies the main features of the most widely used systems that implement procedural techniques to model plants and natural phenomena, and discusses usability issues.

Armando de la Re, Francisco Abad, Emilio Camahort, M. C. Juan

Workshop on Intelligent Agents in Simulation and Evolvable Systems

Frontmatter
Toward the New Generation of Intelligent Distributed Computing Systems

This paper is an introduction to the works presented in the Intelligent Agents and Evolvable Systems Workshop. The workshop focuses on various applications of agent-oriented systems and on the roles of evolution and of agent interactions in building intelligent systems.

Robert Schaefer, Krzysztof Cetnarowicz, Bojin Zheng, Bartłomiej Śnieżyński
Multi-agent System for Recognition of Hand Postures

A multi-agent system for the recognition of hand postures of the Polish Sign Language is presented in the paper. The system is based on a syntactic pattern recognition approach, namely on parsable ETPL(k) graph grammars. The occurrence of a variety of styles of performing hand postures requires the introduction of many grammar productions that differ from each other only slightly. This makes the construction of a grammar within the parsable ETPL(k) class dubious. Dividing the whole grammar into sub-grammars and distributing them to agents allows one to solve the problem.

Mariusz Flasiński, Janusz Jurek, Szymon Myśliński
From Algorithm to Agent

Although the notion of an agent has been used in computer science for dozens of years now, it is still not very well defined. There seems to be a lack of formal definitions of such concepts as an "object" and an "agent", which makes formal analysis of algorithms developed with their use difficult. We should find a more formal description connected with the basic definition of the algorithm.

In the paper we propose an approach that may help to develop more formal definitions of an agent and an object based on the concept of an algorithm. Starting from the notion of the algorithm, and using the observation that a complex algorithm should be developed by means of decomposition, we propose some ideas on how such notions as object and agent can be considered.

The proposed approach takes into consideration the necessary autonomy of agents and objects as well as the problems of interactions between them, and suggests resolving these problems through communication and observation processes.

The presented concept of an object and an agent makes it possible to derive further, more formal definitions of these notions, to identify their crucial properties, and to pinpoint the main difference between the notion of an object and the notion of an agent.

Krzysztof Cetnarowicz
The Norm Game - How a Norm Fails

We discuss the simulations of the norm game between players at nodes of a directed random network. The final boldness, i.e. the probability of norm breaking by the players, can vary sharply with the initial boldness, jumping from zero to one at some critical value. One of the conditions of this behaviour is that the player who does not punish automatically becomes a defector. The threshold value of the initial boldness can be interpreted as a norm strength. It increases with the punishment and decreases with its cost. Surprisingly, it also decreases with the number of potential punishers. The numerical results are discussed in the context of the statistical data on crimes in Northern Ireland and New Zealand, on divorces in USA, and on the alcohol consumption in Poland.

Antoni Dydejczyk, Krzysztof Kułakowski, Marcin Rybak
Graph Grammar Based Petri Nets Model of Concurrency for Self-adaptive hp-Finite Element Method with Triangular Elements

The paper presents a model of concurrency for the self-adaptive hp-Finite Element Method (hp-FEM) with triangular elements. The model concerns the process of initial mesh generation as well as mesh adaptation. The model is obtained by defining CP-graph grammar productions as basic undivided tasks for both the mesh generation and adaptation algorithms. The order of execution of graph grammar productions is set by control diagrams. Finally, Petri nets are created based on the control diagrams. The self-adaptive hp-FEM algorithm modeled as a Petri net can be analyzed for deadlocks, starvation or infinite execution.

Arkadiusz Szymczak, Maciej Paszyński
Multi-agent Crisis Management in Transport Domain

A multi-agent system that solves static and dynamic versions of a transport problem (the Pickup and Delivery Problem with Time Windows) in the presence of crises is presented. Scenarios in which agents handle different kinds of crises (especially vehicle failures and traffic jams) are described. Results summarising the functioning of the system for solving the classical PDPTW, as well as the influence of crises and of the applied crisis-handling algorithms on the quality of the obtained solutions, are presented.

Michał Konieczny, Jarosław Koźlak, Małgorzata Żabińska
Agent-Based Model and Computing Environment Facilitating the Development of Distributed Computational Intelligence Systems

In the paper a simple formalism is proposed to describe the hierarchy of multi-agent systems, which is particularly suitable for the design of a certain class of distributed computational intelligence systems. The mapping between the formalism and the existing computing environment AgE is also sketched out.

Aleksander Byrski, Marek Kisiel-Dorohinicki
Graph Transformations for Modeling hp-Adaptive Finite Element Method with Mixed Triangular and Rectangular Elements

The paper presents a composition graph (CP-graph) grammar, consisting of a set of CP-graph transformations, suitable for modeling transformations of two-dimensional meshes with rectangular elements mixed with triangular elements. The mixed meshes are utilized by the self-adaptive hp Finite Element Method (FEM) extended to support triangular and rectangular elements. The hp-FEM generates a sequence of mixed triangular and rectangular element meshes providing exponential convergence of the numerical error with respect to the mesh size. This is done by executing several h or p refinements over an initial mesh. The mixed finite element mesh is represented by an attributed CP-graph. The proposed graph transformations model the initial mesh generation as well as the mesh refinements. The proposed extended graph grammar has been defined and verified using implemented software.

Anna Paszyńska, Maciej Paszyński, Ewa Grabska
Agent-Based Environment for Knowledge Integration

Representing knowledge with the use of ontology description languages offers several advantages arising from knowledge reusability, the possibility of carrying out reasoning processes, and the use of existing concepts of knowledge integration. In this work we present an environment for the integration of knowledge expressed in such a way. Guaranteeing knowledge integration is an important element in the development of the Semantic Web. Thanks to this, it is possible to obtain access to services which offer knowledge contained in various distributed databases associated with semantically described web portals. We present the advantages of the multi-agent approach in solving this problem. Then, we describe an example of its application in systems supporting company management knowledge in the process of constructing supply chains.

Anna Zygmunt, Jarosław Koźlak, Leszek Siwik
Agent Strategy Generation by Rule Induction in Predator-Prey Problem

This paper proposes the application of rule induction for generating agent strategies. This learning method is tested on a predator-prey domain, in which predator agents learn how to capture prey. We assume that the proposed learning mechanism will be beneficial in all domains in which agents can determine the direct results of their actions. Experimental results show that the learning process is fast. The multi-agent communication aspect is also taken into account. We show that under specific conditions transferring learned rules benefits the learning agents.

Bartłomiej Śnieżyński
Handling Ambiguous Inverse Problems by the Adaptive Genetic Strategy hp–HGS

We propose a new multi-deme, adaptive genetic strategy hp–HGS that allows solving ill-posed parametric inverse problems that may possess multiple solutions. The strategy was obtained by combining the hp-adaptive Finite Element Method with the Hierarchic Genetic Strategy. Its efficiency results from the coupled adaptation of the accuracy of solving the optimization problem and the accuracy of the hp–FEM direct problem solver. We present a simple L-shape domain benchmark that possesses exactly two solutions. Test results show how the tuning of hp–HGS may affect the number of solutions found. Moreover, we discuss the artifacts that may appear for particular settings of the genetic operations.

Barbara Barabasz, Robert Schaefer, Maciej Paszyński
Backmatter
Metadata
Title
Computational Science – ICCS 2009
edited by
Gabrielle Allen
Jarosław Nabrzyski
Edward Seidel
Geert Dick van Albada
Jack Dongarra
Peter M. A. Sloot
Copyright year
2009
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-01973-9
Print ISBN
978-3-642-01972-2
DOI
https://doi.org/10.1007/978-3-642-01973-9