
2004 | Book

Computational Science and Its Applications – ICCSA 2004

International Conference, Assisi, Italy, May 14-17, 2004, Proceedings, Part II

Edited by: Antonio Laganá, Marina L. Gavrilova, Vipin Kumar, Youngsong Mun, C. J. Kenneth Tan, Osvaldo Gervasi

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The natural mission of Computational Science is to tackle all sorts of human problems and to work out intelligent automata aimed at alleviating the burden of working out suitable tools for solving complex problems. For this reason Computational Science, though originating from the need to solve the most challenging problems in science and engineering (computational science is the key player in the fight to gain fundamental advances in astronomy, biology, chemistry, environmental science, physics and several other scientific and engineering disciplines), is increasingly turning its attention to all fields of human activity. In all activities, in fact, intensive computation, information handling, knowledge synthesis, the use of ad-hoc devices, etc. increasingly need to be exploited and coordinated regardless of the location of both the users and the (various and heterogeneous) computing platforms. As a result, the key to understanding the explosive growth of this discipline lies in two adjectives that more and more appropriately refer to Computational Science and its applications: interoperable and ubiquitous. Numerous examples of ubiquitous and interoperable tools and applications are given in the present four LNCS volumes containing the contributions delivered at the 2004 International Conference on Computational Science and its Applications (ICCSA 2004) held in Assisi, Italy, May 14–17, 2004.

Table of contents

Frontmatter

Grid Computing Workshop

Advanced Simulation Technique for Modeling Multiphase Fluid Flow in Porous Media

Modeling fluid flow in a porous medium is a challenging computational problem: it involves highly heterogeneous distributions of porous-medium properties and varied fluid flow characteristics. High performance computing has proved a useful technique for designing better flow management schemes. In this paper, various parallel implementation approaches are introduced, based mainly on high performance computing toolkits such as ADIFOR, PETSc, and Cactus. An accurate Jacobian matrix computation scheme provided by ADIFOR was coupled with PETSc (Portable Extensible Toolkit for Scientific Computation), which provides data structures and routines for the scalable solution of scientific applications modeled by partial differential equations. In addition to the PETSc-ADIFOR-based approach, we also present a methodology for applying a grid-enabled parallel implementation toolkit, Cactus, to the simulation of fluid flow phenomena in porous media. Preliminary results show significant parallel performance. On the basis of the current results, we discuss our plans for the design of a multi-purpose porous-media flow simulator.

Jong G. Kim, Hyoung W. Park
The P-GRADE Grid Portal

Providing Grid users with a widely accessible, homogeneous and easy-to-use graphical interface is the foremost aim of Grid-portal development. These portals, if designed and implemented in a proper and user-friendly way, might fuel the dissemination of Grid technologies, thereby promoting the shift of Grid usage from research into real-life industrial applications, which will hopefully happen in the foreseeable future. This paper highlights the key issues in Grid-portal development and introduces the P-GRADE Portal being developed at MTA SZTAKI. The portal allows users to manage the whole life-cycle of executing a parallel application in the Grid: editing workflows, submitting jobs relying on Grid credentials, and analyzing the monitored trace data by means of visualization.

Csaba Németh, Gábor Dózsa, Róbert Lovas, Péter Kacsuk
A Smart Agent-Based Grid Computing Platform

Most Grid computing platforms are configured with high performance computing servers, supercomputers, and cluster systems. This research aims to design an aggressive Grid computing platform that utilizes a large number of pervasive PCs as Grid components. For this goal, an effective configuration method is required to construct a group of PCs as a single Grid component that can be used in the same way as a cluster system. The configuration method is designed as a layered service architecture based on agent technology, comprising a physical information service, a mobile management service, and a point-based scheduling service. The physical information service gathers and maintains the status information of physical nodes; the mobile management service guarantees effective resource management by using mobile agents; and the point-based scheduling service provides a simple scheduling policy for the PC-based computing resources. This configuration method is expected to increase the efficiency of resource management and system throughput. Experimental results show that a system using it can deliver more than 90% of the expected performance of a chosen set of running PC resources in a general computing environment.

Kwang-Won Koh, Hie-Cheol Kim, Kyung-Lang Park, Hwang-Jik Lee, Shin-Dug Kim
Publishing and Executing Parallel Legacy Code Using an OGSI Grid Service

This paper describes an architecture for publishing and executing parallel legacy code using an OGSI Grid service. A framework is presented that aids existing legacy applications to be deployed as OGSI Grid services and the concept is demonstrated by creating an OGSI/GT3 version of the Westminster MadCity traffic simulator application. This paper presents the Grid Execution Management for Legacy Code Architecture (GEMLCA), and describes the progress and achievements of its implementation.

T. Delaitre, A. Goyeneche, T. Kiss, S. C. Winter
The PROVE Trace Visualisation Tool as a Grid Service

This paper introduces the way the PROVE trace visualisation tool was separated from the P-GRADE environment into a stand-alone Grid service. The separation resulted in three PROVE implementations: a local service, a servlet-based solution, and an OGSA (Open Grid Services Architecture) enabled GT3 (Globus Toolkit 3) Grid service. The paper describes the problems encountered and decisions made during the development of the different versions. Our experiences show how a local program can be relocated into the Grid as one of the several services the global infrastructure will one day offer.

Gergely Sipos, Péter Kacsuk
Privacy Protection in Ubiquitous Computing Based on Privacy Label and Information Flow

The vision of the upcoming ubiquitous computing environment allows us to exchange and share information with any party, any time, anywhere, if we want. However, the widespread use of computing devices and sensors networked together could be a considerable threat to the privacy of individuals living in the ubiquitous computing environment. In this paper, we investigate privacy issues in the ubiquitous computing environment and present a model of privacy protection and an example architecture to support it. The proposed architecture ensures that individuals not only get benefits and services from the ubiquitous environment by freely exchanging and sharing information, but also preserve their own privacy.

Seong Oun Hwang, Ki Song Yoon

Resource Management and Scheduling Techniques for Cluster and Grid Computing Systems Workshop

Application-Oriented Scheduling in the Knowledge Grid: A Model and Architecture

This paper describes a model and an architecture of an application-oriented scheduler designed in the context of the Knowledge Grid, an environment developed for supporting knowledge discovery on Grids. The Knowledge Grid scheduler allows abstracting from computational resources through the use of abstract hosts, i.e., hosts whose characteristics are only partially specified, and instantiates abstract hosts to concrete ones while trying to improve application performance. The paper proposes a scheduling model, an open architecture for its support, and the specific scheduling functionalities used in the Knowledge Grid. The proposed model and architecture are general with respect to specific application domains and scheduling functionalities, and are thus potentially useful in other Grid environments as well.

Andrea Pugliese, Domenico Talia
A Monitoring and Prediction Tool for Time-Constraint Grid Application

A Grid system must integrate heterogeneous resources with varying quality and availability. For example, the load on any given resource may increase during execution of a time-constrained job. This places importance on the system’s ability to recognise the state of these resources. This paper presents an approach used as a basis for applications and resources monitoring, in which Grid jobs are maintained at runtime. A reflective technique is used to simplify the monitoring of the Grid application. The monitoring tool is described and experimentally evaluated. Reflection is incorporated into the monitoring to separate functional and non-functional aspects of the system and facilitate the implementation of non-functional properties such as job migration. Results indicate that this approach enhances the likelihood of timely job completion in a dynamic Grid system.

Abdulla Othman, Karim Djemame, Iain Gourlay
Optimal Server Allocation in Reconfigurable Clusters with Multiple Job Types

We examine a system where the servers in a cluster may be switched dynamically and preemptively from one kind of work to another. The demand consists of M job types joining separate queues, with different arrival and service characteristics, and also different relative importance represented by appropriate holding costs. The switching of a server from queue i to queue j incurs a cost which may be monetary or may involve a period of unavailability. The optimal switching policy is obtained numerically by solving a dynamic programming equation. Two simple heuristic policies – one static and one dynamic – are evaluated by simulation and are compared to the optimal policy. The dynamic heuristic is shown to perform well over a range of parameters, including changes in demand.

J. Palmer, I. Mitrani
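The dynamic programming formulation sketched in this abstract can be illustrated on a toy instance. The sketch below is not the authors' model — the two-queue restriction, the slotted-time dynamics, and all parameter values are illustrative assumptions. It computes the optimal switching policy by finite-horizon value iteration and confirms it never costs more than a static (never switch) or a dynamic (serve the heavier weighted queue) heuristic:

```python
import itertools

# Illustrative instance: two job types with Bernoulli arrivals, one
# server that serves one job per slot from the queue it is attached
# to; switching the server to the other queue costs CS.
P = (0.3, 0.4)      # arrival probability per slot, per queue
C = (1.0, 2.0)      # holding cost per waiting job per slot
CS = 1.5            # switching cost
NMAX = 5            # queue-length cap (truncated state space)
T = 30              # horizon in slots

def step_cost(n1, n2):
    return C[0] * n1 + C[1] * n2

def transitions(n1, n2, s, a):
    # a: the queue the server serves this slot (0 or 1)
    sw = CS if a != s else 0.0
    served = [n1, n2]
    if served[a] > 0:
        served[a] -= 1
    out = []
    for a1 in (0, 1):           # enumerate the four arrival events
        for a2 in (0, 1):
            p = (P[0] if a1 else 1 - P[0]) * (P[1] if a2 else 1 - P[1])
            out.append((p, min(served[0] + a1, NMAX),
                           min(served[1] + a2, NMAX)))
    return sw, out

states = list(itertools.product(range(NMAX + 1), range(NMAX + 1), (0, 1)))

def backward(policy=None):
    # policy=None -> optimal (minimize over actions); else evaluate
    # the fixed policy function (n1, n2, s) -> action
    V = {st: 0.0 for st in states}
    for _ in range(T):
        Vn = {}
        for (n1, n2, s) in states:
            def q(a):
                sw, out = transitions(n1, n2, s, a)
                return step_cost(n1, n2) + sw + sum(
                    p * V[(m1, m2, a)] for p, m1, m2 in out)
            Vn[(n1, n2, s)] = (min(q(0), q(1)) if policy is None
                               else q(policy(n1, n2, s)))
        V = Vn
    return V

v_opt = backward()
v_static = backward(policy=lambda n1, n2, s: s)          # never switch
v_dyn = backward(policy=lambda n1, n2, s:                # weighted backlog
                 0 if C[0] * n1 >= C[1] * n2 else 1)
```

Because value iteration minimizes over actions at every step, the optimal cost is a pointwise lower bound on the cost of any fixed heuristic, mirroring the comparison the paper carries out by simulation.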
Design and Evaluation of an Agent-Based Communication Model for a Parallel File System

The agent paradigm has become one of the most important topics to appear and be widely developed in computing systems in the last decade, and it is being successfully used in a large number of fields. MAPFS is a multi-agent parallel file system that takes advantage of the semantic concept of agents to increase its modularity and performance. This paper shows how agent theory can be used as a conceptual framework in the design and development of MAPFS. The MAPFS implementation is based on technologies closer to system programming, although its design makes use of the abstraction of a multi-agent system.

María S. Pérez, Alberto Sánchez, Jemal Abawajy, Víctor Robles, José M. Peña
Task Allocation for Minimizing Programs Completion Time in Multicomputer Systems

Task allocation is one of the biggest issues in the area of parallel and distributed computing. Given a parallel program composed of M communicating modules (tasks) and a multicomputer system of N processors with a specific interconnection network, the problem is how to assign the program modules to the available processors so as to minimize the entire program completion time. This problem is known to be NP-complete and therefore intractable as soon as the number of tasks and/or processors exceeds a few units. This paper presents a heuristic algorithm, derived from Simulated Annealing, for this problem, taking several kinds of constraints into account. The performance of the algorithm is evaluated through an experimental study on randomly generated instances allocated to a multicomputer system with a bus topology. Furthermore, the quality of the solutions is compared with those derived using Branch-and-Bound on the same sample problems.

Gamal Attiya, Yskandar Hamam
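Simulated annealing for task allocation can be sketched in a few lines. The cost model below (maximum processor load plus inter-processor communication volume, as a proxy for completion time) and all instance parameters are illustrative assumptions, not taken from the paper:

```python
import random, math

random.seed(7)

M, N = 12, 3                          # tasks, processors (toy sizes)
exec_cost = [random.randint(1, 9) for _ in range(M)]
# symmetric communication cost between task pairs (0 = no edge)
comm = [[0] * M for _ in range(M)]
for i in range(M):
    for j in range(i + 1, M):
        if random.random() < 0.3:
            comm[i][j] = comm[j][i] = random.randint(1, 5)

def cost(assign):
    # completion-time proxy: heaviest processor load plus the cost
    # of all messages that cross processor boundaries
    load = [0] * N
    for t, p in enumerate(assign):
        load[p] += exec_cost[t]
    cross = sum(comm[i][j] for i in range(M) for j in range(i + 1, M)
                if assign[i] != assign[j])
    return max(load) + cross

def anneal(iters=20000, t0=10.0, alpha=0.9995):
    assign = [random.randrange(N) for _ in range(M)]
    cur = best = cost(assign)
    best_assign = assign[:]
    temp = t0
    for _ in range(iters):
        t = random.randrange(M)       # move one task to a random processor
        old = assign[t]
        assign[t] = random.randrange(N)
        new = cost(assign)
        # accept improvements always, degradations with Boltzmann probability
        if new <= cur or random.random() < math.exp((cur - new) / temp):
            cur = new
            if cur < best:
                best, best_assign = cur, assign[:]
        else:
            assign[t] = old           # reject the move
        temp *= alpha                 # geometric cooling schedule
    return best, best_assign

best, assign = anneal()
```

The geometric cooling schedule and single-task move neighbourhood are common defaults; the paper's constraint handling is not reproduced here.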
Fault Detection Service Architecture for Grid Computing Systems

The ability to tolerate failures while effectively exploiting grid computing resources in a scalable and transparent manner must be an integral part of the grid computing infrastructure. Hence, a fault detection service is a necessary prerequisite to fault tolerance and fault recovery in grid computing. To this end, we present a scalable fault detection service architecture. The proposed fault detection system provides services that monitor user applications, grid middleware, and the dynamically changing state of a collection of distributed resources. It reports summaries of this information to the appropriate agents on demand, or instantaneously in the event of failures.

J. H. Abawajy
Adaptive Interval-Based Caching Management Scheme for Cluster Video Server

In a video server, a good buffer management scheme can reduce the number of disk I/Os. Traditional buffer replacement algorithms such as LRU (least recently used) and MRU (most recently used) only aim at increasing the buffer hit ratio and cannot guarantee the continuity and real-time requirements of continuous media. Interval-based caching can efficiently support cache sharing, which makes it a good candidate for video server cache management. In this paper, we propose a new interval-based caching scheme, PISR (adaptive dynamic Preemptive Interval-based caching with Size and Rate), which is suited to heterogeneous environments. Simulation and performance evaluation show that our scheme outperforms the original PIS (preemptive interval-based caching with size) and PIR (preemptive interval-based caching with rate) schemes.

Qin Zhang, Hai Jin, Yufu Li, Shengli Li
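The core idea of interval-based caching is easy to sketch: two streams of the same video form an interval whose buffer requirement is the time gap between them times the bitrate, and caching that interval serves the later stream entirely from memory. The sketch below is illustrative only — the size-first greedy rule corresponds roughly to the PIS baseline, not to the PISR policy proposed in the paper:

```python
from collections import defaultdict

def intervals(streams):
    # streams: list of (video_id, start_time_s, bitrate_bytes_per_s);
    # consecutive streams of the same video form an interval whose
    # cache cost is gap * rate of the trailing stream
    by_video = defaultdict(list)
    for vid, t, r in streams:
        by_video[vid].append((t, r))
    out = []
    for vid, lst in by_video.items():
        lst.sort()
        for (t0, _), (t1, r1) in zip(lst, lst[1:]):
            gap = t1 - t0
            out.append((vid, gap, gap * r1))   # (video, length, bytes)
    return out

def select(streams, buffer_bytes):
    # size-first greedy: caching the smallest intervals first lets a
    # fixed buffer serve the largest number of follower streams
    chosen, used = [], 0
    for vid, gap, size in sorted(intervals(streams), key=lambda x: x[2]):
        if used + size <= buffer_bytes:
            chosen.append((vid, gap, size))
            used += size
    return chosen, used
```

A rate-aware or preemptive variant would additionally evict a cached interval when a smaller one arrives, which is the behaviour the "preemptive" schemes add.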
A Scalable Streaming Proxy Server Based on Cluster Architecture

With the explosive growth of multimedia streaming services, streaming proxy servers are deployed to reduce response time, server load, and network traffic. An existing single-node proxy server has limitations in delivering many simultaneous streams. To solve this problem, we propose in this paper a scalable streaming proxy server based on a cluster architecture. We conducted simulation experiments, which show the high scalability and high performance of our design.

Hai Jin, Jie Chu, Kaiqin Fan, Zhi Dong, Zhiling Yang
The Measurement of an Optimum Load Balancing Algorithm in a Master/Slave Architecture

Identifying the optimum load balancing algorithm for a web site is a difficult and complex task. This paper examines a number of simulated algorithms based on a master/slave architecture. Three algorithms are compared. The first uses a master/slave architecture and dispatches requests to the relevant servers as batches. The second is the standard round robin algorithm used in a master/slave architecture. The final algorithm proposed in the paper is a master/slave architecture that combines the round robin algorithm with a reverse proxy for requests. This final combination showed a performance improvement of 19% over conventional master/slave round robin load balancing. Batch processing of requests also yields some interesting findings, useful for very heavily loaded web sites with a constantly high number of requests.

Finbarr O’Loughlin, Desmond Chambers
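Two of the three dispatching strategies compared above can be caricatured in a few lines. The cost model (one processing-time number per request, makespan as the metric) is a deliberate simplification for illustration, and the reverse-proxy variant is omitted:

```python
from itertools import cycle

def round_robin(requests, n_servers):
    # master hands each request to the next slave in turn
    finish = [0.0] * n_servers
    rr = cycle(range(n_servers))
    for cost in requests:
        finish[next(rr)] += cost
    return max(finish)        # makespan: when the busiest slave finishes

def batched(requests, n_servers, batch):
    # master accumulates `batch` requests, then hands the whole batch
    # to the currently least-loaded slave
    finish = [0.0] * n_servers
    for i in range(0, len(requests), batch):
        s = min(range(n_servers), key=lambda k: finish[k])
        finish[s] += sum(requests[i:i + batch])
    return max(finish)
```

Which strategy wins depends on the request-size distribution; on bursty workloads batching amortizes dispatch overhead (not modelled here), which is the effect the paper measures.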
Data Discovery Mechanism for a Large Peer-to-Peer Based Scientific Data Grid Environment

Data Grids mostly deal with large computational problems and provide geographically distributed resources for large-scale data-intensive applications that generate large data sets. In modern scientific computing communities, scientists are involved in managing massive amounts of very large data collections in a geographically distributed environment. Research in the area of grid computing has produced various ideas and solutions to address these requirements. Recently, several research groups working on data distribution problems in Data Grids have been investigating a number of data replication approaches. This leads to a new problem of discovery of, and access to, data in Data Grid environments. Peer-to-peer networks have also become a major research topic over the last few years. In a distributed peer-to-peer system, a discovery mechanism is required to locate specific information, applications, or users within the system. In this work, we present our scientific data grid as a large peer-to-peer based distributed system model. Using this model, we study various discovery mechanisms based on peer-to-peer architectures and investigate them for our Dynamic Scientific Data Grid Environment Model through our Grid Simulator. In this paper, we illustrate the model and the simulator, and then analyze the performance of the discovery mechanisms in terms of their success rates and bandwidth consumption.

Azizol Abdullah, Mohamed Othman, Md Nasir Sulaiman, Hamidah Ibrahim, Abu Talib Othman
A DAG-Based XCIGS Algorithm for Dependent Tasks in Grid Environments

Generating high quality schedules for scientific computation on a computational grid is a challenging problem. Many scheduling algorithms in grid computing handle only independent tasks; however, communication commonly occurs among tasks executed on different grid nodes. In this paper, an eXtended Communication-Inclusion Generational Scheduling (XCIGS) algorithm is proposed to schedule the dependent tasks of an application given their DAG. During scheduling, ineligible tasks are momentarily ignored, and a Buffer Set of Independent tasks (BSI) is maintained to improve the utilization of grid resources. The predicted transfer time, the machine ready time, and the expected completion time of all predecessors are taken into consideration while an auxiliary algorithm dynamically makes the schedule. Experimental results suggest that the algorithm improves both the resource utilization and the execution performance of grid applications.

Changqin Huang, Deren Chen, Qinghuai Zeng, Hualiang Hu
Running Data Mining Applications on the Grid: A Bag-of-Tasks Approach

Data mining (DM) applications are composed of computing-intensive processing tasks working on huge datasets. Owing to this computing-intensive nature, these applications are natural candidates for execution on high performance, high throughput platforms such as PC clusters and computational grids. Many data mining algorithms can be implemented as bag-of-tasks (BoT) applications, i.e., parallel applications composed of independent tasks. This paper discusses the use of computing grids for the execution of DM algorithms as BoT applications, investigates the scalability of the execution of one such application, and proposes an approach to improve its scalability.

Fabrício A. B. da Silva, Sílvia Carvalho, Hermes Senger, Eduardo R. Hruschka, Cléver R. G. de Farias

Parallel and Distributed Computing Workshop

Application of Block Design to a Load Balancing Algorithm on Distributed Networks

In order to maintain load balancing in a distributed system, workload information must be obtained from all the nodes in the network. This requires O(v^2) communication complexity, where v is the number of nodes. In this paper, we present a new synchronous dynamic distributed load balancing algorithm on a (v, k+1, 1)-configured network, obtained by applying a symmetric balanced incomplete block design with v = k^2 + k + 1. Our algorithm needs only O(v√v) communication complexity, and each node receives workload information from all the nodes without redundancy. Extending the algorithm to an arbitrary number of nodes is left as future work.

Yeijin Lee, Okbin Lee, Taehoon Lee, Ilyong Chung
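The combinatorial object behind this algorithm is easy to exhibit for k = 2: the (7, 3, 1) symmetric design, i.e. the Fano plane. The check below is an illustration of the design's properties, not the paper's algorithm: it verifies that every pair of nodes lies in exactly one block, and that the blocks through each node cover every other node exactly once, which is what eliminates redundant workload messages:

```python
from itertools import combinations

# Fano plane blocks from the perfect difference set {0, 1, 3} mod 7:
# a (v, k+1, 1) configuration with v = k^2 + k + 1 = 7 for k = 2
blocks = [
    {0, 1, 3}, {1, 2, 4}, {2, 3, 5}, {3, 4, 6},
    {4, 5, 0}, {5, 6, 1}, {6, 0, 2},
]

def is_design(blocks, v, block_size, lam):
    pts = set().union(*blocks)
    if pts != set(range(v)) or any(len(b) != block_size for b in blocks):
        return False
    # every pair of points must lie in exactly `lam` common blocks
    return all(sum((p in b) and (q in b) for b in blocks) == lam
               for p, q in combinations(range(v), 2))

def covers_all_once(blocks, v):
    # the blocks through node i contain every other node exactly once,
    # so i can gather all workloads with no redundant message
    for i in range(v):
        seen = [j for b in blocks if i in b for j in b if j != i]
        if sorted(seen) != [j for j in range(v) if j != i]:
            return False
    return True
```

How the paper organizes the actual message rounds to reach O(v√v) total traffic is not reproduced here; the block size k + 1 ≈ √v is what makes that complexity plausible.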
Maintenance Strategy for Efficient Communication at Data Warehouse

Data warehousing is used to reduce the load on on-line transactional systems by extracting and storing the data needed for analytical purposes. Maintaining the consistency of warehouse data is challenging, as views in the data warehouse span multiple sources and user queries are processed against these views. Each view has to be maintained to reflect the updates made to the base relations stored at the various data sources. In this paper, we present an incremental and efficient view maintenance algorithm for a data warehouse derived from multiple distributed autonomous data sources, together with an enhanced algorithm that does not require the data warehouse to be in a quiescent state while incorporating new views. The proposed algorithm also reduces server load and response time by overlapping processing time and message exchanges between the warehouse and the sources. We show its performance by comparing it to other conventional algorithms.

Hyun Chang Lee, Sang Hyun Bae
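Incremental view maintenance of a join view can be sketched as follows. The two-relation natural join and the class interface are illustrative assumptions, not the paper's algorithm (which additionally handles the distributed-source consistency anomalies mentioned above): the point is only that an insertion delta is joined against the current state of the other relation instead of recomputing the whole view.

```python
def join(r, s):
    # natural join on the shared key (first field of each tuple)
    return [(k, a, b) for (k, a) in r for (k2, b) in s if k == k2]

class Warehouse:
    # materialized view over two sources R and S; updates are applied
    # incrementally rather than by recomputing the full join
    def __init__(self, r, s):
        self.r, self.s = list(r), list(s)
        self.view = join(self.r, self.s)

    def insert_r(self, delta):
        self.view += join(delta, self.s)   # delta-R joined with current S
        self.r += delta

    def insert_s(self, delta):
        self.view += join(self.r, delta)   # delta-S joined with current R
        self.s += delta
```

The maintained view stays equal to the full recomputed join after any sequence of insertions, while each update costs only |delta| times the size of the other relation.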
Conflict Resolution of Data Synchronization in Mobile Environment

The proliferation of personal information devices makes it difficult to maintain replicated copies of user data in the data stores of the various devices. Data synchronization is an automated action that makes replicated data consistent with each other and up-to-date. Traditional consistency models and conflict resolution schemes do not fit the synchronization of replicated data in the mobile computing environment due to the lack of simultaneous updates. This paper characterizes data synchronization and the conflicts between replicated data in the mobile environment, and defines resolution rules for each conflict scenario under the recent-data-win policy. These rules have been implemented and verified in a data synchronization server developed according to the SyncML specification [1].

YoungSeok Lee, YounSoo Kim, Hoon Choi
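At its core, the recent-data-win policy reduces to a timestamp comparison per key. A minimal sketch follows; the real rules cover distinct add/replace/delete conflict scenarios per the SyncML protocol, and the tie-breaking choice here is an assumption of this sketch:

```python
def sync(a, b):
    # a, b: replica stores, dict key -> (value, timestamp);
    # returns the merged store under recent-data-win: for each key
    # the entry with the newer timestamp survives (ties keep replica
    # a's entry, an arbitrary but deterministic choice)
    merged = dict(a)
    for k, (v, t) in b.items():
        if k not in merged or t > merged[k][1]:
            merged[k] = (v, t)
    return merged
```

After synchronization, both devices would adopt the merged store, so a subsequent sync is a no-op until one side updates again.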
A Framework for Orthogonal Data and Control Parallelism Exploitation

We propose an innovative approach to structured exploitation of both data and control parallelism. Our approach is based on the clear distinction between data and control parallelism exploitation mechanisms. This separation leads to a programming model where data and control parallelism exploitation is managed by means of independent, orthogonal mechanisms. By exploiting this orthogonal set of mechanisms, clear semantic transformation and verification tools can be developed. We show here a preliminary definition of the programming model, some sample skeleton applications and we discuss the basis for the development of a clear semantic framework that can be used to develop semantics preserving transformation rules as well as semantic based reasoning on parallel program properties.

S. Campa, M. Danelutto
Multiplier with Parallel CSA using CRT’s Specific Moduli (2^k-1, 2^k, 2^k+1)

Recently, RNS (Residue Number System) has received increased attention due to its ability to support high-speed concurrent arithmetic. Applications such as the fast Fourier transform, digital filtering, and image processing utilize the efficiency of RNS arithmetic in addition and multiplication, and do not require the difficult RNS operations such as division and magnitude comparison. RNS has computational advantages since operations on residue digits are performed independently and can therefore be carried out in parallel. There are basically two methods for residue-to-binary conversion: the first uses the mixed radix conversion algorithm, while the second is based on the Chinese remainder theorem (CRT). In this paper, a new design for CRT conversion is presented, derived by applying an overlapped multiple-bit scanning method in the conversion process for the general moduli set (2^k-1, 2^k, 2^k+1). Simulation shows that the adopted CRT method performs arithmetic operations faster than the traditional approaches, owing to the advantages of parallel processing and carry-free arithmetic.

Wu Woan Kim, Sang-Dong Jang
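Residue-to-binary conversion via the Chinese remainder theorem for the moduli set (2^k-1, 2^k, 2^k+1) can be sketched directly. This is the textbook CRT reconstruction for illustration, not the paper's overlapped multiple-bit scanning design:

```python
from functools import reduce

def moduli(k):
    # the three moduli are pairwise coprime for any k >= 1
    return (2**k - 1, 2**k, 2**k + 1)

def to_residue(x, mods):
    # the forward conversion: independent, digit-parallel reductions
    return tuple(x % m for m in mods)

def to_binary(res, mods):
    # CRT reconstruction: x is recovered uniquely modulo M = product
    # of the moduli, using the modular inverses of M/m_i mod m_i
    M = reduce(lambda a, b: a * b, mods)
    x = 0
    for r, m in zip(res, mods):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(..., -1, m): inverse mod m
    return x % M

mods = moduli(4)                       # (15, 16, 17), M = 4080
```

Because the residue digits are independent, addition and multiplication can be performed digit-wise in parallel with no carries between channels, which is the property the paper's hardware exploits (pow(base, -1, mod) requires Python 3.8+).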
Unified Development Solution for Cluster and Grid Computing and Its Application in Chemistry

P-GRADE programming environment provides high-level graphical support to develop parallel applications transparently for both the parallel systems and the Grid. This paper gives an overview on the parallelisation of a simulation algorithm for chemical reaction-diffusion systems applying P-GRADE environment at all stages of parallel program development cycle including the design, the debugging, the execution, and the performance analysis. The automatic checkpoint mechanism for parallel programs, which supports the migration of parallel jobs between different clusters, together with the application monitoring facilities of P-GRADE enable the long-running parallel jobs to run on various non-dedicated clusters in the Grid while their execution can be visualised on-line for the user. The presented research achievements will be deployed in a chemistry Grid environment for air pollution forecast.

Róbert Lovas, Péter Kacsuk, István Lagzi, Tamás Turányi
Remote Visualization Based on Grid Computing

In the past, remote visualization could not be done well interactively because of the lack of expensive equipment and a large amount of resources. With the development of the Grid, this has become possible. This paper discusses the pipeline of remote visualization technology based on Grid computing. Moreover, we analyze and propose a basic framework for remote visualization on the Grid. In addition, methods for improving the performance of visualization in a Grid environment are discussed.

Zhigeng Pan, Bailin Yang, Mingmin Zhang, Qizhi Yu, Hai Lin
Avenues for High Performance Computation on a PC

Modern microprocessors used in personal computers have built-in parallel computing features. In general, these microprocessors support a Single Instruction Multiple Data (SIMD) type of parallel computing mechanism. Recently, dual-CPU systems have become more common, and it is therefore possible to combine the SIMD mechanism with the computing power of two microprocessors to form a two-level parallel computing system. In this paper, we present the combined utilization of the two types of parallelism and demonstrate its performance on a computationally intensive image processing algorithm. Our results show that, using the bi-level parallel mechanism, a far superior speed-up can be achieved than by using the two CPUs alone.

Yu-Fai Fung, M. Fikret Ercan, Wai-Leung Cheung, Gujit Singh
A Modified Parallel Computation Model Based on Cluster

A modified cluster-based parallel computation model, the MSG (Master-Slave-Gleaner) model, is presented. The MSG model can be used as a parallel computation model to develop high performance volume rendering algorithms. A very complex parallel splatting algorithm was used to test the efficiency and practicability of the MSG model on an IBM Cluster 1350. Theoretical analysis and numerical testing showed that the modified computing model can decrease computing time, increase speedup, and greatly improve computing efficiency. In addition, the MSG model is suitable for a wide variety of volume rendering algorithms and has good scalability.

Xiaotu Li, Jizhou Sun, Jiawan Zhang, Zhaohui Qi, Gang Li
Parallel Testing Method by Partitioning Circuit Based on the Exhaustive Test

This paper presents an approach to test pattern generation, applicable to parallel testing, that uses a partitioning technique based on the exhaustive testing scheme. The suggested method can discover faults faster than plain exhaustive testing. It also shows that testing can be performed in parallel over the functionally partitioned blocks, rather than linearly. A functional-level description as well as the Boolean differential is used to generate test patterns that can be applied in parallel.

Wu Woan Kim
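The saving from partitioning is easy to quantify: blocks tested exhaustively in parallel need max over blocks of 2^(k_i) joint patterns instead of 2^(sum of k_i) for the whole circuit. The sketch below shows only the pattern schedule (an illustration; the paper's Boolean-differential pattern generation is not reproduced):

```python
from itertools import product

def exhaustive_patterns(n):
    # all 2^n input vectors for an n-input circuit
    return list(product((0, 1), repeat=n))

def partitioned_patterns(block_sizes):
    # exercise each functional block exhaustively, in parallel: the
    # number of joint patterns is max(2^k_i), not 2^(sum of k_i)
    per_block = [exhaustive_patterns(k) for k in block_sizes]
    rounds = max(len(p) for p in per_block)
    joint = []
    for t in range(rounds):
        # blocks that have run out simply repeat their last pattern
        joint.append(tuple(bit
                           for p in per_block
                           for bit in p[min(t, len(p) - 1)]))
    return joint
```

For a 5-input circuit split into blocks of 2 and 3 inputs, 8 joint patterns replace 32, and each block still sees its complete exhaustive set.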
A Parallel Volume Splatting Algorithm Based on PC-Clusters

Splatting is a widely used object-order volume rendering algorithm. Using a hybrid data-space and image-space partitioning scheme, a parallel volume splatting algorithm based on PC clusters is presented in this paper. Experimental results demonstrate that the proposed method achieves high rendering speed and excellent speedups on commercial PC clusters.

Jiawan Zhang, Jizhou Sun, Yi Zhang, Qianqian Han, Zhou Jin

Molecular Processes Simulation Workshop

Three-Center Nuclear Attraction Integrals for Density Functional Theory and Nonlinear Transformations

In this work, we present the progress made in the fast and accurate numerical evaluation of three-center nuclear attraction integrals. These integrals are required for density functional theory and for any accurate ab initio molecular structure calculation. They occur in many millions of terms, even for small molecules, and require rapid and accurate evaluation, best obtained over Slater-type functions. The Fourier transform method allowed analytic expressions to be developed for these integrals. These analytic expressions turned out to be extremely difficult to evaluate precisely and rapidly because of the presence of spherical Bessel integrals. Different approaches were used to develop efficient algorithms for the numerical evaluation of these spherical Bessel integrals. These approaches are discussed, and comparisons in accuracy and speed are presented to demonstrate that the approaches based on nonlinear transformations are a valuable contribution to the literature on molecular integrals over Slater-type functions.

Hassan Safouhi
Parallelization of Reaction Dynamics Codes Using P-GRADE: A Case Study

P-GRADE, a graphical tool and programming environment, was used to parallelize atomic-level reaction dynamics codes. In the reported case study, a classical trajectory code written in FORTRAN was parallelized at the coarse-grain level. P-GRADE allowed us to use automatic schemes, of which the task farm was selected. The FORTRAN code was separated into an input/output section and a working section; the former, enhanced by a data transfer section, runs on the master, the latter on the slaves. Small sections for data transfer were written in C. The P-GRADE environment offers a user-friendly way of monitoring the efficiency of the parallelization. On a 20-processor NPACI Rocks cluster, the speed-up is 99 percent of linear in the number of processors.

Ákos Bencsura, György Lendvay
Numerical Implementation of Quantum Fluid Dynamics: A Working Example

In recent years, new interest has been directed to the fluid dynamical formulation of quantum mechanics of Madelung, de Broglie, and Bohm. Various attempts to implement this formulation numerically have been presented recently, concerning photodissociation and scattering, all sharing the use of Lagrangian and semi-Lagrangian methods to discretize the spatial independent variables. The possibility of using moving numerical grids suited to wave packet propagation, in general computationally cheaper than fixed grids, is attractive but difficult when QFD is applied to molecular scattering. This aspect is discussed in the paper, and a particular solution to the problem is proposed. The transmission probability through the Eckart barrier as a function of momentum is obtained by Fourier transforming the final wave packet; the agreement with analytical results is good over a large range, the computational time is reasonable, and the algorithm used is relatively simple compared to others proposed for this problem.

Fabrizio Esposito
Numerical Revelation and Analysis of Critical Ignition Conditions for Branch Chain Reactions by Hamiltonian Systematization Methods of Kinetic Models

A new numerical method to calculate explosion limits of gas mixtures based on Hamiltonian systematization of kinetic models is offered. As a criterion for the critical state of the reaction system, the extreme behaviour of functions describing the total concentration of chemical species is selected. The present approach enables the numerical calculation of the kinetic significance of the individual steps and species under the critical ignition conditions of the reaction. The ability of this numerical method to calculate explosion limits is illustrated by means of the computer programme VALKIN for the reaction mechanism of hydrogen oxidation.

Gagik A. Martoyan, Levon A. Tavadyan
Computer Simulations in Ion-Atom Collisions

One area of science in which computer modelling and its applications are extensively used is ion-atom collisions. In this paper we present computer simulations for various theoretical models used in the study of single ionization of helium by multiply charged ions. We study several different numerical continuum-distorted-wave eikonal-initial-state (CDW-EIS) models and compare them with recent experimental data for triply differential cross sections in the scattering plane for large perturbations. We observe that one type of CDW-EIS simulation, based on a model potential with physically appropriate short-range and long-range behaviour, leads to better agreement with experiment at high projectile charges, low electron energies and high momentum transfer than some of the earlier CDW-EIS models in the literature. It is hoped that further calculations using this CDW-EIS model to simulate triply differential cross sections, both qualitatively and quantitatively, may lead to a fuller understanding of single ionization by highly charged ion impact.

S. F. C. O’Rourke, R. T. Pedlow, D. S. F. Crothers
Bond Order Potentials for a priori Simulations of Polyatomic Reactions

A class of potential energy functionals based on bond order variables has been designed with the purpose of providing formulations of the interaction that allow a smooth switch between different arrangements and a proper description of the reaction channel. An application to the six-atom Cl + CH4 → HCl + CH3 reaction is considered.

Ernesto Garcia, Carlos Sánchez, Margarita Albertí, Antonio Laganà
Inorganic Phosphates Investigation by Support Vector Machine

We treated the prediction of the crystal chemical features of newly synthesized inorganic phosphates as a supervised-learning regression problem. We then analysed correlations between the crystal chemical properties of phosphate crystals with a machine learning method, the Support Vector Machine (SVM), to develop the detection algorithm. Using structural properties of phosphate crystal structures widely described in the literature, we developed several SVMs able to capture statistical relations between the crystal chemical properties of the anhydrous phosphates in the available dataset. In this way, we demonstrated the suitability of SVM for the prediction of the structural properties of crystals.

Cinzia Pierro, Francesco Capitelli
Characterization of Equilibrium Structure for N2-N2 Dimer in the 1.2 Å ≤ R ≤ 2.5 Å Region Using DFT Method

Density Functional Theory (B3PW91 and B3LYP) has been used to investigate the equilibrium geometries and potential energy curves of the singlet ground state of the N2-N2 dimer for 1.20 Å ≤ R ≤ 2.50 Å. D2h symmetry proved to be the most stable. The effects of the BSSE upon the optimized geometries and potential energy curves of the D2h group have been studied. The BSSE is found to be 1–2% for Density Functional Theory with the 6-311++G** basis set.

Ajmal H. Hamdani, S. Shahdin
A Time Dependent Study of the Nitrogen Atom Nitrogen Molecule Reaction

In this paper a preliminary study of the N + N2 reaction making use of quantum time-dependent calculations in Jacobi coordinates is presented.

Antonio Laganà, Leonardo Pacifici, Dimitris Skouteris
From DFT Cluster Calculations to Molecular Dynamics Simulation of N2 Formation on a Silica Model Surface

B3LYP-DFT electronic structure cluster calculations have been performed to evaluate the adsorption properties of N and N2 interacting with SixOy clusters at a given adsorption site. To check the convergence of the calculated binding energy, clusters of different size were used in the calculations. As expected, the N atom is chemisorbed, Eb ≅ 2.75 eV, while N2 is weakly physisorbed. The ab initio results were used to build three PESs of the LEPS type having different activation barriers. The obtained PESs have been used in the semiclassical scattering equations, and the dynamics of N2 formation after atom recombination on a model silica surface was studied in great detail.

M. Cacciatore, A. Pieretti, M. Rutigliano, N. Sanna
Molecular Mechanics and Dynamics Calculations to Bridge Molecular Structure Information and Spectroscopic Measurements on Complexes of Aromatic Compounds

Molecular dynamics approaches are applied to the analysis of some properties of the anisole-water and benzene-argon complexes. To this end, semiempirical pairwise additive potentials are used. The analysis of the potential energy surface allows us to select the most probable conformation among the calculated ones. Further calculations then allow us to estimate rotationally resolved spectra and interconversion ratios.

G. Pietraperzia, R. Chelli, M. Becucci, Antonio Riganelli, M. Alberti, Antonio Laganà
Direct Simulation Monte Carlo Modeling of Non-equilibrium Reacting Flows. Issues for the Inclusion into an ab initio Molecular Processes Simulator

A Direct Simulation Monte Carlo (DSMC) method for the modeling of non-equilibrium reacting flows is presented. The method is particularly suitable for the simulation of gas-phase systems with complex boundary conditions and with varying degrees of thermal and chemical non-equilibrium. Since the description is done at the kinetic level, detailed information about the elementary processes, as derived from ab initio molecular dynamics calculations, can be used as input physical data for the simulation. These features make the DSMC method an ideal candidate for inclusion into an ab initio Molecular Processes Simulator for the gas phase.

D. Bruno, M. Capitelli, S. Longo, P. Minelli
Molecular Simulation of Reaction and Adsorption in Nanochemical Devices: Increase of Reaction Conversion by Separation of a Product from the Reaction Mixture

We present a novel simulation tool to study fluid mixtures that are simultaneously chemically reacting and adsorbing within a molecularly porous material. The method is a combination of the Reaction Ensemble Monte Carlo method and the Dual Control Volume Grand Canonical Molecular Dynamics technique. The method, termed the Dual Control Cell Reaction Ensemble Molecular Dynamics (DCC-RxMD) method, allows for the calculation of both equilibrium and non-equilibrium transport properties in porous materials, such as diffusion coefficients, permeability and mass flux. Simulation control cells, which are in direct physical contact with the porous solid, are used to maintain the desired reaction and flow conditions for the system. The simulation setup closely mimics an actual experimental system in which the thermodynamic and flow parameters are precisely controlled. We present an application of the method to the dry reforming of methane within a nanoscale reactor in the presence of a semipermeable nanomembrane modelling silicalite. We studied the effects of the nanomembrane structure and porosity on the reaction species permeability by considering three different nanomembrane models. We also studied the effects of an imposed pressure gradient across the nanomembrane on the mass flux of the reaction species. Conversion of syngas (H2/CO) increased significantly in all the nanoscale membrane reactor systems considered. The results of this work demonstrate that the DCC-RxMD method is an attractive computational tool in the design of nanoscale membrane reactors for industrial processes.

William R. Smith, Martin Lísal
Quantum Generalization of Molecular Dynamics Method. Wigner Approach

A new method for solving equations of Wigner-Liouville type and studying the dynamics of quantum particles has been developed within the Wigner formulation of quantum statistical mechanics. This approach combines molecular dynamics and Monte Carlo methods and computes traces and spectra of the relevant dynamical quantities. As an application, the quantum dynamics of an ensemble of interacting electrons in an array of random scatterers is considered; the results clearly demonstrate that the many-particle interaction between the electrons can lead to an enhancement of the electrical conductivity.

V. Filinov, M. Bonitz, V. Fortov, P. Levashov
C6NH6+ Ions as Intermediates in the Reaction between Benzene and N+ Ions

The nitrenium ions of formula C6NH6+, arising from the reaction between benzene molecules and atomic nitrogen ions, N+ (3P), are studied by means of theoretical methods in order to investigate their structures and relative stabilities. All calculations are carried out using the DFT hybrid functional B3LYP in conjunction with the split valence 6-31G* basis set.

Marco Di Stefano, Marzio Rosi, Antonio Sgamellotti
Towards a Full Dimensional Exact Quantum Calculation of the Li + HF Reactive Cross Section

Quantum mechanical calculations of the probabilities of the Li + HF(v, j) → LiF(v′, j′) + H elementary reaction have been performed by distributing the computations on two large scale computing facilities and using two different high level approaches. The calculations have been performed for several values of the total angular momentum quantum number, a large batch of energies, and the relevant reactant rotational states in order to progress towards an evaluation of the reactive cross section to compare with the experiment. A particular emphasis is put on the role played by internal energy.

Antonio Laganà, Stefano Crocchianti, Valentina Piermarini
Conformations of 1,2,4,6-Tetrathiepane

Ab initio calculations at the HF/6-31+G*//HF/6-31+G* and B3LYP/6-31+G*//HF/6-31+G* levels of theory for geometry optimization, and at the MP2/6-31+G*//HF/6-31+G* level for single-point total energies, are reported for 1,2,4,6-tetrathiepane. This cyclic polysulfide is predicted to exist as a mixture of two unsymmetrical conformations, which interconvert via a low energy barrier of 11.1 kJ mol−1. Conformational racemization of these forms can take place via the plane-symmetrical boat geometry as a transition state and requires 48.4 kJ mol−1; the Cs-symmetric chair transition state is calculated to be slightly higher (51.1 kJ mol−1).

Issa Yavari, Arash Jabbari, Shahram Moradi
Fine Grain Parallelization of a Discrete Variable Wavepacket Calculation Using ASSIST-CL

The fine-grain parallelization of a code performing the time propagation of a wavepacket using a Discrete Variable approach is discussed. To this end a Task Farm model has been implemented. Performances obtained using MPI and the recently proposed skeleton-based coordination language ASSIST-CL are compared.

Stefano Gregori, Sergio Tasso, Antonio Laganà

Numerical Models in Biomechanics Session

On the Solution of Contact Problems with Visco-Plastic Friction in the Bingham Rheology: An Application in Biomechanics

The paper deals with the solvability of model problems connected with a stress-strain rate analysis of loaded long human bones filled with marrow tissue. The problem is described by a contact problem with (or without) visco-plastic friction in the visco-plastic Bingham rheology. The variational and numerical solutions are discussed.

Jiří Nedoma
On the Stress-Strain Analysis of the Knee Replacement

The paper deals with the stress/strain analysis of an artificial knee joint. Three cases, in which the femoral part of the knee joint is cut at 3, 5 and 7 degrees, are analysed. The finite element method and the nonoverlapping decomposition technique for the contact problem in elasticity are applied. Numerical experiments are presented and discussed.

J. Daněk, F. Denk, I. Hlaváček, J. Nedoma, J. Stehlík, P. Vavřík
Musculoskeletal Modeling of Lumbar Spine under Follower Loads

The lumbar spine can support a much larger compressive load if it is applied along a follower load path that approximates the tangent to the curve of the lumbar spine, compared with a vertical load path. In order to investigate the quantitative role of multisegmental muscles in the generation of follower loading patterns, a mathematical model of the human lumbar spine subjected to the follower load was developed in the frontal plane using the finite element method.

Yoon Hyuk Kim, Kyungsoo Kim
Computational Approach to Optimal Transport Network Construction in Biomechanics

Long-distance liquid transport in biosystems is provided by special branching systems of tubes (arteries, veins, plant vessels). The geometry of these systems displays similar patterns and can be investigated by computer methods of pattern recognition. Here some results on the investigation of plant leaf venation are presented. The lengths, diameters and branching angles are estimated for leaves of different shape, size and venation type. The statistical distributions of the measured parameters are similar to the corresponding ones obtained for arterial beds. Both correspond to the model of an optimal branching pipeline, which provides liquid delivery at minimum total energy consumption. A biomechanical model of liquid motion in a system consisting of a long thin tube with permeable walls embedded in a biological porous medium is considered. The pressure distributions and velocity fields for different geometries of the system are obtained. The main result is that, when the delivered liquid is completely absorbed by the living cells in the porous medium, the relation between the diameter and length of the tube and the total volume of the medium matches the measured data.

Natalya Kizilova
Encoding Image Based on Retinal Ganglion Cell

Today’s computer vision technology has not shown satisfying results and is far from real application, because currently developed computer vision theory relies on many assumptions that are difficult to apply to the real world. We therefore address the limits of current computer vision technology by developing an image recognition model based on the retinal ganglion cell. We have constructed such a model and experimented with the recognition and compression of information, as performed by retinal ganglion cells, using the MNIST handwritten character database.

Sung-Kwan Je, Eui-Young Cha, Jae-Hyun Cho

Scientific Computing Environments (SCE’s) for Imaging in Science Session

A Simple Data Analysis Method for Kinetic Parameters Estimation from Renal Measurements with a Three-Headed SPECT System

We present here a new method for the determination of renal kinetic parameters from time-activity curves (TACs) obtained in dynamic tomographic Nuclear Medicine studies. The method achieves subsecond sampling rates when using three-headed Single Photon Emission Computed Tomography (SPECT) systems. We used three-projection data sets in order to directly reconstruct region-of-interest (ROI) contents and to achieve an accurate description of the TACs. By defining a kinetic model for the behavior of the tracer, we were able to relate each sampling point of the renal TAC to the corresponding point of the plasmatic TAC (input function). We analyzed simulated projections in 81 cases of region segmentation errors: the values of the renal clearances were reproduced within about -12% and +4%. The vascular fractions were reproduced within the measurement errors.

Eleonora Vanzi, Andreas Robert Formiconi
Integrating Medical Imaging into a Grid Based Computing Infrastructure

Since the widespread diffusion of powerful computing systems, Medical Imaging has undergone a profound transformation, providing physicians with new ways to conduct research and perform diagnosis and prevention. In this work a computational environment for medical imaging based on a grid architecture is used to build a medical imaging application. It results from the interaction of scientists devoted to the design and deployment of new tomographic reconstruction techniques, researchers in the field of distributed and parallel architectures, and physicians involved with tomographic imaging. The individual software, hardware and medical components are described. Their integration into a distributed framework provides mutual benefits to both the scientific and the medical community, and represents the basis for the creation of a global, scientific, virtual community that reaches beyond the limits of space, remotely sharing not only physical resources but also experience and knowledge.

Paola Bonetto, Mario Guarracino, Fabrizio Inguglia
Integrating Scientific Software Libraries in Problem Solving Environments: A Case Study with ScaLAPACK

We describe the reference software architecture of parIDL, a PSE for distributed memory parallel computers that integrates the parallel software library ScaLAPACK, needed to develop and test scientific applications related to adaptive optics simulations. It incorporates a run-time support for the execution of the commercial problem solving environment IDL on parallel machines, whose development began in a project with astronomy researchers on the adaptive optics simulation of terrestrial large binocular telescopes.

L. D’Amore, M. R. Guarracino, G. Laccetti, A. Murli
Parallel/Distributed Film Line Scratch Restoration by Fusion Techniques

Many algorithms have been proposed in the literature for digital film restoration; unfortunately, none of them ensures perfect restoration regardless of the image sequence to be restored. Here, we propose an approach to digital scratch restoration based on image fusion techniques for combining relatively well-established distinct techniques. The method has large memory requirements and is computationally intensive; for this reason, we propose parallel versions of the restoration approach, focusing on strategies based on data partitioning and pipelining to achieve good load balancing. The parallel approach also adapts well to distributed execution.

G. Laccetti, L. Maddalena, A. Petrosino
An Interactive Distributed Environment for Digital Film Restoration

The paper presents FESR, an interactive environment enabling collaborative digital film restoration over a digital network by seamlessly connecting the supervisor's and operator's graphical frameworks, a parallel Image Processing server, a video stream server and a DBMS server. It allows a supervisor to remotely define the restoration protocol and to monitor job progress using a metadata-driven navigation interface. The operators use a graphical desktop to interactively steer the parameters of restoration filters and to activate and synchronize batch processing of defect detection and restoration filters on the IP server. The video stream server stages digital frames, while the meta-data server stores defect masks and other film attributes using the MPEG7 formalism for video-segment representation. The system prototype architecture, inspired by component technology, is described.

F. Collura, A. Machì, F. Nicotra

Computer Graphics and Geometric Modeling Workshop (TSCG 2004)

On Triangulations

Triangulations in 2D and 3D are a necessary part of many applications. This paper gives a brief survey of the main triangulations used in computer graphics and computational geometry, then focuses on Delaunay and greedy triangulations in 2D and 3D and presents their most important properties and algorithms. Three applications from the computer graphics and image processing areas are then presented and their results discussed.

Ivana Kolingerová
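The Delaunay triangulations surveyed above are characterized by the empty-circumcircle property; the core primitive in most Delaunay algorithms is the incircle determinant test. A self-contained sketch of that predicate (an illustration, not code from the paper):

```python
def in_circle(a, b, c, d):
    """Return True if point d lies strictly inside the circumcircle of the
    counter-clockwise triangle (a, b, c): the classical 3x3 determinant
    predicate at the heart of Delaunay triangulation algorithms."""
    m = [[x - d[0], y - d[1], (x - d[0]) ** 2 + (y - d[1]) ** 2]
         for x, y in (a, b, c)]
    det = (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
           - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
           + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    return det > 0

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # ccw triangle
print(in_circle(*tri, (0.5, 0.5)))  # True: inside the circumcircle
print(in_circle(*tri, (2.0, 2.0)))  # False: outside
```

Note that robust implementations evaluate this determinant with exact or adaptive-precision arithmetic, since the floating-point sign can be wrong for near-degenerate inputs.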
Probability Distribution of Op-Codes in Edgebreaker

Rapid transmission of 3D mesh models has become important with the use of the Internet and with the increased number of studies on the compression of various aspects of mesh models. Despite extensive studies on Edgebreaker for the compression of mesh topology, the probability distribution of its five op-codes, C, R, E, S, and L, has not yet been rigorously analyzed. In this paper we present the probability distribution of the op-codes, which is useful both for the optimization of compression performance and for the a priori estimation of compressed file size.

Deok-Soo Kim, Cheol-Hyung Cho, Youngsong Cho, Chang Wook Kang, Hyun Chan Lee, Joon Young Park
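The distribution in question can be measured empirically from any Edgebreaker output: an analysis like the one above starts from op-code frequencies in CLERS strings. A minimal sketch (the sample string below is made up for illustration, not taken from the paper):

```python
from collections import Counter

def opcode_distribution(clers):
    """Empirical probability of each Edgebreaker op-code (C, R, E, S, L)
    in a CLERS string produced by compressing a mesh."""
    counts = Counter(clers)
    n = len(clers)
    return {op: counts.get(op, 0) / n for op in "CRESL"}

# A made-up CLERS string, for illustration only:
dist = opcode_distribution("CCRRCRSERLE")
```

Since roughly one C is emitted per vertex and a closed mesh has about twice as many triangles as vertices, C tends to label about half the symbols, which is why Edgebreaker can afford a one-bit code for C.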
Polyhedron Splitting Algorithm for 3D Layer Generation

RP (Rapid Prototyping) is often called Layered Manufacturing because of its layer-by-layer building strategy. Layer building strategies fall into two methodologies: one is based on 2D layers and the other on 3D layers. A 2D layer is simply created by the intersection between the polyhedron and a slicing plane, whereas a 3D layer is created under constraints such as cuttability and manufacturability. We propose a geometric algorithm that uses the features of individual triangles to create 3D layers.

Jaeho Lee, JoonYoung Park, Deok-Soo Kim, HyunChan Lee
Synthesis of Mechanical Structures Using a Genetic Algorithm

This paper proposes a representation for embodiment design and a genetic algorithm suited to the representation. Since embodiment design consists of highly interrelated design processes, how to formalize and integrate these processes has become an important issue. In this paper, embodiment design is modeled as the simultaneous multi-objective optimization of parametric designs for parts and of layout generation for structures. The study thus involves genotypes that adequately represent the phenotypes of the models, so that the genetic algorithm can solve the given problems. We demonstrate the implementation of the genetic algorithm with results applied to a gear equipment design.

In-Ho Lee, Joo-Heon Cha, Jay-Jung Kim, M. -W. Park
Optimal Direction for Monotone Chain Decomposition

The monotone chain is an important concept in computational geometry. A general polygon or polygonal chain can be decomposed into monotone chains. Described in this paper are two algorithms to find an optimal direction with respect to which a polygonal chain can be split into the minimal number of monotone chains. The first, naive algorithm has O(n²) time complexity, while the improved algorithm runs in O(n log n) time, where n is the number of vertices of the input chain. The optimal direction can improve the performance of subsequent geometric processing.

Hayong Shin, Deok-Soo Kim
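For a fixed direction, the minimal monotone decomposition can be counted greedily: a new chain must start at every reversal of the projected vertex order, and the naive algorithm simply repeats this count over candidate directions. A sketch of the per-direction count (an illustration under that reading of the problem, not the paper's code):

```python
def monotone_chain_count(points, direction):
    """Count the minimal number of chains, monotone with respect to
    `direction`, needed to cover the polygonal chain `points`.
    Greedy rule: start a new chain whenever the projected order reverses."""
    dx, dy = direction
    proj = [x * dx + y * dy for x, y in points]
    chains, trend = 1, 0  # trend: 0 unknown, +1 increasing, -1 decreasing
    for a, b in zip(proj, proj[1:]):
        s = (b > a) - (b < a)
        if s == 0:
            continue          # equal projections do not break monotonicity
        if trend == 0:
            trend = s
        elif s != trend:
            chains += 1       # reversal: a new monotone chain begins here
            trend = s
    return chains

zigzag = [(0, 0), (1, 1), (2, 0), (3, 1)]
print(monotone_chain_count(zigzag, (0, 1)))  # 3: every step reverses
print(monotone_chain_count(zigzag, (1, 0)))  # 1: x-monotone as a whole
```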
GTVIS: Fast and Efficient Rendering System for Real-Time Terrain Visualization

The paper presents an improved scheme for the real-time visualization of 3D terrain using Digital Elevation Models (DEM). The presented method is primarily based on the Real-time Optimally Adapting Mesh (ROAM), and boosts ROAM performance by improving the rendering speed, enhancing the visual continuity and achieving frame-rate constancy. Based on the proposed methodology, the Geo-morph Terrain VIsualization System (GTVIS) was created. Experimental results confirm a striking improvement in rendering efficiency, frame-rate constancy and visual continuity, while preserving important features of the terrain (such as sharp peaks and valleys).

Russel A. Apu, Marina L. Gavrilova
Target Data Projection in Multivariate Visualization – An Application to Mine Planning

Visualization is a key issue in multivariate data analysis. Multivariate visualization is an active research topic and many efforts have been made to find suitable and meaningful visual representations. In this paper we present a technique for data projection in multivariate datasets, named Target Data Projection (TDP). Through this technique, a vector is created for each multivariate data item, considering a subset of the available variables. A new scalar variable is generated by projecting those vectors onto a target vector that defines the direction of interest for visual analysis. End-users set up target vectors in order to explore particular relationships by means of application-meaningful projections. Hence, it is possible to map a combination of multivariate data into one scalar variable for graphical representation and interaction. This technique has proved to be very flexible and useful in mine planning, providing valuable information for decision making.

Leonardo Soto, Ricardo Sánchez, Jorge Amaya
Parametric Freehand Sketches

In this paper we present the 2D parametric freehand sketch component of an experimental prototype called GEGROSS (GEsture & Geometric ReconstructiOn based Sketch System). The module implements a gesture alphabet and a calligraphic interface to manage the geometric constraints found in 2D sections, which are later used to perform modeling operations. Several elements are used to implement this module: the geometric kernel ACIS stores model data, the constraint manager 2D DCM handles restrictions, and the CALI library is used to define gestural interfaces. We present a strategy for integrating these tools, and a calligraphic interface we developed to provide dimensional control over freehand sketches. Our system allows users to build simple sketches composed of line segments and arcs, which are automatically tidied and beautified. Proportional and dimensional information on sketched parts is provided by handwriting the corresponding sizes.

Ferran Naya, Manuel Contero, Nuria Aleixos, Joaquim Jorge
Variable Level of Detail Strips

Some multiresolution models offer variable resolution capabilities. This property allows us to display areas with different levels of detail on the same mesh. In this paper we present a continuous multiresolution model that allows variable resolution. An important characteristic of the model is the use of triangle strips both in the data structure and in the rendering stage. The proposed model extends a previous strip-based multiresolution scheme that implements uniform resolution. We present new data structures, algorithms and some experiments that demonstrate the model efficiency.

J. F. Ramos, M. Chover
Bézier Solutions of the Wave Equation

We study polynomial solutions in the Bézier form of the wave equation in dimensions one and two. We explicitly determine which control points of the Bézier solution at two different times fix the solution.

J. V. Beltran, J. Monterde
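Solutions in Bézier form, as studied above, are determined by their control points and evaluated from them; the standard tool is de Casteljau's algorithm, sketched here for scalar control points (illustrative background, not the paper's construction):

```python
def de_casteljau(ctrl, t):
    """Evaluate a Bézier curve with control points `ctrl` at parameter t
    by repeated linear interpolation (de Casteljau's algorithm)."""
    pts = list(ctrl)
    while len(pts) > 1:
        pts = [(1 - t) * a + t * b for a, b in zip(pts, pts[1:])]
    return pts[0]

# A cubic with scalar control points; 2D/3D works the same per coordinate.
print(de_casteljau([0.0, 0.0, 1.0, 1.0], 0.5))  # 0.5
```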
Matlab Toolbox for a First Computer Graphics Course for Engineers

This paper describes a new Matlab toolbox designed for teaching a first Computer Graphics course for Engineering students. The aim of the toolbox is to provide the students with a tool for a comprehensive overview on the fundamentals of Computer Graphics in terms of basic algorithms and techniques. Firstly, the paper discusses the design of such a course taking into account its contents, goals, duration and other constraints. Then, the toolbox architecture is described. Finally the application of this toolbox to educational issues is discussed through some illustrative examples. The system has been successfully used by the authors during the last three years at the University of Cantabria. The main conclusions from this experience are also reported.

A. Gálvez, A. Iglesias, C. Otero, R. Togores
A Differential Method for Parametric Surface Intersection

In this paper, a new method for computing the intersection of parametric surfaces is proposed. In our approach, the problem is formulated as an initial value problem for first-order ordinary differential equations (ODEs), which are integrated numerically. In order to determine the initial value for this system, a simple procedure based on the vector field associated with the gradient of the distance function between points lying on each of the parametric surfaces is described. This procedure yields a starting point on the nearest branch of the intersection curve. The performance of the presented method is analyzed by means of illustrative examples that contain many of the most common features found in parametric surface intersection problems.

A. Gálvez, J. Puig-Pey, A. Iglesias
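The marching idea behind such ODE formulations can be illustrated on implicit surfaces, where the tangent of the intersection curve is the cross product of the two surface normals. The sphere/plane sketch below is an assumption-laden illustration of that principle (the paper itself works with parametric surfaces and a distance-gradient start-point procedure):

```python
import numpy as np

def grad(f, p, h=1e-6):
    """Central-difference gradient of the scalar field f at point p."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)
    return g

def trace_intersection(f, g, p0, step=1e-3, n=1000):
    """Explicit Euler march along the curve f = 0, g = 0: at each point the
    curve tangent is the cross product of the two surface normals."""
    p = np.asarray(p0, dtype=float)
    pts = [p.copy()]
    for _ in range(n):
        t = np.cross(grad(f, p), grad(g, p))
        p = p + step * t / np.linalg.norm(t)
        pts.append(p.copy())
    return np.array(pts)

sphere = lambda p: p @ p - 1.0   # unit sphere, f = 0
plane = lambda p: p[2]           # plane z = 0, g = 0
curve = trace_intersection(sphere, plane, (1.0, 0.0, 0.0))
# The traced points stay close to the unit circle in the plane z = 0.
```

A production tracer would use a higher-order integrator with periodic projection back onto both surfaces, since plain Euler steps drift tangentially off the curve.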
A Comparison Study of Metaheuristic Techniques for Providing QoS to Avatars in DVE Systems

The networked-server architecture has become a de-facto standard for Distributed Virtual Environment (DVE) systems, in which a large set of remote users share a 3D virtual scene. In order to design scalable DVE systems, different approaches have been proposed to keep the DVE system working under its saturation point, maximizing system throughput. Also, in order to provide quality of service to avatars in DVE systems, avatars should be assigned to servers taking into account, among other factors, system throughput and system latency. This highly complex problem is called the quality of service (QoS) problem in DVE systems. This paper proposes two different approaches for solving the QoS problem, based on modern heuristics (simulated annealing and GRASP). Performance evaluation results show that the proposed strategies are able not only to provide quality of service to avatars in a DVE system, but also to keep the system away from the saturation point.

P. Morillo, J. M. Orduña, M. Fernández, J. Duato
Visualization of Large Terrain Using Non-restricted Quadtree Triangulations

This paper presents a set of new techniques oriented towards the real-time visualization of large terrains. These techniques are mainly focused on semi-regular triangulations of non-restricted quadtree terrain representations. The paper shows that triangulations based on non-restricted quadtrees are as simple and efficient as those based on restricted quadtrees, while the new triangulations avoid discontinuity problems among the boundaries of different patches without the need for tree balancing or the addition of extra triangles. Another important feature of the proposed triangulation is that it incorporates an efficient method for building triangle strips and triangle fans for the efficient rendering of the final triangle mesh.

Mariano Pérez, Ricardo Olanda, Marcos Fernández
Boundary Filtering in Surface Reconstruction

One of the methods for 3D data model acquisition is the digitization of a real object followed by surface reconstruction. Many algorithms have been developed over past years, each with its own advantages and disadvantages. For the reconstruction we use the CRUST algorithm by Nina Amenta, which selects surface triangles from the Delaunay tetrahedronization using information from the dual Voronoi diagram. Unfortunately, these candidate surface triangles do not form a manifold, so some further steps are necessary for manifold extraction. In this paper we present some improvements and observations for this step.

Michal Varnuška, Ivana Kolingerová
Image Coherence Based Adaptive Sampling for Image Synthesis

Due to the generality, simplicity and robustness of Monte Carlo methods, as well as the high computational complexity of the global illumination problem, Monte Carlo is a very good choice for synthesizing images that account for global illumination effects. However, the well-known problem in Monte Carlo based methods for global illumination is noise. We explore adaptive sampling as a method to reduce noise. We introduce a coherence distance map, one formalization of image coherence, to guide the adaptive sampling scheme. Based on the coherence distance map, we construct an elegant probability density function that drives Monte Carlo importance sampling to adaptively control the number of required samples per pixel. The proposed algorithm not only improves image quality efficiently, but can also be implemented easily. In addition, our approach is unbiased and thus superior to most earlier adaptive sampling techniques.

Qing Xu, Roberto Brunelli, Stefano Messelodi, Jiawan Zhang, Mingchu Li
A Comparison of Multiresolution Modelling in Real-Time Terrain Visualisation

One of the most important challenges in Computer Graphics is real-time rendering of terrain. This is due to the huge number of polygons used to represent these surfaces. Up to now, several works have appeared that attempt to reduce terrain visualisation time, some of which are based on multiresolution modelling. This method adapts the number of polygons of an object as a function of particular criteria in order to reduce the information that has to be computed for each frame. The ultimate purpose of this work is to compare the most popular multiresolution algorithms to have appeared in recent years, while at the same time examining some of the fundamental concepts underlying real-time terrain visualisation.

C. Rebollo, I. Remolar, M. Chover, J. F. Ramos
Photo-realistic 3D Head Modeling Using Multi-view Images

In this paper, we present a system for reconstructing photo-realistic 3D head models from multi-view images. The proposed system performs two-pass bundle adjustment to reconstruct a photo-realistic 3D head model. The first pass computes several feature points of the target 3D head and then uses these features to modify a generic head into a rough head model. Next, the second-pass bundle adjustment computes a detailed 3D head model. The texture of the head models is obtained from the multiple views. After texturing the reconstructed head models, several preliminary results of photo-realistic 3D head models are demonstrated to verify the proposed system.

Tong-Yee Lee, Ping-Hsien Lin, Tz-Hsien Yang
Texture Mapping on Arbitrary 3D Surfaces

Texture mapping is a common technique in computer graphics for rendering realistic images. Our goal is to achieve distortion-free texture mapping on arbitrary 3D surfaces. To texture 3D models, we propose a scheme that flattens 3D surfaces into a 2D parametric domain. Our method does not require the two-dimensional boundary of the flattened surfaces to be stationary. It consists of three steps: 1) we find high-distortion areas in the 2D parametric domain and a cutting path over these areas, 2) we add virtual points to adaptively find a better boundary of the parametric domain instead of a predefined one, and 3) finally, we apply a well-known smoothing technique for better texture mapping. The proposed scheme can be solved efficiently by a linear system and yields interactive performance. Finally, several preliminary results are demonstrated to verify the proposed scheme.

Tong-Yee Lee, Shaur-Uei Yan
Segmentation-Based Interpolation of 3D Medical Images

This paper introduces a new interpolation algorithm based on image segmentation. First, the algorithm obtains the regions of air, soft tissue and skeleton by segmenting the images. Second, it uses matching interpolation within regions of the same density, and uses the scaled region size as interpolation data when interpolating between regions of different density. The new image basically satisfies the requirements of medical image interpolation. Compared with linear interpolation, the new algorithm greatly improves image quality. The interpolation can be used effectively to construct 3D volume models.

Zhigeng Pan, Xuesong Yin, Guohua Wu
A Bandwidth Reduction Scheme for 3D Texture-Based Volume Rendering on Commodity Graphics Hardware

In this paper, we propose a bandwidth-effective volume rendering scheme which subdivides the volume into sub-volumes and transmits them to the texture units in visibility order. Each sub-volume is rendered on the graphics hardware in the same manner as the original volume, and the corresponding sub-image is blended in the alpha blending unit. The sub-volume oriented processing improves cache efficiency and allows empty space skipping. Moreover, it is capable of rendering volume datasets that do not fit into texture memory. Simulations show that the proposed scheme is effective for 3D texture-based volume rendering on commodity graphics hardware, reducing memory bandwidth by up to a factor of 30 compared with the traditional method.

Won-Jong Lee, Woo-Chan Park, Jung-Woo Kim, Tack-Don Han, Sung-Bong Yang, Francis Neelamkavil
An Efficient Image-Based 3D Reconstruction Algorithm for Plants

Image-based 3D reconstruction techniques can build models quickly and easily. In this paper, we propose an efficient image-based method for plant model reconstruction using a modified KLT image matching method. The modification makes the new algorithm search for matching points more accurately and helps users add matching points manually. We present a method to extract 3D information from the matched images. An algorithm to unify different coordinate systems is also introduced. Finally, we discuss an easy way to texture-map the 3D vertex model.

Zhigeng Pan, Weixi Hu, Xinyu Guo, Chunjiang Zhao
Where the Truth Lies (in Automatic Theorem Proving in Elementary Geometry)

In this paper we use a new integrated theorem prover (GDI), co-developed by the second author, to discuss a geometric result due to Mac Lane, the 8_3 theorem, which has been declared true by some authors, while others claim it is false. Our approach is based on Gröbner basis computations and illustrates the controversial concept of truth in the algebraic automatic theorem proving model, as well as some of the new features provided by GDI. The potential applications to computer graphics of the ideas behind these rather unique features are also briefly discussed.

T. Recio, F. Botana
Helical Curves on Surfaces for Computer-Aided Geometric Design and Manufacturing

This paper introduces a new method for generating the helical tool-paths for both implicit and parametric surfaces. The basic idea is to describe the helical curves as the solutions of an initial-value problem of ordinary differential equations. This system can be obtained from the fact that the helical curve exhibits a constant angle φ with an arbitrary given vector D, which is assumed to be the axis of the helical curve. The resulting system of differential equations is then integrated by applying standard numerical techniques. The performance of the proposed method is discussed by means of some illustrative examples of helical curves on parametric and implicit surfaces.
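The constant-angle idea can be sketched in a few lines. The hypothetical fragment below (not the authors' scheme, which uses standard ODE integrators) traces a helical curve on an implicit surface f(x) = 0 by choosing at each point the unit tangent that lies in the tangent plane and makes the angle φ with the axis direction D, then taking an Euler step and projecting back onto the surface with one Newton correction:

```python
import numpy as np

def helical_path(f, grad_f, x0, d, phi, h=0.01, steps=1000):
    """Trace a helical curve on the implicit surface f(x) = 0.

    At each point the unit tangent t is chosen in the tangent plane
    so that t . d = cos(phi); the point is advanced by an Euler step
    and pulled back onto the surface by one Newton correction.
    """
    d = d / np.linalg.norm(d)
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        n = grad_f(x)
        n = n / np.linalg.norm(n)
        u = d - (d @ n) * n          # axis projected onto tangent plane
        pu = np.linalg.norm(u)
        u = u / pu
        w = np.cross(n, u)           # completes the tangent-plane basis
        a = np.cos(phi) / pu         # ensures t . d = cos(phi)
        b = np.sqrt(max(0.0, 1.0 - a * a))
        t = a * u + b * w            # unit tangent of the helical curve
        x = x + h * t
        g = grad_f(x)                # one Newton step back onto f = 0
        x = x - f(x) * g / (g @ g)
        path.append(x.copy())
    return np.array(path)

# Example: a 60-degree helix on the unit cylinder about the z-axis.
cyl  = lambda x: x[0]**2 + x[1]**2 - 1.0
gcyl = lambda x: np.array([2 * x[0], 2 * x[1], 0.0])
path = helical_path(cyl, gcyl, [1.0, 0.0, 0.0],
                    np.array([0.0, 0.0, 1.0]), phi=np.pi / 3)
```

On the cylinder this reproduces the classical helix: the traced points stay on the surface and the climb rate along the axis equals cos φ per unit of arc length.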

J. Puig-Pey, A. Gálvez, A. Iglesias
An Application of Computer Graphics for Landscape Impact Assessment

This communication shows a procedure aimed at assisting in the assessment of the landscape impact caused by new linear infrastructure, such as motorways. The computer simulation has demanded the idealisation of the phenomenon to be graphically represented, its theoretical model and the functional analysis of the computational tool that can aid in finding a solution. The core of the problem is that there is a non-finite set of landscapes to be considered (and analysed); however, according to the proposed model, if suitable Computer Graphics libraries are used, it is possible to automatically produce a reduced catalogue of realistic images, each simulating the effect that the new infrastructure will have on the environment, precisely at the sites where the landscape is most vulnerable.

César Otero, Viola Bruschi, Antonio Cendrero, Akemi Gálvez, Miguel Lázaro, Reinaldo Togores
Fast Stereo Matching Using Block Similarity

Many engineering applications related to computer vision use stereo vision algorithms and usually require fast, near-real-time processing. In this paper, we present a new area-based stereo matching technique that provides faster processing by using block-based matching. Our algorithm employs block-based matching followed by pixel-based matching. By applying block-based matching hierarchically, representative disparity values are assigned to each block. From these rough results, a dense disparity map is acquired with a very short search range. This procedure greatly reduces processing time. We test our matching algorithm on various types of images and show the good performance of our stereo matching algorithm in terms of both running time and correctness.
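The coarse, block-level stage of such a scheme can be sketched as follows. This is a generic sum-of-absolute-differences (SAD) block matcher, offered as an assumption about what the block stage looks like rather than the authors' implementation; it assigns one representative disparity per block, which a pixel-level pass would then refine over a short search range:

```python
import numpy as np

def block_disparity(left, right, block=8, max_disp=16):
    """Coarse block-level disparities by SAD matching.

    Each block of the left image is compared against horizontally
    shifted blocks of the right image; the shift with the minimal
    sum of absolute differences is kept as that block's disparity.
    """
    h, w = left.shape
    disp = np.zeros((h // block, w // block), dtype=int)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(int)
            best, best_d = None, 0
            for d in range(min(max_disp, x) + 1):
                cand = right[y:y + block, x - d:x - d + block].astype(int)
                sad = np.abs(ref - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp
```

Because only one disparity per block is searched over the full range, the subsequent dense pass can restrict itself to a few pixels around the block value, which is where the speed-up comes from.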

Han-Suh Koo, Chang-Sung Jeong
View Morphing Based on Auto-calibration for Generation of In-between Views

Since image morphing methods do not account for changes in viewpoint or object pose, it may be difficult for them to express even simple 3D transformations. Although previous view morphing can synthesize changes in viewpoint, it requires known camera viewpoints for the automatic generation of in-between views and user-specified control points for post-warping. This paper presents a new morphing algorithm that can automatically generate in-between scenes by using auto-calibration. Our method rectifies two images based on the fundamental matrix, computes a morph by linear interpolation with a bilinear disparity map, and then generates in-between views. The proposed method has the advantage that knowledge of 3D shape and camera settings is unnecessary.

Jin-Young Song, Yong-Ho Hwang, Hyun-Ki Hong

Virtual Reality in Scientific Applications and Learning (VRSAL 2004) Workshop

Immersive Displays Based on a Multi-channel PC Clustered System

As virtual reality technologies have become feasible in the market, new arcade games tend to provide the user with an immersive game environment. This paper proposes a method to generate a wide field-of-view game display on an arbitrary display surface. The method consists of two parts: a multi-channel clustered system and a multi-projection display system. The multi-channel clustered system is a scalable system which generates a wide field-of-view game display in real time. The multi-projection display system projects this wide field-of-view game display onto an arbitrary display surface, eliminating the distortion caused by the shape of the display surface.

Hunjoo Lee, Kijong Byun
Virtual Reality Technology Applied to Simulate Construction Processes

This paper describes a didactic application that is part of a research project whose main aim is to develop a computer-aided system to assist design and construction processes. It is based on the visual simulation of construction activities. Geometric Modeling and Virtual Reality techniques are used to visualize the design process and to define user-friendly interfaces for accessing construction information, which could prove useful to Civil Engineering professionals. As a first step, a prototype was developed that serves as a didactic tool for Civil Engineering students in disciplines concerned with building construction. The construction of a double brick wall is the case studied. Using the virtual three-dimensional model of the wall it is possible to show, in an interactive way, the sequence of the construction process and to observe the configuration of the building components in detail from any point of view.

Alcínia Zita Sampaio, Pedro Gameiro Henriques, Pedro Studer
Virtual Reality Applied to Molecular Sciences

The paper describes the prototype implementation of a portal for a Virtual Reality application to Molecular Science on a Grid environment, called VMSLab-G, and presents two running virtual experiments. Some aspects related to the applications of VMSLab-G to research and education are also outlined.

Osvaldo Gervasi, Antonio Riganelli, Antonio Laganà
Design and Implementation of an Online 3D Game Engine

This paper proposes Dream3D, an online 3D game engine. We analyze the requirements for building MMORPGs together with the techniques to satisfy them, and classify these techniques into four categories: 3D rendering, animation, server, and network techniques. We design and implement Dream3D to provide all the required functionality. Following this classification, Dream3D consists of four subsystems: a rendering engine, an animation engine, a server engine, and a network engine. For each subsystem, we propose an implementation model that satisfies the functionality requirements.

Hunjoo Lee, Taejoon Park
Dynamically Changing Road Networks – Modelling and Visualization in Real Time

In this paper we present a new concept for a driving simulator database. The database consists of a three-dimensional road network which, unlike in conventional databases, is not fixed: the parts lying outside the viewing range of the driver can be changed during simulation. Changes in the road network are initiated either interactively by the researcher or automatically by the system based on predefined conditions. Conventional visualization methods are not usable for such a road network; therefore we present new visualization algorithms for real-time graphics. The visualization is based on a layered coarsening of the geometrical representation. The algorithms presented in this paper guarantee a frame rate of at least 60 fps on standard PCs. The concept is implemented in a high-fidelity simulator and is currently used for psychological research on traffic and driver behaviour.

Christian Mark, Armin Kaußner, Martin Grein, Hartmut Noltemeier
EoL: A Web-Based Distance Assessment System

An implementation of a Web-based distance assessment system, called EoL, developed and used at our University is presented. EoL is based on the outcome of the DASP project, devoted to the evaluation of the competencies acquired by trainees during in-company placements in the Leonardo da Vinci programme of the European Commission for lifelong learning. We discuss the architecture of the system and the related methodological aspects, and present the results obtained.

Osvaldo Gervasi, Antonio Laganà
Discovery Knowledge of User Preferences: Ontologies in Fashion Design Recommender Agent System

Solutions for filtering the WWW exist, but they are hampered by the difficulty of discovering knowledge of user preferences in such a dynamic environment. We explore an ontological approach to discovering knowledge of user preferences in a fashion design recommender agent system. Information filtering can be used for this discovery and is therefore a key technology for the construction of personalized recommender systems. In this paper, we focus on the combined application of collaborative filtering and content-based filtering to improve performance, and we validate our Web-based fashion design recommender agent system with respect to the discovered knowledge of user preferences in online experiments. Design merchandising may thus meet the consumer's needs more exactly and easily.

Kyung-Yong Jung, Young-Joo Na, Dong-Hyun Park, Jung-Hyun Lee
When an Ivy League University Puts Its Courses Online, Who’s Going to Need a Local University?

In recent years there has been prolific growth in online courses, primarily provided by “less traditional” university providers seeking to tap into vast potential local and overseas markets. These universities are trying to overcome depleted government funding by taking advantage of the ease of Internet access. Traditional, internationally renowned elite universities are cautiously monitoring this trend but have not jumped into the fray, preferring to rely on their own unique success factors in providing quality on-campus education. However, they have not dismissed this mode of delivery. The provision of both on-campus and online courses by internationally renowned elite universities does not necessarily sound the death knell for other providers, especially the elite online providers/universities in other countries. This paper argues that there are important survival factors intrinsic to the home universities within a country – the traditional on-campus providers – so that international universities taking a share of this expanding market will not have as significant an impact. The onerous and challenging implications of implementing quality online delivery for teaching and learning in education and training are also considered.

Matthew C. F. Lau, Rebecca B. N. Tan

Web-Based Learning Session

Threads in an Undergraduate Course: A Java Example Illuminating Different Multithreading Approaches

Multithreading is a fundamental approach to expressing parallelism in programs. Since Java is emerging as the de facto standard language for platform-independent software development in higher education, there is a need to teach multithreading in the context of Java. We use a simple problem from scientific computing to explain two different multithreading approaches to second-year students. More precisely, a simple boundary value problem is considered, which makes visible the differences between native Java threads and the OpenMP interface. Thus, the students are able to appreciate the respective merits and drawbacks of a thread package that is integrated into the Java programming language and of an approach combining compiler support with library calls.

H. Martin Bücker, Bruno Lang, Hans-Joachim Pflug, Andre Vehreschild
A Comparison of Web Searching Strategies According to Cognitive Styles of Elementary Students

The popularization of the Internet has brought about easy access to a huge amount of information, yet it is not easy to find the information one needs. Information users should therefore have appropriate information search competencies to gather, analyze, and utilize information efficiently. In general, people have their own unique information search strategies, and consequently the retrieved results also vary significantly from one person to another. This paper describes an experimental study on the web searching strategies of elementary students and analyzes the variation in their searching strategies by examining their individual personalities, specifically their cognitive styles.

Hanil Kim, Miso Yun, Pankoo Kim
The Development and Application of a Web-Based Information Communication Ethics Education System

This paper introduces a Web-based information communication ethics education system designed and implemented specifically for elementary school students. It is distinguished by the following characteristics. First, the system allows students to learn information communication ethics by discussion, rather than providing information for them to memorize; for this purpose, the system's design is based on the project-based model (PBM). Second, the system supports various types of interaction: students can exchange their ideas with other students, teachers, as well as experts in the field, which encourages students to organize small groups for themselves. Third, the system helps students find materials that they can apply to real life.

Suk-Ki Hong, Woochun Jun
An Interaction Model for Web-Based Learning: Cooperative Project

The purpose of this paper is to propose an interaction model, Cooperative Project, and to develop a Web-based education system based on this model. Interaction is essential to education, and it is especially emphasized in Web-based instruction. Cooperative Project is devised to promote interaction: with it, learners can cooperate smoothly and complete a project in a Web-based environment where cooperation between learners can be achieved effectively. The major advantage of this model is that it can motivate learners, through positive interdependence, to pursue a project until it is completed. Furthermore, this research shows that Cooperative Project can be applied in Web-based instruction to maximize the educational benefits.

Eunhee Choi, Woochun Jun, Suk-Ki Hong, Young-Cheol Bang
Observing Standards for Web-Based Learning from the Web

Learning technology standardization is a lively process that will last for years until a clear and precise set of recommendations is identified. So far, there is only one official standard and a few final specifications in different areas. Most documents are working drafts that are still in development, awaiting approval and, most importantly, acceptance by the global e-learning community. Currently, many institutions collaborate to produce new updates almost every week. In this situation it is really hard to become familiar with the learning technology standardization process and, even harder, to keep up to date. The Workshop for Learning Technologies within the European Committee for Standardization (CEN/ISSS WS-LT) produced the Learning Technology Standards Observatory, a web portal where those interested in this process can become familiar with it, learn the main differences between related specifications, follow what is going on and what is planned for the near future, and furthermore participate in the process itself. This paper presents the main functionalities and content areas of this site and how users can benefit from its use.

Luis Anido, Judith Rodríguez, Manuel Caeiro, Juan Santos

Matrix Approximations with Applications to Science, Engineering, and Computer Science Workshop

On Computing the Spectral Decomposition of Symmetric Arrowhead Matrices

The computation of the spectral decomposition of a symmetric arrowhead matrix is an important problem in applied mathematics [10]. It is also the kernel of divide-and-conquer algorithms for computing the Schur decomposition of symmetric tridiagonal matrices [2,7,8] and diagonal-plus-semiseparable matrices [3,9]. The eigenvalues of symmetric arrowhead matrices are the zeros of a secular equation [5], and some iterative algorithms have been proposed for their computation [2,7,8]. An important issue in these algorithms is the choice of the initial guess. Let α_1 ≤ α_2 ≤ … ≤ α_{n−1} be the entries of the main diagonal of a symmetric arrowhead matrix of order n. Denoting by λ_i, i = 1, …, n, the corresponding eigenvalues, it is well known that α_i ≤ λ_{i+1} ≤ α_{i+1}, i = 1, …, n−2. In this paper we propose an algorithm for computing each eigenvalue λ_i, i = 1, …, n, of a symmetric arrowhead matrix with monotonic quadratic convergence, independent of the choice of the initial guess in the interval ]α_{i−1}, α_i[. Although the eigenvalues of a symmetric arrowhead matrix can be computed efficiently, a loss of orthogonality can occur in the computed matrix of eigenvectors [2,7,8]. In this paper we also propose a simple, stable and efficient way to compute the eigenvectors of arrowhead matrices.
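The secular-equation and interlacing structure the abstract relies on can be made concrete with a sketch. The fragment below uses plain bisection in each interlacing interval, purely to illustrate the structure; the paper's own algorithm achieves monotonic quadratic convergence, which bisection does not. It assumes the diagonal entries are distinct and the arrow entries nonzero:

```python
import numpy as np

def arrowhead_eigvals(alpha, z, gamma, tol=1e-12):
    """Eigenvalues of the symmetric arrowhead matrix
        A = [[diag(alpha), z], [z^T, gamma]]
    as the zeros of the secular function
        f(lam) = gamma - lam - sum_i z_i^2 / (alpha_i - lam),
    bracketed by the interlacing of the eigenvalues with the alpha_i.
    """
    idx = np.argsort(alpha)
    alpha = np.asarray(alpha, dtype=float)[idx]
    z = np.asarray(z, dtype=float)[idx]

    def f(lam):
        return gamma - lam - np.sum(z**2 / (alpha - lam))

    # f decreases from +inf to -inf on each open interval between
    # consecutive alpha_i and on the two unbounded outer intervals
    # (bounded here by a crude Gershgorin-style radius r).
    r = np.abs(gamma) + np.sum(np.abs(z)) + np.max(np.abs(alpha)) + 1.0
    lo = np.concatenate(([-r], alpha))
    hi = np.concatenate((alpha, [r]))
    eps = 1e-10
    eigs = []
    for a, b in zip(lo + eps, hi - eps):
        while b - a > tol:
            m = 0.5 * (a + b)
            if f(m) > 0.0:
                a = m
            else:
                b = m
        eigs.append(0.5 * (a + b))
    return np.array(eigs)
```

Each interval contains exactly one zero of f, which is why a good initial guess inside ]α_{i−1}, α_i[ matters for the faster iterations discussed in the paper.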

Fasma Diele, Nicola Mastronardi, Marc Van Barel, Ellen Van Camp
Relevance Feedback for Content-Based Image Retrieval Using Proximal Support Vector Machine

In this paper, we present a novel relevance feedback algorithm for content-based image retrieval using the PSVM (Proximal Support Vector Machine). The PSVM seeks the optimal separating hyperplane by “regularized least squares”. The obtained hyperplane comprises the positive and negative “proximal planes”. We interpret the proximal vectors on the proximal planes as representatives among the training samples, and propose to use the distance from the positive proximal plane as a measure of image dissimilarity. In order to reduce the computational time for relevance feedback, we introduce “expanded sets” derived from a pre-computed dissimilarity matrix, and apply the feedback algorithm to these expanded sets rather than to the entire image database, while preserving a comparable precision rate. We demonstrate the efficacy of the proposed scheme using unconstrained image databases obtained from the Web.

YoungSik Choi, JiSung Noh
Orthonormality-Constrained INDSCAL with Nonnegative Saliences

INDSCAL is a specific model for the simultaneous metric multidimensional scaling (MDS) of several data matrices. In the present work the INDSCAL problem is reformulated and studied as a dynamical system on the product manifold of orthonormal and diagonal matrices. The problem of fitting the INDSCAL model to the data is solved. The resulting algorithms are globally convergent. Numerical examples illustrate their application.

Nickolay T. Trendafilov
Optical Flow Estimation via Neural Singular Value Decomposition Learning

The recent contribution [9] gave a unified view of four neural-network-learning-based singular value decomposition algorithms, along with some analytical results that characterize their behavior. In that paper, no attention was paid to the specific integration of the learning equations, which appear in the form of first-order matrix-type ordinary differential equations on the orthogonal group or on the Stiefel manifold. The aim of the present paper is to consider a suitable integration method based on geometric integration theory. The obtained algorithm is applied to optical flow computation for motion estimation in image sequences.

Simone Fiori, Nicoletta Del Buono, Tiziano Politi
Numerical Methods Based on Gaussian Quadrature and Continuous Runge-Kutta Integration for Optimal Control Problems

This paper provides a numerical approach for solving optimal control problems governed by ordinary differential equations. A continuous extension of an explicit, fixed step-size Runge-Kutta scheme is used to approximate the state variables; moreover, the objective function is discretized by means of Gaussian quadrature rules. The resulting scheme represents a nonlinear programming problem which can be solved by optimization algorithms. To test the proposed method, it is applied to several different problems.
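The quadrature side of this discretization is easy to illustrate. The sketch below (a generic fragment, not the paper's code) evaluates an integral cost functional J = ∫₀ᵀ g(t) dt with Gauss-Legendre nodes mapped from [-1, 1] to [0, T]; in the paper's setting g would involve the state values supplied by the continuous Runge-Kutta extension at the quadrature nodes:

```python
import numpy as np

def quad_objective(g, T, n_nodes=5):
    """Gauss-Legendre discretization of J = integral of g over [0, T]."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)  # nodes/weights on [-1, 1]
    t = 0.5 * T * (x + 1.0)                          # map nodes to [0, T]
    return 0.5 * T * np.sum(w * g(t))                # scaled weighted sum
```

With n nodes the rule is exact for polynomial integrands up to degree 2n − 1, which is what makes it attractive for discretizing smooth cost functionals.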

Fasma Diele, Carmela Marangi, Stefania Ragni
Graph Adjacency Matrix Associated with a Data Partition

A frequently recurring problem in several applications is to compare two or more data sets and evaluate their level of similarity. In this paper we describe a technique to compare two data partitions of different data sets. The comparison is obtained by means of matrices, called Graph Adjacency Matrices, which represent the data sets. A match coefficient then returns an estimate of the level of similarity between the data sets.
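One common way to realize this kind of matrix-based comparison can be sketched as follows. This is a generic co-membership construction with a Rand-index-style agreement score, offered as an illustration of the idea rather than the paper's specific Graph Adjacency Matrix and match coefficient:

```python
import numpy as np

def comembership(labels):
    """Adjacency-style matrix for a partition: entry (i, j) is 1
    when items i and j fall in the same block."""
    l = np.asarray(labels)
    return (l[:, None] == l[None, :]).astype(int)

def match_coefficient(labels_a, labels_b):
    """Fraction of item pairs on which two partitions agree
    (co-grouped in both, or separated in both)."""
    A, B = comembership(labels_a), comembership(labels_b)
    iu = np.triu_indices(len(labels_a), k=1)
    return np.mean(A[iu] == B[iu])
```

Because the matrices depend only on co-membership, the coefficient is insensitive to how the blocks are labelled: two partitions that group the items identically score 1 regardless of label names.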

Giuseppe Acciani, Girolamo Fornarelli, Luciano Liturri
A Continuous Technique for the Weighted Low-Rank Approximation Problem

This paper is concerned with the problem of approximating a target matrix with a matrix of lower rank with respect to a weighted norm. Weighted norms can arise in several situations: when some of the entries of the matrix are not observed, or when entries need not be treated equally. A gradient flow approach for solving weighted low-rank approximation problems is provided. This approach allows the treatment of both real and complex matrices and exploits some important features of the approximating matrix that optimization techniques do not use. Finally, some numerical examples are provided.
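The objective can be made concrete with a small sketch. The fragment below minimizes the entrywise-weighted error of a rank-r factorization by plain gradient descent on the factors; this discrete iteration is only a crude stand-in for the continuous gradient flow studied in the paper, and the factored parametrization is an assumption of this sketch:

```python
import numpy as np

def weighted_lowrank(A, W, r, lr=0.05, iters=5000, seed=0):
    """Rank-r approximation of A minimizing the weighted error
    sum_ij W_ij * (A_ij - (U V^T)_ij)^2, by gradient descent on U, V.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    U = 0.01 * rng.standard_normal((m, r))
    V = 0.01 * rng.standard_normal((n, r))
    for _ in range(iters):
        R = W * (U @ V.T - A)                      # weighted residual
        U, V = U - lr * R @ V, V - lr * R.T @ U    # simultaneous update
    return U @ V.T
```

Setting an entry of W to zero removes it from the objective, which is how missing observations are handled in the weighted formulation.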

Nicoletta Del Buono, Tiziano Politi

Spatial Statistics and Geographical Information Systems: Algorithms and Applications

A Spatial Multivariate Approach to the Analysis of Accessibility to Health Care Facilities in Canada

The paper discusses an analytical framework for the specification of a multivariate spatial model of the accessibility of health care facilities to the Canadian population. The model draws on simple economic theory to define an optimum level of service provision, based on the demand-supply equilibrium. The focus is on the demand-side model: the demand is represented by the population requiring the service, and since the population is spatially distributed over the study region, accessibility of the service becomes the key variable. A set of modules forms the entire model, the main features of which are the determination of the potential origin region and the specification of an efficient spatial model that explicitly incorporates local factors.

Stefania Bertazzon
Density Analysis on Large Geographical Databases. Search for an Index of Centrality of Services at Urban Scale

Geographical databases are now available that contain detailed, georeferenced data on population, commercial activities, business, transport and services at the urban level. Such data allow urban phenomena to be examined at a very detailed scale, but also require new methods for the analysis, comprehension and visualization of spatial phenomena. In this paper a density-based method for extracting spatial information from large geographical databases is examined and the first results of its application at the urban scale are presented. Kernel Density Estimation is used as a density-based technique to detect clusters in spatial data distributions. GIS and spatial analytical methods are examined to detect areas of high service supply in an urban environment. The analysis aims at identifying clusters of services in the urban environment and at verifying the correspondence between urban centres and high levels of service.
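Kernel Density Estimation over point locations can be sketched in a few lines. The fragment below uses the quartic (biweight) kernel that is common in point-pattern GIS work; the choice of kernel and bandwidth here is an assumption of the sketch, not taken from the paper:

```python
import numpy as np

def kernel_density(points, grid, h):
    """2D kernel density estimate at each grid location.

    Uses the quartic (biweight) kernel with bandwidth h:
    K(u) = (3/pi) * (1 - u^2)^2 for u < 1, else 0, so that the
    estimate integrates to one per point.
    """
    d2 = np.sum((grid[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    u2 = d2 / h ** 2
    k = np.where(u2 < 1.0, (1.0 - u2) ** 2, 0.0)
    return (3.0 / np.pi) * k.sum(axis=1) / (len(points) * h ** 2)
```

Peaks of the resulting surface mark concentrations of points (here, services); the bandwidth h controls how broadly each point's influence is smeared, and therefore the scale at which clusters appear.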

Giuseppe Borruso, Gabriella Schoier
An Exploratory Spatial Data Analysis (ESDA) Toolkit for the Analysis of Activity/Travel Data

Recent developments in geographic information systems (GIS) and increasing availability of micro-data for cities present an environment that encourages innovations in approaches to visualizing and exploring outcomes of urban processes. We describe an object-oriented, GIS-based environment that facilitates visualization, exploration, and description of household level activity-travel behaviour. The activity pattern of a sample household from the 1994/1995 Portland Household Activity and Travel Behaviour Survey is used in the discussion to demonstrate the current capabilities of our system.

Ronald N. Buliung, Pavlos S. Kanaroglou
Using Formal Ontology for Integrated Spatial Data Mining

With the increasing amount of data available on geographic space, spatial data mining has attracted much attention in geographic information systems (GIS). In contrast to the prevalent research effort of developing new algorithms, there has been a lack of effort to re-use existing algorithms for varying domains and tasks. Researchers have paid little attention to the controlling factors that guide the modification of algorithms to suit differing problems. In this study, ontology is examined as a means to customize algorithms for different purposes. We also propose a conceptual framework for spatial data mining driven by formal ontology. The case study demonstrates that formal ontology enables algorithms to reflect concepts implicit in the domain and to adapt to the user's view, while avoiding the repeated effort of developing new algorithms.

Sungsoon Hwang
G.I.S. and Fuzzy Sets for the Land Suitability Analysis

This paper reports on uncertainty in defining boundaries, which assumes an institutional significance when transposed into planning prescriptions. Every discipline involved in environmental planning uses different approaches to represent its own vision of reality. Geological sciences or hydraulics evaluate risks with consistent mathematical models which differ markedly from the nonlinear models employed in the field of ecology, and at the same time information about the significance and value of cultural heritage in a given environment does not easily correspond to a value attribution. These questions represent an interesting field of research, related to the different character of information deriving from different disciplinary approaches and to the most appropriate way of combining that information. Different ways of managing values correspond to different ways of giving information. The result is a set of discrete representations of the physical space which correspond to a set of different values, referring to areas which are considered homogeneous according to each disciplinary point of view but are very difficult to combine into landscape units across all the disciplines. The present paper illustrates a reflection on a G.I.S. application in a land suitability study of a sub-regional area of Southern Italy. The emerging questions are related to the need to combine the contributions of all the environmental information, which is represented at different scales, with different interpretative models, with different precision in the identification of landscape units, etc.

Beniamino Murgante, Giuseppe Las Casas
Intelligent Gis and Retail Location Dynamics: A Multi Agent System Integrated with ArcGis

The main step towards building “intelligent” GIS is to connect them with spatial simulation models. Multi-agent systems (MAS) make it possible to represent, in computer code, the behaviour of entities operating in a given environment and the system dynamics that derive from the interactions of such agents. We integrated a MAS into a GIS (through the VBA programming language); through a friendly interface, the model can be modified during the simulation by adding individual behavioural rules and/or new typologies of agents. In our urban MAS, two typologies of agents are defined, retail users and retail entrepreneurs, interacting according to a spatial demand-supply matching mechanism. We describe a prototype application of a model aimed at simulating the dynamics of retail location in an urban system.

S. Lombardo, M. Petri, D. Zotta
ArcObjects Development in Zone Design Using Visual Basic for Applications

The Modifiable Areal Unit Problem (MAUP) is commonplace in the realm of geographical analysis. The MAUP presents issues concerning the scale and configuration of geographic partitions that are representative of finer-resolution spatial data. To date, these problems cannot be resolved completely; they can only be manipulated. This “manipulation” can be accomplished using an Automated Zoning Procedure (AZP). In theory, this algorithm amalgamates spatial partitions into a set of zones that are optimized for a function of interest, so that an optimal result is achieved regardless of spatial configuration. Using a Geographic Information Systems (GIS) software environment, an Automated Zoning Procedure was constructed for ArcGIS 8.3 desktop software using ArcObjects development with Visual Basic for Applications (VBA). The fully functional program represents a deterministic, global optimization approach that aggregates n zones into a smaller number m of zones, where Moran’s I statistic of spatial autocorrelation is the primary function under optimization.
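For reference, Moran’s I, the objective function optimized by the zone-design procedure described above, can be computed directly from a spatial weights matrix. The sketch below (plain Python, not the authors’ VBA/ArcObjects code) illustrates the statistic on a toy four-zone example:

```python
def morans_i(values, weights):
    """Moran's I spatial autocorrelation for one attribute over n zones.

    values  -- attribute of interest, one entry per zone
    weights -- n x n spatial weights matrix (e.g. 1 for adjacent zones, else 0)
    """
    n = len(values)
    mean = sum(values) / n
    z = [v - mean for v in values]                        # deviations from the mean
    num = n * sum(weights[i][j] * z[i] * z[j]
                  for i in range(n) for j in range(n))    # n * sum_ij w_ij z_i z_j
    den = sum(map(sum, weights)) * sum(d * d for d in z)  # W * sum_i z_i^2
    return num / den

# Four zones in a row (1-2-3-4 adjacency) with spatially clustered values
w = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(round(morans_i([1.0, 1.0, 9.0, 9.0], w), 4))   # → 0.3333
```

Positive values indicate positive spatial autocorrelation (similar values cluster in adjacent zones), which is why the AZP can use the statistic as its fitness function.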

Sergio Palladini
Searching for 2D Spatial Network Holes

Research involving different forms of networks, such as internet networks, social networks, and cellular networks, has increasingly become an important field of study. From this work, a variety of different scaling laws have been discovered. However, these aspatial laws, stemming from graph theory, often do not apply to spatial networks. When searching for network holes, results from graph theory frequently do not correlate with 2D spatial holes that enforce planarity. We present a general approach for finding holes in a 2D spatial network, and in particular for a network representing street centrelines of an area in Washington, D.C. The methodology involves finding graph holes that can be restricted to 2D spatial holes by examining topological relationships between network features. These spatial network holes gain significance as the number of edges encompassing the hole and the length of those edges increase. For this reason, our approach classifies the holes into different sets based on the number of edges found and the length of those edges. The results of this application provide valuable insights into the nature of the network, highlighting areas that we know from experience are poorly connected and thus suffer from low accessibility.

Femke Reitsma, Shane Engel
Extension of Geography Markup Language (GML) for Mobile and Location-Based Applications

Geography Markup Language (GML) has been extended for mobile and location-based applications. The extension comprises three schemas: a voice schema, a tracking schema, and a POI (Point Of Interest) schema. To handle the extension effectively, we have also designed a Spatial XQuery language and its processing modules. Because our work is based on a spatial database system, Spatial XQuery statements are first translated into SQL statements. By building on an existing spatial database system, we can make use of its existing facilities, such as query optimization, spatial indexes, concurrency control, and crash recovery. The translation of Spatial XQuery into SQL is explained using examples. Because the results from the spatial database system are returned as tables, they must be translated back into XML. A working example of the proposed system as an emergency support system is also presented. Prospective application areas include almost all mobile and location-based systems, such as LBS (location-based services), POI systems, tracking systems, ubiquitous systems, and distributed control and management systems.

Young Soo Ahn, Soon-Young Park, Sang Bong Yoo, Hae-Young Bae
A Clustering Method for Large Spatial Databases

The rapid developments in the availability of, and access to, spatially referenced information in a variety of areas have induced the need for better analysis techniques to understand the various phenomena. In particular, spatial clustering algorithms, which group similar spatial objects into classes, can be used to identify areas sharing common characteristics. The aim of this paper is to present a density-based algorithm for the discovery of clusters in large spatial data sets, a modification of a recently proposed algorithm. It is applied to a real data set concerning homogeneous agricultural environments.
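The paper’s modified algorithm is not reproduced here, but the density-based idea it builds on can be illustrated with a minimal DBSCAN-style sketch (illustrative only; the function name and parameters below are our own, not the authors’):

```python
from math import dist

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cluster = 0
    neighbours = lambda i: [j for j in range(len(points))
                            if dist(points[i], points[j]) <= eps]
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = neighbours(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # noise (may be re-labelled as a border point)
            continue
        labels[i] = cluster
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # former noise becomes a border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            nb = neighbours(j)
            if len(nb) >= min_pts:    # core point: keep expanding the cluster
                queue.extend(nb)
        cluster += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)]
print(dbscan(pts, eps=2.0, min_pts=2))   # → [0, 0, 0, 1, 1, -1]
```

Two dense groups are found and the isolated point is flagged as noise; density-based methods of this family need no preset number of clusters, which suits exploratory spatial analysis.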

Gabriella Schoier, Giuseppe Borruso
GeoSurveillance: Software for Monitoring Change in Geographic Patterns

In this paper we first describe how statistical process control methods in general, and cumulative sum methods in particular, may be used to monitor changes in geographic patterns. We give examples for several different data scenarios, including those where the regional data consist of very small counts in each time period. In addition, we describe software we have developed for this purpose. The paper describes both the statistical methods and the software in detail. The authors intend to make the software freely available and downloadable, so that researchers interested in a wide variety of applications may make use of the methods.
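A one-sided cumulative sum chart of the kind the abstract refers to can be sketched as follows. This is a generic textbook CUSUM, not the GeoSurveillance implementation; the allowance `k` and decision threshold `h` are the standard chart parameters:

```python
def cusum(observations, target, k, h):
    """One-sided CUSUM: signal when the cumulative excess of the
    observations over the target mean (less an allowance k) crosses
    the decision threshold h.

    Returns the list of CUSUM values and the index of the first
    signal (or None if the chart never signals).
    """
    s, values, signal = 0.0, [], None
    for t, x in enumerate(observations):
        s = max(0.0, s + (x - target) - k)   # accumulate only upward excess
        values.append(s)
        if signal is None and s > h:
            signal = t
    return values, signal

# Regional counts drift upward from a target mean of 2.0 at time 5
obs = [2, 1, 3, 2, 2, 5, 6, 5, 6, 7]
values, first_alarm = cusum(obs, target=2.0, k=0.5, h=4.0)
print(first_alarm)   # → 6
```

Run over each region’s time series of counts, such a chart flags the time at which a local rate appears to have shifted upward, which is the monitoring task the paper addresses.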

Peter Rogerson, Ikuho Yamada
From Axial Maps to Mark Point Parameter Analysis (Ma.P.P.A.) – A GIS Implemented Method to Automate Configurational Analysis

Methods based on configurational theory have proved highly effective in urban space analysis for highlighting the distribution of levels of attractiveness towards activities. Nevertheless, both axial analysis and visibility graph analysis (VGA), the most prominent analytic techniques, suffer from evident shortcomings that limit their actual use. In axial analysis, these concern the definition of the axial system: the arbitrariness in drawing the lines that cover the urban grid, and the lack of correspondence between those lines and the streets of the settlement. In VGA, which studies the configurational features of the internal points of the grid, the problem lies in the sheer volume of data (thousands of numerical values) and the difficulty of referring them to a single urban space (either a street or a square). In other words, on the one hand we appreciate the capability of configurational variables to reproduce the distribution of attractiveness towards activities; on the other hand, their actual use is hampered by the lack of automation in defining the system and by the poor correspondence between it and the observed urban structure. On this basis, our research aims at enriching the configurational approach with ArcGIS tools: our GIS application works both as the data source for the configurational model (so as to construct the system automatically and objectively) and as its receiver (so as to receive its outputs, elaborate them, and render them graphically with reference to the actual elements of the grid). In addition, our GIS implementation allows configurational analysis to interface easily with the most widespread urban models.

V. Cutini, M. Petri, A. Santucci
Computing Foraging Paths for Shore-Birds Using Fractal Dimensions and Pecking Success from Footprint Surveys on Mudflats: An Application for Red-Necked Stints in the Moroshechnaya River Estuary, Kamchatka-Russian Far East

Foraging strategies of Red-necked Stints (Calidris ruficollis) at migration staging sites are virtually unknown, nor do any methods exist to obtain such crucial knowledge. Here, for the first time, a non-invasive and quantitative method for computing foraging paths is presented, from a study carried out during fall migration 1999 on the fine-sediment mudflats of the Moroshechnaya River Estuary (56° 50’ N, 156° 10’ E), eastern Sea of Okhotsk, Russian Far East. Footprint surveys on a mudflat were used to compute the distances and angles between foraging patches, together with pecking-success information. We then analyzed how pecking success at a foraging patch affects the selected distance between foraging patches and the turning angles; no correlations were found. However, with increasing scale (the size of the divider used to measure the foraging path), the fractal dimension of the foraging path generally increases, peaking at intermediate scales and decreasing at larger scales. This indicates, for the working scale (grain size 1 cm, extent 40 cm), that Red-necked Stints make their foraging decisions using a static approach (< 20 cm step length and 10–40° absolute change of angle). On a smaller scale, once prey is located, they forage in food patches along ‘cracks’ in the mudflat. It is suggested that the fractal dimensions of foraging paths can serve as a basic, quantitative description of the arrangement and distribution of prey patches in mudflats relevant to shorebirds. The presented approach has great potential for describing the efficiency of foraging shorebirds when analyzing pecking success on mudflats along their entire flyways.
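The divider (compass) idea mentioned above, measuring a path with rulers of different opening sizes and reading the fractal dimension from how the measured length scales, can be sketched as follows. This is a crude vertex-snapping approximation under our own assumptions, not the authors’ procedure, and the helper names are ours:

```python
import math

def divider_length(path, step):
    """Measured length of a polyline using a divider of opening `step`:
    walk vertex to vertex, counting a step each time the straight-line
    distance from the last divider position reaches `step` (crude
    vertex-snapping approximation; assumes step << total path length)."""
    pos, n = path[0], 0
    for p in path[1:]:
        if math.hypot(p[0] - pos[0], p[1] - pos[1]) >= step:
            pos, n = p, n + 1
    return n * step

def fractal_dimension(path, steps):
    """Estimate D from the slope of log(length) against log(step):
    L(s) ~ s^(1-D), hence D = 1 - slope of the log-log regression."""
    xs = [math.log(s) for s in steps]
    ys = [math.log(divider_length(path, s)) for s in steps]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return 1.0 - slope

# A straight path is not tortuous: its dimension should be 1
straight = [(float(i), 0.0) for i in range(41)]
print(fractal_dimension(straight, [1.0, 2.0]))   # → 1.0
```

A tortuous foraging path loses more apparent length as the divider opening grows, giving a dimension above 1; the scale dependence of that estimate is what the abstract uses to characterize prey-patch arrangement.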

Falk Huettmann
Backmatter
Metadata
Title
Computational Science and Its Applications – ICCSA 2004
Edited by
Antonio Laganá
Marina L. Gavrilova
Vipin Kumar
Youngsong Mun
C. J. Kenneth Tan
Osvaldo Gervasi
Copyright year
2004
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-24709-8
Print ISBN
978-3-540-22056-5
DOI
https://doi.org/10.1007/b98051