
About This Book

This five-volume set was compiled following the 2006 International Conference on Computational Science and its Applications, ICCSA 2006, held in Glasgow, UK, during May 8–11, 2006. It represents the outstanding collection of almost 664 refereed papers selected from over 2,450 submissions to ICCSA 2006. Computational science has firmly established itself as a vital part of many scientific investigations, affecting researchers and practitioners in areas ranging from applications such as aerospace and automotive, to emerging technologies such as bioinformatics and nanotechnologies, to core disciplines such as mathematics, physics, and chemistry. Due to the sheer size of many challenges in computational science, the use of supercomputing, parallel processing, and sophisticated algorithms is inevitable and becomes a part of fundamental theoretical research as well as endeavors in emerging fields. Together, these far-reaching scientific areas contributed to shaping this conference in the realms of state-of-the-art computational science research and applications, encompassing the facilitating theoretical foundations and the innovative applications of such results in other areas.



Workshop on Computational Geometry and Applications (CGA 2006)

Upper Bound on Dilation of Triangulations of Cyclic Polygons

Given a Euclidean graph G in the plane, the dilation between two points is defined as the ratio of the length of the shortest path in G between the points to the Euclidean distance between them. The dilation of a graph is defined as the maximum, over all vertex pairs (u,v), of the dilation between u and v. In this paper we consider upper bounds on the dilation of triangulations over the set of vertices of a cyclic polygon. We show that if the triangulation is a fan (i.e., every edge of the triangulation starts from the same vertex), the dilation is at most approximately 1.48454. We also show that if the triangulation is a star, the dilation is at most 1.18839.

Narayanasetty Amarnadh, Pinaki Mitra
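As an illustration of the dilation definition above, the measure can be computed directly for a small Euclidean graph with all-pairs shortest paths. This is a generic sketch, not the authors' method; the fan triangulation of a unit square used below is a made-up example.

```python
from itertools import combinations
from math import dist, sqrt

def dilation(points, edges):
    """Maximum, over all vertex pairs, of shortest-path length in the
    graph divided by Euclidean distance (Floyd-Warshall)."""
    n = len(points)
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        w = dist(points[u], points[v])
        d[u][v] = d[v][u] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return max(d[u][v] / dist(points[u], points[v])
               for u, v in combinations(range(n), 2))

# Fan triangulation of the unit square from vertex 0.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
fan = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]
print(dilation(square, fan))  # sqrt(2): the worst pair is (1, 3)
```

Here the opposite corners (1, 3) are at Euclidean distance √2 but every path between them in the fan has length 2, giving dilation √2.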

Optimal Guard Placement Problem Under L-Visibility

Two points p and q in the presence of polygonal obstacles are called L-visible if the length of the shortest path between them avoiding the obstacles is no more than L. For a given convex polygon P, Gewali et al. [4] addressed the guard placement problem on the exterior boundary that will cover the maximum area exterior to the polygon under L-visibility. They proposed a linear time algorithm for some given value of L. When the length L is greater than half of the perimeter, they declared that problem as open. Here we address that open problem and present an algorithm whose time complexity is linear in the number of vertices of the polygon.

Debabrata Bardhan, Sasanka Roy, Sandip Das

Visibility Maps of Segments and Triangles in 3D



Let T be a set of n disjoint triangles in three-dimensional space, let s be a line segment, and let t be a triangle, both disjoint from T. We consider the visibility map of s with respect to T, i.e., the portions of T that are visible from s. The visibility map of t is defined analogously. We look at two different notions of visibility: strong (complete) visibility, and weak (partial) visibility. The trivial Ω(…) lower bound for the combinatorial complexity of the strong visibility map of both s and t is almost tight: we prove an O(…) upper bound for both structures. Furthermore, we prove that the weak visibility map of s has complexity Θ(…), and the weak visibility map of t has complexity Θ(…). If T is a polyhedral terrain, the complexity of the weak visibility map is Ω(…) and O(…), both for a segment and a triangle. We also present efficient algorithms to compute all discussed structures.

Esther Moet, Christian Knauer, Marc van Kreveld

Non-euclidean Metrics and Chordal Space Structures

This paper proposes the use of the stereographic projection within the realm of Computational Geometry for the design of three-dimensional space structures arising from planar power diagrams. In order that such structures (to which we apply the term chordal space structures) can approximate a broad catalogue of quadrics, it will be necessary to formulate this projection under non-Euclidean metrics.


José Andrés Díaz Severiano, César Otero Gonzalez, Reinaldo Togores Fernandez, Cristina Manchado del Val

Algorithms for Rectangular Covering Problems

Many applications like picture processing, data compression or pattern recognition require a covering of a set of points, most often located in the (discrete) plane, by rectangles subject to certain cost constraints. In this paper we provide exact dynamic programming algorithms for covering point sets by regular rectangles that have to obey certain boundary conditions. The objective function to be minimized is the sum of the area, the circumference, and the number of patches used. This objective function may be motivated by the requirements of numerically solving PDEs by discretization over (adaptive multi-)grids.

Stefan Porschen

Backward Error Analysis in Computational Geometry

A recent paper, published in Algorithms—ESA2004, presented examples designed to illustrate that using floating-point arithmetic in algorithms for computational geometry may cause implementations to fail. The stated purpose was to demonstrate, to students and implementors, the inadequacy of floating-point arithmetic for geometric computations. The examples presented were both useful and insightful, but certain of the accompanying remarks were misleading. One such remark is that researchers in numerical analysis may believe that simple approaches are available to overcome the problems of finite-precision arithmetic. Another is the reference, as a general statement, to the inadequacy of floating-point arithmetic for geometric computations.

In this paper it will be shown how the now-classical backward error analysis can be applied in the area of computational geometry. This analysis is relevant in the context of uncertain data, which may well be the practical context for computational-geometry algorithms such as, say, those for computing convex hulls. The exposition will illustrate the fact that the backward error analysis does not pretend to overcome the problem of finite precision: it merely provides a tool to distinguish, in a fairly routine way, those algorithms that overcome the problem to whatever extent it is possible to do so.

It will also be shown, by using one of the examples of failure presented in the principal reference, that often the situation in computational geometry is exactly parallel to other areas, such as the numerical solution of linear equations, or the algebraic eigenvalue problem. Indeed, the example mentioned can be viewed simply as an example of an unstable algorithm, for a problem for which computational geometry has already discovered provably stable algorithms.

Di Jiang, Neil F. Stewart

Reply to “Backward Error Analysis ...”

The algorithms of computational geometry are designed for a machine model with exact real arithmetic. Substituting floating-point arithmetic for the assumed real arithmetic may cause implementations to fail. Although this is well known, it is not common knowledge. There is no paper that systematically discusses what can go wrong and provides simple examples for the different ways in which floating-point implementations can fail. Due to this lack of examples, instructors of computational geometry have little material for demonstrating the inadequacy of floating-point arithmetic for geometric computations, students of computational geometry and implementers of geometric algorithms still underestimate the seriousness of the problem, and researchers in our and neighboring disciplines still believe that simple approaches are able to overcome the problem. In this paper, we study simple algorithms for two simple geometric problems, namely computing convex hulls and triangulations of point sets, and show how they can fail and explain why they fail when executed with floating-point arithmetic.

Lutz Kettner, Kurt Mehlhorn, Sylvain Pion, Stefan Schirra, Chee Yap
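The failures discussed above typically enter through geometric predicates such as the orientation test used by convex hull algorithms. The sketch below is ours, not the authors' code: it evaluates the predicate both in double precision and exactly with Python's `fractions.Fraction`; for nearly collinear double inputs the two can disagree in sign, which is precisely what derails hull algorithms.

```python
from fractions import Fraction

def orient_float(a, b, c):
    """Sign of the cross product (b-a) x (c-a) in double precision."""
    det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (det > 0) - (det < 0)

def orient_exact(a, b, c):
    """Same predicate over exact rationals; immune to rounding error.
    Fraction(float) converts each double to its exact rational value."""
    a, b, c = ([Fraction(x) for x in p] for p in (a, b, c))
    det = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return (det > 0) - (det < 0)

# On benign inputs both versions agree...
print(orient_exact((0, 0), (1, 1), (2, 2)))  # 0 (collinear)
print(orient_float((0, 0), (1, 0), (0, 1)))  # 1 (left turn)
# ...but for nearly collinear doubles only the exact version is trustworthy.
```

The exact version is orders of magnitude slower, which is why practical exact-geometric-computation libraries use floating-point filters and fall back to exact arithmetic only in doubtful cases.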

Two Map Labeling Algorithms for GIS Applications

Driven by the industrial challenge of labeling maps for GIS applications, we investigate the problem of computing the largest convex partitioning of the map P such that the rectangular axis-parallel label L can be placed in it. The map region to be labeled is in general non-convex and may contain holes. Our main result is the new polygonal area removal (PAR) algorithm to identify the area within P where the center of the label L can be placed. We then derive a new and faster algorithm based on the sweep technique that determines the complete set of maximum inscribed rectangles (MIR) in P in the most common case, when rectangle sides have an axis-parallel orientation. The set of all maximum inscribed rectangles is then post-processed to produce the best size/orientation combination of the final label placement, depending on the specific requirements of the end users.

Marina L. Gavrilova

Surface Reconstruction from Large Point Clouds Using Virtual Shared Memory Manager

Surface reconstruction is a common problem in computer graphics: given a set of points sampled from some surface, a triangle mesh interpolating or approximating the points is to be obtained. One frequently used technique for solving this problem is the selection of surface triangles from the set of Delaunay tetrahedronization faces. In the case of large data, it is difficult to obtain the tetrahedronization due to its huge memory requirements. One possible solution is to use distributed computing. Here, we describe the newly developed VSM (Virtual Shared Memory) distributed toolkit and its utilization for the task of surface reconstruction. With our approach, we were able to process a dataset of 1.4M points on four P4 machines interconnected via 100Mb Ethernet in 6 hours; about 5GB of memory was consumed during the reconstruction.

Josef Kohout, Michal Varnuška, Ivana Kolingerová

Computing Terrain Multi-visibility Maps for a Set of View Segments Using Graphics Hardware

A multi-visibility map is the subdivision of the domain of a terrain into different regions that, according to different criteria, encodes the visibility concerning a set of view elements. In this paper we present an approach for computing a discrete approximation of a multi-visibility map of a triangulated terrain corresponding to a set of view segments, for weak and strong visibility, by using graphics hardware.

Narcis Coll, Marta Fort, Narcis Madern, J. Antoni Sellarès

Fast Intersections for Subdivision Surfaces

Subdivision surface intersections can be costly to compute: accurate results require intersecting high-resolution meshes, which can lead to slow performance and high memory usage. In this paper we show how the strong convex hull property yields a method for efficiently computing intersections at high resolutions. Consequently, the method can be used with any subdivision scheme that has the strong convex hull property. In this method, a bipartite graph structure is used to track potentially intersecting faces.

Aaron Severn, Faramarz Samavati
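One way to picture the face-pair tracking described above: by the strong convex hull property, each patch lies inside the bounding box of its control points, so only face pairs whose boxes overlap can intersect and need to be kept as edges of the bipartite graph. This is a simplified sketch with made-up helper names, not the authors' data structure.

```python
def aabb(face):
    """Axis-aligned bounding box of a face given by its control points.
    The strong convex hull property guarantees the patch lies inside."""
    xs, ys, zs = zip(*face)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

def overlaps(b1, b2):
    """Two boxes overlap iff their intervals overlap on every axis."""
    (lo1, hi1), (lo2, hi2) = b1, b2
    return all(lo1[i] <= hi2[i] and lo2[i] <= hi1[i] for i in range(3))

def candidate_pairs(faces_a, faces_b):
    """Edges of the bipartite graph: face pairs whose boxes intersect."""
    boxes_a = [aabb(f) for f in faces_a]
    boxes_b = [aabb(f) for f in faces_b]
    return [(i, j)
            for i, ba in enumerate(boxes_a)
            for j, bb in enumerate(boxes_b)
            if overlaps(ba, bb)]

tri = [[(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
near_far = [[(0.5, 0.5, 0), (1.5, 0.5, 0), (0.5, 1.5, 0)],
            [(5, 5, 5), (6, 5, 5), (5, 6, 5)]]
print(candidate_pairs(tri, near_far))  # [(0, 0)]: only the nearby face survives
```

On refinement, only the surviving pairs need to be re-tested against the subdivided faces, which is what keeps the high-resolution intersection tractable.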

A β-Shape from the Voronoi Diagram of Atoms for Protein Structure Analysis

In this paper, we present a β-shape and a β-complex for a set of atoms with arbitrary sizes for a faster response to the topological queries among atoms. These concepts are the generalizations of the well-known α-shape and α-complex (and their weighted counterparts as well). To compute a β-shape, we first compute the Voronoi diagram of atoms and then transform the Voronoi diagram to a quasi-triangulation, which is the topological dual of the Voronoi diagram. Then, we compute a β-complex from the quasi-triangulation by analyzing the valid intervals for each simplex in the quasi-triangulation. It is shown that a β-complex can be computed in O(m) time in the worst case from the Voronoi diagram of atoms, where m is the number of simplices in the quasi-triangulation. Then, a β-shape for a particular β consisting of k simplices can be located in O(log m + k) time in the worst case from the simplices in the β-complex sorted according to the interval values.

Jeongyeon Seo, Donguk Kim, Cheol-Hyung Cho, Deok-Soo Kim

Reduction of the Search Space in the Edge-Tracing Algorithm for the Voronoi Diagram of 3D Balls

The Voronoi diagram of 3D balls is applicable to various fields in science and engineering. The edge-tracing algorithm constructs this Voronoi diagram in O(en) time in the worst case, where e and n are the numbers of edges and balls, respectively. The computation time of the algorithm is dominated by finding the end vertex of a given edge, since essentially all edges in the Voronoi diagram have to be traced. In this paper, we define the feasible region that a ball defining the end vertex of a given edge must intersect. Balls which do not intersect the feasible region are then filtered out before finding an end vertex, since they cannot define the end vertex. In this way, we improve the runtime performance of the edge-tracing algorithm via the feasible region.

Youngsong Cho, Donguk Kim, Hyun Chan Lee, Joon Young Park, Deok-Soo Kim
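The filtering idea can be sketched generically: if the feasible region for an edge's end vertex is conservatively enclosed in a bounding sphere, any ball too far from that sphere can be discarded before the expensive vertex computation. The spherical bound and all names below are our simplification for illustration, not the authors' exact region.

```python
from math import dist

def filter_candidates(balls, region_center, region_radius):
    """Keep only balls that can intersect the feasible region, here
    conservatively bounded by a sphere.  A ball (center, r) intersects
    the bounding sphere iff its center lies within region_radius + r
    of the region center; all other balls cannot define the end vertex."""
    return [(c, r) for c, r in balls
            if dist(c, region_center) <= region_radius + r]

balls = [((0.0, 0.0, 0.0), 1.0), ((10.0, 0.0, 0.0), 1.0)]
print(filter_candidates(balls, (0.0, 0.0, 0.0), 2.0))
# only the first ball survives the filter
```

The filter is conservative: it may keep a ball that ultimately does not define the vertex, but it never discards one that could, so correctness is preserved while the expensive test runs on far fewer candidates.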

Routing Properties of the Localized Delaunay Triangulation over Heterogeneous Ad-Hoc Wireless Networks

We explore the extremal properties of the Localized Delaunay Triangulation over networks with heterogeneous ranges. We find theoretical bounds on these properties and compare them with those found via experimentation.

Mark D. Watson, J. Mark Keil

A Speculative Approach to Clipping Line Segments

The Nicholl–Lee–Nicholl (NLN) algorithm for clipping line segments against a rectangular window in the plane (Computer Graphics 21(4), pp. 253–262) was recently proved to be optimal in terms of the minimum and maximum number of comparisons and the number of predicates used. A new algorithm is proposed that does not use predicates, but calculates intersections speculatively. Surprisingly, this approach not only leads to a much simpler algorithm, but also takes fewer operations in many cases, including the worst case. It is proved that the new algorithm never takes more operations than the optimal algorithm. Experimental results demonstrate that the new algorithm is 80% to 560% faster than long-established, widely known algorithms.

Frank Dévai
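Computing intersections speculatively rather than branching on region predicates can be illustrated with a parametric clipper in the style of Liang–Barsky, which intersects the supporting line with all four window edges and keeps the surviving parameter interval. This is a textbook sketch for intuition only, not Dévai's algorithm.

```python
def clip(p0, p1, xmin=0.0, ymin=0.0, xmax=1.0, ymax=1.0):
    """Clip segment p0-p1 to the window, Liang-Barsky style: compute
    the parametric hit with each window edge speculatively and shrink
    the parameter interval [t0, t1].  Returns the clipped endpoints,
    or None if the segment misses the window entirely."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    t0, t1 = 0.0, 1.0
    for p, q in ((-dx, x0 - xmin), (dx, xmax - x0),
                 (-dy, y0 - ymin), (dy, ymax - y0)):
        if p == 0:                     # segment parallel to this edge
            if q < 0:
                return None            # parallel and outside
        else:
            t = q / p                  # speculative intersection parameter
            if p < 0:                  # entering: raise lower bound
                if t > t1:
                    return None
                if t > t0:
                    t0 = t
            else:                      # leaving: lower upper bound
                if t < t0:
                    return None
                if t < t1:
                    t1 = t
    return (x0 + t0 * dx, y0 + t0 * dy), (x0 + t1 * dx, y0 + t1 * dy)

print(clip((-1, 0.5), (2, 0.5)))  # clipped to the window edges x=0 and x=1
print(clip((2, 2), (3, 3)))       # None: entirely outside
```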

An Efficient Algorithm for Mobile Guarded Guards in Simple Grids

A set of mobile guards in a grid is guarded if at any point on its patrol segment every guard can be seen by at least one other guard. Herein we discuss a class of polygon-bounded grids and simple grids for which we propose a quadratic time algorithm for solving the problem of finding the minimum set of mobile guarded guards (the MinMGG problem). Recall that the MinMGG problem is NP-hard even for grids every segment of which crosses at most three other segments. We also provide an O(…) time algorithm for the MinMGG problem in horizontally or vertically unobstructed grids. Finally, we investigate complete rectangular grids with obstacles. We show that if both the vertical and the horizontal sizes of the grid are larger than the number of obstacles k, then k + 2 mobile guarded guards always suffice to cover the grid.

Adrian Kosowski, Michał Małafiejski, Paweł Żyliński

Approximation of Optimal Moving Paths of Huge Robot Reclaimer with a 3D Range Finder

This paper proposes a simple method for approximating the optimal moving paths of a huge robot reclaimer located in an outdoor material stock yard, with emphasis on safety, energy consumption, and transfer time. The reclaimer is equipped with a 3D range finder to measure the shape of material piles in the yard, and the material yard is modeled as a 3D space in which 2D grid-type sections are constructed in several layers. To define a safety function for moves between grid cells, a simplified Voronoi diagram with a minimized vertex extension error is used. In addition, functions for the energy consumption and transfer time required when the control point of the reclaimer moves between 3D grid cells are defined; these are used as cost evaluation factors for path optimization along with the safety function. The proposed method can be readily applied to low-performance industrial control devices.

Kwan-Hee Lee, Hyo-Jung Bae, Sung-Je Hong
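The combination of safety, energy, and transfer time into a single path cost can be sketched as a weighted edge score inside a shortest-path search over the grid. The weights, the flat toy grid, and the helper names below are illustrative assumptions, not values or structures from the paper.

```python
import heapq

def best_path(neighbors, cost_terms, start, goal, w=(1.0, 1.0, 1.0)):
    """Dijkstra over grid nodes; each move is scored as a weighted sum
    of the (safety penalty, energy, time) triple from cost_terms."""
    pq = [(0.0, start, [start])]
    seen = set()
    while pq:
        total, node, path = heapq.heappop(pq)
        if node == goal:
            return total, path
        if node in seen:
            continue
        seen.add(node)
        for nxt in neighbors(node):
            if nxt in seen:
                continue
            safety, energy, time = cost_terms(node, nxt)
            step = w[0] * safety + w[1] * energy + w[2] * time
            heapq.heappush(pq, (total + step, nxt, path + [nxt]))
    return None

# Toy 1D "yard" of cells 0..3 with uniform energy and time, no hazards.
nbrs = lambda i: [j for j in (i - 1, i + 1) if 0 <= j <= 3]
terms = lambda a, b: (0.0, 1.0, 1.0)
print(best_path(nbrs, terms, 0, 3))  # (6.0, [0, 1, 2, 3])
```

Raising the safety weight w[0] makes the search detour around cells the (here trivial) safety function penalizes, which is the trade-off the paper's cost evaluation captures.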

Fault Tolerant Guarding of Grids

In this paper we deal with one of the art gallery problems, namely the problem of fault tolerant guarding of grids, which is defined as the problem of finding two disjoint guard sets in a grid. Although determining the existence of such a structure is easy in general grids, the task of minimising the number of guards taken over both teams is shown to be NP-hard even for subcubic grids. Moreover, we propose a 6/5-approximation algorithm for solving the fault tolerant guard problem in grids.

Adrian Kosowski, Michał Małafiejski, Paweł Żyliński

Tunable Bounding Volumes for Monte Carlo Applications

Luminaire sampling plays an important role in global illumination calculation using Monte Carlo integration. A conventional approach generates samples on the surface of the luminaire, resulting in rendered images with a high variance of noise. In this paper, we present an efficient solid angle sampling technique using tunable bounding volumes for global illumination calculation. In contrast to the conventional approach, our technique derives samples from the solid angle subtended by the luminaire. In the construction process, we build a convex, frustum-like polyhedron as a bounding volume for a light source. Front-facing polygons of the bounding volume are then projected onto the unit hemisphere around the shaded point. These projected polygons represent the approximated solid angle subtended by the luminaire. The third step samples the projected spherical polygons, on which a number of stratified samples are generated. We employ various types of light sources, including the ellipse, elliptic cylinder, elliptic cone and elliptic paraboloid. We apply our technique to Monte Carlo direct lighting and Monte Carlo path tracing applications. For similar sample numbers, our technique produces images with less variance of noise compared to the conventional method. In addition, our technique provides roughly equal image quality in less execution time. Our approach is simple, efficient, and applicable to many types of luminaires for global illumination calculation.

Yuan-Yu Tsai, Chung-Ming Wang, Chung-Hsien Chang, Yu-Ming Cheng

A Data Hiding Algorithm for Point-Sampled Geometry

We present a data hiding algorithm for point-sampled geometry. The basic idea is to consider every point of a model as a message point. We propose a Virtual Multi-Level Embed Procedure to embed information based on shifting the message point, the order of which is assigned by principal component analysis. We modify the message point by its virtual geometrical property, which leads to a change of the position of the orthogonal projection of the message point on the virtual base, the height of the virtual triangle, and the arc length of the virtual circle. Experimental results show that the proposed technique is efficient and secure, has high capacity and low distortion, and is robust against affine transformations. The technique has proven to be feasible in data hiding.

Yu-Ming Cheng, Chung-Ming Wang, Yuan-Yu Tsai

An Algorithm for Rendering Vertexes and Edges of Polyhedron with High Order Geometric Continuity

Rendering surfaces that simultaneously connect several neighboring surfaces with high-order geometric continuity is quite complicated in computer graphics and CAGD. This paper proposes a new, simple geometric algorithm for generating and rendering smooth connecting surfaces for the vertexes and edges of a polyhedron. Rectangular Bézier surfaces are generated to connect two adjacent planes, and triangular Bézier surfaces are generated to connect several neighboring surfaces with high-order geometric continuity. The control net points of the connecting surfaces are calculated by a geometric drawing method according to the geometric continuity connection condition or the shared common plane condition. All the surfaces and planes of the polyhedron can be connected smoothly with G² geometric continuity. The algorithm can easily be generalized to higher order geometric continuous connection.

Xiaolin Lu

Workshop on Virtual Reality in Scientific Applications and Learning (VRSAL 2006)

SimVIZ – A Desktop Virtual Environment for Visualization and Analysis of Protein Multiple Simulation Trajectories

In silico protein conformation simulation generates massive amounts of data which need to be properly visualized and analyzed. We are applying Desktop Information-Rich Virtual Environments (Desktop IRVEs) techniques and concepts to aid multiple-trajectory simulation analysis, improving the user experience and developing a problem-solving environment to help the decision making process. We will present SimVIZ, a tool which integrates visualization with simulation analysis, improving previous knowledge about trajectories. This environment shows informative panels, Contact Maps, RMSD charts, the Ramachandran Plot and a Parallel Coordinate multidimensional visualization of simulation output in a single rendering scene. SimVIZ also opens multiple trajectories along with user-associated information concerning many aspects of the simulation. SimVIZ is an integrated problem-solving environment for multiple trajectories of protein simulations, offering various kinds of analysis and visualization tools used by the community to validate protein structures or to gather a better understanding of the protein folding process.

Ricardo M. Czekster, Osmar Norberto de Souza

Immersive Molecular Virtual Reality Based on X3D and Web Services

An immersive Molecular Virtual Reality environment is presented in which the user interacts with an X3D virtual world using immersive devices and dynamic gestures. Through the virtual world the user invokes a Web server to perform a real-time simulation and find a stable molecular structure. The outcome of the simulation is the representation of the stable structure of the molecular system in the X3D world.

Osvaldo Gervasi, Sergio Tasso, Antonio Laganà

Yeast Naked DNA Spatial Organization Predisposes to Transcriptional Regulation

This paper presents a new structure-based approach to explore the spatial organization of naked DNA on a whole chromosome sequence and its biological features related to gene regulation. A 3D trajectory representation of full-length yeast chromosomes based on Bolshoy's conformation model is discussed. These trajectories are predicted by our visualization system ADN-Viewer. Observations show interesting geometric properties of chromosomes concerning the variability of 3D structures and the fact that regions that are linearly distant could be spatially close. These newly observed phenomena are then correlated with biological considerations. In particular, the transcriptional co-regulation data of Lee et al., 2002 are exploited. A characterization parameter (RLS), the ratio of the linear distance to the 3D distance, was computed for each couple of genes. The co-regulated genes are found to be either linearly distant and spatially close, or linearly close. Co-regulated genes arranged in 1D-clusters could be detected directly in the raw data, but our model offers new predictions of co-regulated genes thanks to 3D-clusters. We conclude that yeast naked DNA spatial organization seems to predispose to transcriptional regulation.

Oriane Matte-Tailliez, Joan Hérisson, Nicolas Ferey, Olivier Magneau, Pierre Emmanuel Gros, François Képès, Rachid Gherbi
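The RLS parameter described above reduces to a simple ratio per gene pair. The coordinates and sequence positions below are made up for illustration; real values would come from the predicted 3D trajectories.

```python
from math import dist

def rls(pos_a, idx_a, pos_b, idx_b):
    """Ratio of the linear (sequence) separation of two genes to their
    3D distance on the predicted trajectory.  Large values flag pairs
    that are linearly distant yet spatially close."""
    linear = abs(idx_b - idx_a)
    return linear / dist(pos_a, pos_b)

# Two genes 1000 bp apart on the sequence but only 5 units apart in 3D.
print(rls((0.0, 0.0, 0.0), 0, (3.0, 4.0, 0.0), 1000))  # 200.0
```

Ranking all gene pairs by this ratio is one simple way to surface candidate 3D-clusters of the kind the paper uses to predict co-regulation.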

Interactive Real-Time 3D Visualization of Grid Portal for Quantum Mechanics

This paper describes work-in-progress towards developing a grid portal for quantum mechanics (QMWISE) featuring real-time 3D visualization. The goal of the QMWISE project is to provide a powerful and convenient system for computational chemistry using grid technologies. In this paper, we propose an environment which enables computational chemists to solve quantum mechanics problems and explore the resulting data through interactive real-time 3D visualization.

Sung-Wook Byun, Yun-Kyoung Lee, Chang-Sung Jeong

Development of a Virtual Reality Bicycle Simulator for Rehabilitation Training of Postural Balance

The development of a virtual-reality training system, based on an exercise bicycle and a personal computer, for the rehabilitation of motor functions and balance capability in elderly patients is described. The system takes three inputs from the bicycle: the pedaling speed, the direction of the handles, and the location of the center of pressure on the supporting base of the bicycle. These inputs allow user interactions with a virtual environment that increase the motivation of the user. A series of experiments was conducted with three different visual feedback conditions: without visual feedback of balance information, with visual feedback of weight shift, or with visual feedback of the center of pressure. The assessment of balance parameters obtained from the experiments showed an improvement of balance capability on the bicycle. In particular, visual feedback of weight shift increased the effectiveness of the training the most. The findings suggest that the system might be used as a balance and motor function rehabilitation system, given further objective measurements of the balance ability of the patients over longer terms.

Nam-Gyun Kim, Yong-Yook Kim, Tae-Kyu Kwon

Virtual Reality Edutainment: Cost-Effective Development of Personalised Software Applications

Virtual Reality edutainment can provide pleasant environments for educating students through VR-games. However, as a side effect, it may impose extra difficulties on students due to the complexity of VR-game user interfaces. Moreover, creating tutoring systems that combine teaching with the sophisticated environments of VR-games can be very complicated for instructional designers. Solutions to these problems have been given through VR-Multi-Author, which provides a user-friendly authoring environment for instructional designers to create personalised tutoring systems that perform student-player modelling and operate as virtual reality adventure games. The common practice of modelling the student has been expanded to include domain-independent characteristics of players, such as their level of game-playing competence, on top of domain-dependent characteristics, such as the level of knowledge of a student in a particular domain.

Maria Virvou, Konstantinos Manos, George Katsionis

3D Panoramic Mosaicking to Suppress the Ghost Effect at Far-Range Scene for Urban Area Visualization

3D image mosaicking is useful for 3D visualization of roadside scenes of urban areas by projecting 2D images onto 3D planes. When a sequence of images is filmed from a side-looking video camera passing far-range areas, a ghost effect occurs in which the same objects appear repeatedly. To suppress this ghost effect, the far-range areas are detected by using the distance between the image frame and the 3D coordinates of tracked optical flows. The ghost effects are suppressed by projecting parts of the image frames onto multiple 3D planes, utilizing vectors passing through the focal point of the frames and a vertical slit. The vertical slit is calculated by utilizing the first and last frames of the far-range areas. We demonstrate an algorithm that creates efficient 3D panoramic mosaics without the ghost effect in far-range areas.

Jaechoon Chon, Eihan Shimizu, Ryosuke Shibasaki

VR Based Knowledge Transfer in Medical Technology and Techniques

This paper reports on an ongoing project’s use of virtual reality (VR) technologies to support vocational training. Three-dimensional interactive VR models can be employed to support the innovative transfer of knowledge about complex equipment and special processes of neuromedical technology and neurobiology. The target group for the type of knowledge transfer in this project is specialized medical-technical staff members who, in basic and advanced training, have to develop competencies to properly perform immunohistochemical analyses with quality results. The project is developing and testing interactive 3-D scenarios that are expected to lead to new approaches to transferring declarative and procedural knowledge. In addition, project work is also dealing with issues of scene modeling, learning-based design and increased levels of interaction between users and trainees.

Wolfram Schoor, Rüdiger Mecke, Martina Rehfeld

A Diagnostic Model Using a Clustering Scheme

It has been recognized that it is a challenging problem to deal with the situation where learners have diverse computing backgrounds and the learning content to be covered is broad. In this case, a sophisticated diagnostic model is required for applying a proper teaching-learning method. We have devised a scheme which can be applied to this case efficiently by using clustering algorithms based on web technology. In our approach, we focus on finding methods for classifying both learners and learning content on the web. To ease the classification and manipulation of learning content, we reorganize it into a discrete form by introducing the concept of the knowledge unit, which is extracted from each topic. Also, to ease classification and diagnosis, we develop questions to measure them and analyze each question using item response theory (IRT) on the web. From an experiment with sampled students, we show that learners with various backgrounds and learning content distributed over a broad range can be categorized effectively into groups with homogeneous properties. Also, we describe how to apply the proposed scheme to introductory courses at the postsecondary level.

Seong Baeg Kim, Kyoung Mi Yang, Cheol Min Kim

Ontology-Based Edutainment System

Recently the Semantic Web is emerging as a solution to problems in the existing Web. The outstanding features of the Semantic Web could contribute to establishing more effective e-learning systems than a conventional web-based e-learning system. However, studies on its applications to e-learning are insufficient. In this paper, we present a new edutainment e-learning system that uses the Semantic Web technology. We extract learning contents from middle school textbooks to generate our ontology. With the ontology, we are developing a learner assessment system, a learning guidance system, and an edutainment learning system. The learner assessment system assesses the current state of a learner’s knowledge in the ontology. The learning guidance system categorizes the learners based on the results of the assessment system and dynamically generates a learning course to each learner. The edutainment learning system provides templates for making game-like items. We shall describe them respectively and state relations among them. Our system will be a good model of edutainment e-learning systems based on Semantic Web technologies.

Cheol Min Kim, Hye Sun Kim, Seong Baeg Kim

Workshop on Visual Computing and Multimedia (VCM 2006)

A Fast Video Encoding Algorithm by Skipping the Operations on Zero-Valued DCT Coefficients

In this paper, we propose a computation-efficient DCT-based encoding algorithm for standard video encoders. Most video coding standards consist of the DCT and associated modules such as quantization, zigzag scan, and VLC (variable length coding). Commercial and standard video encoders implement these modules independently. In this paper, the relationship between the function modules is investigated and the unnecessary operations are removed. The simulation results show that the proposed method reduces the computational complexity of the IDCT by 40% compared with the standard implementation.

Kook-Yeol Yoo
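The core idea above — omitting arithmetic on zero-valued coefficients — can be illustrated with a 1-D inverse DCT; this is a simplified sketch of the general principle, not the paper's actual 2-D codec-integrated algorithm, and the function name is ours:

```python
import math

def idct_1d(coeffs):
    # Naive 1-D inverse DCT-II that skips zero coefficients entirely.
    # After quantization most coefficients are zero, so the inner loop
    # runs only over the (usually short) nonzero list.
    n = len(coeffs)
    nonzero = [(k, c) for k, c in enumerate(coeffs) if c != 0]
    out = [0.0] * n
    for x in range(n):
        s = 0.0
        for k, c in nonzero:
            scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
            s += scale * c * math.cos(math.pi * (2 * x + 1) * k / (2 * n))
        out[x] = s
    return out
```

With a single nonzero DC coefficient, only one basis function is evaluated instead of all eight, which is where the complexity reduction comes from.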

Efficient Block Matching for Ray-Space Predictive Coding in Free-Viewpoint Television Systems

Free viewpoint television (FTV) based on ray-space representation was adopted in the MPEG draft because the ray-space can generate an arbitrary viewpoint without complicated analysis and rendering processes. Ray-space predictive coding is one of the main techniques in ray-space-based FTV systems, in which block matching and compensation play a very important role in improving coding performance. However, the computational burden of block matching for ray-space coding is a problem that remains to be solved. Ray-space representation for FTV is introduced and inter-slice correlations of ray-space data are analyzed. Then, a fast adaptive block matching algorithm for ray-space predictive coding is proposed to reduce the encoding complexity. Experimental results show that the proposed block matching method reduces the computational burden and speeds up the coding process greatly.

Gangyi Jiang, Feng Shao, Mei Yu, Ken Chen, Tae-Young Choi

New Approach to Complexity Reduction of Intra Prediction in Advanced Multimedia Compression

H.264 is the new multimedia compression standard developed by VCEG and MPEG. In order to maximize compression performance, H.264 adopts RDO mode decision techniques at the cost of considerable computational complexity. To reduce the encoder complexity associated with intra-prediction mode selection, a fast intra prediction mode selection scheme is proposed in this paper. It is based on three improvements: a restructured intra prediction algorithm, fast selection between the intra 4×4 and intra 16×16 block modes, and fast 4×4 luma block mode selection based on the luma similarity between reference pixels. Experimental results show that the proposed scheme outperforms Pan’s method; compared with the test model JM8.6 provided by JVT, it reduces coding time by 83.10% for intra frames and by 50.94% over all frames while maintaining the PSNR and bit rate.

Mei Yu, Gangyi Jiang, Shiping Li, Fucui Li, Tae-young Choi

Reduction of Mode Decision Complexity in H.264/AVC Using Adaptive Selection of Reference Frame and Intra Prediction Mode

Rate-constrained coding is one of the many coding-efficiency oriented tools of H.264/AVC; however, the mode decision process of Rate Distortion Optimization (RDO) is characterized by high computational complexity. Many fast mode decision algorithms have been proposed to reduce this complexity. In this paper, two algorithms are proposed for reducing mode decision complexity in H.264/AVC: fast reference frame selection and selective intra prediction mode decision. Fast reference frame selection is efficient for inter prediction, and selective intra prediction mode decision can effectively reduce the excessive calculation load of intra prediction mode decision. The simulation results demonstrate that the two methods applied together can reduce the encoding time of the overall sequences by an average of 44.63%, without any noticeable degradation in coding efficiency.

Woongho Lee, Jungho Lee, Ik-hwan Cho, Dongseok Jeong

3D Visualization for Tele-medical Diagnosis

It is widely accepted that 3D visualization of medical images helps in patient diagnosis. Visualizing 3D images requires a large computational load, so fast 3D visualization requires an expensive volume rendering board. Web-based streaming techniques have been suggested to construct high-quality 3D visualizations using only a personal computer without a volume rendering board; these techniques allow the volume rendering board to be shared and 3D images to be diagnosed collaboratively. To build a web-based 3D visualization system, the issue of time delay under limited network bandwidth needs to be resolved, since this delay directly influences image quality. In this study, we evaluated optimized streaming conditions for visualizing and controlling high-quality 3D medical images on the web. To construct the system, we primarily used three tools: a VolumePro1000 board, WMV9 (Windows Media Video 9 codec) and TCP/IP-based socket functions. The VolumePro1000 board can quickly perform the heavy matrix calculations of 3D images using Phong shading and shear-warp factorization. WMV9 efficiently compresses live images and supports image streaming techniques. The TCP/IP-based socket functions transmit messages to control the 3D images. We developed a 3D visualization system and tested image quality and transmission conditions at different compression rates under variable network conditions. It was a great advantage that the WMV9 encoding could be decoded automatically on many platforms (desktop, PDA, notebook), as long as the platform had Windows Media Player. We expect the 3D visualization system to be utilized in various biomedical fields such as IIGS (Interactive Image Guided Surgery), CAD (Computer Aided Diagnosis) and tele-medicine technologies.

Sun K. Yoo, Jaehong Key, Kuiwon Choi

Content Based Image Retrieval Based on a Nonlinear Similarity Model

In this paper, we propose a new nonlinear paradigm for clustering, indexing and searching in content-based image retrieval (CBIR). The scheme is designed for approximate searches, and all the work is performed in a transformed feature space. We first (1) map the input space into a feature space via a nonlinear map, (2) compute the top eigenvectors in that feature space, and (3) capture cluster structure based on the eigenvectors. We then (4) describe each cluster with a minimal hypersphere containing all objects in the cluster, (5) derive the similarity measure for each cluster individually and (6) construct a bitmap index for each cluster. Finally we (7) model the similarity query as a hyper-rectangular range query and search the clusters near the query point. Our preliminary experimental results for the new framework demonstrate considerable effectiveness and efficiency in CBIR.

Guang-Ho Cha
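Steps (1)-(4) of the abstract above — nonlinear mapping, top eigenvectors, clustering, and minimal-hypersphere descriptors — can be sketched with a kernel embedding. The kernel choice, the farthest-point radius approximation, and all names here are our own illustration, not the paper's actual construction:

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian kernel matrix: the implicit nonlinear map into feature space.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def spectral_embed(X, k, gamma=1.0):
    # Steps (1)-(3): center the kernel matrix (centering in feature space)
    # and keep the top-k eigenvectors as embedding coordinates.
    K = rbf_kernel(X, gamma)
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    w, V = np.linalg.eigh(H @ K @ H)   # eigenvalues in ascending order
    return V[:, -k:]                   # one embedded row per image

def cluster_spheres(E, labels):
    # Step (4): describe each cluster by a hypersphere, approximated here
    # by the centroid and the distance to the farthest member.
    spheres = {}
    for c in np.unique(labels):
        pts = E[labels == c]
        center = pts.mean(axis=0)
        radius = np.max(np.linalg.norm(pts - center, axis=1))
        spheres[c] = (center, radius)
    return spheres
```

Range queries against the spheres (steps 6-7) then only need to inspect clusters whose hypersphere intersects the query rectangle.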

Technical Session on Web Based Learning (WBL 2006)

Simple and Powerful Interactive E-Learning System Using VXML: Design and Implementation of Web and PSTN Linked Efficient Learning System

E-learning is increasingly being integrated into organizations and universities as a new means of learning and of supporting learners. Although it has many advantages in supporting learners with multimedia systems, it still has notable weaknesses. A representative weakness of e-learning systems is that they require specific extra equipment (computers, monitors, keyboards and so on), and learners are required to remain stationary during their lessons. Nowadays, however, most people lead busy lives; in other words, learners do not have enough time to remain stationary during their lessons. For this reason, in this paper we implement a novel e-learning system architecture using VXML linked with the PSTN (Public Switched Telephone Network). Our newly proposed system supports telephone-based learning as well as computer-based learning. Learners can choose their learning tool according to their circumstances at any given time: a computer when at home, or a telephone when on the street. To verify the performance of the implemented e-learning system, we conducted a learning efficiency test after implementation.

Jeong-Hoon Shin, Kwang-Seok Hong

A Web-Based Tool for Entity-Relationship Modeling

A web-based tool developed to automatically correct conceptual database schemas is presented. This tool has been integrated into a more general e-learning platform and is used to reinforce teaching and learning in introductory database courses. The platform assigns to each student a set of database problems selected from a common repository. The student has to design an entity-relationship schema and enter it into the system through a user-friendly interface specifically designed for this purpose. The correction tool corrects the design and shows the detected errors, giving advice on how to solve them. The student then has the chance to send a new solution. These steps can be repeated as many times as required until a correct solution is obtained.

Ferran Prados, Imma Boada, Josep Soler, Jordi Poch

Reusable Learning Objects (RLOs) for Computer Science Students

The purpose of this study is to introduce an instructional technology known as the “learning object”. After a review of the literature, the steps for designing a learning object (LO) are explained. As a learning object standard, some details are given about the SCORM Content Aggregation Model and common metadata elements. Finally, a case study on the design of a learning object on the subject of “Congestion Control in Computer Networks” is presented. The content is prepared in English and contains 13 pictures within 22 .htm and 3 .exe files. The basic principles of network congestion and the congestion control algorithms are given in the htm pages. Simulators are used to show the operation of the related algorithms step by step. The LO is designed with the open source RELOAD Editor, according to the ADL SCORM packaging model. The designed LO has been tested on computer engineering students, and positive feedback was received.

Birim Balci, Mustafa Murat Inceoglu

WebQuest Markup Language (WQML) for Sharable Inquiry-Based Learning

WebQuest is a model for constructivist inquiry-based learning in which the information used by learners is collected from the Web. A WebQuest exists in the form of a Web site that contains a defined set of componential Web pages. In this paper, we specify the WebQuest Markup Language (WQML) for WebQuest construction. WQML enables WebQuests to be implemented as sharable courseware objects and thus to be interoperable with most learning management systems (LMS).

Sebastian Fleissner, Yuen-Yan Chan, Tsz Hon Yuen, Victor Ng

Technical Session on Computer Graphics (TSCG 2006)

Practical Boolean Operations on Point-Sampled Models

Boolean operations are an important tool in geometric modeling. This paper proposes a novel Boolean operations algorithm for point-sampled models based on implicit function transformation. In the algorithm, the point models are first converted to implicit surfaces, and Boolean operations for implicit surfaces are then applied to the point models. Simple forms of the Boolean operations for implicit surfaces are presented. RBF variational interpolation based on scattered points is used to convert the point models into implicit surfaces. Using this algorithm, a complex point model can be constructed from several point models. The algorithm is also suitable for Boolean operations on mesh models, and can implement cut-and-paste editing of mesh models.

Xujia Qin, Weihong Wang, Qu Li
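The "simple forms" of implicit-surface Booleans referred to above are conventionally min/max combinations of the defining functions (with f(p) < 0 inside, f(p) = 0 on the surface). A minimal sketch, using signed-distance spheres to stand in for the RBF-interpolated surfaces of the paper:

```python
import numpy as np

def sphere(center, r):
    # Signed distance to a sphere: negative inside, zero on the surface.
    c = np.asarray(center, dtype=float)
    return lambda p: float(np.linalg.norm(np.asarray(p, dtype=float) - c)) - r

# Boolean operations on implicit surfaces in their simple min/max form.
def union(f, g):        return lambda p: min(f(p), g(p))
def intersection(f, g): return lambda p: max(f(p), g(p))
def difference(f, g):   return lambda p: max(f(p), -g(p))
```

For example, with two unit spheres centered at the origin and at (1, 0, 0), the point (0.5, 0, 0) lies inside both their union and their intersection, but outside the difference of the first minus the second.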

Dynamic Brush Stroke Generation for an Impressionist Effect

We propose dynamic brush stroke generation for an impressionist effect on source images, using reference data. The colors used are drawn from the actual palettes of artists. To create the palette, we referred mostly to the colors used in Van Gogh’s works, and determined the color of each brush stroke by transferring it to the most similar palette color after comparing the colors in the source image with the palette colors. Also, by referring to the edge orientation of the source image, we applied a brush stroke direction that follows the edges. The stroke sizes were determined by the sizes of the objects, from wide to narrow brushes. Finally, we applied spline curve and line shapes. The brush strokes created in this way were applied separately to the segmented images and composed after rendering.

Youngsup Park, Kyunghyun Yoon

Image-Based 3D Face Modeling from Stereo Images

This paper presents an automatic and novel method to generate a realistic 3D face model from stereo images. Typically, an image-based 3D face modeling system requires human intervention in the facial feature extraction stage. To remove this intervention, we propose an HT (Hue-Tint) skin color model for facial feature extraction. Based on the proposed chrominance model, we can detect the facial region and extract facial feature positions. Subsequently, the facial features are adjusted using edge information of the detected facial region along with the proportions of the face. Moreover, the proposed extraction method can effectively eliminate the epipolar constraints caused by the stereo vision approach. In order to produce a realistic 3D face model, we adopt an RBF (Radial Basis Function) to deform the generic face model according to the facial feature points detected from the stereo images. For the deformation, the locality parameter of the RBF is critical, since it can have a significant impact on the quality of the deformation. We therefore propose a new parameter decision rule applicable to scattered data interpolation: it clusters the feature points to detect the points under the influence of each width parameter. The experiments show that the proposed approach efficiently detects facial feature points and produces a realistic 3D face model.

Kyongpil Min, Junchul Chun

Perception-Guided Simplification for Real Time Navigation of Very Large-Scale Terrain Environments

Interactive navigation over very large-scale terrain environments is a highly challenging topic, since the model involved often contains a huge amount of geometry and texture, which may even exceed the capacity of system memory. In this paper, we put forward a high-performance technique for real-time walkthrough of very large-scale terrain environments. In this technique, perception-guided simplification is adopted through the application of a new error metric, the Constrained Normal Cone, in the preprocessing stage. Both silhouette-preserving and shading-preserving criteria are satisfied in our navigation system. Furthermore, the Potential Silhouette pre-extracted from each block can be used to construct the incremental horizon at run time for delicate visibility testing. The resulting visibility information prevents unnecessary data from being loaded into main memory, so that the workload of out-of-core data paging and the number of triangles rendered are substantially reduced, and the frame rates are greatly increased. Experimental results show the high performance of our approach.

Sheng Li, JunFeng Ji, XueHui Liu, GuoPing Wang, EnHua Wu

3D Building Reconstruction from LIDAR Data

As a fast data acquisition technique, Light Detection and Ranging (LIDAR) is widely used in many applications, such as visualization, GIS and mobile communication. Since manual surface reconstruction is very costly and time consuming, the development of automated algorithms is of great importance. In this paper a fully automated technique to extract urban building models from LIDAR data is presented. First, LIDAR points are re-sampled into regular grids with the optimal pixel size. After filling holes in the range image, Delaunay triangulation is utilized to generate 3D triangle meshes. A building height mask is then applied to extract building roof points. Finally, a geometric primitive-based fitting approach is adopted to verify and refine the reconstructed models. The algorithm is tested on two buildings from a locally acquired LIDAR data set. The results indicate that our approach is suitable for automatically producing urban building models from LIDAR data.

Yuan Luo, Marina L. Gavrilova

Efficient Computation of Elliptic Gabriel Graph

Searching neighboring points around a point in a point set has been important for various applications and there have been extensive studies such as the minimum spanning tree, relative neighborhood graph, Delaunay triangulation, Gabriel graph, and so on.

Observing the fact that the previous approaches of neighbor search may possibly sample severely biased neighbors in a set of unevenly distributed points, an elliptic Gabriel graph has recently been proposed. By extending the influence region from a circle to an ellipse, the elliptic Gabriel graph generalizes the ordinary Gabriel graph. Hence, the skewness in the sampled neighbors is rather reduced.

In this paper, we present a simple observation which allows the correct elliptic Gabriel graph to be computed efficiently by reducing the search space.

Changhee Lee, Donguk Kim, Hayong Shin, Deok-Soo Kim
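For reference, the ordinary Gabriel graph that the elliptic version generalizes admits a brute-force empty-circle test; the paper's contribution is precisely to avoid this kind of exhaustive search by pruning the space of candidate neighbors. A minimal sketch (naming is ours):

```python
import numpy as np
from itertools import combinations

def gabriel_edges(P):
    # Brute-force O(n^3) Gabriel graph: (i, j) is an edge iff the circle
    # with segment P[i]P[j] as diameter contains no other point of the set.
    # The elliptic variant replaces this circle with an ellipse controlled
    # by an aspect parameter, shrinking or widening the influence region.
    P = np.asarray(P, dtype=float)
    n = len(P)
    edges = []
    for i, j in combinations(range(n), 2):
        mid = (P[i] + P[j]) / 2
        r2 = np.sum((P[i] - P[j]) ** 2) / 4
        empty = all(np.sum((P[k] - mid) ** 2) >= r2
                    for k in range(n) if k not in (i, j))
        if empty:
            edges.append((i, j))
    return edges
```

With three nearly collinear points, the middle point falls inside the diameter circle of the outer pair, so only the two short edges survive.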

Systematic Sampling in Image-Synthesis

In this paper we investigate systematic sampling in the image-synthesis context. Systematic sampling has been widely used in stereology to improve the efficiency of different probes in experimental design. These designs are theoretically based on estimators of 1-dimensional and 2-dimensional integrals. For the particular case of the characteristic function, the variance of these estimators has been shown to be asymptotically O(n^{-3/2}), which improves on the O(n^{-1}) behaviour of independent estimators using uniform sampling. Thus, when no a priori knowledge of the integrand function is available, as in several image synthesis techniques, systematic sampling efficiently reduces the computational cost.

Mateu Sbert, Jaume Rigau, Miquel Feixas, Laszlo Neumann
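The variance gain for a characteristic-function integrand can be demonstrated with a toy 1-D estimator. This sketch is our own illustration, not the paper's estimator: systematic sampling draws one shared random offset and places the remaining samples on a regular lattice, versus fully independent uniform samples:

```python
import random

def chi(x, t=0.37):
    # Characteristic (indicator) function of [0, t]; its exact integral is t.
    return 1.0 if x <= t else 0.0

def independent_estimate(n):
    # Plain Monte Carlo with n independent uniform samples.
    return sum(chi(random.random()) for _ in range(n)) / n

def systematic_estimate(n):
    # One random offset u in [0, 1/n); samples u, u + 1/n, ..., u + (n-1)/n.
    u = random.random() / n
    return sum(chi(u + k / n) for k in range(n)) / n

def empirical_variance(estimator, n, runs=2000):
    vals = [estimator(n) for _ in range(runs)]
    m = sum(vals) / runs
    return sum((v - m) ** 2 for v in vals) / runs
```

For an indicator integrand the systematic estimate can only err at the single lattice cell containing the discontinuity, so its variance falls far below the t(1-t)/n of the independent estimator.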

History-Based Selective Boolean Operations for Feature-Based Multi-resolution Modeling

The feature-based multi-resolution models of a solid represent an object at different levels of detail (LOD), particularly in the unit of feature, according to a certain LOD criterion. The need for feature-based multi-resolution modeling is currently increasing for various engineering tasks, such as analysis, network-based collaborative design, virtual prototyping and manufacturing. To provide feature-based multi-resolution models, it is essential to generate a valid and unique solid model at each LOD after feature rearrangement. To meet this requirement, we propose history-based selective Boolean operations that satisfy the commutative law between union and subtraction by considering the history of the Boolean operations for the features. Owing to the commutative properties, the feature-based multi-resolution modeling technique based on these operations guarantees the same resulting shape as the original, and provides a unique and reasonable shape at each intermediate LOD after arbitrary feature rearrangement.

Sang Hun Lee, Kunwoo Lee, Sungchan Kim

Path Finding Method for Various Applications

Increasing the robustness and flexibility of the tool path finding method may broaden the application fields of pocket machining. We aim to develop a path finding method for various applications. Through integration of the entire tool path finding process, the method becomes not merely capable of generating an efficient path but also robust enough to be applied to any system with any configuration. The flexibility of the devised method enables us to broaden the application of pocketing to fields such as prototype printed circuit board manufacturing. The devised method is applied to generate a clearing path for prototype printed circuit boards. The results verify that the method is concise and simple, yet robust and flexible enough to achieve the optimal path in any configuration.

Jaesung Song, Manseung Seo, Masahiko Onosato

Natural-Textured Mesh Stream Modeling from Depth Image-Based Representation

This paper presents modeling techniques to generate natural-textured 3D mesh streams from depth image-based representation (DIBR). Although DIBR is a useful representation for expressing 2.5D information about dynamic real objects, its usage is limited to point-based applications. In order to generate smooth and textured 3D mesh models, depth images are captured using active depth sensors and processed with segmentation, noise filtering, and an adaptive sampling technique based on the depth variation. 3D meshes are reconstructed by constrained Delaunay triangulation and smoothed with a 3D Gaussian filter. Each mesh model is parameterized for texture mapping of a corresponding color image. The proposed procedures are automated to generate a 3D mesh stream from image sequences of hundreds of frames without user intervention. The final output is a natural-textured mesh model per frame, which can be used for arbitrary view synthesis in virtual reality or broadcasting applications.

Seung Man Kim, Jeung Chul Park, Kwan H. Lee

Workload Characterization in Multiplayer Online Games

In recent years, distributed virtual environments (DVEs) have become a major trend in distributed applications, mainly due to the enormous popularity of multiplayer online games in the entertainment industry. Although the workload generated by avatars in a DVE system has already been characterized, the special features of multiplayer online games require a particular workload characterization for these applications.

This paper presents the workload characterization of multiplayer online games. This characterization is based on real traces, and it shows that the movement patterns of avatars used to develop optimization techniques for DVE systems can be extrapolated to First Person Shooting networked games. However, the workload that each avatar adds to the game server is higher than the one proposed in the literature.

Pedro Morillo, Juan Manuel Orduña, Marcos Fernández

Delaunay-Based Polygon Morphing Across a Change in Topology

We present a new object-based algorithm for morphing between two shapes with an arbitrary number of polygons and arbitrarily different topology. Many solutions have been proposed for morphing between two polygons. However, there has been little attention to morphing between different numbers of polygons, across a change in topology. A modified conforming Delaunay triangulation is used to build the vertex correspondence. Polygon evolution is used to smooth the morph. The morph requires no user interaction, avoids self-intersection, uses dynamic vertex correspondence, and follows nonlinear vertex paths.

Xiaqing Wu, John K. Johnstone

Jittering Reduction in Marker-Based Augmented Reality Systems

Augmented Reality systems have recently become widely used, due to the new open source libraries that have emerged for fast application development. In this paper we address one of the most relevant problems in this type of system: oscillation in the camera pose estimates. We study the oscillation of a system developed using the ARToolkit library, and apply both average and Kalman filters to stabilize the estimates. Using a filter substantially reduces oscillation, thus improving the system’s usability.

Monica Rubio, Arturo Quintana, Hebert Pérez-Rosés, Ricardo Quirós, Emilio Camahort
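The two stabilization approaches compared above can be sketched per pose parameter as a sliding-window average and a scalar constant-state Kalman filter. The noise parameters and class names below are illustrative assumptions, not values from the paper:

```python
from collections import deque

class MovingAverageFilter:
    # Sliding-window average over the last `size` pose estimates.
    def __init__(self, size=5):
        self.window = deque(maxlen=size)

    def update(self, value):
        self.window.append(value)
        return sum(self.window) / len(self.window)

class ScalarKalmanFilter:
    # 1-D constant-state Kalman filter applied independently to each
    # pose parameter: q is process noise, r is measurement noise.
    def __init__(self, q=1e-4, r=1e-2):
        self.q, self.r = q, r
        self.x, self.p = 0.0, 1.0   # state estimate and its variance

    def update(self, z):
        self.p += self.q                 # predict: variance grows
        k = self.p / (self.p + self.r)   # Kalman gain
        self.x += k * (z - self.x)       # correct toward measurement
        self.p *= (1 - k)
        return self.x
```

Feeding either filter a jittery but nominally constant pose parameter yields an output whose variance is well below that of the raw estimates, which is the oscillation reduction the paper measures.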

Workshop on Modeling Complex Systems (MCS 2006)

A Bias-Variance-Complexity Trade-Off Framework for Complex System Modeling

This study proposes a new complex system modeling approach by extending a bias-variance trade-off into a bias-variance-complexity trade-off framework. In the framework, the computational complexity is introduced for system modeling. For testing purposes, complex financial system data are used for modeling. Empirical results obtained reveal that this novel approach performs well in complex system modeling and can improve the performance of complex systems by way of model ensemble within the framework.

Lean Yu, Kin Keung Lai, Shouyang Wang, Wei Huang

A Neural Network Strategy for 3D Surface Registration

3D surface registration is commonly used in shape analysis, surface representation, and medical image aided surgery. This technique is extremely computationally expensive, and can sometimes produce poor results on unstructured mass data because of its iterative search procedure and ill-suited distance function. In this paper, we propose a novel neural network strategy for surface registration. Beforehand, a typical preprocessing procedure, mesh PCA, is used for coordinate direction normalization. The results and comparisons show that such a neural network method is a promising approach for 3D shape matching.

Heng Liu, Jingqi Yan, David Zhang

Parallel Hierarchical Methods for Complex Systems Optimization

The paper is concerned with computational research on large scale systems. The focus is on hierarchical optimization methods that can be successfully applied to large scale optimization problems. A key issue is the possibility of solving several lower-dimensional problems instead of one global high-dimensional task. Particular emphasis is laid on coarse-grained parallel implementation and its effectiveness. The paper discusses the use of price coordination for real-life systems optimization. The results of numerical experiments performed for mean-variance portfolio selection using a cluster of computers are presented and discussed.

Ewa Niewiadomska-Szynkiewicz

Numerical Modelling of Coastal Currents

A numerical model has been developed for the simulation of transformations of traveling coastal waves and wave-induced coastal currents. The model is applicable to varying bottom topographies and has two components, a wave propagation model and a wave-driven current model. The wave propagation model is based on the nonlinear parabolic mild slope equation and can simulate wave shoaling, refraction, diffraction and breaking. Different wave approach angles can be investigated on the same computational grid. The wave-driven current model is based on vertically averaged nonlinear shallow water equations. In the solution method, the partial differential equations are replaced by a set of finite difference equations on a space-staggered grid. The model has been applied to the Obaköy coastal waters on the Mediterranean coast of Turkey, where current measurements are available.

Lale Balas, Asu İnan, İpek Yıldız

A Novel Model for Bacterial Foraging in Varying Environments

This paper presents a study of modelling bacterial foraging behaviours in varying environments. The purpose of the study is to investigate a novel biologically inspired methodology for complex system modelling and computation, particularly for optimisation of complex dynamic systems, although this paper is mainly concerned with a novel modelling methodology. Our study focuses on the use of the individual-based modelling (IbM) method to simulate the activities of bacteria and the evolution of bacterial colonies. For this study, an architecture and a mathematical framework are designed to model bacterial foraging patterns. Under this architecture, the interactions between the environment and bacteria are investigated. A bacterial chemotaxis algorithm is derived in the framework, and simulation studies are undertaken to evaluate this algorithm. The simulation results show that the proposed algorithm can reflect bacterial behaviours and population evolution in varying environments, and also demonstrate its potential for optimisation of dynamic systems.

W. J. Tang, Q. H. Wu, J. R. Saunders

Stochastic Modeling of Cytoplasmic Reactions in Complex Biological Systems

The use of “in silico” stochastic event based modeling can identify the dynamic interactions of different processes in a complex biological system. This requires the computation of the time taken by different events in the system based on their biological functions. One such important event is the reactions between the molecules inside the cytoplasm of a cell. We present a mathematical formulation for the estimation of the reaction time between two molecules within a cell based on the system state. We derive expressions for the average and second moment of the time for reaction to be used by our stochastic event-based simulation. Unlike rate equations, the proposed model does not require the assumption of concentration stability for multiple molecule reactions.

Preetam Ghosh, Samik Ghosh, Kalyan Basu, Sajal Das, Simon Daefler
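The event time for a bimolecular reaction in a stochastic event-based simulation is conventionally drawn from an exponential distribution whose rate is the reaction propensity, with mean 1/a and second moment 2/a². This is the standard Gillespie-style formulation given as context, not the authors' state-dependent model:

```python
import random

def propensity(c, n1, n2):
    # Stochastic rate constant c times the number of reactant pairs
    # for a bimolecular reaction A + B -> C with n1 and n2 molecules.
    return c * n1 * n2

def sample_reaction_time(a):
    # Waiting time to the next reaction event: exponential with rate a.
    return random.expovariate(a)

def moments(a):
    # Analytic mean and second moment of the Exp(a) waiting time,
    # the two quantities the simulation scheduler needs.
    return 1.0 / a, 2.0 / a**2
```

The paper's contribution is to derive these moments from the system state inside the cytoplasm rather than from a fixed rate constant, avoiding the concentration-stability assumption of rate equations.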

Modelling of Complex Cryptographic Systems in Terms of Simple Cellular Automata

In this work, it is shown that binary sequences generated by a class of linear cellular automata equal the output sequences of nonlinear sequence generators. Emphasis is on cryptographic characteristics of such sequences (period, linear complexity or number of different output sequences). These simple linear automata easily model complex nonlinear generators with application in secret key cryptography.

Amparo Fúster-Sabater, Pino Caballero-Gil, Maria Eugenia Pazo-Robles

Modeling Supply Chain Complexity Using a Distributed Multi-objective Genetic Algorithm

The aim of this paper is to use a Distributed Multi-objective Genetic Algorithm (DMOGA) to model and optimally solve a three sub-chain model of the supply chain (SC) problem. It is widely accepted that all SC problems are characterized by decisions that are conflicting by nature, distributed, and constrained. Modeling these complex problems using multiple objectives, constraint satisfaction, and distributed algorithms gives the decision maker a set of optimal or near-optimal solutions from which to choose. This paper discusses some of the literature on SC optimization, proposes the use of the DMOGA to solve SC optimization problems for optimality, and presents the application of the DMOGA to a simulated hypothetical SC problem with three sub-chains, followed by a discussion of the algorithm’s performance based on simulation results.

Khalid Al-Mutawah, Vincent Lee, Yen Cheung

Different Responses of Two Types of Class II Neurons for Fluctuated Inputs

We investigated the statistical characteristics of spike sequences of two types of Class II neurons, those with subcritical and those with supercritical Hopf bifurcations, under uncorrelated fluctuating inputs, using two statistical coefficients: the coefficient of variation and the skewness coefficient. We used the Morris-Lecar model and the Hindmarsh-Rose model as neuron models. As a result, even though the models belong to the same class, their interspike interval statistics exhibit different characteristics. We also discovered that the origin of the differences lies in the precise bifurcation structure, and that the differences also affect the relationship between input variation and output variation. The results indicate that at least three classes, distinguished by bifurcation type, are needed to classify these neurons.

Ryosuke Hosaka, Yutaka Sakai, Tohru Ikeguchi, Shuji Yoshizawa

Branching Method for Computation of Probability of Given Data Stream Transmission

Methods for computing network reliability and reliability polynomials are presented. Networks with absolutely reliable edges and unreliable nodes are considered. Reliability computation tasks are considered using quality-of-service criteria such as the ability to transmit a given data stream with and without time restrictions, and corresponding algorithms are developed.

Mikhail Y. Murzin, Alexey S. Rodionov

An Agent-Based Approach to Immune Modelling

This study focuses on trying to understand why the range of experience with respect to HIV infection is so diverse, especially as regards the latency period. The challenge is to determine what assumptions can be made about the nature of antigenic invasion and diversity that can be modelled, tested and argued plausibly. To investigate this, an agent-based approach is used to extract high-level behaviour, which cannot be described analytically, from the set of interaction rules at the cellular level. A prototype model encompasses local variation in baseline properties contributing to the individual disease experience, and is embedded in a network which mimics the chain of lymphatic nodes. Dealing with massively multi-agent systems requires major computational effort. However, parallelisation methods are a natural consequence and advantage of the multi-agent approach; these are implemented using the MPI library.

Dimitri Perrin, Heather J. Ruskin, John Burns, Martin Crane

Comparison of Homogeneous and Heterogeneous Motorised Traffic at Signalised and Two-Way Stop Control Single Lane Intersection

Results of a microscopic model of mixed motorised traffic, consisting of short vehicles (e.g. cars) and long vehicles (taken to be double the length of the short vehicles), at an urban two-way single-lane intersection are presented here. We model the intersection using both signalised and un-signalised stop-control rules. The model allows for the detection of bottleneck activity in both homogeneous and heterogeneous traffic conditions, and was validated by means of field data collected in Dublin, Ireland. The validated model was used to study the impact of including long vehicles on traffic performance in an urban environment. The traffic mix is, however, taken to be dominated by short vehicles overall, in agreement with the observed live data collected.

Puspita Deo, Heather J. Ruskin

A New Algorithm for Complex Stochastic Boolean Systems

Many different complex systems depend on a large number n of mutually independent random Boolean variables. For these systems, each one of the 2^n possible situations is described by its corresponding binary n-tuple, (u1, ..., un), of 0s and 1s, and it has its own occurrence probability Pr{(u1, ..., un)}. In this context, this paper provides a simple algorithm for rapidly generating all the binary n-tuples of 0s and 1s whose occurrence probabilities are always greater than or equal to (less than or equal to) the occurrence probability Pr{(u1, ..., un)} of an arbitrary fixed binary n-tuple (u1, ..., un) ∈ {0,1}^n. The results can be applied to many stochastic Boolean phenomena related to any scientific, technical or social area. All required previous results are described in detail, so that the presentation is self-contained.

Luis González
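For small n, the set described above can be produced by brute force, which makes the problem statement concrete; the marginal probabilities below are hypothetical, and the paper's contribution is precisely an algorithm that avoids scanning all 2^n tuples:

```python
from itertools import product
from math import prod

def tuple_probability(u, p):
    """Occurrence probability of binary n-tuple u when Pr{x_i = 1} = p[i],
    the variables x_i being mutually independent."""
    return prod(p[i] if bit else 1.0 - p[i] for i, bit in enumerate(u))

def tuples_at_least_as_probable(u_fixed, p):
    """Brute-force baseline: all 2**n binary n-tuples whose occurrence
    probability is >= that of the fixed tuple u_fixed."""
    threshold = tuple_probability(u_fixed, p)
    n = len(u_fixed)
    return [u for u in product((0, 1), repeat=n)
            if tuple_probability(u, p) >= threshold]

p = [0.9, 0.7, 0.6]                       # hypothetical marginal probabilities
result = tuples_at_least_as_probable((1, 0, 1), p)
print(result)
```

Here Pr{(1,0,1)} = 0.9 x 0.3 x 0.6 = 0.162, and only the tuples (1,0,1), (1,1,0) and (1,1,1) reach that threshold.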

Theoretical Steps Towards Modelling Resilience in Complex Systems

This paper reports on theoretical work aimed at providing a harmonious set of tools for tackling the thorny problem of resilience in complex systems. Specifically, key features of resilience are laid out, and the ramifications on necessary theoretical and implementational machinery are analysed. These ramifications constitute a problem definition that, to the authors’ knowledge, no extant system is sufficiently sophisticated to meet. It is, however, possible to identify existing components that can be combined to provide the necessary expressivity. In particular, theoretical ecology has individual based modelling approaches that are consonant with artificial intelligence techniques in multi-agent systems, and in philosophical logic, channel theory provides a mechanism for modelling both system energy and system information flow. The paper demonstrates that it is possible to integrate these components into a coherent theoretical framework, laying a foundation for implementation and testing.

Cathy Hawes, Chris Reed

Collaborative Multidiscipline/Multiscale Analysis, Modeling, Simulation and Integration in Complex Systems: System Biology

Analysis, modeling and integration of complex systems are difficult. Collaboration among dissimilar disciplines is difficult. But integrated interdisciplinary collaboration is essential in making sense of complex systems. This creates a dilemma, with complex systems at the end of one horn and interdisciplinary collaboration at the other. There is another wrinkle: combining the conceptual spaces at the ends of the horns makes each much more difficult.

Rather than trying to pass between the horns of this dilemma, we weave a semantic unification space between them. This does more than iron out the wrinkle; the threads of common image schemas, cognitive metaphors and conceptual interfaces form a mesh between the organizations of each problem, making the combined problem easier than either is separately. This paper presents a naturally valid framework, both discipline-specific and discipline-integrating, and a new foundational semantic mechanism for making multidisciplinary sense of complex systems.

Thomas J. Wheeler

Workshop on Structures and Molecular Processes (SMP 2006)

On the Structuring of the Computational Chemistry Virtual Organization COMPCHEM

The first moves to structure the COMPCHEM Grid virtual organization of the computational chemistry community are discussed. A tool based on a credit system developed to guarantee its sustainability is presented.

Antonio Laganà, Antonio Riganelli, Osvaldo Gervasi

Computing Molecular Energy Surfaces on a Grid

SUPSIM is a web application, based on the Java Servlet technologies, to compute ab-initio potential energy surfaces of molecular systems for subsequent dynamical properties calculation. We describe the architecture of SUPSIM, its current implementation, and its possible future developments to tackle chemical systems of increasing complexity.

Loriano Storchi, Francesco Tarantelli, Antonio Laganà

Electronic States in Three Dimensional Quantum Dot/Wetting Layer Structures

Although self-assembled quantum dots are grown on wetting layers, most simulations exclude the wetting layer. The neglected effects on the electronic structure of a pyramidal InAs quantum dot embedded in a GaAs matrix are investigated based on the effective one-electron-band Hamiltonian, the energy- and position-dependent electron effective mass approximation, and a finite-height hard-wall 3D confinement potential. By comparing quantum dots with wetting layers to a dot without a wetting layer, we find that the presence of a wetting layer may substantially affect the electronic structure.

Marta Markiewicz, Heinrich Voss

A Web Based Application to Fit Potential Energy Functionals to ab Initio Values

The implementation of a prototype Internet portal devoted to the fitting of ab initio potential energy values for three-atom reactions is discussed. The application has been designed to run as a part of a Grid simulator of molecular processes.

Leonardo Arteconi, Antonio Laganà, Leonardo Pacifici

Dynamic Load-Balancing for the STEM-II Air Quality Model

The aim of this work is to improve the load balance of the MPI parallel version of the STEM-II air quality model. Several dynamic data distributions are proposed and evaluated on different systems: homogeneous and dedicated, and heterogeneous and/or non-dedicated. Results prove that the dynamic distribution strategies perform better than traditional static distributions. Although all the data distributions presented here have been developed for the STEM-II air quality model, they are also well suited for use in other parallel applications.

J. Carlos Mouriño, María J. Martín, Patricia González, Ramón Doallo

New Molecular Mechanism of Dextran Extension in Single Molecule AFM

A dextran monomer and a 10mer under constant pulling speed or constant force were studied using atomistic simulations. Molecular dynamics (MD) runs with the Amber94 and Amber-Glycam04 force fields were performed. The main result of the present Amber-based MD simulations is that the experimental plateau of the force-extension dependence for dextran can be explained by a transition of the glucopyranose rings in the dextran monomers from a chair to an inverted chair conformation, whereas chair-to-boat transitions occur at higher forces. MD simulations of a coarse-grained model of dextran consisting of two- or three-state monomers were performed to clarify the molecular mechanism of dextran extension.

Igor Neelov, David Adolf, Tom McLeish

Atom-Bond Additive Potentials for Benzene-Rare Gas Clusters

In the process of constructing a realistic a priori modeling of the solvation processes of the benzene molecule, we have carried out a systematic comparison of the popular atom-atom representation of the interaction with the recently proposed atom-bond model for the series of benzene-rare gas systems. Calculated static and dynamic properties of this family of systems are discussed.

Margarita Albertí, Antonio Laganà, Fernando Pirani, Massimiliano Porrini, David Cappelletti

A Simplified Myoglobin Model for Molecular Dynamics Calculations

The myoglobin active site has been investigated by molecular dynamics in order to model its behaviour as a carrier of gas molecules. The simulations, carried out using the DL_POLY package and the Dreiding force field, were able to rationalize some of the observed properties of the system and compared well with DFT results.

Federico Filomia, Noelia Faginas Lago

Parallel Calculation of Propane Bulk Properties

The density of a propane bulk system (in gas and liquid phases) has been estimated using molecular dynamics calculations. The effects of adopting two different force fields (OPLS/AMBER and Atom-Bond), of varying the number of processors and of increasing the number of molecules have been analysed.

Alessandro Costantini, Antonio Laganà, Fernando Pirani

Ab-Initio Multi-reference Study of a Bistable Spiro Molecule

CAS-SCF and MRCI calculations are presented in order to investigate the electronic states involved in the intramolecular charge-transfer process of a bistable spiro cation. The potential energy curves of the ground state and the first three excited states have been calculated, and a double-well potential has been obtained for the ground state. The effect of dynamical correlation was found to be crucial for a quantitative description of this system. Our results also indicate the usefulness of a local-orbital description of bistable systems.

Wissam Helal, Benoît Bories, Stefano Evangelisti, Thierry Leininger, Daniel Maynau

On the Algorithm of Calculation of the Equilibrium Gas-Phase Concentration at the Particle Surface in the Kinetic Models of Aerosol Dynamics

The paper is dedicated to the description of an algorithm for calculating the equilibrium gas-phase concentration at the surface of an aerosol particle, a parameter which defines the rate of mass transport between the gas and aerosol phases in kinetic models of atmospheric aerosol dynamics. Some problems concerning the deduction of surface equilibrium gas-phase concentrations from thermodynamic equilibrium aerosol models are discussed. It is shown that the computational algorithm should be split into two steps; the sought quantity is determined in the second step, after the ion concentrations have been calculated.

Elena N. Stankova

Study of the Passage of an H+ Ion Along a Carbon Nanotube Using Quantum Wavepacket Dynamics

The passage of an H+ ion along a carbon nanotube is studied using a time-dependent wavepacket method. The initial state of the problem can be completely specified in terms of the mean energy of the ion along the nanotube, its radial energy (which is necessarily quantised given the wall boundary condition) and its angular momentum along an axis parallel to the nanotube. Its time-dependent flux across two boundaries at the two ends of the nanotube is monitored and examined for various initial conditions. Such calculations can serve to model more complicated systems, such as the migration of ions along cellular membranes.

Dimitris Skouteris, Antonio Laganá

Workshop on Specific Aspects of Computational Physics and Wavelet Analysis for Modelling Suddenly-Emerging Phenomena in Nonlinear Physics, and Nonlinear Applied Mathematics (PULSES 2006)

Invariance Properties of Practical Test-Functions Used for Generating Asymmetrical Pulses

This paper presents a heuristic algorithm for generating asymmetrical practical test functions using MATLAB procedures. Based on the fact that differential equations can generate only functions similar to test functions (defined as practical test functions), general invariance properties suitable for generating pulses symmetrical about the middle of the working interval are presented. Then some possibilities for obtaining pulses asymmetrical about this middle of the working interval, using the derivative of such a symmetrical pulse, are studied for certain differential equations corresponding to second-order systems (with a unity-step input and with an input represented by a Gaussian pulse). Finally it is shown that an oscillating system can be obtained by joining such working intervals and restoring the initial null conditions for a second-order system in an adequate manner.

Stefan Pusca

Simulating Superradiant Laser Pulses Using Partial Fraction Decomposition and Derivative Procedures

Some phenomena in physics (such as the photonic echo) appear to an external observer as non-causal pulses suddenly emerging from an active medium (prepared by some other optical pulses). Such a pulse is very hard to simulate without using physical quantities corresponding to the internal state of a great number of atoms. Moreover, the high intensity of the photonic echo pulse is proportional to N^2, where N represents the number of atoms involved in the emission. An attempt to simulate such pulses without using a great number of variables consists in the use of test functions, which can be put in correspondence with acausal pulses in physics. However, such an attempt cannot explain the N^2 dependence of the intensity. This study will show that this problem can be solved in a simple manner using principles of partial fraction decomposition and derivative procedures.

Theodora Toma, Stefan Pusca, Cristian Morarescu

Simulating Delayed Pulses in Organic Materials

Fatty acids and cholesterol are important substances for living matter, especially for the biological membrane [1]. Since the liquid-crystal state of these substances can give information on some membrane mechanisms [2], their response to external stimuli within the mesomorphism interval has been widely studied. The possibility of inducing a non-linearity in such systems could lead to a radical change in their dynamics. Interesting non-linear optical laser-based responses were obtained in different thin-film samples. We analyzed these responses and the measurement procedures. For simulating the generation of delayed pulses inside organic mixtures, a mathematical model based on practical test functions has been used. The input pulse (usually an optical or electromagnetic pulse) generates a delayed pulse inside the material medium, and thus a modulated input pulse, represented by a Gaussian function modulated by a sine function, has been used for the simulation, with good results.

Bogdan Lazar, Andreea Sterian, Stefan Pusca, Viorel Paun, Cristian Toma, Cristian Morarescu

Wave Propagation of Shannon Wavelets

The problem of the evolution of a solitary profile having the form of a Shannon wavelet is considered as a solution of a generalization of the Burgers equation. Some nonlinear effects, such as the breaking down of the wave into localized chaotic oscillations, are shown.

Carlo Cattani

Finite Element Analysis of the Thermoforming Manufacturing Process Using the Hyperelastic Mooney-Rivlin Model

Thermoforming is a manufacturing process widely used to produce thin thermoplastic parts. In this process a previously extruded thermoplastic sheet is clamped, then heated and formed into a mold cavity using a differential pressure. In this paper a finite element model of the thermoforming process of an ABS sheet is proposed and numerical results are compared to data from the literature. The thermoplastic sheet is modelled according to the membrane formulation. An implicit time scheme has been adopted for the integration algorithm. The mechanical behaviour of the processed material is assumed to be hyperelastic, according to the two-parameter Mooney-Rivlin model. The mathematical formulation of the mechanical model is presented. The proposed model allows the evaluation of material thinning, stresses, strains and the contact status between the processed material and the die.

Pierpaolo Carlone, Gaetano Salvatore Palazzo

Analysis of Compatibility with Experimental Data of Fractal Descriptions of the Fracture Parameters

In order to check whether fractal theory could be a useful tool for quantitative descriptions of fracture parameters, the present work studied different theoretical models (e.g. Bazant's Size Effect Law (SEL) [1], the Modified Size Effect Law [2,3] and Carpinteri's MultiFractal Scaling Law (MFSL) [4]) of the fracture parameters of concrete specimens, and the compatibility of some of these theoretical models with the experimental data, using certain recent procedures to study global and local compatibility. The fracture parameters can be considered as the main quantities in computational procedures for modeling the fracture of a certain ensemble (a suddenly emerging phenomenon). In the next phase, the thermoelastic generation of ultrasonic perturbations in semitransparent solids was analyzed (using computer simulation) so as to find similarities with material properties such as fractal dimensions, when the heat source is laser radiation. The algorithm and the numerical analysis have taken into account three main physical phenomena: the absorption of electromagnetic energy in the substance with heat generation; thermal diffusion with an electromagnetic-energy-based heat source; and elastodynamic wave generation by thermoelastic expansion.

Dan Iordache, Stefan Pusca, Ghiocel Toma, Viorel Paun, Andreea Sterian, Cristian Morarescu

Inner Potential of Generating Pulses as a Consequence of Recurrent Principles and Specific Computing Architecture

This study presents the existence of an inner potential of generating pulses in physics based on the existence of recurrent formulations of fundamental physics principles. It is shown that basic principles in physics (such as the principle of the constancy of the speed of light in vacuum in any reference system, and the uncertainty principle in quantum theory) make implicit use of terms which also define the conclusion. For example, the idea of a constant light speed implies the use of a measuring method based on clock synchronization performed using a supposed antecedent light signal transmitted and reflected towards the observer. In a similar manner, the uncertainty principle implies the existence of a measuring method for position or time correlated with a subsequent measurement of momentum or energy (measurements which also make use of position and time). To avoid logical contradictions, it is shown that the simplest solution consists in defining the class of reference systems (large-scale elements which are not affected by propagation phenomena or interaction) and the class of transient phenomena (small-scale bodies or waves which undergo an interaction). The inner potential of such a classification (based on different-scale elements) is also shown, together with a specific computing architecture.

Cristian Morarescu

Phenomenological and State Coefficients in Viscoanelastic Medium of Order One (with Memory)

The aim of this work is to indicate a method to measure experimentally some phenomenological and state coefficients, and so to verify some inequalities that follow from the principle of entropy production.

In particular, we will consider the Kluitenberg-Ciancio model for a visco-anelastic medium of order one with memory and the corresponding rheological equation in which, neglecting cross effects between viscous and inelastic flow, four phenomenological and state coefficients appear.

Assuming a harmonic deformation as cause and the stress as effect, we will compare the solution of the Kluitenberg-Ciancio equation with a well-known expression of the stress obtained, from experimental considerations in linear response theory, as a function of the frequency of deformation at constant temperature. This comparison will make it possible to determine the phenomenological and state coefficients as functions of experimentally measurable frequency-dependent quantities.

This method will be applied to polymeric materials such as polyisobutylene.

Vincenzo Ciancio, Francesco Farsaci, Antonino Bartolotta

Analysis of Singularities by Short Haar Wavelet Transform

Wavelets give significant information on the evolution of a time series. In particular, due to their localization properties, the significant local changes in observed data (both in time and in frequency) can be easily detected by a limited set of the corresponding wavelet coefficients. Some examples are given in the following, showing the effectiveness of this method.

Armando Ciancio, Carlo Cattani
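The localization property described above can be illustrated with a one-level Haar transform; the test signal and jump position are our own example, not taken from the paper:

```python
import numpy as np

def haar_details(x):
    """One level of the orthonormal Haar wavelet transform: detail
    coefficients d_k = (x[2k] - x[2k+1]) / sqrt(2)."""
    x = np.asarray(x, dtype=float)
    return (x[0::2] - x[1::2]) / np.sqrt(2.0)

# A smooth signal with one jump placed inside a sample pair: the jump
# produces a single large-magnitude detail coefficient, localizing the
# singularity, while the smooth part yields only small coefficients.
n = 256
i = np.arange(n)
signal = np.sin(2 * np.pi * i / n) + (i >= 129)   # jump inside pair (128, 129)
d = haar_details(signal)
k = int(np.argmax(np.abs(d)))
print(k)   # 64 -> the singularity sits near sample 2*k = 128
```

The smooth sine contributes detail coefficients of order 2*pi/n, roughly 0.02 here, while the unit jump contributes about 1/sqrt(2), so a simple magnitude threshold separates them.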

Spatial Aspects of Interaction Between High-Energy Pulses and Waves Considered as Suddenly Emerging Phenomena

As is known, practical test functions are very useful for modeling suddenly emerging phenomena [1]. In this study we try to use some specific features of these functions for modeling aspects connected with interactions between electromagnetic pulses and material bodies in the relativistic case, when the material body moves with speed v relative to the reference system where the pulse was emitted. First, the problem arising for high-energy electromagnetic pulses interacting with very small particles is presented; in this case, the model considering the transformation of a received wave train by the observer's material medium must be replaced, since the energy of a received high-energy pulse can be higher than the whole energy of some small (elementary) particles. It thus follows that in this case the associated wave train corresponding to the body should also be transformed by the received pulse (scaling aspects having to be taken into consideration). Then it is shown that integral properties of test functions can be used for modeling smooth transitions when the resulting force changes its sign.

Alexandru Toma, Stefan Pusca, Cristian Morarescu

Noise Level Estimation Using Haar Wavelet Packet Trees for Sensor Robust Outlier Detection

The paper is related to on-line noise variance estimation. In practical use, it is important to estimate the noise level from the data rather than to assume that the noise level is known. The paper presents a threshold-free method for on-line peak noise variance estimation, even for signals with a small S/N ratio. The basic idea is to characterize the noise as a part of the measured signal. This is performed through the wavelet tree by choosing the subspaces where the median value of the wavelet components is minimal. The paper also shows some general properties of wavelet packets on which the proposed procedure is based. The developed algorithm is entirely general, even though it is applied using Haar wavelet packets, and it is present in some industrial software platforms for detecting sensor outliers. Moreover, it is currently integrated in the inferential modeling platform of the Advanced Control and Simulation Solution Responsible Unit within ABB's industry division.

Paolo Mercorelli, Alexander Frick
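A simpler, single-subspace relative of the estimator described above is the classical median-based rule applied to the finest Haar detail coefficients; the paper's wavelet-packet-tree subspace search is replaced here by this Donoho-style sketch, with made-up signal parameters:

```python
import numpy as np

def noise_sigma_haar(x):
    """Robust noise-level estimate from finest-scale Haar detail
    coefficients: sigma ~ median(|d|) / 0.6745 (median-of-details rule).
    The paper's method instead searches a wavelet *packet* tree for the
    subspace with minimal median; this is the simpler single-subspace case."""
    x = np.asarray(x, dtype=float)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # finest Haar details
    return np.median(np.abs(d)) / 0.6745

# Slowly varying signal plus Gaussian noise of known sigma = 0.2:
# the slow trend barely touches the finest details, so the estimate
# recovers the noise level from the noisy data alone.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 4 * np.pi, 4096)
noisy = np.sin(t) + rng.normal(0.0, 0.2, size=t.size)
print(round(noise_sigma_haar(noisy), 2))   # close to the true sigma of 0.2
```

The constant 0.6745 is the median of |N(0,1)|, which makes the median of the absolute detail coefficients an unbiased proxy for the noise standard deviation.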

A Theoretical Multiscale Analysis of Electrical Field for Fuel Cells Stack Structures

Fuel cell stack systems are under intensive development for mobile and stationary power applications. In particular, Proton Exchange Membrane (PEM) fuel cells (also known as Polymer Electrolyte Membrane fuel cells) are currently at a more mature stage for ground vehicle applications. This paper proposes an innovative theoretical approach to the analysis of the electrochemical transient behavior (anode-cathode). The transient behavior due to the electrochemical dynamics may impact the behavior of the resulting load current. Boundary conditions influence the resulting electric field, and these boundary conditions depend strongly on physical parameters. Maxwell's equations are used to describe the model. Solutions through dyadic harmonic wavelets at different levels of resolution are presented. The wavelet approach, through its different space-time levels of resolution, can favourably describe the segmented space structure of the stack. In the meantime, the transient dynamics inside the stack can be studied adaptively. An outlook closes the paper.

Carlo Cattani, Paolo Mercorelli, Francesco Villecco, Klaus Harbusch

Workshop on Geocomputation (GC 2006)

Tangent-Linear Models by Augmented LL-Parsers

We describe a novel method for the generation of tangent-linear code by augmentation of LL-parsers generated by the software tool ANTLR. The main advantage of this approach to source-code augmentation is that no internal representation of the original program is required. We consider this work as the basis for further investigations into how far this technique can be extended in the context of more sophisticated transformations, for example the automatic generation of adjoint codes. Our prototype tool AD_C_ANTLR currently accepts a subset of the ANSI C standard. We discuss its theoretical basis, and we present a case study to underline the elegance of the parser-based approach to source augmentation.

Uwe Naumann, Andre Vehreschild

GIS-T Web Services: A New Design Model for Developing GIS Customized ITS Application Systems

GIS-T web services can provide hosted spatial data and GIS functionalities that can be accessed and integrated into different customized ITS applications. This paper presents the system design for building a Web-GIS-based intelligent transportation application system with GIS-T web service technology. The GIS-T web services are designed to perform basic geo-processing tasks, such as address matching, map image display and routing, without maintaining GIS tools or the associated geographical data. The system architecture is designed as a multi-layer framework that integrates web services, Servlet/JSP functions and GIS APIs on the J2EE infrastructure.

A prototype Web-GIS transportation planning system is designed and implemented with the proposed GIS-T web service platform. Through the GIS-T web service, the hosted spatial data and GIS functionalities can also be accessed and integrated into other ITS application systems. This has important application prospects in Web-GIS-based ITS development and application.

Xiaolin Lu

An Algorithm on Extraction of Saline-Alkalized Land by Image Segmentation Based on ETM+ Image

Based on ETM+ remote sensing imagery, and according to the geographical distribution and spectral characteristics of saline-alkalized land, especially light and moderate saline-alkalized land, an algorithm of semi-supervised multi-grade mean-weighted seed region searching is presented. Experimental results indicate that the algorithm has high operation speed, high segmentation accuracy and a certain fault tolerance, and that, to a great extent, the over-segmentation problem can be solved. However, because of the mixed distribution of sandy land and saline-alkalized land, these two classes cannot be effectively segmented by the algorithm.

Li Jianping, Zhang Bai, Zhang Shuqing, Wang Zongming

GIS Web Service Using Context Information in Mobile Environments

Recently the computing environment has moved to open architectures that include Web technologies, and Web services are an important component of this new paradigm. This paper presents the design and implementation of a GIS Web service for mobile devices. As many mobile devices are equipped with GPS (Global Positioning System), it is necessary to handle position information more effectively. We have extended the proxy program in the client device to actively send context information to the server. Based on the context information, the server determines the optimal service mode for a particular client. A working example of a location-based GIS Web service is also presented. By using Web service standards and XML messages we can achieve maximal interoperability for heterogeneous mobile devices.

Xun Li, Woochul Shin, Li Li, Sang Bong Yoo

The System for Predicting the Traffic Flow with the Real-Time Traffic Information

One of the common services of telematics is car navigation, which finds the shortest path from source to target. Until now, many routing algorithms for car navigation have not considered real-time traffic information and have used a static shortest-path algorithm. In this paper, we propose a method to predict future traffic flow. This prediction combines two methods. The first is an accumulated speed pattern, obtained by analysing all past speeds of each road, classified by day of the week and time interval. The second is the Kalman filter. We predicted the traffic flow of each segment by combining the two methods. By experiment, we showed that our algorithm gives a more precise prediction than the commonly used accumulated speed pattern alone. The result can be applied to car navigation to support dynamic shortest paths. In addition, it can give users travel information to avoid congested areas.

Mi-Gyung Cho, Young Jung Yu, SungSoo Kim
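A minimal sketch of the combination described above, with a scalar Kalman filter whose prediction step is pulled toward a hypothetical historical speed pattern; all constants are illustrative, not the paper's calibrated values:

```python
def kalman_speed(pattern, measurements, q=4.0, r=9.0):
    """Scalar Kalman filter blending a historical speed pattern (used as
    the prediction at each step) with real-time speed measurements.
    q: process noise variance, r: measurement noise variance
    (both hypothetical values)."""
    x, p = pattern[0], 100.0              # initial state and uncertainty
    out = []
    for pat, z in zip(pattern, measurements):
        # predict: pull the state toward the historical pattern value
        x, p = pat + 0.8 * (x - pat), p + q
        # update with the real-time measurement z
        k = p / (p + r)                   # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p
        out.append(x)
    return out

historical = [50, 48, 40, 30, 25, 30, 45]   # same-weekday, same-time pattern
realtime = [52, 49, 35, 22, 20, 28, 47]     # current measurements
print([round(v, 1) for v in kalman_speed(historical, realtime)])
```

Raising r makes the estimate trust the historical pattern more; lowering it makes the filter track the live measurements, which is the trade-off the abstract's combined predictor tunes.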

Versioning Based Spatial Record Management Technique for Non-blocking Search Operations of Moving Objects

This paper proposes a multi-version based spatial record management technique for non-blocking search operations on moving objects. The storage manager used for LBS should support efficient concurrency control for the search and update operations of moving objects. Traditionally, the in-place update method with locking is used for updating records in a storage manager, but this method blocks whenever transactions conflict on the same object. In contrast, multi-version concurrency control is a standard technique for avoiding conflicts between reads and writes of the same object. When the multi-version technique is applied to spatial database systems, search and update transactions can access different versions individually. However, storage is wasted, since a version of the whole spatial record has to be stored even if only the aspatial data is updated. In the proposed multi-version based spatial record management technique, aspatial data versions and spatial data versions are managed separately to improve concurrency and reduce the waste of storage.

Ho Seok Kim, Hee Taek Kim, Myung Keun Kim, Gyoung Bae Kim, Hae Young Bae

A Wavelet Packets Watermarking Algorithm Based on Chaos Encryption

In this paper, a wavelet packet watermarking algorithm based on chaos encryption for still digital images is presented. The watermark is converted into a bit sequence by modular arithmetic; at the same time, a random sequence is obtained from a chaotic map and likewise processed by modular arithmetic. The two sequences are then XORed, giving a new encrypted watermark sequence. At the embedding stage, wavelet packet decomposition is applied to the original image, and the encrypted watermark sequence is embedded into some frequency bands of the original image, repeated four times. At the extraction stage, wavelet packet decomposition of the original and watermarked images is used to invert the process. The experimental results demonstrate that the new algorithm is robust to scaling, cropping, JPEG compression and noise attacks.

Jin Cong, Yan Jiang, Zhiguo Qu, Zhongmei Zhang
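The chaos-XOR encryption step described above can be sketched with a logistic map; the map parameter and seed are illustrative, and the wavelet-packet embedding itself is omitted:

```python
def logistic_bits(x0, n, mu=3.99):
    """Bit sequence from the logistic map x -> mu*x*(1-x), thresholded at 0.5."""
    bits, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        bits.append(1 if x >= 0.5 else 0)
    return bits

def xor_crypt(watermark_bits, key_x0):
    """XOR the watermark bit sequence with a chaotic key stream; the same
    call decrypts, since XOR is its own inverse."""
    key = logistic_bits(key_x0, len(watermark_bits))
    return [b ^ k for b, k in zip(watermark_bits, key)]

wm = [1, 0, 1, 1, 0, 0, 1, 0]            # example watermark bits
enc = xor_crypt(wm, key_x0=0.3141)       # encrypt with chaotic key stream
dec = xor_crypt(enc, key_x0=0.3141)      # decrypt with the same seed
print(enc, dec == wm)                    # same seed recovers the watermark
```

The initial condition x0 acts as the secret key: the extreme sensitivity of the logistic map to x0 is what makes the key stream hard to reproduce without the exact seed.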

Workshop on Pattern Recognition and Ubiquitous Computing (PRUC 2006)

On a Face Recognition by the Modified Nonsingular Discriminant Analysis for a Ubiquitous Computing

This paper presents an efficient face recognition method based on a modified nonsingular discriminant analysis for ubiquitous computing. It is popular to extract discriminant features using Fisher linear discriminant analysis (LDA) for general face recognition. In this paper, we propose the modified nonsingular discriminant analysis in order to overcome the small-sample-size problem, in which LDA becomes unrealizable due to the singularity of the scatter matrices. The scatter matrix of the transformed features is nonsingular. From experiments on facial databases, we find that the modified nonsingular discriminant feature extraction achieves significant face recognition performance compared to other LDA-related methods for a limited range of sample sizes and class numbers. Also, recognition by the modified nonsingular discriminant analysis using a TMS320C6711 DSP vision board is demonstrated to highlight the advantages of our algorithm.

Jin Ok Kim, Kwang Hoon Jung, Chin Hyun Chung

Rapid Determination of Compound Rifampicin Tablets Using Near Infrared Spectroscopy with Artificial Neural Network

This paper investigates the application of near-infrared (NIR) spectroscopy with an artificial neural network (ANN) for the synchronous and rapid determination of rifampicin, isoniazid, and pyrazinamide in compound rifampicin tablets. We developed back-propagation (BP) networks that adopt the Levenberg-Marquardt training algorithm and log-sigmoid transfer function, based on the NIR spectra of the samples and the contents of rifampicin, isoniazid, and pyrazinamide. The degree of approximation, a new evaluation criterion for the network, was employed and confirmed the accuracy of the predicted results. The BP networks were optimized by selecting suitable topological structure parameters and the best number of training epochs. Using these BP networks to predict the amounts of rifampicin, isoniazid, and pyrazinamide in the prediction set, the root mean square errors of prediction (RMSEP) are 0.00668, 0.00508, and 0.00680, respectively. These results demonstrate that the method is feasible: it is convenient, rapid, requires no sample pretreatment, and produces no pollution.
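The RMSEP figures quoted above follow the standard definition, which can be stated compactly (a generic formula, not specific to this paper):

```python
import math

def rmsep(predicted, actual):
    """Root mean square error of prediction over a prediction set:
    sqrt( sum_i (p_i - a_i)^2 / n )."""
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)
```

For example, a network whose predictions match the reference contents exactly would report an RMSEP of 0.0.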

Weiliang Guo, Qingfan Meng, Jiahui Lu, Chaojun Jiang, Yanchun Liang, Lirong Teng

An Efficient Search Algorithm for High-Dimensional Indexing Using Cell Based MBR

Among the many issues in high-dimensional index structures using the Minimum Bounding Rectangle (MBR), reduced fan-out and increased overlap between MBRs are key factors that degrade search speed. It is also known that using only the minimum and maximum distances in the MBR pruning process lowers search accuracy. In this paper, we present an index structure using cell-based MBRs, which increases fan-out and avoids overlap, together with a search algorithm that reflects the distribution of data within an MBR. The proposed index structure builds MBRs from the cell units of the Vector Approximation file (VA-file) and produces child MBRs by dividing cells. For k-NN queries, the search algorithm raises accuracy by pruning with the centroid of the values contained in an MBR, in addition to the minimum and maximum distances between the cell-based MBR and the query vector. Through experiments, we find that the proposed search algorithm improves both search speed and accuracy in comparison with existing algorithms.

Bo-Hyun Wang, Byung-Wook Lee

Bimanual Hand Tracking

This paper proposes a novel real-time hand tracking algorithm that handles occlusion. For this purpose, we construct a limb model and maintain it with ARKLT methods, combining a second-order auto-regression model with Kanade-Lucas-Tomasi (KLT) features. Furthermore, the method does not need to categorize types of superimposed hand motion, relying instead on the directivity obtained from the slope direction of the KLT regression. This hand tracking method can thus support gesture and activity recognition techniques frequently used in Human-Robot Interaction (HRI) components.

Hye-Jin Kim, Keun-Chang Kwak, Jaeyeon Lee

On a Feature Extraction by LMCUH Algorithm for a Ubiquitous Computing

This paper proposes an algorithm to detect human faces under various environments. In the first step, information from three color spaces is used to build a color histogram from the first frame of an image. The histogram, obtained by interpolation after combining the three color channels, serves as the input to an LMCUH network. In the second step, a neural network trained with the Levenberg-Marquardt algorithm minimizes the error. The trained network is then used to find faces in test images. The method is especially suited to variations in scale, rotation, lighting, and occlusion of the target image. Experimental results show that two-dimensional face images can be detected effectively using artificial neural network training under various environments, a capability well suited to ubiquitous computing environments.

Jin Ok Kim, Jun Yeong Jang, Chin Hyun Chung

Autoregressive Models of Speech Signal Variability in the Speech Commands Statistical Distinction

In the task of statistical distinction of speech commands (SC), applying SC variability models considerably simplifies both the likelihood-ratio construction procedure and the likelihood-ratio expression itself, reducing it to the well-known chi-square criterion. Computer modeling allows us to use SC variability models in synthesizing the SC distinction decision rule.
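The chi-square criterion that the decision rule reduces to can be illustrated with the standard Pearson statistic (a generic textbook illustration; the paper's exact test construction is not given in this abstract):

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square statistic: sum over bins of (O - E)^2 / E.
    Large values indicate the observed counts deviate from the model."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

A perfect match between observed and expected counts gives a statistic of 0.0; the decision rule then compares the statistic against a chi-square quantile.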

Victor Krasheninnikov, Andrey Armer, Natalia Krasheninnikova, Valery Derevyankin, Victor Kozhevnikov, Nikolay Makarov

u-Table: A Tabletop Interface for Multiple Users

A tabletop user interface called u-Table is proposed in this paper. u-Table provides a cooperative place to share multimedia data among portable devices. We developed simple methods for device recognition and fingertip detection, so users can intuitively manipulate data and devices on u-Table using their fingertips. We propose new interaction methods for the tabletop interface, such as a rotational window, a rotational pop-up menu, an inertial widget, and a transformable tabletop. We also developed utility programs for multimedia data communication as well as application programs, and evaluated the performance of the fingertip detection algorithm.

Jangho Lee, Jee-In Kim

Implementation of Embedded System for Intelligent Image Recognition and Processing

The main challenges for image recognition in embedded system design are the uncertainty of the mobile user's location and the limitations of the measurement algorithm. This paper demonstrates an implementation of an embedded system used for vehicle tracking and license plate recognition. The plate image is captured by a camera attached to a mobile device, and the encoded data is transmitted, along with location information, over a wireless communication network to a server at a remote site. To track a user's location, the image recognition is estimated with priority given to spatial and attribute information.

In this paper, we present an embedded system providing an intelligent interface between vehicles and users, employing a spatial relative-distance algorithm. Comparing the image and location information yields comparable results and confirms that the embedded system is fairly evaluated by the intelligent image recognition technique.

Taikyeong Jeong, Jinsuk Kang, Yeon Sung Choi

Workshop on Data Storage Devices and Systems (DSDS 2006)

A Software Reproduction of Virtual Memory for Deeply Embedded Systems

Both the hardware cost and the power consumption of computer systems depend heavily on the size of main memory, namely DRAM. This is especially important in tiny embedded systems (e.g., micro sensors), which are produced at large scale and must operate for as long as possible, e.g., ten years. Although several methods have been developed to reduce program code and data size, most of them need extra hardware devices, making them unsuitable for tiny systems. For example, a virtual memory system needs both MMU and TLB hardware to execute large programs in a small memory. This paper presents a software reproduction of the virtual memory system, focusing on the paging mechanism. To logically expand the physical memory space, the proposed method compacts, compresses, and swaps in/out heap memory blocks, which typically account for over half of total memory size. A prototype implementation verifies that the proposed method can more than double effective memory capacity. As a result, large programs run in parallel with a reasonable overhead, comparable to that of hardware-based VM systems.
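The compress-and-swap idea at the heart of this software VM can be modeled in a few lines. This is a toy sketch with made-up class and method names, not the paper's implementation:

```python
import zlib

class CompressedHeap:
    """Toy model of software paging: 'swap out' a heap block by
    compressing it in place of writing it to a backing device; 'swap in'
    by decompressing on the next access (a simulated page fault)."""

    def __init__(self):
        self.blocks = {}    # block id -> uncompressed bytes (resident)
        self.swapped = {}   # block id -> compressed bytes (evicted)

    def store(self, bid, data):
        self.blocks[bid] = data

    def swap_out(self, bid):
        # Evict: replace the resident block with its compressed form.
        self.swapped[bid] = zlib.compress(self.blocks.pop(bid))

    def access(self, bid):
        if bid in self.swapped:  # fault: bring the block back in
            self.blocks[bid] = zlib.decompress(self.swapped.pop(bid))
        return self.blocks[bid]
```

Heap data with any redundancy compresses well, which is why compressing evicted blocks can logically expand memory without an MMU or TLB.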

Keun Soo Yim, Jae Don Lee, Jungkeun Park, Jeong-Joon Yoo, Chaeseok Im, Yeonseung Ryu

Block-Level Storage Security Architectures

With the unprecedented growth of storage systems throughout today's society, threats to stored sensitive information have become critical issues that must be addressed. Further, compared with the transient risks to data in flight, the risks to data at rest are more enduring. While many strategies and mechanisms implement storage security for data at rest, solutions implemented at the application or operating-system level have several shortcomings, including weak security and a heavy burden on server load. In this paper, we propose two block-level hardware security architectures: a store-and-forward architecture and a cut-through architecture. We design and implement these architectures on an FPGA. Our experimental results show that our schemes achieve transparency and completeness in real time without decreasing system performance.

Shichao Ma, Jizhong Han, Zhensong Wang

An Intelligent Garbage Collection Algorithm for Flash Memory Storages

Flash memory cannot be overwritten unless erased in advance. To avoid an erase on every update, non-in-place update schemes have been used. Since updates are not performed in place, obsolete data must later be reclaimed by garbage collection. In this paper, we study a new garbage collection algorithm that reduces cleaning costs such as the number of erase operations and the number of data copies. The proposed scheme automatically predicts the future I/O workload and intelligently selects victim blocks according to the predicted workload. Experimental results show that the proposed scheme performs especially well when the degree of locality is high.
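The cost side of victim selection can be sketched with the common greedy baseline: reclaiming the erase block with the fewest valid pages minimizes the data copied per erase. The paper's scheme additionally weights this choice by a predicted I/O workload, which is not reproduced here:

```python
def select_victim(blocks):
    """Greedy baseline for flash garbage collection: pick the erase
    block with the fewest valid pages, i.e. the lowest copy cost.
    Each block is a dict such as {'id': 0, 'valid': 3, 'invalid': 61}."""
    return min(blocks, key=lambda b: b['valid'])
```

Under high locality, hot blocks quickly accumulate invalid pages, so a workload-aware selector can wait for them to become cheap victims, which is the behavior the abstract reports.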

Long-zhe Han, Yeonseung Ryu, Tae-sun Chung, Myungho Lee, Sukwon Hong

Design and Implementation of a Security Framework Based on the Object-Based Storage Device Standard

Recently, demand for security frameworks for network-based storage mechanisms has increased considerably, as these mechanisms take on important roles in industry. However, there has been minimal research on and development of such frameworks. In this paper, the design and implementation of a security framework based on the Object-based Storage Device (OSD) standard [1] is presented. The proposed security framework differs from the OSD standard in several functional respects.

Kwangsun Ko, Gu Su Kim, June Kim, JungHyun Han, Ungmo Kim, Young Ik Eom

A Snappy B+-Trees Index Reconstruction for Main-Memory Storage Systems

A main-memory system employs main memory rather than disk as primary storage and efficiently supports various real-time applications that require high performance. For real-time service, the time to recover the system from failure must be short, and fast index reconstruction is an essential step in data recovery. In this paper, we present a snappy B+-Tree reconstruction algorithm called Max-PL. The basic algorithm (Max) stores the max keys of the leaf nodes at backup time and reconstructs the B+-Tree index structure from these pre-stored max keys at restoration time; Max-PL adds parallelism to Max to improve performance. We analyze the time complexity of the algorithm and perform an experimental evaluation comparing its performance with others. Max-PL achieves a speedup of at least 2 over batch construction and at least 6.7 over B+-tree insertion.
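The core of the Max idea, rebuilding the internal levels bottom-up from pre-stored per-leaf max keys, can be sketched as follows. The representation is simplified to key lists only (no child pointers), and the fanout value is an assumption, not the paper's:

```python
def rebuild_index(leaf_max_keys, fanout=4):
    """Bulk-build internal B+-Tree levels bottom-up from the max keys
    stored at backup time. Each internal node covers up to `fanout`
    children and carries the max key of its last child. Returns the
    list of levels, leaves first and root last."""
    levels = [list(leaf_max_keys)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        nxt = [prev[min(i + fanout, len(prev)) - 1]
               for i in range(0, len(prev), fanout)]
        levels.append(nxt)
    return levels
```

Because the leaves are already sorted at backup time, no per-key insertions or node splits are needed, which is why this beats one-by-one B+-tree insertion; Max-PL parallelizes the same construction.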

Ig-hoon Lee, Junho Shim, Sang-goo Lee

An Approximate Analytic Performance Model of Object-Based Storage

Object-based storage has emerged as an important new field, distinguished from traditional network storage architectures by its focus on large-scale storage resource sharing and high performance. This paper presents a Stochastic Petri Net (SPN) model that can be used to evaluate the performance of object-based storage. However, state-space explosion limits the ability of SPNs to analyze complex, large-scale systems, so an approximate analysis technique is proposed that replaces the complex model with a performance-equivalent simplified model. The simplified model was used to validate both the good scalability and the high performance of OBS as the numbers of clients and OSD nodes increase.

Qun Liu, Dan Feng

OASIS: Implementation of a Cluster File System Using Object-Based Storage Devices

The emerging object-based storage device (OSD) architecture facilitates the creation of self-managed, secure, and shared storage. Despite its potential to greatly improve the scalability and performance of distributed storage systems, only high-end applications have so far directed their attention to OSD; it is now desirable for mid- and entry-level applications to employ the technology as well. In this paper, we present OASIS, a cluster file system using OSD, designed to combine the iSCSI protocol with OSD SCSI commands. Our experiments show that our system matches the performance of existing systems and achieves linear performance scalability as the number of clients increases.

Young-Kyun Kim, Hong-Yeon Kim, Sang-Min Lee, June Kim, Myoung-Joon Kim

G-SCAN: A Novel Real-Time Disk Scheduling Using Grouping and Branch-and-Bound Strategy

For mixed-media workloads, disk scheduling strategies must guarantee the QoS (Quality of Service) of requests with timing constraints while optimizing disk utilization. In this paper, we present a novel real-time disk scheduling algorithm that minimizes the seek time overhead of all requests while meeting the deadlines of requests with timing constraints. Our algorithm first arranges the requests in the queue in SCAN order and clusters adjacent requests into groups. It then finds a feasible schedule that meets the differing QoS requirements of each request using a branch-and-bound strategy. Through trace-driven simulation, we show that our algorithm outperforms other algorithms in terms of response time, throughput, and QoS guarantees for real-time requests.
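The first two steps, SCAN ordering and grouping of adjacent requests, can be sketched as below. The grouping threshold `gap` is an assumed parameter for illustration; the abstract does not specify how adjacency is defined:

```python
def scan_groups(cylinders, gap=50):
    """Arrange requests in ascending (SCAN) order, then cluster runs of
    requests whose neighboring cylinder distance is within `gap` into
    groups. Branch-and-bound would then schedule whole groups rather
    than individual requests, shrinking the search space."""
    ordered = sorted(cylinders)
    groups, cur = [], [ordered[0]]
    for c in ordered[1:]:
        if c - cur[-1] <= gap:
            cur.append(c)
        else:
            groups.append(cur)
            cur = [c]
    groups.append(cur)
    return groups
```

Grouping matters because branch-and-bound over n individual requests explores up to n! orderings; over a handful of groups the search becomes tractable in real time.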

Taeseok Kim, Eungkyu Song, Yunji Koh, Youjip Won, Kern Koh

A New Key Management Scheme for Distributed Encrypted Storage Systems

Data on a storage device are an easy target for malicious attackers, and storing data in encrypted form is an effective way to improve data security. In an encrypted storage system, key management is one of the most challenging tasks. In this paper, we propose a new key management scheme for distributed encrypted storage with several salient features. First, encryption keys are never directly known to users; due to this property, the security of the encrypted data is not weakened when users who have shared the data lose their access rights. Second, even if some components of the system are attacked, the security of the system as a whole is still guaranteed. Third, the system provides high availability by exploiting a secret sharing scheme.
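The abstract does not name the secret sharing scheme; Shamir's (k, n) threshold scheme is the standard choice for this kind of availability property and can be sketched as follows (the field prime and function names are illustrative):

```python
import random

P = 2**61 - 1  # a Mersenne prime; all arithmetic is in the field mod P

def split(secret, n, k):
    """Shamir (k, n) sharing: embed the secret as f(0) of a random
    degree-(k-1) polynomial and hand out n points of f. Any k shares
    reconstruct the secret; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Storing key shares on separate servers gives exactly the claimed properties: compromising fewer than k components reveals nothing, while any k surviving components keep the key available.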

Myungjin Lee, Hyokyung Bahn, Kijoon Chae

Technical Session on Technologies and Techniques for Distributed Data Mining (TTDM 2006)

WSRF Services for Composing Distributed Data Mining Applications on Grids: Functionality and Performance

The Web Services Resource Framework (WSRF) has recently emerged as the standard for the implementation of Grid applications. WSRF can be exploited for developing high-level services for distributed data mining applications. This paper describes Weka4WS, a framework that extends the widely-used Weka toolkit for supporting distributed data mining on WSRF-enabled Grids. Weka4WS adopts the WSRF technology for running remote data mining algorithms and managing distributed computations. The paper describes the implementation of Weka4WS using the WSRF libraries and services provided by Globus Toolkit 4. A performance analysis of Weka4WS for executing distributed data mining tasks in two network scenarios is presented.

Domenico Talia, Paolo Trunfio, Oreste Verta

On Mining 2 Step Walking Pattern from Mobile Users

Knowledge extraction from mobile user data analyzes data collected from mobile users, such as their movement data, in order to derive useful knowledge. User movement data are stored in a database which records, for each mobile user, the (x, y) coordinates the user has visited at any given point in time. In this paper, we present a novel method for mining 2-step walking patterns from mobile users. The resulting 2-step walking patterns capture how mobile users walk from one location of interest to another in any given two steps. Case studies of the Walking-Matrix and Walking-Graph are provided along with a performance evaluation.

John Goh, David Taniar

Effect of Similar Behaving Attributes in Mining of Fuzzy Association Rules in the Large Databases

Association rule mining is an active data mining research area, and recent years have witnessed many efforts to discover fuzzy associations. The key strength of fuzzy association rule mining is its completeness. This strength, however, comes with a major drawback: it often produces a huge number of fuzzy associations, particularly for datasets whose attributes are highly correlated, which makes them very difficult for a human user to analyze. Existing research has shown that most of the discovered rules are actually redundant or insignificant. In this paper, we propose a novel technique to overcome this problem. The approach is effective: experimental results show that the set of produced rules is typically very small. Our solution also reduces the average transaction size and the dataset size, and our performance study shows that it outperforms the other algorithms.

Zahra Farzanyar, Mohammadreza Kangavari, Sattar Hashemi

General Tracks

The Study on Three Stages in Retinal Mechanism

Recently, much research has studied human vision models to solve machine vision problems. In the retina, input data undergo information processing comprising data reduction, edge detection, and region emphasis; the processed image is then recognized by the brain. Starting from research on the human visual system, we propose algorithms that perform data reduction and edge detection using the wavelet transform and that emphasize region contrast. The proposed retinal model improves the sharpening of spatial edges in the amacrine outputs. In simulations, the proposed model processes the retina outputs at each level and compares the outputs.

Sung-Kwan Je, Kwang-Baek Kim, Jae-Hyun Cho, Eui-Young Cha

Curvature Monotony Condition for Rational Quadratic B-spline Curves

The monotone curvature condition for rational quadratic B-spline curves is studied in this paper. First, we present necessary and sufficient conditions for monotone curvature of the uniform rational quadratic B-spline segment and compare them with the curvature condition of the rational quadratic Bezier curve. Then, we give a sufficient condition for monotone curvature of the non-uniform rational quadratic B-spline segment. Finally, we obtain the condition of monotone curvature for general rational quadratic B-spline curves with any number of control points.

Zhong Li, Lizhuang Ma, Dereck Meek, Wuzheng Tan, Zhihong Mao, Mingxi Zhao

Estimation of Input Ranking Using Input Sensitivity Approach

In feed-forward neural networks, all inputs contribute to a greater or lesser extent when calculating the outputs, so inputs may be ordered from the greatest contributor to the least. Input ranking is non-trivial: cursory examination of the weight and bias matrices fails to reveal it. Solving the ranking problem allows the elimination of inputs with little influence on the output. This paper presents a new method of determining the input sensitivity of three-layer feed-forward neural networks. Specifically, the sensitivity of an input is independent of the magnitudes of the remaining inputs, providing an unambiguous ranking of input importance. Small changes to influential inputs result in large changes to the output; this observation motivates the theoretical approach to input ranking. Examination of theoretical results demonstrates the correctness of the approach.
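One common way to make "sensitivity of the output to an input" concrete is the gradient magnitude of a three-layer network; the sketch below uses a tanh hidden layer as an assumption and is not necessarily the paper's magnitude-independent measure:

```python
import math

def sensitivities(W1, b1, W2, x):
    """Gradient-based input sensitivity of a 3-layer net
    y = W2 . tanh(W1 . x + b1), evaluated at input x.

    Returns |dy/dx_i| for each input i. An input whose incoming weights
    are small gets a small sensitivity and is a candidate for removal."""
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # dy/dx_i = sum_j W2[j] * (1 - h_j^2) * W1[j][i]   (tanh' = 1 - tanh^2)
    return [abs(sum(W2[j] * (1 - h[j] ** 2) * W1[j][i]
                    for j in range(len(h))))
            for i in range(len(x))]
```

Ranking inputs by these values, largest first, orders them from greatest contributor to least, which is exactly the use case described in the abstract.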

Sanggil Kang, Steve Morphet

A Collaborative Design Environment

This paper presents a collaborative virtual design environment based on a globally shared product model conforming to the STEP standard. The platform provides an integration environment and bridges the paradigms of I_DEAS, Smarteam (PDM software), and the Conceptual Innovation Design System (CIDS). The platform is illustrated with a sugarcane harvester design analysis as an example, providing a collaborative intelligent environment for product design and aiming to integrate people, processes, and data in product development.

Wuzheng Tan, Lizhuang Ma, Zhihong Mao, Zhong Li

Towards a High Integrity XML Link Update in Object-Relational Database

With the increasing use of XML databases, XML update has become an important issue in the database community, and how updates affect XML documents needs further investigation. In this paper we propose a methodology to maintain the integrity of updated XML documents by maintaining the consistency of XML links. XLink and the related XPointer are W3C standards used to provide references among XML documents or nodes.

Since an XML link is embedded as an attribute in an XML instance, our proposal can be used for schema-less documents and for instance-based references. Our proposal targets object-relational storage, one of the most widely used repositories for XML documents. While the XML documents are stored as a CLOB XML type, our update methodologies are implemented as a set of functions that perform consistency checks before updates.

Eric Pardede, J. Wenny Rahayu, David Taniar

Various Types of Attacks and Solutions Regarding Secure Remote User Access over Insecure Networks

In 2005, Peyravian-Jeffries proposed secure password-based protocols for remote user authentication, password change, and session key establishment over insecure networks. These protocols, however, are still susceptible to a stolen-verifier attack. Accordingly, the current paper demonstrates the vulnerability of their protocols to a stolen-verifier attack and then presents a simple solution to the problem. In contrast to the original protocols, the proposed solution resists the stolen-verifier attack.

Eun-Jun Yoon, Kee-Young Yoo

Revealing Statistical Independence of Two Experimental Data Sets: An Improvement on Spearman’s Algorithm

A highly effective statistical independence test procedure derived from Spearman's rank correlation test is presented, applicable to all kinds of continuous variables (normal or not, even of unknown probability law). Several relevant practical signal processing test examples are given, along with a Monte Carlo performance comparison with the capabilities of Spearman's rank correlation test. Besides describing the test procedure algorithm, the paper discusses, from an engineering point of view, some significant aspects of understanding the important and subtle concepts of testing dependence versus statistical independence.
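For reference, the classical Spearman statistic that the improved procedure builds on is simply the Pearson correlation of the ranks. The sketch below is the generic textbook computation (with average ranks for ties), not the paper's improved algorithm:

```python
def rank(data):
    """Assign ranks 1..n, giving tied values their average rank."""
    order = sorted(range(len(data)), key=lambda i: data[i])
    ranks = [0.0] * len(data)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and data[order[j + 1]] == data[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Because only ranks enter the statistic, the test is distribution-free, which is why it applies to variables of unknown probability law.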

Bogdan Badea, Adriana Vlad

Parallelizable Computational Technique for Singularly Perturbed Boundary Value Problems Using Spline

In this paper, we consider singularly perturbed self-adjoint boundary-value problems and propose a computational technique based on a spline scheme that is also suitable for parallel computing. The whole domain is divided into three non-overlapping subdomains, and the corresponding subproblems are obtained by using zeroth-order approximations of the solution at the boundaries of these subproblems. The subproblems corresponding to boundary-layer regions are solved using an adaptive spline scheme. A numerical example is provided to show the efficiency and accuracy of the method.

Subject Classification:

AMS 65L10 CR G1.7.

Rajesh Kumar Bawa

