
2003 | Book

Computer Aided Systems Theory - EUROCAST 2003

9th International Workshop on Computer Aided Systems Theory, Las Palmas de Gran Canaria, Spain, February 24-28, 2003, Revised Selected Papers

Edited by: Roberto Moreno-Díaz, Franz Pichler

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The concept of CAST, as Computer Aided Systems Theory, was introduced by F. Pichler of Linz in the late 1980s to encompass computer-theoretical and practical developments as tools for solving problems in System Science. It was considered the third component (the other two being CAD and CAM) needed to provide a complete picture of the path from Computer and Systems Sciences to practical developments in Science and Engineering. The University of Linz organized the first CAST workshop in April 1988, which demonstrated the acceptance of the concepts by the scientific and technical community. Next, the University of Las Palmas de Gran Canaria joined the University of Linz to organize the first international meeting on CAST (Las Palmas, February 1989), under the name EUROCAST'89, which was a very successful gathering of systems theorists, computer scientists and engineers from most European countries, North America and Japan. It was agreed that EUROCAST international conferences would be organized every two years. Thus, the following EUROCAST meetings took place in Krems (1991), Las Palmas (1993), Innsbruck (1995), Las Palmas (1997), Vienna (1999) and Las Palmas (2001), in addition to an extra-European CAST conference in Ottawa in 1994. Selected papers from those meetings were published by Springer-Verlag in Lecture Notes in Computer Science nos. 410, 585, 763, 1030, 1333, 1728 and 2178, and in several special issues of Cybernetics and Systems: An International Journal. EUROCAST and CAST meetings are definitely consolidated, as demonstrated by the number and quality of the contributions over the years.

Table of contents

Frontmatter

Complex Systems Tools and Applications

On Modeling and Simulation of Flows of Water by 3D-Cellular Automata

In an earlier paper [1] a 3-D cellular automaton model CA was suggested to investigate the possible existence of stable clusters in liquid water in (or after) a state of turbulence. In order to take molecular forces into account in the model, we proposed a model at the molecular level of description. To keep such a cellular model as simple as possible, the geometrical form of each cell was assumed to be a (regular) cube. Regarding the topology of the cellular automaton model CA, we assumed for each cell c the associated neighborhood N(c) consisting of the 6 cells c0, c1, ..., c5 which are adjacent to c ("von Neumann neighborhood").

Franz Pichler
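
A minimal sketch (not from the paper) of the neighbourhood structure described in the abstract above: a synchronous update of a periodic 3D grid in which each cell interacts only with its six face-adjacent cells c0, ..., c5. The averaging transition rule is a placeholder for the molecular-force rule actually studied.

```python
# Minimal sketch of a synchronous 3D cellular-automaton step over the
# von Neumann neighbourhood (the six face-adjacent cells c0..c5) on a
# periodic grid. The averaging transition rule is a placeholder, not the
# molecular-force rule studied in the paper.
import numpy as np

def von_neumann_step(grid):
    """One synchronous update of a periodic 3D grid of cell states."""
    neighbours = sum(
        np.roll(grid, shift, axis=axis)
        for axis in range(3)
        for shift in (-1, 1)
    )
    return 0.5 * grid + 0.5 * neighbours / 6.0   # relax towards neighbourhood mean

grid = np.random.rand(32, 32, 32)
for _ in range(10):
    grid = von_neumann_step(grid)
print(grid.shape, float(grid.mean()))
```
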
Representation and Processing of Complex Knowledge

In this article, results of earlier publications on the subject are summarized with some extensions and notational improvements. In the introduction, a heuristic outline of human knowledge, its acquisition and its processing is given to motivate the subsequent mathematical representation. The basic mathematical concepts are hierarchies of generalized relations and operations on these, corresponding to knowledge and knowledge processing. Included are objects with space and time parameterization, valuations of objects by partially ordered values, neighborhood and similarity relations, i.e. topologies, and variable objects.

Rudolf F. Albrecht, Gábor Németh
How Many Rounds to KO?, or Complexity Increase by Cryptographic Map Iteration

Iterating a highly non-linear mapping is the basis of the classic schema for building block ciphers, in the form of Feistel networks. The number of rounds of such constructions is a critical parameter. In this paper, the number of rounds needed to reach a certain minimum complexity bound is proposed as a valid measure to assess the cryptographic significance of certain Boolean functions. The most remarkable facts arising from this approach are the dependency of the number of rounds on some predefined weaknesses of the tested functions, and the failure to pass the proposed tests when complexity measures are chosen ad hoc to address those weaknesses.

Juan David González Cobas, José Antonio López Brugos
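
The abstract above proposes counting rounds until a complexity bound is met. As a hedged illustration only, the sketch below iterates a toy Feistel network and tracks a crude avalanche statistic round by round; the round function, key schedule and the statistic are invented stand-ins, not the complexity measures or Boolean functions of the paper.

```python
# Toy illustration: iterate a Feistel construction and watch a crude
# avalanche statistic grow with the number of rounds. Round function, key
# schedule and the statistic are invented stand-ins for the paper's measures.
import random

WIDTH = 16                                   # bits per half-block
MASK = (1 << WIDTH) - 1

def round_f(x, key):
    # Hypothetical weak non-linear round function (multiply-and-xor).
    return ((x * 2654435761) ^ key) & MASK

def feistel(left, right, keys):
    for k in keys:
        left, right = right, left ^ round_f(right, k)
    return left, right

def avalanche(rounds, trials=2000):
    """Average fraction of output bits flipped by a single input-bit flip."""
    keys = [random.getrandbits(WIDTH) for _ in range(rounds)]
    flipped = 0
    for _ in range(trials):
        l, r = random.getrandbits(WIDTH), random.getrandbits(WIDTH)
        pos = random.randrange(2 * WIDTH)
        l2 = l ^ (1 << (pos - WIDTH)) if pos >= WIDTH else l
        r2 = r if pos >= WIDTH else r ^ (1 << pos)
        a, b = feistel(l, r, keys), feistel(l2, r2, keys)
        flipped += bin(((a[0] ^ b[0]) << WIDTH) | (a[1] ^ b[1])).count("1")
    return flipped / (trials * 2 * WIDTH)

for rounds in range(1, 9):
    print(rounds, round(avalanche(rounds), 3))   # approaches 0.5 as rounds grow
```
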
A Non-standard Genetic Algorithm Approach to Solve Constrained School Timetabling Problems

In this paper a non-standard constructive evolutionary approach is described to solve timetabling problems efficiently, a need present at every school and university. The basic scheme of Genetic Algorithms has been used, but some operators have been adapted to the characteristics of the problem under consideration, improving its resolution capability. Although this design requires more effort, it helps to reduce the search space. Feasibility of timetables has been treated as a hard constraint in the problem formulation. It is shown that modified genetic algorithms can be very useful for this kind of problem, given the ease of including new constraints and the effectiveness of the resolution, exhibiting a good compromise between computational time and closeness to the optimum.

Cristina Fernández, Matilde Santos
Application of Uncertain Variables to Task and Resource Distribution in Complex Computer Systems

The paper is concerned with an allocation problem for a class of complex computer systems described by knowledge representations with unknown parameters (in particular, unknown execution times). The unknown parameters are assumed to be values of uncertain variables characterized by certainty distributions given by an expert. The formulation and solution of the optimal allocation problem for a set of parallel processors are presented. The extensions to cascade structure and to cases with uncertain and random parameters are discussed. A simple example illustrates the presented approach.

Z. Bubnicki
A Framework for Modelling the User Interaction with a Complex System

Nowadays, the increasing performance of modern computers enables interactive systems to focus on the graphical handling of information. This increase in computational power allows the designer to develop better interfaces oriented to more intuitive human-computer interaction, using an interaction style better adapted to the user and system characteristics. However, the more intuitive the interaction is for the user, the greater the effort required from the designer in the design and implementation phase. In this paper, we propose a suitable framework to analyse and design these complex systems, and a formal model that allows us to prove the system properties and validate the specification.

Ma L. Rodríguez Almendros, Ma J. Rodríguez Fórtiz, M. Gea Megías
A Categorical Approach to NP-Hard Optimization Problems

Aiming at developing a theoretical framework for the formal study of NP-hard optimization problems, built on precise mathematical foundations, we have focused on structural properties of optimization problems related to approximation issues. Starting from the observation that, intuitively, there are many connections between categorical concepts and structural complexity notions, in this work we present a categorical approach to some questions originally studied within Computational Complexity Theory. After defining the category OPTS of polynomial-time soluble optimization problems and the category OPT of optimization problems, a comparison mechanism between them and an approximation system for each optimization problem are introduced, following the basic idea of categorical shape theory. In this direction, we offer new insights and a deeper understanding of some basic questions in the Structural Complexity field through a universal language.

Liara Aparecida dos Santos Leal, Dalcidio Moraes Claudio, Laira Vieira Toscani, Paulo Blauth Menezes

Logic and Formal Tools

A Formulation for Language Independent Prelogical Deductive Inference

A reality, supposedly described by an Object Attribute Table, and an unspecified language capable of describing subsets of objects are assumed. The requirements for a statement to be true are investigated, using set-theoretical concepts only. The results lead to a sufficient condition for a statement to be deducible from a set of other statements (premises) by means of some algorithmic approach.

Josep Miró
Multi-agent Simulation in Random Game Generator

For the purpose of constructing an agent-based simulation system derived from game theory, it seems quite important to establish the appropriate mathematical tools. It is also natural to consider them a contribution to applying methods of systems theory to subjects in social science.

Takuhei Shimogawa
The Zero Array: A Twilight Zone

Descriptive knowledge about a multivalued data table or Object Attribute Table can be expressed by means of a binary Boolean-based language. Efforts to design computer programs that determine declarations have been made; this endeavor is mainly directed to binary declarations. This paper first proposes a multivalued language, based on arrays, that allows a multivalued description of the knowledge contained in a data table by means of array expressions. The zero array is singled out and its interpretation analyzed; following the discussion, the projection arrays or project-ars are introduced. Finally, the relation between formal concepts and the project-ars is examined.

Margaret Miró-Juliá
Invariants and Symmetries among Adaptive Agents

In this paper we present agents and systems at different orders of complexity. At the first order we have agents whose actions are transformations from one state to another. At the second order the actions of the agents transform one context, with rules and data, into another context with other rules and data. In a second-order transformation the new rules and the new data are projected into the new context. Rules of the new context must be adapted to the new rules obtained by the transformation. The adaptation process can be realized by closed-loop control within the set of possible rules included in the new context. Closed-loop control at the second order can be obtained by second-order actions of the agents. Invariants and symmetries are possible when we move from one context to another. A set of contexts can be transformed into another set of contexts by a third-order action of a meta-agent. In conclusion, a hierarchical tower of agents and meta-agents can be built so as to generate complex transformations.

Germano Resconi
Generalizing Programs via Subsumption

In this paper we present a class of operators for Machine Learning based on Logic Programming which represents a characterization of the subsumption relation in the following sense: The clause C1 subsumes the clause C2 iff C1 can be reached from C2 by applying these operators. We give a formalization of the closeness among clauses based on these operators and an algorithm to compute it as well as a bound for a quick estimation. We extend the operator to programs and we also get a characterization of the subsumption between programs. Finally, a weak metric is presented to compute the closeness among programs based on subsumption.

Miguel A. Gutiérrez-Naranjo, José A. Alonso-Jiménez, Joaquín Borrego-Díaz

Social and Intelligent Systems

Modeling with Archetypes: An Effective Approach to Dealing with Complexity

In the face of growing complexities, agents who lead or manage organizations must revert to better models. This proposition is based on the Conant/Ashby Theorem, which says that the results of a management process are determined by the quality of the model on which that process is based. The author proposes that archetype-based modeling is a promising way to enhance the behavioral repertory of agents in organizations and society: It aids in achieving better models and thus in coping with complexity more effectively. Experiences with System Dynamics modeling of real-world issues and problems as well as the pertinent results achieved are reported. The special flavor of the cases described lies in a new quality and speed of learning-by-modeling, which was unachievable a few years ago. This is now enabled by a) an advanced methodology of modeling and model validation, b) the conceptual maturity of systems archetypes, and c) the availability of powerful simulation software.

Markus Schwaninger
Equal Opportunities Analysis in the University: The Gender Perspective

The complexity of social systems is a consequence of the human presence. Measurements and evaluations are rather qualitative and, in many cases, heuristic and/or linguistic. Human beings are not treated equally, with clear discrimination based on age, gender, race or culture, among many other factors. Discrimination against women is the focus of this paper. The university, as a social system, is investigated. The goal is to determine the degree of gender discrimination and to provide tools to evaluate different actions for improving the equal opportunity (EO) principle in its operation.

I. J. Benítez, P. Albertos, E. Barberá, J. L. Díez, M. Sarrió
Approximate Solutions to Semi Markov Decision Processes through Markov Chain Montecarlo Methods

We explore the possibilities of Markov Chain Monte Carlo simulation methods to solve sequential decision processes evolving stochastically in time. The application areas of such processes are fairly wide, embedded typically in the Decision Analysis framework, such as preventive maintenance of systems, where we shall find our illustrative examples.

Arminda Moreno-Díaz, Miguel A. Virto, Jacinto Martín, David Ríos Insua
Knowledge Base for Evidence Based Medicine with Bioinformatics Components

This paper presents an approach for a multilevel knowledge base system for evidence-based medicine. A sequence of events called a patient trial is extracted from computer patient records. These events describe one flow of therapy for a concrete disease. Each event is represented by a state and a time. We introduce a measure between states, which is used to calculate the best alignment between different patient trials. The alignment measure calculates the distance between two sequences of patient states, which represents the similarity of the course of disease. Based on that similarity value, classes are introduced by using specific clustering methods. These classes can be extended by gene expression data from micro-arrays, leading to a finer clustering containing similar trials, called trial families. To easily check whether a new trial belongs to a family, we use profiles of Hidden Markov models to detect potential membership in a family.

Witold Jacak, Karin Pröll, Jerzy Rozenblit
Diversified Approach to Methodology and Technology in Distributed Intelligent Building Systems

The concept of the Intelligent Building was created in North America by building-automation system designers and manufacturers. It was used to describe buildings in which microprocessor-based technologies were deployed in order to improve the buildings' efficiency. This paper discusses the basic issues related to the design of an intelligent building. It offers various solutions to the issues of data acquisition and processing, including the most important ones, which are based on the distributed building automation system.

Andrzej Jablonski, Ryszard Klempous, Benedykt Licznerski
Temporal Approaches in Data Mining. A Case Study in Agricultural Environment

In this work we present a study of the application of data mining techniques in an interesting domain, Integrated Control. We found that in this dynamic domain non-temporal data mining techniques are not suitable, making it necessary to find a good temporal model. After a comparative study of several candidate models, we propose the use of the inter-transaction approach.

Francisco Guil, Alfonso Bosch, Samuel Túnez, Roque Marín
Personalized Guided Routes in an Adaptive Evolutionary Hypermedia System

In this paper we describe an adaptation method for adaptive hypermedia systems, consisting of personalized guided routes for the SEM-HP model. SEM-HP is a layered, systemic, semantic and evolutionary model for the development of adaptive hypermedia systems, which adapt to the particular features and interests of each user. For evolution it uses a Metasystem, which offers the author a set of evolutionary actions that permit the hypermedia system to evolve in a flexible and consistent way. In SEM-HP a hypermedia system is composed of four subsystems, each of which offers a different functionality to the user and to the other subsystems. User adaptation is carried out by the learning subsystem, which keeps and updates a user model including the user's knowledge about the informational elements offered by the system, his preferences and his goal, which is to reach a certain degree of knowledge. Guided routes direct the user through the hypermedia system, so the user's goal can be reached in an optimal way.

Nuria Medina-Medina, Fernando Molina-Ortiz, Lina García-Cabrera, José Parets-Llorca
Temporal Data Management and Knowledge Acquisition Issues in Medical Decision Support Systems

The development of data-intensive systems in medical domains has received increasing attention in recent years. In this work we present in depth some parts of ACUDES (Architecture for Intensive Care Units Decision Support), in which traditional techniques for managing and representing time have been integrated with a temporal diagnosis task in a decision support platform. ACUDES has been designed to manage the information regarding patient evolution and to describe the patients' evolution in terms of the temporal sequence of diseases suffered. These functionalities are supported by an ontology which simplifies knowledge acquisition and sharing, while guaranteeing the semantic consistency of patients' evolution data. This work focuses on the Temporal Data Management and the Knowledge Acquisition Tool.

M. Campos, J. Palma, B. Llamas, A. González, M. Menárguez, R. Marín

Distributed Computing

Development of a Scalable, Fault Tolerant, and Low Cost Cluster-Based e-Payment System with a Distributed Functional Kernel

This paper presents the design and implementation of a payment gateway. It acts as a mediator among an e-commerce site, a final customer's computer, mobile phone or other network access device, and a bank host. The gateway was designed using modern software engineering tools (such as the Unified Modeling Language, design patterns, etc.), implemented using a functional language (Erlang/OTP), and supports the typical features of other common payment gateways. The design and implementation pay particular attention to the architecture (using distributed computing to improve service availability, scalability and fault tolerance) and to the possibility of applying formal verification tools and techniques to improve system design and performance and to ensure system correctness. The architecture and ideas proposed in this paper are intended as a first step towards the implementation of other, more complex payment systems.

C. Abalde, V. Gulías, J. Freire, J. Sánchez, J. García-Tizón
Generative Communication with Semantic Matching in Distributed Heterogeneous Environments

Different standard middleware proposals have emerged to provide computing models and communication among components in open distributed systems. Nowadays, the Internet is becoming an increasingly relevant alternative to middleware platforms, due to the success of Web services in solving problems of application-to-application integration in distributed and highly heterogeneous environments. However, a coordination model is necessary to build open and flexible systems from active and independent distributed components. In this paper, we present a Web-enabled Coordination Service to orchestrate heterogeneous applications based on the Generative Communication model with semantic matching. Our aim is to use the Internet as a real distributed computing platform, considering heterogeneous semantic interoperability.

Pedro Álvarez, José A. Bañares, Eloy J. Mata, Pedro R. Muro-Medrano, Julio Rubio
Mapping Nautilus Language into Java: Towards a Specification and Programming Environment for Distributed Systems

This paper describes some of the features of the Nautilus specification/programming language that make it interesting for developing complex systems, and then explains how to map Nautilus constructions into Java constructions. The Nautilus constructions presented are: actions (including nondeterminism and intra-action concurrency), aggregations and refinements.

Claudio Naoto Fuzitaki, Paulo Blauth Menezes, Júlio Pereira Machado, Simone André da Costa
Design of a Medical Application Using XML Based Data Interchange

The recent proliferation of computing devices and the contexts in which they are used demand diversity in distributed applications as well. The objective of our research is the development of a medical framework where information about patients can be accessed from heterogeneous and (possibly) mobile computing environments. Moreover, high availability and reliability are also milestones of that system. The former objective is achieved by using the eXtensible Markup Language (XML) as the communication medium, in combination with eXtensible Stylesheet Language (XSL) transformations to allow different kinds of clients to access the data. High availability is achieved by using a concurrent and distributed language, Erlang/OTP, for the development of the server side. Also on the server side, techniques from the formal methods area are applied to improve the system design and performance and to ensure system correctness. Finally, reliability, confidentiality and authentication, fundamental items in data communications, are accomplished by means of the Secure Socket Layer (SSL) protocol.

C. Mariño, C. Abalde, M. G. Penedo, M. Penas
Partial-Order Reduction in Model Checking Object-Oriented Petri Nets

The main problem being faced in finite-state model checking is the state space explosion problem. For coping with it, many advanced methods for reducing state spaces have been proposed. One of the most successful methods (especially when dealing with software systems) is the so-called partial-order reduction. In the paper, we examine how this method can be used in the context of object-oriented Petri nets, which bring in features like dynamic instantiation, late binding, garbage collection, etc.

Milan Češka, Luděk Haša, Tomáš Vojnar
On the Strong Co-induction in Coq

In this paper, we provide a library in Coq containing intuitionistic proofs of some facts that lie at the basis of formal verification tools such as Model Checking or Theorem Proving: the Reduction Lemma [8] [17] and the corresponding result on minimum fixed points [1]. In order to improve usability, most of the proofs are given in the general frame of partial order relations and not only in the specific complete lattice of a power-set.

J. L. Freire Nistal, A. Blanco Ferro, Victor M. Gulías, E. Freire Brañas

Autonomous and Control Systems

A Throttle and Brake Fuzzy Controller: Towards the Automatic Car

It is known that the techniques under the topic of Soft Computing have a strong capability for learning and cognition, as well as a good tolerance of uncertainty and imprecision. Due to these properties they can be applied successfully to Intelligent Vehicle Systems. In particular, Fuzzy Logic is very well suited to building qualitative (or linguistic) models of many kinds of systems without extensive knowledge of their mathematical models. The throttle and brake pedals and the steering wheel are the most important actuators for driving. The aim of this paper is to integrate the vehicle operation and the driver behavior into a qualitative model in such a way that an unmanned guiding system can be developed around it [1] [2]. The automation of both pedals makes it possible to direct the speed control from a computer and thus to automate several driving functions such as speed adaptation, emergency braking, optimum speed selection, safe headway maintenance, etc. The construction and design of fuzzy controllers for automatic driving is based on drivers' know-how and experience and on the study of their behavior during maneuvers. The use of fuzzy controllers allows a human-like vehicle operation to be achieved. The results of this research show a good performance of fuzzy controllers that behave in a very human way, adding the precision of data from a DGPS source and the safety of a driver without human shortcomings such as tiredness, sensory defects or aggressive attitudes.

J. E. Naranjo, J. Reviejo, C. González, R. García, T. de Pedro
ADVOCATE II: ADVanced On-Board Diagnosis and Control of Autonomous Systems II

A way to improve reliability and to reduce costs in autonomous robots is to add intelligence to on-board diagnosis and control systems, so as to avoid expensive hardware redundancy and inopportune mission abortion. Accordingly, the main goal of the ADVOCATE II project is to adapt legacy piloting software around a generic SOAP (Simple Object Access Protocol) architecture onto which intelligent modules can be plugged. Artificial Intelligence (AI) modules using Bayesian Belief Networks (BBN), Neuro-Symbolic Systems (NSS) and Fuzzy Logic (FL) are coordinated to help the operator or piloting system manage fault detection, risk assessment and recovery plans. In this paper, the specification of the ADVOCATE II system is presented.

Miguel Angel Sotelo, Luis Miguel Bergasa, Ramón Flores, Manuel Ocaña, Marie-Hélène Doussin, Luis Magdalena, Joerg Kalwa, Anders L. Madsen, Michel Perrier, Damien Roland, Pietro Corigliano
Segmentation of Traffic Images for Automatic Car Driving

This paper addresses the automatic analysis and segmentation of real-life traffic images aimed at providing the necessary and sufficient information for automatic car driving. The paper focuses on the basic task of segmenting the lane boundaries. As the general objective is to build a very robust segmentation module, able to cope with any kind of road or motorway and any kind of surroundings and background, either rural or urban, we face a complex problem of texture analysis and classification, which we have approached by applying the frequency histogram of connected elements (FHCE). To assure an efficient design of the segmentation module, thorough experimentation with numerous traffic images has been undertaken. In particular, the optimum design of the crucial parameters of the FHCE (namely, the morphological structuring element, the connectivity level and the scanning window) has been carried out with special care. Experimental results are finally presented and discussed.

Miguel Ángel Patricio, Darío Maravall
Vision Based Intelligent System for Autonomous and Assisted Downtown Driving

Autonomous and assisted driving in urban city areas is a challenging topic that needs to be addressed during the following ten to twenty years. In the current work an attempt in this direction is carried out by using vision-based systems not only for autonomous vehicle driving, but also for helping the driver recognize vehicles and traffic signs. Some well consolidated results have been attained on a private test circuit using a commercial Citroen Berlingo, as described in this paper.

Miguel Ángel Sotelo, Miguel Ángel García, Ramón Flores
Using Fractional Calculus for Lateral and Longitudinal Control of Autonomous Vehicles

This paper presents the use of Fractional Order Controllers (FOC) applied to the path-tracking problem in an autonomous electric vehicle. A lateral dynamic model of an industrial vehicle has been taken into account to implement both conventional and Fractional Order Controllers. Several control schemes with these controllers have been simulated and compared. First, different controllers with similar parameters were implemented, and then they were improved by using optimization methods. The preliminary results are presented here.

J. I. Suárez, B. M. Vinagre, A. J. Calderón, C. A. Monje, Y. Q. Chen

Computational Methods in Biomathematics

Recent Advances in the Walking Tree Method for Biological Sequence Alignment

The meaning of biological sequences is a central problem of modern biology. Although string matching is well understood in the edit-distance model, biological strings with transpositions and inversions violate this model's assumptions. To align biologically reasonable strings, we proposed the Walking Tree Method [4,5,6,7,8], an approximate string alignment method that can handle insertions, deletions, substitutions, translocations, and more than one level of inversions. Our earlier versions were able to align whole bacterial genomes (1 Mbp) and discover and verify genes. As extremely long sequences can now be deciphered rapidly and accurately without amplification [2,3,15], speeding up the method becomes necessary. Via a technique that we call recurrence reduction, in which some computations can be looked up rather than recomputed, we are able to improve the performance significantly, e.g. by 400% for a 1-million base pair alignment. In theory, our method can align a length |P| string with a length |T| string in time |P||T|/(n log |P|) using n processors in parallel. In practice, we can align 10 Mbp strings within a week using 30 processors.

Paul Cull, Tai Hsu
Towards Some Computational Problems Arising in Biological Modeling

Time-nonhomogeneous diffusion processes confined by a time-dependent reflecting boundary are investigated in order to obtain a system of integral equations for the transition pdf in the presence of the reflecting boundary. For Gauss-Markov processes restricted by particular time-dependent reflecting boundaries, a closed-form solution of the transition pdf is derived. Furthermore, the first passage time problem through time-dependent thresholds is analyzed and a nonsingular second-kind Volterra integral equation for the first passage time pdf is obtained.

Virginia Giorno, Amelia G. Nobile, Enrica Pirozzi, Luigi M. Ricciardi
Single Point Algorithms in Genetic Linkage Analysis

In this paper we provide a mathematical model that describes genetic inheritance. In terms of our framework, we describe two commonly used algorithms that calculate the set of founder allele assignments that are compatible with some genotype information and a given inheritance pattern. These calculations cater for the first step in multipoint linkage analysis. The first algorithm we investigate is part of the Genehunter software package developed at the Whitehead Institute at MIT by Kruglyak et al. [8]; the second one occurs in the software package Allegro provided by Gudbjartsson et al. [5] at DeCode Genetics in Reykjavik. We also state and formally prove the correctness of these algorithms.

Daniel Gudbjartsson, Jens A. Hansen, Anna Ingólfsdóttir, Jacob Johnsen, John Knudsen
A Self-adaptive Model for Selective Pressure Handling within the Theory of Genetic Algorithms

In this paper we introduce a new generic selection method for Genetic Algorithms. The main difference between this selection principle and conventional selection models is that it does not only consider the fitness of an individual compared to the fitness of the total population in order to determine its chance of being selected. Additionally, in a second selection step, the fitness of an offspring is compared to the fitness of its own parents. By this means the evolutionary process is continued mainly with offspring that have been created by an advantageous combination of their parents' attributes. A self-adaptive feature of this approach is realized in that the number of individuals that have to be created in order to produce a sufficient amount of 'successful' offspring depends on the actual stage of the evolutionary process. The experimental part of the paper documents the ability of this new selection operator to drastically improve the solution quality. In particular, the bad properties of rather disadvantageous crossover operators can be compensated for almost completely.

Michael Affenzeller, Stefan Wagner
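
A hedged sketch of the second selection step described above: offspring are accepted only if they outperform their parents, and the number of reproduction attempts per generation adapts to how hard this becomes. Problem, encoding and operators (OneMax, single-point crossover, bit-flip mutation) are illustrative placeholders, not the paper's benchmark setup.

```python
# Sketch of an offspring-comparison selection step: children are kept only
# if they beat both parents, and the number of reproduction attempts adapts
# to how hard it is to find such "successful" offspring.
import random

GENES, POP, MAX_ATTEMPTS = 64, 50, 10_000

def fitness(ind):                         # OneMax: count of ones
    return sum(ind)

def crossover(a, b):                      # single-point crossover
    cut = random.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(ind, p=1.0 / GENES):           # bit-flip mutation
    return [g ^ (random.random() < p) for g in ind]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for generation in range(30):
    next_pop, attempts = [], 0
    while len(next_pop) < POP and attempts < MAX_ATTEMPTS:
        attempts += 1
        pa, pb = random.sample(population, 2)
        child = mutate(crossover(pa, pb))
        # Second selection step: keep the child only if it beats its parents.
        if fitness(child) > max(fitness(pa), fitness(pb)):
            next_pop.append(child)
    if len(next_pop) < 2:                 # selection pressure exhausted
        break
    population = next_pop
    print(generation, attempts, max(map(fitness, population)))
```
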
Computational Methods for the Evaluation of Neuron’s Firing Densities

Some analytical and computational methods are outlined that are suitable for determining the upcrossing first passage time probability density for some Gauss-Markov processes that have been used to model the time course of the neuron's membrane potential. In such a framework, the neuronal firing probability density is identified with that of the first passage time upcrossing of the considered process through a preassigned threshold function. In order to obtain reliable evaluations of these densities, ad hoc numerical and simulation algorithms are implemented.

Elvira Di Nardo, Amelia G. Nobile, Enrica Pirozzi, Luigi M. Ricciardi
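
As a hedged illustration of the simulation approach mentioned above, the sketch below estimates the first-passage-time (firing) density of an Ornstein-Uhlenbeck membrane-potential model through a constant threshold by Monte Carlo; all parameter values and the constant threshold are illustrative assumptions, not values from the paper.

```python
# Monte Carlo sketch: first-passage-time (firing) density of an
# Ornstein-Uhlenbeck membrane-potential model through a constant threshold.
# All parameter values and the constant threshold are illustrative only.
import numpy as np

theta, mu, sigma = 1.0, 0.8, 0.5            # decay rate, resting drive, noise
threshold, dt, t_max, n_paths = 1.0, 1e-2, 10.0, 2000

rng = np.random.default_rng(0)
fpt = []
for _ in range(n_paths):
    v, t = 0.0, 0.0
    while t < t_max:
        v += theta * (mu - v) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if v >= threshold:                  # upcrossing = a "spike"
            fpt.append(t)
            break

# Histogram of first-passage times approximates the firing density.
density, edges = np.histogram(fpt, bins=50, range=(0, t_max), density=True)
print(f"{len(fpt)}/{n_paths} paths fired; mean first-passage time {np.mean(fpt):.2f}")
```
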
Developing the Use of Process Algebra in the Derivation and Analysis of Mathematical Models of Infectious Disease

We introduce a series of descriptions of disease spread using the process algebra WSCCS and compare the derived mean field equations with the traditional ordinary differential equation model. Even the preliminary work presented here brings to light interesting theoretical questions about the “best” way to define the model.

R. Norman, C. Shankland
On Representing Biological Systems through Multiset Rewriting

We model qualitative and quantitative aspects of metabolic pathways by using a stochastic version of Multiset Rewriting (SMSR). They offer a natural way of describing both the static and the dynamic aspects of metabolic pathways. We argue that, due to its simple conceptual model, SMSR may be conveniently used as an intermediate language where many higher level specification languages may be compiled (e.g., as in the security protocol example). As a first step, we show also how SMSR may be used to simulate Stochastic Petri Nets for describing metabolic pathways.

S. Bistarelli, I. Cervesato, G. Lenzini, R. Marangoni, F. Martinelli

Natural and Artificial Neural Nets

A Model of Neural Inspiration for Local Accumulative Computation

This paper explores the computational capacity of a novel local computational model that expands the conventional analogical and logical dynamic neural models based on the charge and discharge of a capacitor or on the use of a D flip-flop. The local memory capacity is augmented so as to behave as an S-state automaton, and some control elements are added to the memory. The analogical or digital calculus part, equivalent to the balance between excitation and inhibition, is also generalised to include the measurement of specific spatio-temporal features over temporal expansions of the input space (dendritic field). This model is called accumulative computation and is inspired by biological short-term memory mechanisms. The work describes the model's general specifications, including its architecture, the different working modes and the learning parameters. Then, some possible software and hardware implementations (using FPGAs) are proposed, and, finally, its potential usefulness in real-time motion detection tasks is illustrated.

José Mira, Miguel A. Fernández, Maria T. López, Ana E. Delgado, Antonio Fernández-Caballero
Emergent Reasoning from Coordination of Perception and Action: An Example Taken from Robotics

The paper presents a manipulator arm that is able to acquire primitive reasoning abilities from the pure coordination of perception and action. First, the problem of dynamic collision avoidance is considered as a test-bed for autonomous coordination of perception and action. The paper introduces a biomimetic approach that departs from the conventional, analytical approach, as it does not employ formal descriptions of the locations and shapes of the obstacles, nor does it solve the kinematic equations of the robotic arm. Instead, the method follows the perception-reason-action cycle and is based on a reinforcement learning process guided by perceptual feedback. From this perspective, obstacle avoidance is modeled as a multi-objective optimization process. The paper also investigates the possibilities for the robot to acquire a very simple reasoning ability by means of if-then-else rules that transcend its previous reactive behaviors based on pure interaction between perception and action.

Darío Maravall, Javier de Lope
Inverse Kinematics for Humanoid Robots Using Artificial Neural Networks

The area of inverse kinematics of robots, mainly manipulators, has been widely researched, and several solutions exist. The solutions provided by analytical methods are specific to a particular robot configuration and are not applicable to other robots. Apart from this drawback, legged robots are inherently redundant because they need to have real humanoid configurations. This degree of redundancy makes the development of an analytical solution for the inverse kinematics practically unapproachable. For this reason, our proposed method considers the use of artificial neural networks to solve the inverse kinematics of the articulated chain that represents the robot’s legs. Since the robot should always remain stable and never fall, the learning set presented to the artificial neural network can be conveniently filtered to eliminate the undesired robot configurations and reduce the training process complexity.

Javier de Lope, Rafaela González-Careaga, Telmo Zarraonandia, Darío Maravall
Neurosymbolic Integration: The Knowledge Level Approach

The time when the connectionist and symbolic perspectives of Artificial Intelligence (AI) competed against each other is now over. The rivalry was due essentially to ignorance on the implications of the knowledge level, as introduced by Newell and Marr. Now it is generally accepted that they are different and complementary forms of modeling and operationalizing the inferences in terms of which a problem solving method (PSM) decomposes a task. All these tasks, methods, inferences, and formal operators belong to a broad library of reusable components for knowledge modeling. The final configuration of a problem solving method, with symbolic and connectionist components, is only dependent on the particular balance between data and knowledge available for the specific application under consideration. Various approaches have been explored for neurosymbolic integration. In this paper we propose a classification of these approaches (unified, hybrid and system level) and strongly support that the integration has to be made at the knowledge level and in the domain of the external observer (the “house” of models).

J. Mira, A. E. Delgado, M. J. Taboada
On Parallel Channel Modeling of Retinal Processes

Based on previous work on parallel processing for visual motion detection and analysis ([1], [2], [3]) in which interesting relationships among number and random distribution of cells, overlapping degree and receptive field size with fault tolerance, accuracy of computations and performance cost were shown in practice, we now focus our attention on modeling a two parallel channel visual peripheral system consisting of three layers of neuron-like cells that account for motion and shape information of perceived objects. A tetra-processor UltraSparc SUN computer is used for simulation and video-outputs in false color show the two-channel activity of the system. A train of input images presenting a moving target is analyzed by the neuron-like processing layers and the results presented as a video movie showing a colour coded version of neural activity.

J. C. Rodríguez-Rodríguez, A. Quesada-Arencibia, R. Moreno-Díaz Jr., K. N. Leibovic
Geometric Image of Statistical Learning (Morphogenetic Neuron)

We present a neural model called the Morphogenetic Neuron. This model generates a geometric image of the data given by a table of features. A point in the n-dimensional geometric space is a set of values of the attributes of the features. At each point we compute the value of a field by a linear superposition of the values of the attributes at the point. The coefficients of the linear superposition are the same for all points and are invariant under any symmetric transformation of the geometric space. The morphogenetic neuron can compute the coefficients from data without recursive methods, in order to reproduce the desired function from samples (classification, learning and so on). Non-linear primitive functions cannot be represented in the morphogenetic geometric space; primitive non-linear functions are considered as new coordinates, for which the dimensions of the space are incremented. The geometry is in general non-Euclidean and its structure is determined by the positions of the points in the space. The type of geometry is one of the main differences with respect to classical statistical learning and other neuron models. Connections between statistical properties and the coefficients are established.

Elisa Alghisi Manganello, Germano Resconi
Systems and Computational Tools for Neuronal Retinal Models

This paper presents some systems-theoretical and computational tools which allow for qualitative and quantitative models to explain the rate of firing (outputs) of retinal ganglion cells to various stimuli, mostly in higher vertebrates. Section 1 presents the neurophysiological bases of the formal tools, in which the main point is that specialized computation by some ganglion cells is performed at the inner plexiform layer, via amacrines. Section 2 presents the minimal prerequisites for a qualitative model of the linear and non-linear behavior of ganglia in higher vertebrates. The points here are that signals reaching the inner plexiform layer are fast and retarded versions of the input image, and that non-linear local lateral interaction accounts for the non-linearities. Section 3 is devoted to computational representations that make it possible to go from qualitative to quantitative formal models. The computational tools are based on generalized Newton filters.

Roberto Moreno-Díaz, Gabriel de Blasio

Neuroinformatics and Neuroimaging

A Novel Gauss-Markov Random Field Approach for Regularization of Diffusion Tensor Maps

In this paper we propose a novel Gaussian MRF approach for regularization of tensor fields for fiber tract enhancement. The model follows the Bayesian paradigm: prior and transition. Both models are given by Gaussian distributions. The prior and the posterior distributions are Gauss-MRFs. The prior MRF promotes local spatial interactions. The posterior MRF promotes those local spatial interactions which are compatible with the observed data. All the parameters of the model are estimated directly from the data. The regularized solution is obtained by means of the Simulated Annealing algorithm. Two measures of regularization are proposed for quantifying the results. A complete volume of DT-MRI data has been processed with the current approach. Some results are presented using some visually meaningful tensor representations and are quantitatively assessed by the proposed measures of regularization.

Marcos Martín-Fernández, Raul San Josá-Estépar, Carl-Fredrik Westin, Carlos Alberola-López
Coloring of DT-MRI Fiber Traces Using Laplacian Eigenmaps

We propose a novel post processing method for visualization of fiber traces from DT-MRI data. Using a recently proposed non-linear dimensionality reduction technique, Laplacian eigenmaps [3], we create a mapping from a set of fiber traces to a low dimensional Euclidean space. Laplacian eigenmaps constructs this mapping so that similar traces are mapped to similar points, given a custom made pairwise similarity measure for fiber traces. We demonstrate that when the low-dimensional space is the RGB color space, this can be used to visualize fiber traces in a way which enhances the perception of fiber bundles and connectivity in the human brain.

Anders Brun, Hae-Jeong Park, Hans Knutsson, Carl-Fredrik Westin
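
A hedged sketch of the embedding step described above: given a precomputed symmetric pairwise-similarity matrix between items (toy points standing in for fiber traces), Laplacian eigenmaps yields a 3D embedding that is rescaled to RGB. The paper's fiber-trace similarity measure is not reproduced here.

```python
# Sketch: embed items into RGB via Laplacian eigenmaps, starting from a
# precomputed symmetric pairwise-similarity matrix W. The toy points below
# stand in for fiber traces; the trace similarity measure is not shown.
import numpy as np
from scipy.linalg import eigh

def laplacian_rgb(W):
    """Map each of the n items described by similarity matrix W to an RGB triple."""
    d = W.sum(axis=1)
    L = np.diag(d) - W
    # Generalized eigenproblem L v = lambda D v; eigenvalues come in ascending
    # order, and the first (constant) eigenvector is skipped.
    vals, vecs = eigh(L, np.diag(d))
    embedding = vecs[:, 1:4]
    lo, hi = embedding.min(axis=0), embedding.max(axis=0)
    return (embedding - lo) / (hi - lo + 1e-12)   # rescale each channel to [0, 1]

# Toy data: two clusters in 5D, similarity = Gaussian kernel on squared distances.
pts = np.vstack([np.random.randn(20, 5), np.random.randn(20, 5) + 4])
dist2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
W = np.exp(-dist2 / dist2.mean())
colors = laplacian_rgb(W)
print(colors.shape, float(colors.min()), float(colors.max()))
```
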
DT-MRI Images : Estimation, Regularization, and Application

Diffusion-Tensor MRI is a technique allowing the measurement of water molecule motion in tissue fibers, by means of acquiring multiple MRI images under differently oriented magnetic fields. This large set of raw data is then estimated into a volume of diffusion tensors (i.e. 3×3 symmetric and positive-definite matrices) that describe, through their spectral elements, the diffusivities and the main directions of the tissue fibers. We address two crucial issues encountered in this process: diffusion tensor estimation and regularization. After a review of existing algorithms, we propose alternative variational formalisms that lead to new and improved results, thanks to the introduction of important tensor constraint priors (positivity, symmetry) in the considered schemes. We finally illustrate how our set of techniques can be applied to enhance fiber tracking in the white matter of the brain.

D. Tschumperlé, R. Deriche
An Efficient Algorithm for Multiple Sclerosis Lesion Segmentation from Brain MRI

We propose a novel method for the segmentation of Multiple Sclerosis (MS) lesions in MRI. The method is based on a three-step approach: first, a conventional k-NN classifier is applied to pre-classify gray matter (GM), white matter (WM), cerebro-spinal fluid (CSF) and MS lesions from a set of prototypes selected by an expert. Second, the classification of problematic patterns is resolved by computing a fast distance transformation (DT) algorithm from the set of prototypes in the Euclidean space defined by the MRI dataset. Finally, a connected-component filtering algorithm is used to remove lesion voxels not connected to the real lesions. This method uses distance information together with intensity information to improve the accuracy of lesion segmentation and is thus especially useful when MS lesions have intensity values similar to those of other tissues. It is also well suited for interactive segmentation due to its efficiency. Results are shown on real MRI data as well as on a standard database of synthetic images.

Rubén Cárdenes, Simon K. Warfield, Elsa M. Macías, Jose Aurelio Santana, Juan Ruiz-Alzola
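
A hedged, 2D toy-data sketch of the three-step pipeline described above: k-NN pre-classification from expert prototypes, a Euclidean distance transform, and connected-component filtering. Intensities, prototypes and the size threshold are synthetic assumptions, and the distance step is simplified relative to the paper's feature-space formulation.

```python
# 2D toy-data sketch of the three-step pipeline: (1) k-NN pre-classification
# from prototype samples, (2) a Euclidean distance transform, (3) filtering
# of tiny isolated "lesion" components. All data and thresholds are synthetic.
import numpy as np
from scipy import ndimage
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
image = rng.normal(0.4, 0.05, (128, 128))          # background "tissue"
image[40:60, 40:60] += 0.4                         # a large bright "lesion"
image[100:102, 100:102] += 0.4                     # a tiny spurious blob

# 1) k-NN classification on intensity, trained on expert-picked prototypes.
proto_x = np.array([[0.40], [0.45], [0.80], [0.85]])
proto_y = np.array([0, 0, 1, 1])                   # 0 = tissue, 1 = lesion
knn = KNeighborsClassifier(n_neighbors=3).fit(proto_x, proto_y)
labels = knn.predict(image.reshape(-1, 1)).reshape(image.shape)

# 2) Distance transform (spatial distance of lesion voxels to tissue here;
#    a simplification of the feature-space distance used in the paper).
dist = ndimage.distance_transform_edt(labels == 1)

# 3) Drop connected lesion components that are too small to be real lesions.
components, n = ndimage.label(labels == 1)
sizes = ndimage.sum(labels == 1, components, index=range(1, n + 1))
keep = [i + 1 for i, s in enumerate(sizes) if s >= 50]
cleaned = np.isin(components, keep)
print(n, "candidate components,", int(cleaned.sum()), "lesion voxels kept,",
      "max distance to tissue:", round(float(dist.max()), 1))
```
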
Dynamical Components Analysis of FMRI Data: A Second Order Solution

In this paper, we present a new way to derive low-dimensional representations of functional MRI datasets. This is done by introducing a state-space formalism, where the state corresponds to the components whose dynamical structure is of interest. The rank of the selected state space is chosen by statistical comparison with null datasets. We study the validity of our estimation scheme on a synthetic dataset, and show on a real dataset how the interpretation of the complex dynamics of fMRI data is facilitated by the use of low-dimensional, denoised representations. This method makes minimal use of priors on the data structure, so that it is very practical for exploratory data analysis.

Bertrand Thirion, Olivier Faugeras
Tensor Field Regularization Using Normalized Convolution

This paper presents a filtering technique for regularizing tensor fields. We use a nonlinear filtering technique termed normalized convolution [Knutsson and Westin 1993], a general method for filtering missing and uncertain data. In the present work we extend the signal certainty function to depend on locally derived certainty information in addition to the a priori voxel certainty. This results in reduced blurring between regions of different signal characteristics, and increased robustness to outliers. A driving application for this work has been the filtering of data from Diffusion Tensor MRI.

Carl-Fredrik Westin, Hans Knutsson
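
A minimal sketch of normalized averaging, the simplest instance of normalized convolution: the certainty-weighted signal is filtered and divided by the filtered certainty, so that missing samples do not bias the output. The locally derived certainty extension described in the abstract above is not shown.

```python
# Minimal sketch of normalized averaging (simplest normalized convolution):
# filter the certainty-weighted signal and divide by the filtered certainty,
# so that missing or uncertain samples do not bias the result.
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_average(signal, certainty, sigma=2.0):
    num = gaussian_filter(signal * certainty, sigma)
    den = gaussian_filter(certainty, sigma)
    return num / np.maximum(den, 1e-12)

rng = np.random.default_rng(0)
image = np.outer(np.sin(np.linspace(0, 3, 64)), np.cos(np.linspace(0, 3, 64)))
certainty = (rng.random(image.shape) > 0.4).astype(float)  # 0 = missing sample
observed = image * certainty                               # zero-out missing data
restored = normalized_average(observed, certainty)
print("mean absolute error:", float(np.abs(restored - image).mean()))
```
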
Volumetric Texture Description and Discriminant Feature Selection for MRI

This paper considers the problem of texture description and feature selection for the classification of tissues in 3D Magnetic Resonance data. Joint statistical measures like grey-level co-occurrence matrices (GLCM) are commonly used for analysing texture in medical imaging because they are simple to implement, but they are prohibitively expensive to compute when extended to 3D. Furthermore, the issue of feature selection, which recognises the fact that some features will be either redundant or irrelevant, is seldom addressed by workers in texture classification. In this work, we develop a texture classification strategy by a sub-band filtering technique, similar to a Gabor decomposition, that is readily and cheaply extended to 3D. We further propose a generalised sequential feature selection method based on a measure of feature relevance that reduces the number of features required for classification by selecting a set of discriminant features conditioned on a set of training texture samples. We describe and illustrate the methodology by quantitatively analysing a variety of images: synthetic phantom data, natural textures, and MRI of human knees.

Abhir Bhalerao, Constantino Carlos Reyes-Aldasoro
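
A hedged sketch of greedy sequential forward feature selection, the generic idea behind the selection strategy described above; the paper's relevance measure and sub-band texture features are replaced here by cross-validated accuracy on a synthetic feature matrix.

```python
# Greedy sequential forward feature selection: repeatedly add the feature
# that most improves cross-validated accuracy, stop when nothing helps.
# Synthetic features and accuracy stand in for the paper's sub-band
# texture features and relevance measure.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 12))
X[:, 0] += 2.0 * y                       # only features 0 and 3 carry signal
X[:, 3] -= 1.5 * y

def score(features):
    clf = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(clf, X[:, features], y, cv=5).mean()

selected, remaining, best = [], list(range(X.shape[1])), 0.0
while remaining:
    gains = {f: score(selected + [f]) for f in remaining}
    f_best, s_best = max(gains.items(), key=lambda kv: kv[1])
    if s_best <= best:                   # no remaining feature improves accuracy
        break
    selected.append(f_best)
    remaining.remove(f_best)
    best = s_best
print("selected features:", selected, "cv accuracy:", round(best, 3))
```
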
White Matter Mapping in DT-MRI Using Geometric Flows

We present a 3D geometric flow designed to evolve along fiber tracts in Diffusion Tensor Magnetic Resonance Images (DT-MRI) by measuring the diffusive similarity between voxels. We therefore define a front propagation speed that is proportional to the similarity between the tensors lying on the surface and their neighbors in the propagation direction. The method is based on the assumption that successive voxels in a tract have similar diffusion properties. The front propagation is implemented using the level set methods of Osher and Sethian [1] to simplify the handling of topology changes, and it provides an elegant tool for smoothing the segmented tracts. While many methods demand a regularized tensor field, our geometric flow performs a regularization as it evolves along the fibers. This is done by a curvature-dependent smoothing term adapted for thin tubular structures. The purpose of our approach is to obtain a quantitative measure of the diffusion in segmented fiber tracts. This kind of information can also be used for white matter registration and for surgical planning.

Lisa Jonasson, Patric Hagmann, Xavier Bresson, Reto Meuli, Olivier Cuisenaire, Jean-Philippe Thiran
Anisotropic Regularization of Posterior Probability Maps Using Vector Space Projections. Application to MRI Segmentation

In this paper we address the problem of regularized data classification. To this end we propose to regularize the class-posterior probability maps spatially, to be used by a MAP classification rule, by applying a non-iterative anisotropic filter to each of the class-posterior maps. Since the filter cannot guarantee that the smoothed maps preserve their meaning as probabilities (i.e., values must be in the range [0,1] and the class-probabilities must sum to one), we project the smoothed maps onto a probability subspace. Promising results are presented for synthetic and real MRI datasets.

M. A. Rodriguez-Florido, R. Cárdenes, C. -F. Westin, C. Alberola, J. Ruiz-Alzola
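
A hedged sketch of the regularize-then-project idea described above: each class-posterior map is smoothed, the smoothed maps are projected back so that values lie in [0,1] and sum to one per voxel, and the MAP label is then taken. A plain Gaussian filter stands in for the paper's non-iterative anisotropic filter.

```python
# Sketch: smooth each class-posterior map, project the result back onto the
# probability simplex (clip to [0,1], renormalize to sum to one per voxel),
# then take the MAP label. A plain Gaussian filter stands in for the
# non-iterative anisotropic filter of the paper.
import numpy as np
from scipy.ndimage import gaussian_filter

def regularized_map(posteriors, sigma=1.5):
    """posteriors: (n_classes, H, W) array of per-voxel class posteriors."""
    smoothed = np.stack([gaussian_filter(p, sigma) for p in posteriors])
    smoothed = np.clip(smoothed, 0.0, 1.0)
    smoothed /= np.maximum(smoothed.sum(axis=0, keepdims=True), 1e-12)
    return smoothed.argmax(axis=0)                 # MAP classification

rng = np.random.default_rng(0)
noisy = rng.dirichlet([1.0, 1.0, 1.0], size=(64, 64)).transpose(2, 0, 1)
labels = regularized_map(noisy)
print(labels.shape, np.bincount(labels.ravel()))
```
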
Fast Entropy-Based Nonrigid Registration

Computer vision tasks such as learning, recognition, classification or segmentation applied to spatial data often require spatial normalization of repeated features and structures. Spatial normalization, in other words image registration, is still a big hurdle for the image processing community. Its formulation often relies on the fact that correspondence is achieved when a similarity measure is maximized. This paper presents a novel similarity measuring technique based on a coupling function inside a template matching framework. It allows using any entropy-based similarity metric, which is crucial for registration across different acquisition devices. Results using this technique are presented on a multiresolution incremental scheme.

Eduardo Suárez, Jose A. Santana, Eduardo Rovaris, Carl-Fredrik Westin, Juan Ruiz-Alzola
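
As a hedged illustration of one entropy-based similarity metric that such a framework can plug in, the sketch below estimates the mutual information of two images from their joint intensity histogram; the paper's coupling function and template-matching machinery are not reproduced.

```python
# Sketch of one entropy-based similarity measure: mutual information of two
# images estimated from their joint intensity histogram.
import numpy as np

def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
print(mutual_information(fixed, fixed))                  # high: identical images
print(mutual_information(fixed, rng.random((64, 64))))   # low: unrelated images
```
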

Image Processing

3D Reconstruction from a Vascular Tree Model

In this paper, we present a vascular tree model made with synthetic materials which allows us to obtain images for a 3D reconstruction. We have used PVC tubes of several diameters and lengths that let us evaluate the accuracy of our 3D reconstruction. In order to calibrate the camera we have used a corner detector. We have also used Optical Flow techniques to track the points forwards and backwards through the images. We describe two general techniques to extract a sequence of corresponding points from multiple views of an object. The resulting sequence of points is later used to reconstruct a set of 3D points representing the object surfaces in the scene. We have performed the 3D reconstruction by choosing a pair of images at random and computing the projection error. After several repetitions, we have found the best 3D location for each point.

Luis Álvarez, Karina Baños, Carmelo Cuenca, Julio Esclarín, Javier Sánchez
ESKMod, a CommonKADS Knowledge Model Integrating Multiple Classic Edge Based Segmentation Algorithms

The literature offers a wide set of image segmentation methods, which are usually oriented to solving specific problems. These are evaluated mainly in specific fields of work, such as the analysis of industrial pieces, classification of agricultural products or remote sensing. Currently, the main efforts are concentrated on the definition of new algorithms, generating an ever wider collection of alternative methods. Drawing on our experience of applying segmentation methods to diverse fields of work, we realized the advantages of integrating previously used methods into a single segmentation model with multiple alternatives. In this work, we present how some classic methods are integrated into a single segmentation knowledge model, using the CommonKADS modeling tools. Our final objective is to build a model of the applicability of the alternatives in order to relate them to the desired results. Here, we present how this composite knowledge model was assembled.

Isabel M. Flores-Parra, J. Fernando Bienvenido
Frequency Analysis of Contour Orientation Functions for Shape Representation and Motion Analysis

The location and characterization of edges is a crucial factor for some image processing tasks, such as shape representation and motion analysis. In this paper, we present a set of filters which allow estimating edge orientation and whose output does not vary when the input signal is rotated or when a global illumination change occurs. The outputs of these filters are used to extract a one-dimensional representation of the contour in order to characterize shapes in an accurate way by using Fourier coefficients. An energy function is built to associate shapes and determine the similarities between object contours. The elements of this energy function are weighted so that those which provide the most relevant information have a larger contribution. By extracting the parameters of the transformations which relate every image in a sequence with the following, we can determine the temporal evolution of a moving object.

Miguel Alemán-Flores, Luis Álvarez-León, Roberto Moreno-Díaz Jr.
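
A hedged sketch loosely related to the abstract above: a contour orientation function is built from a sampled closed contour and the magnitudes of its Fourier coefficients are used as a compact shape signature. The paper's orientation-estimation filters and energy function are not reproduced.

```python
# Sketch: contour orientation function of a sampled closed contour, with the
# magnitudes of its Fourier coefficients used as a compact shape signature.
import numpy as np

def orientation_descriptor(x, y, n_coeffs=10):
    """x, y: coordinates of a closed contour sampled along its perimeter."""
    dx, dy = np.gradient(x), np.gradient(y)
    orientation = np.unwrap(np.arctan2(dy, dx))        # contour orientation function
    trend = np.linspace(0, 2 * np.pi, len(orientation), endpoint=False)
    deviation = orientation - orientation[0] - trend   # deviation from a circle
    return np.abs(np.fft.rfft(deviation))[:n_coeffs]

t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
print(np.round(orientation_descriptor(np.cos(t), np.sin(t)), 2))        # circle
print(np.round(orientation_descriptor(2 * np.cos(t), np.sin(t)), 2))    # ellipse
```
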
Preprocessing Phase in the PIETSI Project (Prediction of Time Evolution Images Using Intelligent Systems)

We outline the PIETSI project, the core of which is an image prediction strategy, and discuss common preliminary image processing tasks that are relevant when facing problems such as: useless background, information overlapping (solvable if dissimilar coding is being used) and memory usage, which can be described as “marginal information efficiency”.

J. L. Crespo, P. Bernardos, M. E. Zorrilla, E. Mora
Devices to Preserve Watermark Security in Image Printing and Scanning

In this contribution we specify novel digital watermark embedding and detection algorithms that offer remarkable robustness in situations when an image is printed and later on scanned in again. Since this process can significantly affect image resolution and is commonly not done with perfect accuracy, image scaling, potentially involving different scaling factors in the horizontal and vertical directions, must be dealt with. As will be demonstrated in the contribution, the concept specified herein outperforms commercial digital image watermarking systems with respect to scale invariance and is capable of detecting watermarks in images that have been printed and scanned. Accordingly, it can be expected that these findings will be particularly useful in copyright protection schemes that involve various media transitions from digital to analog and back.

Josef Scharinger
Backmatter
Metadata
Title
Computer Aided Systems Theory - EUROCAST 2003
Edited by
Roberto Moreno-Díaz
Franz Pichler
Copyright year
2003
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-45210-2
Print ISBN
978-3-540-20221-9
DOI
https://doi.org/10.1007/b13239