
2006 | Book

Professional Practice in Artificial Intelligence

IFIP 19th World Computer Congress, TC 12: Professional Practice Stream, August 21–24, 2006, Santiago, Chile


About this book

The Second Symposium on Professional Practice in AI 2006 is a conference within the IFIP World Computer Congress 2006, Santiago, Chile. The Symposium is organised by the IFIP Technical Committee on Artificial Intelligence (Technical Committee 12) and its Working Group 12.5 (Artificial Intelligence Applications). The First Symposium in this series was one of the conferences in the IFIP World Computer Congress 2004, Toulouse, France. The conference featured invited talks by Rose Dieng, John Atkinson, John Debenham and Max Bramer. The Symposium was a component of the IFIP AI 2006 conference, organised by Professor Max Bramer. I should like to thank the Symposium General Chair, Professor Bramer, for his considerable assistance in making the Symposium happen within a very tight deadline.

These proceedings are the result of a considerable amount of hard work. Beginning with the preparation of the submitted papers, each paper was reviewed by at least two members of the international Program Committee. The authors of accepted papers then revised their manuscripts to produce their final copy. The hard work of the authors, the referees and the Program Committee is gratefully acknowledged.

The IFIP AI 2006 conference and the Symposium are the latest in a series of conferences organised by IFIP Technical Committee 12 dedicated to the techniques of Artificial Intelligence and their real-world applications. Further information about TC12 can be found on our website http://www.ifiptc12.org.

Table of Contents

Frontmatter

Learning and Neural Nets

Detection of Breast Lesions in Medical Digital Imaging Using Neural Networks

The purpose of this article is to present an experimental application for the detection of possible breast lesions by means of neural networks in medical digital imaging. This application broadens the scope of research into the creation of different types of topologies with the aim of improving existing networks and creating new architectures which allow for improved detection.

Gustavo Ferrero, Paola Britos, Ramón García-Martínez
Identification of Velocity Variations in a Seismic Cube Using Neural Networks

This research allows us to infer that, from seismic section and well data, it is possible to determine velocity anomaly variations in layers with thicknesses below the seismic resolution using neural networks.

Dario Sergio Cersósimo, Claudia Ravazoli, Ramón García-Martínez
Improving the k-NN method: Rough Set in edit training set

Rough Set Theory (RST) is a technique for data analysis. In this study, we use RST to improve the performance of the k-NN method: RST is used to edit and reduce the training set. We propose two methods to edit training sets, based on the lower and upper approximations. Experimental results show a satisfactory performance of the k-NN method using these techniques.

Yailé Caballero, Rafael Bello, Delia Alvarez, Maria M. García, Yaimara Pizano
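
A minimal sketch in Python of how lower-approximation editing can work (an illustration under our own assumptions, not the authors' exact procedure): objects whose indiscernibility class mixes decision labels are dropped before k-NN runs.

from collections import defaultdict, Counter

def lower_approximation_edit(X, y):
    """Keep only objects whose indiscernibility class (here: identical
    attribute vector) is pure, i.e. all members share one decision label."""
    groups = defaultdict(list)
    for i, row in enumerate(X):
        groups[tuple(row)].append(i)
    keep = [i for members in groups.values()
            if len({y[j] for j in members}) == 1
            for i in members]
    return [X[i] for i in keep], [y[i] for i in keep]

def knn_predict(X_train, y_train, x, k=3):
    """Plain k-NN with Hamming distance over the edited training set."""
    dist = lambda a, b: sum(u != v for u, v in zip(a, b))
    nearest = sorted(range(len(X_train)),
                     key=lambda i: dist(X_train[i], x))[:k]
    return Counter(y_train[i] for i in nearest).most_common(1)[0][0]

X = [[1, 0], [1, 0], [0, 1], [1, 1]]
y = ['a', 'b', 'a', 'a']                    # the two [1, 0] objects conflict
X_e, y_e = lower_approximation_edit(X, y)   # the conflicting pair is removed
print(knn_predict(X_e, y_e, [0, 1]))        # -> 'a'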

Agents

Membranes as Multi-agent Systems: an Application to Dialogue Modelling

Human-computer interfaces require models of dialogue structure that capture the variability and unpredictability within dialogue. In this paper, taking P systems as our starting point and extending them, via linguistic P systems, to the concept of dialogue P systems, we introduce a multi-agent formal architecture for dialogue modelling. In our model, cellular membranes become contextual agents by the adjunction of cognitive domains. With this method, the passage from real dialogue to the P systems model can be achieved in an intuitive and simple way.

Gemma Bel-Enguix, Dolores Jiménez López
Agent Planning, Models, Virtual Haptic Computing, and Visual Ontology

The paper is a basis for multiagent visual computing with the Morph Gentzen logic. A basis for VR computing, computational illusion, and virtual ontology is presented. The IM_BID model is introduced for planning, spatial computing, and visual ontology. Visual intelligent objects are applied with virtual intelligent trees to carry out visual planning. New KR techniques are presented with generic diagrams and applied to define computable models. The IM Morph Gentzen logic for multimedia computing is a new project with important computing applications. The basic principle is a mathematical logic where a Gentzen or natural deduction system is defined by taking arbitrary structures and multimedia objects coded by diagram functions. The techniques can be applied to arbitrary structures definable by infinitary languages. Multimedia objects are viewed as syntactic objects defined by functions, to which the deductive system is applied.

Cyrus F Nourani
Improving Interoperability Among Learning Objects Using FIPA Agent Communication Framework

The reusability of learning material is based on three main features: modularity, discoverability and interoperability. Several researchers on Intelligent Learning Environments have proposed the use of architectures based on agent societies. Learning systems based on multi-agent architectures support the development of more interactive and adaptable systems, while the Learning Objects approach provides reusability. We propose an approach where learning objects are built based on agent architectures. This paper discusses how the Intelligent Learning Objects approach can be used to improve the interoperability between learning objects and pedagogical agents.

Ricardo Azambuja Silveira, Eduardo Rodrigues Gomes, Rosa Vicari
An Agent-Oriented Programming Language for Computing in Context

Context-aware intelligent agents are key components in the development of pervasive systems. In this paper, we present an extension of a BDI programming language to support ontological reasoning and ontology-based speech act communication. These extensions were guided by the new requirements brought about by such emerging computing styles. These new features are essential for the development of multi-agent systems with context awareness, given that ontologies have been widely pointed out as an appropriate way to model contexts.

Renata Vieira, Álvaro F. Moreira, Rafael H. Bordini, Jomi Hübner

Search

A Little Respect (for the Role of Common Components in Heuristic Search)

A search process implies an exploration of new, unvisited states. This quest to find something new tends to emphasize the processes of change. However, heuristic search is different from random search because features of previous solutions are preserved — even if the preservation of these features is a passive decision. A new parallel simulated annealing procedure is developed that makes some active decisions on which solution features should be preserved. The improved performance of this modified procedure helps demonstrate the beneficial role of common components in heuristic search.

Stephen Chen
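
A toy Python sketch of the "active preservation" idea (our own hypothetical reading, not the paper's procedure): two annealing chains avoid flipping bits on which they currently agree.

import math, random

TARGET = [1] * 20                            # hidden optimum of a toy problem
score = lambda s: sum(a == b for a, b in zip(s, TARGET))

def anneal_step(sol, other, temp, p_protect=0.9):
    i = random.randrange(len(sol))
    if sol[i] == other[i] and random.random() < p_protect:
        return sol                           # actively preserve a common component
    cand = sol[:]
    cand[i] ^= 1                             # flip one bit
    delta = score(cand) - score(sol)
    if delta >= 0 or random.random() < math.exp(delta / temp):
        return cand                          # Metropolis acceptance rule
    return sol

a = [random.randint(0, 1) for _ in range(20)]
b = [random.randint(0, 1) for _ in range(20)]
temp = 2.0
for _ in range(2000):
    a, b = anneal_step(a, b, temp), anneal_step(b, a, temp)
    temp = max(0.01, temp * 0.995)           # geometric cooling schedule
print(score(a), score(b))                    # both chains typically end near 20
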
Recursive and Iterative Algorithms for N-ary Search Problems

The paper analyses and compares alternative iterative and recursive implementations of N-ary search algorithms in hardware (in field programmable gate arrays, in particular). The improvements over the previous results have been achieved with the aid of the proposed novel methods for the fast implementation of hierarchical algorithms. The methods possess the following distinctive features: 1) providing sub-algorithms with multiple entry points; 2) fast stack unwinding for exits from recursive sub-algorithms; 3) hierarchical returns based on two alternative approaches; 4) rational use of embedded memory blocks for the design of a hierarchical finite state machine.

Valery Sklyarov, Iouliia Skliarova
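
The paper targets FPGA hardware, but the recursive/iterative contrast can be sketched in software. A hedged Python illustration of N-ary search in both styles; the hardware-specific techniques (multiple entry points, fast stack unwinding, embedded memory blocks) have no direct software analogue here.

def nary_search_rec(a, x, lo, hi, n=4):
    """Recursive N-ary search on sorted a[lo:hi]; returns index or -1."""
    if hi <= lo:
        return -1
    if hi - lo == 1:
        return lo if a[lo] == x else -1
    width = (hi - lo + n - 1) // n           # split range into n slices
    for s in range(lo, hi, width):
        e = min(s + width, hi)
        if x <= a[e - 1]:                    # first slice whose maximum >= x
            return nary_search_rec(a, x, s, e, n)
    return -1

def nary_search_iter(a, x, n=4):
    """The same search with the recursion flattened into a loop."""
    lo, hi = 0, len(a)
    while hi - lo > 1:
        width = (hi - lo + n - 1) // n
        for s in range(lo, hi, width):
            e = min(s + width, hi)
            if x <= a[e - 1]:
                lo, hi = s, e
                break
        else:
            return -1                        # x exceeds every slice maximum
    return lo if lo < hi and a[lo] == x else -1

data = [2, 3, 5, 7, 11, 13, 17, 19]
print(nary_search_rec(data, 13, 0, len(data)), nary_search_iter(data, 13))  # 5 5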

Ontologies and Intelligent Web

Process of Ontology Construction for the Development of an Intelligent System for the Organization and Retrieval of Knowledge in Biodiversity — SISBIO

This work describes the ontology construction process for the development of an Intelligent System for the Organization and Retrieval of Knowledge in Biodiversity (SISBIO). The system aims at the production of strategic information for the biofuel chain. Two main methodologies are used for the construction of the ontologies: knowledge engineering and ontology engineering. The first consists of extracting and organizing the biofuel specialists' knowledge, while ontology engineering is used to represent the knowledge through indicative expressions and their relations, developing a semantic network of relationships.

Filipe Corrêa da Costa, Hugo Cesar Hoeschl, Aires José Rover, Tânia Cristina D’Agostini Bueno
Service interdependencies: insight into use cases for service composition

The paper analyses several of the most appealing use cases for Semantic Web services and their composition. They are considered from the perspective of service types, QoS parameters, semantic description and user preferences. We introduce different levels of service composition and discuss the implications of each.

Witold Abramowicz, Agata Filipowska, Monika Kaczmarek, Tomasz Kaczmarek, Marek Kowalkiewicz, Wojciech Rutkowski, Karol Wieloch, Dominik Zyskowski
Combining Contexts and Ontologies: A Case Study

In recent years, different proposals that integrate ontologies and contexts, taking advantage of their abilities to achieve semantic information interoperability, have arisen. Each of them has considered the problem from a different perspective. In particular, through a real case study, this paper shows the problems involved in supporting semantic information interoperability between the different parties of a collaborative business relation over the Internet. Furthermore, this paper analyzes those problems from the perspective of previous proposals and highlights their weaknesses and strengths.

Mariela Rico, Ma. Laura Caliusco, Omar Chiotti, Ma. Rosa Galli
The RR Project — A Framework for Relationship Network Viewing and Management

The Relationship Networks Project (RR, from the Portuguese Redes de Relacionamento) intends to create a framework that allows, through fast data modeling, the implementation of interface elements that describe, in a clearly visual two-dimensional presentation, a relationship network among heterogeneous items. This environment also allows the machine to perform operations over these relations, such as finding paths or sets, to help the implementation of AI algorithms or data extraction by the final user. Through graph theory, with visual items, it is possible to find elements with specific characteristics and the relationships between them by applying filters, refining searches inside extremely large datasets, or showing differentiated connection maps. Two prototypes were created with this framework: a system for viewing sets of telephone calls and financial transactions, and a system for viewing the ontology of a digital dictionary inside a semantic network. Another prototype, also for semantic network visualization, is under construction. This document presents the basic RR structure, showing and justifying the creation of the two systems referred to above.

César Stradiotto, Everton Pacheco, Andre Bortolon, Hugo Hoeschl
Web Service-based Business Process Automation Using Matching Algorithms

In this paper, we focus on two problems of Web service-based business process integration: the discovery of Web services based on the capabilities and properties of published services, and the composition of business processes based on the business requirements of submitted requests. We propose a solution to these problems comprising multiple matching algorithms: a micro-level matching algorithm and several macro-level matching algorithms. The solution from the macro-level matching algorithms is optimal in terms of meeting a certain business objective, e.g., minimizing the cost or execution time, or maximizing the total utility value of business properties of interest. Furthermore, we show how existing Web service standards, UDDI and BPEL4WS, can be used and extended to specify the capabilities of services and the business requirements of requests, respectively.

Yanggon Kim, Juhnyoung Lee
A Graphic Tool for Ontology Viewing Based on Graph Theory

The Knowledge Engineering Suite is an ontology production method, based on relationship networks, for knowledge representation inside specific contexts. The production of these ontologies has three basic steps: capturing the client data, creating the knowledge base, and building information retrieval and consultation interfaces for the final users. During the knowledge base creation process, data verification is required to identify nonconformities in the produced ontological network. Because it has a tabular interface, the verification step has limitations in data visualization, and consequently in tool usability, making the work of tracking errors or missing data uncomfortable and susceptible to further errors. To make it easier to view the created ontologies in the shape in which they were planned, a viewing tool was implemented, making the work of tracking data errors more efficient. The software also offers filtering and data selection resources to isolate common groups when the knowledge engineer is looking for nonconformities. This software and its functions are described here.

César Stradiotto, Everton Pacheco, André Bortolon, Hugo Hoeschl

Knowledge Engineering

Using Competence Modeling to create Knowledge Engineering Team

This paper is about applying competence modeling to the role of knowledge engineer in the case of the company WBSA Sistemas Inteligentes S.A. The process was based on the Lucia and Lepsinger model, by which competences are characterized through the identification of situations and behaviors considered relevant to the engineer's performance. Among the data-collection techniques suggested by the model, a number of individual interviews were undertaken, and in the end a set of eleven competences regarded as necessary for the satisfactory performance of a knowledge engineer was defined and validated.

Aline T. Nicolini, Cristina S. Santos, Hugo Cesar Hoeschl, Irineu Theiss, Tânia C. D. Bueno
Intelligent Systems Engineering with Reconfigurable Computing

Intelligent computing systems comprising microprocessor cores, memory and reconfigurable user-programmable logic represent a promising technology which is well suited for applications such as digital signal and image processing, cryptography and encryption, etc. These applications frequently employ recursive algorithms, which are particularly appropriate when the underlying problem is defined in recursive terms and it is difficult to reformulate it as an iterative procedure. It is known, however, that hardware description languages (such as VHDL) as well as system-level specification languages (such as Handel-C) that are usually employed for specifying the required functionality of reconfigurable systems do not provide direct support for recursion. In this paper a method allowing recursive algorithms to be easily described in Handel-C and implemented in an FPGA (field-programmable gate array) is proposed. The recursive search algorithm for the knapsack problem is considered as an example.

Iouliia Skliarova
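
Handel-C provides no direct recursion, so the usual transformation replaces the call tree with an explicit stack. A Python sketch of that transformation for the knapsack search named above (illustrative only; the paper's contribution is the hardware realization).

def knapsack_explicit_stack(weights, values, capacity):
    """Exhaustive 0/1 knapsack search with the recursion made explicit."""
    best = 0
    stack = [(0, 0, 0)]           # (next item index, weight so far, value so far)
    while stack:                  # this loop replaces the recursive call tree
        i, w, v = stack.pop()
        if w > capacity:
            continue              # infeasible branch: prune
        best = max(best, v)
        if i < len(weights):
            stack.append((i + 1, w, v))                           # skip item i
            stack.append((i + 1, w + weights[i], v + values[i]))  # take item i
    return best

print(knapsack_explicit_stack([2, 3, 4], [3, 4, 5], 5))  # -> 7 (items 0 and 1)
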
Ontological Evaluation in the Knowledge Based System

In the last few years, several studies have emphasized the use of ontologies as an alternative for the organization of information. The notion of ontology has become popular in fields such as intelligent information integration, information retrieval on the Internet, and knowledge management. Different groups use different approaches to develop and verify the effectiveness of ontologies [1] [2] [3]. This diversity can be a factor that makes the formulation of formal evaluation methodologies difficult. This paper intends to provide a way to identify the effectiveness of ontology-based knowledge representation developed through Knowledge Based System tools, since all processing and storage of gathered information and all knowledge base organization are performed using this structure. Our evaluation is based on case studies of the KMAI system [4], involving a real-world ontology for the money laundering domain. Our results indicate that modification of the ontology structure can effectively reveal faults, as long as they adversely affect the program state.

Tania C. D. Bueno, Sonali Bedin, Fabricia Cancellier, Hugo C. Hoeschl
A Model for Concepts Extraction and Context Identification in Knowledge Based Systems

Information Retrieval Systems normally deal with keyword-based technologies. Although those systems reach satisfactory results, they are not able to answer more complex queries posed by users, especially those directly in natural language. For that, there are Knowledge-Based Systems, which use ontologies to represent the knowledge embedded in texts. Currently, the construction of ontologies is based on the participation of three actors: the knowledge engineer, the domain specialist, and the system analyst. This work demands time due to the various studies that must be made to determine which elements should participate in the knowledge base and how these elements are interrelated. Thus, using computational systems that at least accelerate this work is fundamental to bringing systems to market. A model that allows a computer to represent the knowledge directly, with minimal or even no human intervention, enlarges the range of domains a system can maintain, making it more efficient and user-friendly.

Andre Bortolon, Hugo Cesar Hoeschl, Christianne C. S. R. Coelho, Tania Cristina D’Agostini Bueno
Proposal of Fuzzy Object Oriented Model in Extended JAVA

Imperfections in knowledge should be considered when modeling complex problems. One solution is to develop a model that reduces the complexity; another option is to represent the imperfections (uncertainty, vagueness and incompleteness) in the knowledge base. This paper proposes to extend the classical object-oriented architecture in order to allow the modeling of problems with intrinsic imperfections. The aim is to use the Java object-oriented architecture to carry out this objective. In consequence, it is necessary to define the semantics for this extension of Java, which will be called Fuzzy Java. The NRC FuzzyJ library allows the representation of vagueness (fuzziness) and uncertainty in class attributes. The extended Java also allows the modeling of fuzzy inheritance.

Wilmer Pereira

Knowledge Discovery

Adopting Knowledge Discovery in Databases for Customer Relationship Management in Egyptian Public Banks

We propose a framework for studying the effect of KDD on CRM in the Egyptian banking sector. We believe that the KDD process and its applications may play a significant role in Egyptian banks in improving CRM, in particular customer retention. Our belief is supported by the results of a field survey at the largest Egyptian bank.

A. Khedr, J. N. Kok
Pattern Discovery and Model Construction: an Evolutionary Learning and Data Mining Approach

In the information age, knowledge leads to profits, power and success. As an ancestor of data mining, machine learning has concerned itself with the discovery of new knowledge on its own. This paper presents experimental results produced by genetic algorithms in the domains of model construction and event prediction, the areas on which data mining systems have been focusing. The experimental results show that genetic algorithms are able to discover useful patterns and regularities in large sets of data, and to construct models that conceptualize input data. This demonstrates that genetic algorithms are a powerful and useful learning method for solving fundamental tasks data mining systems face today.

Harry Zhou
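
For readers unfamiliar with the machinery, a minimal genetic algorithm of the kind evaluated here can be sketched in a few lines of Python; the OneMax fitness below is a purely illustrative stand-in for a real pattern-quality score.

import random

def ga(pop_size=30, length=24, generations=60, p_mut=0.02):
    fitness = lambda s: sum(s)                   # stand-in pattern score
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < p_mut) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(sum(ga()))  # typically close to 24 after 60 generations
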
Towards a Framework for Knowledge Discovery
An Architecture for Distributed Inductive Databases

We discuss how data mining, pattern bases and databases can be integrated into inductive databases, which make data mining an inductive query process. We propose a software architecture for such inductive databases, and extend this architecture to support the clustering of inductive databases and to make them suitable for data mining on the grid.

Jeroen S. de Bruin, Joost N. Kok

Language Processing

Prototype Of Speech Translation System For Audio Effective Communication

This document presents the development of a prototype translation system as a thesis project. It consists basically of capturing a voice stream from the speaker and integrating advanced technologies of voice recognition, instantaneous translation, and communication over the RTP/RTCP internet protocol (Real-time Transport Protocol) to send information in real time to the receiver. The prototype does not transmit images; it addresses only the audio stage. Finally, besides tackling a problem of personal communications, the project seeks to contribute to the development of activities related to speech recognition, motivating new research and advances in the area.

Richard Rojas Bello, Erick Araya Araya, Luis Vidal Vidal
A Knowledge Representation Semantic Network for a Natural Language Syntactic Analyzer Based on the UML

The need for improving software processes has brought the software engineering and artificial intelligence areas closer together. Artificial intelligence techniques have been used to support software development processes, particularly through intelligent assistants that offer knowledge-based support to software process activities. The context of the present work is a project for an intelligent assistant that implements a linguistic technique with the purpose of extracting object-oriented elements from requirement specifications in natural language through two main functionalities: syntactic and semantic analysis. The syntactic analysis has the purpose of extracting the syntactic constituents of a sentence, and the semantic analysis has the goal of extracting the meaning of a set of sentences, i.e., a text. This paper focuses on the syntactic analysis functionality and applies the UML to its core as a semantic network for knowledge representation, based on the premise that the UML is the de facto standard general modeling language for software development.

Alberto Tavares da Silva, Luis Alfredo V. Carvalho

Applications

Fast simulation of animal locomotion: lamprey swimming

Biologically realistic computer simulation of vertebrate locomotion is an interesting and challenging problem with applications in computer graphics and robotics. One current approach simulates a relatively simple vertebrate, the lamprey, using recurrent neural networks for the spine and a physical model for the body. The model is realized as a system of differential equations. The drawback with this approach is the slow speed of simulation. This paper describes two approaches to speeding up simulation of lamprey locomotion without sacrificing too much biological realism: (i) use of superior numerical integration algorithms and (ii) simplifications to the neural architecture of the lamprey.

Matthew Beauregard, Paul J. Kennedy, John Debenham
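
The first speed-up, a better integrator, can be illustrated by comparing explicit Euler with classical fourth-order Runge-Kutta; the harmonic oscillator below is only a stand-in for the lamprey's neuro-mechanical ODE system.

def euler_step(f, y, t, h):
    return [yi + h * ki for yi, ki in zip(y, f(t, y))]

def rk4_step(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * k for yi, k in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * k for yi, k in zip(y, k2)])
    k4 = f(t + h, [yi + h * k for yi, k in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

osc = lambda t, y: [y[1], -y[0]]      # y'' = -y, exact solution cos(t)
y_e = y_r = [1.0, 0.0]
h, steps = 0.1, 100                   # at this step size RK4 stays accurate
for i in range(steps):                # while Euler visibly drifts
    y_e = euler_step(osc, y_e, i * h, h)
    y_r = rk4_step(osc, y_r, i * h, h)
print(y_e[0], y_r[0])                 # compare with cos(10) ≈ -0.839
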
Towards a case-based reasoning approach to analyze road accidents

In this paper the prototype of a system designed to analyze road accidents is presented. The analysis is carried out in order to recognize, within accident reports, general mechanisms that represent prototypes of road accidents. Case Based Reasoning (CBR) is the chosen problem-solving paradigm. Natural language documents and semi-structured documents are used to build the cases of our system, which creates a difficulty. To cope with this difficulty we propose approaches integrating semantic resources. Hence, an ontology of accidentology and a terminology of road accidents are used to build cases. The alignment of the two resources supports the retrieval process. A data processing model, based on models of accidentology, is proposed to represent the cases of the system. This paper presents the architecture of ACCTOS (ACCident TO Scenarios), a case-based reasoning system prototype. The model used to represent the cases is introduced and the phases of the case-based reasoning cycle are detailed.

Valentina Ceausu, Sylvie Desprès
Solving the Short Run Economic Dispatch Problem Using Concurrent Constraint Programming

This paper describes an application for solving the Short-Run Economic Dispatch Problem. This problem consists of searching for the hourly schedule of active power generated in electrical networks in order to meet the demand at minimum cost. The solution cost is associated with the immediate costs of thermal units and the future costs of hydropower stations. The application was implemented using Mozart with real-domain constraints and a hybrid model combining real (XRI) and finite domains (FD). The implemented tool showed promising results, since the solution costs found were lower than those reported in the literature for the same kind of problems. Moreover, in order to test the tool against real problems, a system with data from real networks was implemented, and the solution found was good enough in terms of time efficiency and accuracy. The paper also shows the usability of the Mozart language for modeling real combinatorial problems.

Juan Francisco Díaz F., Iván Javier Romero, Carlos Lozano
Micromechanics as a Testbed for Artificial Intelligence Methods Evaluation

Some artificial intelligence (AI) methods could be used to improve the performance of automation systems in manufacturing processes. However, the application of these methods in industry is not widespread because of the high cost of experiments with AI systems applied to conventional manufacturing systems. To reduce the cost of such experiments, we have developed special micromechanical equipment, similar to conventional mechanical equipment but of much smaller overall size and therefore of lower cost. This equipment can be used to evaluate different AI methods in an easy and inexpensive way. The methods that show good results can be transferred to industry through appropriate scaling. This paper contains a brief description of low-cost microequipment prototypes and some AI methods that can be evaluated with these prototypes.

Ernst Kussul, Tatiana Baidyk, Felipe Lara-Rosano, Oleksandr Makeyev, Anabel Martin, Donald Wunsch
Burst Synchronization in Two Pulse-Coupled Resonate-and-Fire Neuron Circuits

We have shown out-of-phase burst synchronization in a model of two pulse-coupled resonate-and-fire neurons (RFNs) and in its analog integrated circuit implementation. Through circuit simulations using SPICE, we have confirmed the stable region for out-of-phase burst synchronization. Such burst synchronization is peculiar to the system of two pulse-coupled RFN circuits. As further work, we are going to study collective behavior in large-scale coupled RFN circuits from the viewpoints of both theoretical analysis and practical implementation.

Kazuki Nakada, Tetsuya Asai, Hatsuo Hayashi
Ant Colonies using Arc Consistency Techniques for the Set Partitioning Problem

In this paper, we solve benchmarks of the Set Covering Problem and the Equality Constrained Set Covering, or Set Partitioning, Problem. The resolution techniques used were Ant Colony Optimization algorithms and hybridizations of Ant Colony Optimization with Constraint Programming techniques based on Arc Consistency. The concept of Arc Consistency plays an essential role in constraint satisfaction as a problem simplification operation and as a tree-pruning technique during search, through the detection of local inconsistencies with the uninstantiated variables. In the proposed hybrid algorithms, we explore the addition of this mechanism to the construction phase of the ants so they generate only feasible partial solutions. Computational results are presented showing the advantages of using this kind of additional mechanism with Ant Colony Optimization in strongly constrained problems where pure ant algorithms are not successful.

Broderick Crawford, Carlos Castro
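
A compact Python sketch of the hybrid idea on hypothetical instance data; a simple reachability check stands in for full arc consistency. Ants choose columns by pheromone, and partial solutions that leave some element uncoverable are pruned immediately.

import random

ELEMENTS = {0, 1, 2, 3}
COLUMNS = {'A': {0, 1}, 'B': {2, 3}, 'C': {0, 2}, 'D': {1, 3}, 'E': {3}}
COST = {'A': 2, 'B': 2, 'C': 3, 'D': 3, 'E': 1}
tau = {c: 1.0 for c in COLUMNS}                 # pheromone trails

def construct():
    covered, chosen = set(), []
    while covered != ELEMENTS:
        feasible = [c for c in COLUMNS
                    if c not in chosen and not (COLUMNS[c] & covered)]
        # propagation step: fail early if some element is no longer coverable
        if not feasible or any(all(e not in COLUMNS[c] for c in feasible)
                               for e in ELEMENTS - covered):
            return None, float('inf')
        weights = [tau[c] / COST[c] for c in feasible]
        pick = random.choices(feasible, weights=weights)[0]
        chosen.append(pick)
        covered |= COLUMNS[pick]
    return chosen, sum(COST[c] for c in chosen)

best, best_cost = None, float('inf')
for _ in range(200):
    sol, cost = construct()
    if sol and cost < best_cost:
        best, best_cost = sol, cost
    if sol:
        for c in sol:
            tau[c] += 1.0 / cost                # reinforce good trails
print(best, best_cost)                          # ['A', 'B'] (in some order), cost 4
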
Automatic Query Recommendation using Click-Through Data

We present a method to help a user redefine a query by suggesting a list of similar queries. The proposed method is based on click-through data, where sets of similar queries can be identified. The scientific literature shows that similar queries are useful for the identification of the different information needs behind a query. Unlike most previous work, in this paper we focus on the discovery of better queries rather than related queries. We show with experiments over real data that the identification of better queries is useful for query disambiguation and query specialization.

Georges Dupret, Marcelo Mendoza
A Shared-Memory Multiprocessor Scheduling Algorithm

This paper presents an extension of the Latency Time (LT) scheduling algorithm for assigning tasks with arbitrary execution times on a multiprocessor with shared memory. The Extended Latency Time (ELT) algorithm adds to the priority function the synchronization associated with access to the shared memory. The assignment is carried out by associating with each task a time window of the same size as its duration, which decreases for every time unit that goes by. The proposed algorithm is compared with the Insertion Scheduling Heuristic (ISH). Analysis of the results established that ELT performs better with fine-granularity tasks (computing time comparable to synchronization time), and also when the number of processors available to carry out the assignment increases.

Irene Zuccar, Mauricio Solar, Fernanda Kri, Víctor Parada
Web Attack Detection Using ID3

Decision tree learning algorithms have been successfully used in knowledge discovery. They use induction in order to provide an appropriate classification of objects in terms of their attributes, inferring decision tree rules. This paper reports on the use of ID3 for Web attack detection. Even though simple, ID3 is sufficient to tell apart a number of Web attacks, including a large proportion of their variants. It also surpasses existing methods: it yields a higher true-positive detection rate and a lower false-positive one. The IDS outputs classification rules that are easy to read, so security officers are more likely to grasp the root of an attack and to extend the capabilities of the classifier.

Víctor H. García, Raúl Monroy, Maricela Quintana
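
At the heart of ID3 is choosing, at each node, the attribute with the highest information gain. A small Python sketch with hypothetical request attributes (the paper's actual feature set is not reproduced here).

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attr):
    n, remainder = len(rows), 0.0
    for v in {r[attr] for r in rows}:
        subset = [l for r, l in zip(rows, labels) if r[attr] == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

rows = [{'method': 'GET', 'has_sql': 0}, {'method': 'GET', 'has_sql': 1},
        {'method': 'POST', 'has_sql': 1}, {'method': 'GET', 'has_sql': 0}]
labels = ['normal', 'attack', 'attack', 'normal']
best = max(rows[0], key=lambda a: info_gain(rows, labels, a))
print(best)  # 'has_sql': it separates the two classes perfectly
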
A Statistical Sampling Strategy for Iris Recognition

We present a new approach for iris recognition based on a random sampling strategy. Iris recognition is a method of identifying individuals based on the analysis of the eye's iris. This technique has received a great deal of attention lately, mainly due to the iris's unique characteristics: a highly randomized appearance and the impossibility of altering its features. A typical iris recognition system is composed of four phases: image acquisition and preprocessing, iris localization and extraction, iris feature characterization, and comparison and matching. Our work uses standard integrodifferential operators to locate the iris. Then, we process the iris image with histogram equalization to compensate for illumination variations. The characterization of iris features is performed by using accumulated histograms, built from randomly selected subimages of the iris. After that, a comparison is made between the accumulated histograms of pairs of iris samples, and a decision is taken based on their differences and on a threshold calculated experimentally. We ran experiments with a database of 210 irises from 70 individuals, and found a rate of successful identifications on the order of 97%.

Luis E. Garza Castañon, Saul Montes de Oca, Rubén Morales-Menéndez
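
A hedged Python sketch of the matching step, with made-up parameter values: accumulated (cumulative) histograms of randomly sampled subimages are compared by absolute difference against a calibrated threshold.

import random

def accumulated_histogram(patch, bins=32):
    hist = [0] * bins
    for px in patch:                      # px: gray level in [0, 255]
        hist[px * bins // 256] += 1
    total, acc = 0, []
    for h in hist:                        # accumulate and normalize
        total += h
        acc.append(total / len(patch))
    return acc

def iris_distance(img_a, img_b, samples=20, patch=64):
    d = 0.0
    for _ in range(samples):              # random sampling strategy
        i = random.randrange(len(img_a) - patch)
        ha = accumulated_histogram(img_a[i:i + patch])
        hb = accumulated_histogram(img_b[i:i + patch])
        d += sum(abs(p - q) for p, q in zip(ha, hb))
    return d / samples

THRESHOLD = 1.5                           # would be calibrated experimentally
a = [random.randrange(256) for _ in range(4096)]
b = [min(255, px + 4) for px in a]        # same iris, slightly brighter capture
print(iris_distance(a, b) < THRESHOLD)    # True: the histograms barely differ
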
An Application of ARX Stochastic Models to Iris Recognition

We present a new approach for iris recognition based on stochastic autoregressive models with exogenous input (ARX). Iris recognition is a method of identifying persons based on the analysis of the eye's iris. A typical iris recognition system is composed of four phases: image acquisition and preprocessing, iris localization and extraction, iris feature characterization, and comparison and matching. The main contribution of this work is in the step of characterizing iris features by using ARX models: every iris in the database is represented by an ARX model learned from data. In the comparison and matching step, data taken from an iris sample are substituted into every ARX model and residuals are generated. A decision to accept or reject is taken based on the residuals and on a threshold calculated experimentally. We conducted experiments with two different databases. Under certain conditions, we found a rate of successful identifications on the order of 99.7% for one database and 100% for the other.

Luis E. Garza Castañón, Saúl Montes de Oca, Rubén Morales-Menéndez
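
A toy Python sketch of the ARX idea, assuming a second-order model y(t) = a1*y(t-1) + a2*y(t-2) + b1*u(t-1) + e(t) fitted by least squares; the signals are synthetic stand-ins for iris data.

import random

def solve3(m, v):
    """Gaussian elimination with partial pivoting for a 3x3 system."""
    a = [row[:] + [vi] for row, vi in zip(m, v)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(a[r][c]))
        a[c], a[p] = a[p], a[c]
        for r in range(3):
            if r != c:
                f = a[r][c] / a[c][c]
                a[r] = [x - f * z for x, z in zip(a[r], a[c])]
    return [a[i][3] / a[i][i] for i in range(3)]

def fit_arx(y, u):
    """Least squares for [a1, a2, b1] via the normal equations."""
    rows = [[y[t - 1], y[t - 2], u[t - 1]] for t in range(2, len(y))]
    rhs = [y[t] for t in range(2, len(y))]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(3)]
    return solve3(ata, atb)

def residual(theta, y, u):
    a1, a2, b1 = theta
    errs = [y[t] - (a1 * y[t - 1] + a2 * y[t - 2] + b1 * u[t - 1])
            for t in range(2, len(y))]
    return sum(e * e for e in errs) / len(errs)

u = [random.uniform(-1, 1) for _ in range(300)]        # exogenous input
y = [0.0, 0.0]
for t in range(2, 300):                                # the "enrolled" dynamics
    y.append(0.6 * y[t - 1] - 0.2 * y[t - 2] + 0.8 * u[t - 1]
             + random.gauss(0, 0.01))
theta = fit_arx(y, u)
print(residual(theta, y, u) < 0.01)                    # same iris: accept
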
Metadata
Title
Professional Practice in Artificial Intelligence
Edited by
John Debenham
Copyright Year
2006
Publisher
Springer US
Electronic ISBN
978-0-387-34749-3
Print ISBN
978-0-387-34655-7
DOI
https://doi.org/10.1007/978-0-387-34749-3