
2014 | Book

Artificial Intelligence: Methods and Applications

8th Hellenic Conference on AI, SETN 2014, Ioannina, Greece, May 15-17, 2014. Proceedings

Editors: Aristidis Likas, Konstantinos Blekas, Dimitris Kalles

Publisher: Springer International Publishing

Book series: Lecture Notes in Computer Science


About this book

This book constitutes the proceedings of the 8th Hellenic Conference on Artificial Intelligence, SETN 2014, held in Ioannina, Greece, in May 2014. The 34 regular papers were selected from 60 submissions; in addition, 5 submissions were accepted as short papers and 15 papers were accepted for the four special sessions. They deal with emergent topics of artificial intelligence and come from the SETN main conference as well as from the following special sessions: action languages (theory and practice); computational intelligence techniques for bio-signal analysis and evaluation; game artificial intelligence; and multimodal recommendation systems and their applications to tourism.

Table of contents

Frontmatter

Main Conference Regular Papers

Performance-Estimation Properties of Cross-Validation-Based Protocols with Simultaneous Hyper-Parameter Optimization

In a typical supervised data analysis task, one needs to perform the following two tasks: (a) select the best combination of learning methods (e.g., for variable selection and classifier) and tune their hyper-parameters (e.g., K in K-NN), also called model selection, and (b) provide an estimate of the performance of the final, reported model. Combining the two tasks is not trivial: when one selects the set of hyper-parameters that seems to provide the best estimated performance, this estimate is optimistic (biased/overfitted) due to performing multiple statistical comparisons. In this paper, we confirm that simple Cross-Validation with model selection is indeed optimistic (overestimates) in small-sample scenarios. In comparison, Nested Cross-Validation and the method by Tibshirani and Tibshirani provide conservative estimates, with the latter protocol being more computationally efficient. The role of stratification of samples is also examined, and it is shown that stratification is beneficial.
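The protocols compared above can be illustrated with a minimal sketch of nested cross-validation. This toy example (a k-NN learner on synthetic one-dimensional data, with the hyper-parameter grid `k_grid` chosen arbitrarily) is purely illustrative and is not the authors' implementation:

```python
from statistics import mean

def knn_predict(train, x, k):
    """Toy 1-D k-NN: majority label among the k nearest training points."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

def cv_score(data, k, folds=5):
    """Plain K-fold cross-validated accuracy for a fixed hyper-parameter k."""
    scores = []
    for i in range(folds):
        test = data[i::folds]
        train = [p for j, p in enumerate(data) if j % folds != i]
        scores.append(mean(knn_predict(train, x, k) == y for x, y in test))
    return mean(scores)

def nested_cv(data, k_grid, outer_folds=5):
    """Nested CV: the inner loop (cv_score) tunes k on the outer-training
    fold only, so the outer estimate is not biased by the selection."""
    outer_scores = []
    for i in range(outer_folds):
        test = data[i::outer_folds]
        train = [p for j, p in enumerate(data) if j % outer_folds != i]
        best_k = max(k_grid, key=lambda k: cv_score(train, k))   # model selection
        outer_scores.append(mean(knn_predict(train, x, best_k) == y
                                 for x, y in test))              # performance estimation
    return mean(outer_scores)
```

By contrast, reporting `max(cv_score(data, k) for k in k_grid)` would be the optimistic protocol the paper warns about, since the same folds serve both selection and estimation.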

Ioannis Tsamardinos, Amin Rakhshani, Vincenzo Lagani
An Incremental Classifier from Data Streams

A novel evolving fuzzy rule-based classifier, namely the parsimonious classifier (pClass), is proposed in this paper. pClass can start its learning process either from scratch with an empty rule base or from an initially trained fuzzy model. Importantly, pClass not only adopts the open-structure concept, in which knowledge is built automatically during training (well known as a main pillar of learning from streaming examples), but also incorporates the so-called plug-and-play principle, in which all learning modules are coupled in the training process, in order to diminish the need for pre- or post-processing steps that would undermine the firm logic of an online classifier. Accordingly, pClass is equipped with rule growing, pruning, recall and input-weighting techniques, all performed on the fly during training. The viability of pClass has been tested on real-world and synthetic data streams containing various types of concept drift and compared with state-of-the-art classifiers, against which pClass delivers the most encouraging numerical results in terms of classification rate, number of fuzzy rules, number of rule-base parameters and runtime.

Mahardhika Pratama, Sreenatha G. Anavatti, Edwin Lughofer
Sequential Sparse Adaptive Possibilistic Clustering

Possibilistic clustering algorithms have attracted considerable attention during the last two decades. A major issue affecting the performance of these algorithms is that they involve certain parameters that need to be estimated accurately beforehand and remain fixed during execution. Recently, a possibilistic clustering scheme has been proposed that allows the adaptation of these parameters and imposes sparsity, in the sense that it forces the data points to “belong” to only a few (or even no) clusters. The algorithm does not require prior knowledge of the exact number of clusters but, rather, only a crude overestimate of it. However, it requires the estimation of two additional parameters. In this paper, a sequential version of this scheme is proposed, which possesses all the advantages of its ancestor and, in addition, requires the (crude) estimation of just a single parameter. Simulation results are provided that show the effectiveness of the proposed algorithm.

Spyridoula D. Xenaki, Konstantinos D. Koutroumbas, Athanasios A. Rontogiannis
A Rough Information Extraction Technique for the Dendritic Cell Algorithm within Imprecise Circumstances

The Dendritic Cell Algorithm (DCA) is an immune-inspired classification algorithm based on the behavior of dendritic cells (DCs). The performance of the DCA depends on the extracted features and their categorization into their specific signal types. These two tasks are performed during the DCA data pre-processing phase and are both based on the Principal Component Analysis (PCA) information-extraction technique. However, using PCA presents a limitation, as it destroys the underlying semantics of the features after reduction. In addition, the DCA uses a crisp separation between the two DC contexts: semi-mature and mature. Thus, the aim of this paper is to develop a novel DCA version based on a two-level hybrid model handling the imprecision occurring within the DCA. At the top level, our proposed algorithm applies a more adequate information-extraction technique based on Rough Set Theory (RST) to build a solid data pre-processing phase. At the bottom level, it applies Fuzzy Set Theory to smooth the crisp separation between the two DC contexts. The experimental results show that our proposed algorithm obtains significantly improved classification accuracy.

Zeineb Chelly, Zied Elouedi
An Autonomous Transfer Learning Algorithm for TD-Learners

The main objective of transfer learning is to use the knowledge acquired from a source task in order to boost the learning procedure in a target task. Transfer learning comprises a suitable solution for reinforcement learning algorithms, which often require a considerable amount of training time, especially when dealing with complex tasks. This work proposes an autonomous method for transfer learning in reinforcement learning agents. The proposed method is empirically evaluated in the keepaway and the mountain car domains. The results demonstrate that the proposed method can improve the learning procedure in the target task.

Anestis Fachantidis, Ioannis Partalas, Matthew E. Taylor, Ioannis Vlahavas
Play Ms. Pac-Man Using an Advanced Reinforcement Learning Agent

Reinforcement Learning (RL) algorithms are promising methods for designing intelligent agents in games. Although their capability of learning in real time has already been proved, the high dimensionality of state spaces in most game domains can be seen as a significant barrier. This paper studies the popular arcade video game Ms. Pac-Man and outlines an approach to deal with its large dynamic environment. Our motivation is to demonstrate that an abstract but informative state-space description plays a key role in the design of efficient RL agents; it lets us speed up the learning process without the need for Q-function approximation. Several experiments were conducted using the multiagent MASON platform, in which we measured the ability of the approach to reach optimal generic policies, which enhances its generalization abilities.

Nikolaos Tziortziotis, Konstantinos Tziortziotis, Konstantinos Blekas
Multi-view Regularized Extreme Learning Machine for Human Action Recognition

In this paper, we propose an extension of the ELM algorithm that is able to exploit multiple action representations. This is achieved by incorporating proper regularization terms in the ELM optimization problem. In order to determine both optimized network weights and action representation combination weights, we propose an iterative optimization process. The proposed algorithm has been evaluated by using the state-of-the-art action video representation on three publicly available action recognition databases, where its performance has been compared with that of two commonly used video representation combination approaches, i.e., the vector concatenation before learning and the combination of classification outcomes based on learning on each view independently.

Alexandros Iosifidis, Anastasios Tefas, Ioannis Pitas
Classifying Behavioral Attributes Using Conditional Random Fields

A human behavior recognition method with an application to political speech videos is presented. We focus on modeling the behavior of a subject with a conditional random field (CRF). The unary terms of the CRF employ spatiotemporal features (i.e., HOG3D, STIP and LBP). The pairwise terms are based on kinematic features such as the velocity and the acceleration of the subject. As an exact solution to the maximization of the posterior probability of the labels is generally intractable, loopy belief propagation was employed as an approximate inference method. To evaluate the performance of the model, we also introduce a novel behavior dataset, which includes low-resolution video sequences depicting different people speaking in the Greek parliament. The subjects of the Parliament dataset are labeled as friendly, aggressive or neutral depending on the intensity of their political speech. The discrimination between friendly and aggressive labels is not straightforward in political speeches, as the subjects perform similar movements in both cases. Experimental results show that the model can reach high accuracy on this relatively difficult dataset.

Michalis Vrigkas, Christophoros Nikou, Ioannis A. Kakadiaris
Rushes Video Segmentation Using Semantic Features

In this paper we describe a method for efficient video rushes segmentation. Video rushes are unedited video footage and contain much repetitive information, since the same scene is shot many times until the desired result is produced. Color histograms have difficulty capturing the scene changes in rushes videos. In the proposed approach, shot frames are represented by semantic feature vectors extracted from existing semantic concept detectors. Moreover, each shot keyframe is represented by the mean of the semantic feature vectors of its neighborhood, defined as the frames that fall inside a window centered at the keyframe. In this way, if a concept exists in most of the frames of a keyframe’s neighborhood, then with high probability it exists in the corresponding keyframe. By comparing consecutive pairs of shots we seek to find changes in groups of similar shots. To improve the performance of our algorithm, we employ a face and body detection algorithm to eliminate false boundaries detected between similar shots. Numerical experiments on TRECVID rushes videos show that our method efficiently segments rushes videos by detecting groups of similar shots.
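The neighborhood-mean keyframe representation described above can be sketched as follows. This is a generic illustration under the assumption that each frame already has a semantic feature vector (the concept detectors producing those vectors are not reproduced here):

```python
def keyframe_representation(frame_vectors, key_idx, window=5):
    """Represent a keyframe by the mean of the semantic feature vectors
    of the frames inside a window centered at the keyframe."""
    lo = max(0, key_idx - window)
    hi = min(len(frame_vectors), key_idx + window + 1)
    neighborhood = frame_vectors[lo:hi]
    dim = len(frame_vectors[0])
    # Component-wise mean over the neighborhood; a concept present in
    # most neighboring frames keeps a high value in the result.
    return [sum(v[d] for v in neighborhood) / len(neighborhood)
            for d in range(dim)]
```

Averaging over the window smooths out detector noise on individual frames, which is the intuition the abstract appeals to.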

Athina Pappa, Vasileios Chasanis, Antonis Ioannidis
Activity Recognition for Traditional Dances Using Dimensionality Reduction

Activity recognition is a complex problem, mainly because of the nature of the data. The data are usually high-dimensional, so applying a classifier directly to them is not always good practice. A common method is to find a meaningful representation of complex data through dimensionality reduction. In this paper we propose novel kernel matrices, based on graph theory, to be used for dimensionality reduction. The proposed kernel can be embedded in a general dimensionality reduction framework. Experiments on a traditional dance recognition dataset are conducted, and the advantage of applying dimensionality reduction before classification is highlighted.

Vasileios Gavriilidis, Anastasios Tefas
Motion Dynamic Analysis of the Basic Facial Expressions

In interaction systems, communication between the user and the computer may be performed using a graphical display of a human representation called an avatar. This paper focuses on the problem of facial motion analysis for human-like animation. Using similarities in motion data, four criteria for grouping characteristic points (facial regions, movement directions, angles and distances) have been proposed. In order to estimate the number of clusters for selected facial expressions, a dedicated algorithm has been applied. Based on the results of subjective assessment, the most satisfying configuration of criteria, in terms of number of clusters and accuracy of emotion recognition, was the group of distance, region and angle between facial markers. As a result, the obtained groups may be used to reduce the number of control parameters necessary to synthesise facial expressions in virtual human systems. The final structure of the characteristic points can diminish overall computational resource usage by decreasing the number of points that need to be recalculated between animation phases. This is due to the fact that the movement similarities were exploited so that groups with the same properties are controlled by dominant markers.

Maja Kocoń
An Intelligent Tool for the Automated Evaluation of Pedestrian Simulation

One of the most cumbersome tasks in the implementation of an accurate pedestrian model is calibration and fine-tuning based on real-life experimental data. Traditionally, this procedure employs the manual extraction of information about the position and locomotion of pedestrians in multiple videos. This paper proposes an automated tool for the evaluation of pedestrian models. It employs state-of-the-art techniques for automated 3D reconstruction, pedestrian detection and data analysis. The proposed method constitutes a complete system which, given a video stream, automatically determines both the workspace and the initial state of the simulation. Moreover, the system is able to track the evolution of the movement of pedestrians. The quality of the pedestrian model is evaluated via automatic extraction of critical information from both real and simulated data.

Evangelos Boukas, Luca Crociani, Sara Manzoni, Giuseppe Vizzari, Antonios Gasteratos, Georgios Ch. Sirakoulis
Study the Effects of Camera Misalignment on 3D Measurements for Efficient Design of Vision-Based Inspection Systems

Vision-based inspection systems for 3D measurements using a single camera are extensively used in several industries today. Due to transportation and/or servicing of these systems, the camera is prone to misalignment from its original position. In such situations, although a high-quality calibration exists, the accuracy of the 3D measurement is affected. In this paper, we propose a statistical tool and methodology that involves: a) studying the significance of the 3D measurement errors caused by camera misalignment; b) modelling the error data using regression models; and c) deducing expressions to determine tolerances on camera misalignment for an acceptable inaccuracy of the system. This tool can be used with any 3D measuring system based on a single camera. The resulting tolerances can be used directly in the mechanical design of camera placement in vision-based inspection systems.

Deepak Dwarakanath, Carsten Griwodz, Paal Halvorsen, Jacob Lildballe
Design and Experimental Validation of a Hybrid Micro Tele-Manipulation System

This paper presents analytical and experimental results on a new hybrid tele-manipulation environment for micro-robot control under non-holonomic constraints. The environment comprises a haptic tele-manipulation subsystem (macro-scale motion) and a visual servoing subsystem (micro-scale motion) under the microscope. The first subsystem includes a 5-dof (degrees of freedom) force-feedback mechanism, acting as the master, and a 2-dof micro-robot, acting as the slave. In the second subsystem, a motion controller based on visual feedback drives the micro-robot. The fact that the slave micro-robot is driven by two centrifugal-force vibration micro-motors makes the presented tele-manipulation environment exceptional and challenging. The unique characteristics and challenges that arise during micromanipulation with the specific device are described and analyzed, and the developed solutions are presented and discussed. Experiments show that, regardless of the disparity between master and slave, the proposed environment facilitates functional and simple micro-robot control during micromanipulation operations.

Kostas Vlachos, Evangelos Papadopoulos
Tackling Large Qualitative Spatial Networks of Scale-Free-Like Structure

We improve the state-of-the-art method for checking the consistency of large qualitative spatial networks that appear in the Web of Data by exploiting the scale-free-like structure observed in their underlying graphs. We propose an implementation scheme that triangulates the underlying graphs of the input networks and uses a hash-table-based adjacency list to represent and reason with them efficiently. We generate random scale-free-like qualitative spatial networks using the Barabási-Albert (BA) model with a preferential attachment mechanism. We test our approach on the existing random datasets that have been used extensively in the literature for evaluating the performance of qualitative spatial reasoners, on our own generated random scale-free-like spatial networks, and on real spatial datasets that have been made available as Linked Data. The analysis and experimental evaluation of our method show significant improvements over the state-of-the-art approach and establish our implementation as the only solution to date that reasons efficiently with large scale-free-like qualitative spatial networks.
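The two ingredients named above, graph triangulation and a hash-table based adjacency list, can be sketched generically. The sketch below chordalises a graph by vertex elimination over a dict-of-sets representation; the elimination order is taken as an input assumption, and none of the qualitative-reasoning machinery of the paper is reproduced:

```python
from collections import defaultdict

def triangulate(edges, order):
    """Chordalise a graph by vertex elimination: for each vertex in the
    given order, pairwise-connect its not-yet-eliminated neighbours
    (fill-in edges), then remove it.  The graph is held as a hash-table
    based adjacency list (dict of sets), giving O(1) expected edge tests."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    work = {u: set(nbrs) for u, nbrs in adj.items()}  # mutable working copy
    fill = set()
    for u in order:
        nbrs = list(work.get(u, ()))
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                if b not in work[a]:          # missing edge -> fill it in
                    work[a].add(b)
                    work[b].add(a)
                    fill.add(frozenset((a, b)))
        for v in nbrs:                         # eliminate u
            work[v].discard(u)
        work.pop(u, None)
    for e in fill:                             # merge fill-in into the result
        a, b = tuple(e)
        adj[a].add(b)
        adj[b].add(a)
    return dict(adj), fill
```

On a 4-cycle a-b-c-d-a, eliminating `a` first adds the single chord b-d, which makes the graph chordal.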

Michael Sioutis, Jean-François Condotta
Modularizing Ontologies for the Construction of $E-\mathcal{SHIQ}$ Distributed Knowledge Bases

Ontology modularization methods aim either to extract modules or to partition ontologies into sets of modules. Each module covers a specific part of the whole that should “make sense”, while preserving additional properties. Ontology modularization may aim at reusability of knowledge, reduction of complexity, efficient reasoning, and tooling support, e.g. for efficient ontology maintenance and evolution. This paper presents a generic framework for the modularization of $\mathcal{SHIQ}$ ontologies, towards the construction of distributed $E-\mathcal{SHIQ}$ knowledge bases. The aim is to compute decompositions for correct, complete and efficient distributed reasoning. The proposed framework combines locality-based rules with graph-based modularization techniques using a generic constraint-problem-solving framework. The paper presents experimental results concerning the modularization task.

Georgios Santipantakis, George A. Vouros
On the ‘in many cases’ Modality: Tableaux, Decidability, Complexity, Variants

The modality ‘true in many cases’ is used to handle non-classical patterns of reasoning, like ‘probably φ is the case’ or ‘normally φ holds’. It is of interest in Knowledge Representation as it has found interesting applications in Epistemic Logic and ‘Typicality’ logics, and it also provides a foundation for defining ‘normality’ conditionals in Non-Monotonic Reasoning. In this paper we contribute to the study of this modality, providing results on the ‘majority logic’ Θ of V. Jauregui. The logic Θ captures a simple notion of ‘a large number of cases’, which has been independently introduced by K. Schlechta and appeared implicitly in earlier attempts to axiomatize the modality ‘probably φ’. We provide a tableaux proof procedure for the logic Θ and prove its soundness and completeness with respect to the class of neighborhood semantics modelling ‘large’ sets of alternative situations. The tableaux-based decision procedure allows us to prove that the satisfiability problem for Θ is NP-complete. We discuss a more natural notion of ‘large’ sets which accurately captures ‘clear majority’ and we prove that it can also be used, at the high cost, however, of destroying the finite model property for the resulting logic. Then, we show how to extend our results to the logic of complete majority spaces, suited for applications where either a proposition or its negation (but not both) is to be considered ‘true in many cases’, a notion useful in epistemic logic.

Costas D. Koutras, Christos Moyzes, Christos Nomikos, Yorgos Zikos
Reasoning in Singly-Connected Directed Evidential Networks with Conditional Beliefs

Directed evidential networks are powerful tools for knowledge representation and uncertain reasoning in a belief function framework. In this paper, we propose an algorithm for the propagation of belief functions in the singly-connected directed evidential networks, when each node is associated with one conditional belief function distribution specified given all its parents.

Wafa Laâmari, Boutheina Ben Yaghlane
A Formal Approach to Model Emotional Agents Behaviour in Disaster Management Situations

Emotions in agent and multi-agent systems change agent behaviour towards a more ‘natural’ way of performing tasks, thus increasing believability. This has various implications for the overall performance of a system. In particular, in situations where emotions play an important role, such as disaster management, it is a challenge to infuse artificial emotions into agents, especially when a plethora of emotion theories are yet to be fully accepted. In this work, we develop a formal model for agents demonstrating emotional behaviour in emergency evacuation. We use state-based formal methods to define agent behaviour in two layers: one that deals with non-emotional behaviour and one that deals with emotional behaviour. The emotional level takes into account emotion structures, personality traits and emotion-contagion models. A complete formal definition of the evacuee agent is given, followed by a short discussion on visual simulation and results that demonstrate the refinement of the formal model into code.

Petros Kefalas, Ilias Sakellariou, Dionysios Basakos, Ioanna Stamatopoulou
Policies Production System for Ambient Intelligence Environments

This paper presents a tool for designing policies that govern the operation of an Ambient Intelligence (AmI) environment in order to minimize energy consumption and automate everyday tasks in smart settlements. The tool works on top of a semantic web services middleware and interacts with the middleware's ontology in order to facilitate the design, monitoring and execution of user-defined rules that control the operation of a network of heterogeneous sensors and actuators. Furthermore, it gives the user the capability to organize these rules into tasks, in order to aggregate and distinguish related rules. The main objective of the system is to provide better monitoring and management of resources, so as to achieve energy efficiency and reduce power consumption. The work presented in this paper is part of the Smart IHU project, developed at the International Hellenic University.

Nikos P. Kotsiopoulos, Dimitris Vrakas
Nature-Inspired Intelligent Techniques for Automated Trading: A Distributional Analysis

Nowadays, the increased level of uncertainty in various sectors poses great burdens on the decision-making process. In the financial domain, a crucial issue is how to properly allocate the available capital among a number of provided assets in order to maximize wealth. Automated trading systems assist this process to a great extent. In this paper, a basic type of such a system is presented. The study focuses on the behavior of this system under changes to its parameter settings. A number of independent simulations have been conducted for the various parameter settings, and distributions of profits/losses have been obtained, leading to interesting concluding remarks.

Vassilios Vassiliadis, Georgios Dounias
Combining Clustering and Classification for Software Quality Evaluation

Source code and metric mining have been used successfully to assist with software quality evaluation. This paper presents a data mining approach which incorporates clustering Java classes, as well as classifying the extracted clusters, in order to assess internal software quality. We use Java classes as entities and static metrics as attributes for data mining. We identify outliers and apply K-means clustering in order to establish clusters of classes. Outliers indicate potentially fault-prone classes, whilst clusters are examined so that we can establish common characteristics. Subsequently, we apply C4.5 to build classification trees for identifying the metrics which determine cluster membership. We evaluate the proposed approach on two well-known open-source software systems, jEdit and Apache Geronimo. The results consolidate key findings from previous work and indicate that combining clustering with classification produces better results than stand-alone clustering.
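The clustering-plus-outlier step can be sketched with a toy 1-D K-means over a single metric value per class; the metric choice, the distance-based outlier rule and its `factor` threshold are illustrative assumptions, and the C4.5 classification stage of the paper is not reproduced:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain K-means on 1-D metric values (e.g. one static metric per
    Java class).  Returns final centroids and a cluster index per point."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centroid
        assign = [min(range(k), key=lambda c: abs(p - centroids[c]))
                  for p in points]
        # move each centroid to the mean of its members
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assign

def flag_outliers(points, centroids, assign, factor=3.0):
    """Flag points unusually far from their own centroid: candidate
    fault-prone classes in the quality-evaluation setting."""
    dists = [abs(p - centroids[a]) for p, a in zip(points, assign)]
    avg = sum(dists) / len(dists)
    return [p for p, d in zip(points, dists) if avg > 0 and d > factor * avg]
```

In the paper's pipeline, the cluster labels produced here would then become the target attribute for a decision-tree learner over the metric vectors.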

Diomidis Papas, Christos Tjortjis
Argument Extraction from News, Blogs, and Social Media

Argument extraction is the task of identifying arguments, along with their components, in text. Arguments can usually be decomposed into a claim and one or more premises justifying it. Among the novel aspects of this work is the thematic domain itself, which relates to Social Media, in contrast to traditional research in the area, which concentrates mainly on legal documents and scientific publications. The huge growth of social media communities, along with their users' tendency to debate, makes the identification of arguments in these texts a necessity. Argument extraction from Social Media is more challenging because the texts may not always contain arguments, as is the case for the legal documents or scientific publications usually studied. In addition, being less formal in nature, texts in Social Media may not even have proper syntax or spelling. This paper presents a two-step approach for argument extraction from social media texts. During the first step, the proposed approach classifies sentences into “sentences that contain arguments” and “sentences that do not contain arguments”. In the second step, it identifies the exact fragments that contain the premises within the sentences that contain arguments, by utilizing conditional random fields. The results significantly exceed the baseline approach and, according to the literature, are quite promising.

Theodosis Goudas, Christos Louizos, Georgios Petasis, Vangelis Karkaletsis
A Learning Analytics Methodology for Student Profiling

On a daily basis, a large amount of data is gathered through the participation of students in e-learning environments. This wealth of data is an invaluable asset to researchers, as they can utilize it to generate conclusions and identify hidden patterns and trends using big data analytics techniques. The purpose of this study is a threefold analysis of the data related to the participation of students in the online forums of their university. On the one hand, the content of the messages posted in these fora can be efficiently analyzed by text mining techniques. On the other hand, the network of students interacting through a forum can be adequately processed through social network analysis techniques. Moreover, the combined knowledge attained from both of the aforementioned techniques can provide educators with practical and valuable information for the evaluation of the learning process, especially in a distance-learning environment. The study was conducted using real data originating from the online forums of the Hellenic Open University (HOU). The analysis of the data was accomplished using the R and Weka tools, in order to analyze the structure and the content of the exchanged messages in these fora as well as to model the interaction of the students in the discussion threads.

Elvira Lotsari, Vassilios S. Verykios, Chris Panagiotakopoulos, Dimitris Kalles
A Profile-Based Method for Authorship Verification

Authorship verification is one of the most challenging tasks in style-based text categorization. Given a set of documents, all by the same author, and another document of unknown authorship, the question is whether or not the latter is also by that author. Recently, in the framework of the PAN-2013 evaluation lab, a competition in authorship verification was organized, and the vast majority of submitted approaches, including the best-performing models, followed the instance-based paradigm, where each text sample by one author is treated separately. In this paper, we show that the profile-based paradigm (where all samples by one author are treated cumulatively) can be very effective, surpassing the performance of the PAN-2013 winners without using any information from external sources. The proposed approach is fully trainable, and we demonstrate an appropriate tuning of parameter settings for the PAN-2013 corpora, achieving accurate answers especially when the cost of false negatives is high.
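The profile-based paradigm can be sketched with character n-gram profiles: all known samples by the author are pooled into one cumulative profile, and the unknown document is compared against it. The dissimilarity measure, the `n`/`top` settings and the threshold decision below are simplified illustrative assumptions, not the tuned method of the paper:

```python
from collections import Counter

def profile(texts, n=3, top=200):
    """Cumulative character n-gram profile of all texts by one author
    (the profile-based paradigm: samples are pooled, not kept separate)."""
    counts = Counter()
    for t in texts:
        counts.update(t[i:i + n] for i in range(len(t) - n + 1))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.most_common(top)}

def dissimilarity(p1, p2):
    """Relative-frequency dissimilarity over the union of n-grams
    (a simplified CNG-style measure; 0 means identical profiles)."""
    grams = set(p1) | set(p2)
    score = 0.0
    for g in grams:
        f1, f2 = p1.get(g, 0.0), p2.get(g, 0.0)
        score += ((f1 - f2) / ((f1 + f2) / 2)) ** 2
    return score / len(grams)

def same_author(known_texts, unknown_text, threshold):
    """Accept the unknown document if its profile is close enough
    to the author's cumulative profile."""
    return dissimilarity(profile(known_texts), profile([unknown_text])) <= threshold
```

An instance-based method would instead compare the unknown document against each known sample separately and aggregate the verdicts.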

Nektaria Potha, Efstathios Stamatatos
Sentiment Analysis for Reputation Management: Mining the Greek Web

Harvesting web and social web data is a meticulous and complex task. Applying the results to a successful business case such as brand monitoring requires high precision and recall for the opinion mining and entity recognition tasks. This work reports on the integrated platform of a state-of-the-art Named-Entity Recognition and Classification (NERC) system and opinion mining methods for a Software-as-a-Service (SaaS) approach to a fully automatic brand-monitoring service for the Greek language. The service has been successfully deployed to the biggest search engine in Greece, powering large-scale linguistic and sentiment analysis of about 80,000 resources per hour.

Georgios Petasis, Dimitrios Spiliotopoulos, Nikos Tsirakis, Panayiotis Tsantilas
Splice Site Recognition Using Transfer Learning

In this work, we consider a transfer learning approach based on K-means for splice site recognition. We use different representations for the sequences, based on n-gram graphs. In addition, a novel representation based on the secondary structure of the sequences is proposed. We evaluate our approach on genomic sequence data from model organisms of varying evolutionary distance. The first results obtained indicate that the proposed representations are promising for the problem of splice site recognition.

Georgios Giannoulis, Anastasia Krithara, Christos Karatsalos, Georgios Paliouras
An Intelligent Platform for Hosting Medical Collaborative Services

Recent developments in cloud computing technologies, the widespread use of mobile smart devices and the expansion of electronic health record systems raise the need for online collaboration among geographically distributed medical personnel. In this context, the paper presents a web-based intelligent platform capable of hosting medical collaborative services and featuring intelligent medical data management and exchange. Our work emphasizes client-side medical data processing over an intelligent online workflow library. We introduce a Remote Process Calling scheme based on the WebRTC (peer-to-peer) communication paradigm, eliminating the typical bandwidth bottleneck of centralized data sharing and allowing the execution of intelligent workflows.

Christos Andrikos, Ilias Maglogiannis, Efthymios Bilalis, George Spyroglou, Panayiotis Tsanakas
Knowledge-Poor Context-Sensitive Spelling Correction for Modern Greek

In the present work a methodology for automatic spelling correction is proposed for common errors in Modern Greek homophones. The proposed methodology corrects an error by taking into account morphosyntactic information about the context of the orthographically ambiguous word. Our methodology is knowledge-poor because the only information used is the endings of the words in the context of the ambiguous word; as such, it can be adopted even by simple editors for real-time spelling correction. We tested our method using ID3, C4.5, Nearest Neighbor, Naive Bayes and Random Forest as machine learning algorithms for predicting the correct spelling. Experimental results show that the success rate of the method is usually between 90% and 95%, sometimes approaching 97%. Synthetic Minority Oversampling was used to cope with the problem of class imbalance in our datasets.
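The knowledge-poor idea, predicting the correct homophone from only the endings of the surrounding words, can be sketched with a tiny Naive Bayes classifier. The example below uses an invented English their/there toy task in place of Greek homophones, and the suffix length and training samples are illustrative assumptions:

```python
from collections import defaultdict
import math

def suffix_features(context, length=3):
    """Knowledge-poor features: only the endings of the words
    surrounding the ambiguous word."""
    return [w[-length:] for w in context]

class NaiveBayes:
    """Minimal multinomial Naive Bayes with add-one smoothing."""
    def __init__(self):
        self.feat = defaultdict(lambda: defaultdict(int))  # label -> suffix -> count
        self.cls = defaultdict(int)                        # label -> count

    def train(self, samples):
        for context, label in samples:
            self.cls[label] += 1
            for f in suffix_features(context):
                self.feat[label][f] += 1

    def predict(self, context):
        def log_score(label):
            total = sum(self.feat[label].values()) or 1
            s = math.log(self.cls[label])
            for f in suffix_features(context):
                s += math.log((self.feat[label][f] + 1) / (total + 1))
            return s
        return max(self.cls, key=log_score)
```

Because the features are bare word endings, the same skeleton works for any language where inflectional suffixes carry the morphosyntactic signal, which is the premise of the paper.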

Spyridon Sagiadinos, Petros Gasteratos, Vasileios Dragonas, Athanasia Kalamara, Antonia Spyridonidou, Katia Kermanidis
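As a reading aid, the oversampling step mentioned in the abstract above can be sketched in a few lines. This is a stdlib-only illustration of the SMOTE idea (interpolating between a minority sample and one of its nearest neighbours), not the authors' implementation; the function name `smote` and the toy data are ours.

```python
import random

def smote(minority, k=2, n_new=4, seed=0):
    """Generate synthetic minority-class samples by interpolating between
    each sample and one of its k nearest neighbours (SMOTE-style)."""
    rng = random.Random(seed)

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of the chosen base sample
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + gap * (n - b) for b, n in zip(base, nb)))
    return synthetic

# three minority samples in a 2-D feature space
minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1)]
new_points = smote(minority)
print(len(new_points))  # 4 synthetic samples
```

Each synthetic point lies on the line segment between two existing minority samples, so the oversampled class stays inside its original region of the feature space.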
An Overview of the ILSP Unit Selection Text-to-Speech Synthesis System

This paper presents an overview of the Text-to-Speech synthesis system developed at the Institute for Language and Speech Processing (ILSP). It focuses on the key issues regarding the design of the system components. The system currently fully supports three languages (Greek, English, Bulgarian) and is designed to be as language- and speaker-independent as possible. Experimental results are also presented which show that the system produces high-quality synthetic speech in terms of naturalness and intelligibility. The system was recently ranked among the top three systems worldwide in terms of achieved quality for the English language at the international Blizzard Challenge 2013 workshop.

Pirros Tsiakoulis, Sotiris Karabetsos, Aimilios Chalamandaris, Spyros Raptis
An Artificial Neural Network Approach for Underwater Warp Prediction

This paper presents an underwater warp estimation approach based on a generalized regression neural network (GRNN). The GRNN, with its function approximation capability, is employed for a priori estimation of the upcoming warped frames using the history of the previous frames. An optical flow technique is employed for determining the dense motion fields of the captured frames with respect to the first frame. The proposed method is independent of the pixel-oscillatory model. It also considers the interdependence of the pixels with their neighborhood. Simulation experiments demonstrate that the proposed method is capable of estimating the upcoming frames with small errors.

Kalyan Kumar Halder, Murat Tahtali, Sreenatha G. Anavatti
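For readers unfamiliar with the GRNN mentioned above, its core is a Nadaraya-Watson kernel-weighted average of the training targets. The stdlib-only sketch below shows that estimate on a hypothetical 1-D history; the function name, bandwidth and data are illustrative assumptions, not the paper's model.

```python
import math

def grnn_predict(x, train_x, train_y, sigma=0.5):
    """GRNN (Nadaraya-Watson) estimate: a Gaussian-kernel-weighted
    average of the training targets, with bandwidth sigma."""
    weights = [math.exp(-sum((a - b) ** 2 for a, b in zip(x, xi))
                        / (2 * sigma ** 2))
               for xi in train_x]
    total = sum(weights)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# toy 1-D "frame history": (time,) -> observed value
train_x = [(0.0,), (1.0,), (2.0,), (3.0,)]
train_y = [0.0, 1.0, 4.0, 9.0]
print(round(grnn_predict((2.5,), train_x, train_y), 2))
```

Because the output is a convex combination of the training targets, the prediction always stays within their range; a smaller `sigma` makes the estimate follow the nearest samples more closely.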
Audio Feature Selection for Recognition of Non-linguistic Vocalization Sounds

Aiming at automatic detection of non-linguistic sounds from vocalizations, we investigate the applicability of various subsets of audio features, which were formed on the basis of ranking the relevance and the individual quality of several audio features. Specifically, based on the ranking of the large set of audio descriptors, we selected subsets and evaluated them on the non-linguistic sound recognition task. During the audio parameterization process, every input utterance is converted to a single feature vector, which consists of 207 parameters. Next, a subset of this feature vector is fed to a classification model, which directly estimates the unknown sound class. The experimental evaluation showed that the feature vector composed of the 50 best-ranked parameters provides a good trade-off between computational demands and accuracy, and that the best recognition accuracy is observed for the 150-best subset.

Theodoros Theodorou, Iosif Mporas, Nikos Fakotakis
Plant Leaf Recognition Using Zernike Moments and Histogram of Oriented Gradients

A method using Zernike Moments and the Histogram of Oriented Gradients for classification of plant leaf images is proposed in this paper. After preprocessing, we compute the shape features of a leaf using Zernike Moments and the texture features using the Histogram of Oriented Gradients, and then a Support Vector Machine classifier is used for plant leaf image classification and recognition. Experimental results show that using both Zernike Moments and the Histogram of Oriented Gradients to classify and recognize plant leaf images yields accuracy that is comparable to or better than the state of the art. The method has been validated on the Flavia and Swedish Leaves datasets, as well as on a combined dataset.

Dimitris G. Tsolakidis, Dimitrios I. Kosmopoulos, George Papadourakis
Ground Resistance Estimation Using Feed-Forward Neural Networks, Linear Regression and Feature Selection Models

This paper proposes ways of estimating the ground resistance of several grounding systems embedded in various ground enhancing compounds. Grounding systems are used to divert high fault currents to the earth. The proper estimation of the ground resistance is useful from both a technical and an economic viewpoint, for the proper electrical installation of constructions. The work utilizes both conventional and intelligent data analysis techniques for ground resistance modelling from field measurements. In order to estimate ground resistance from weather and ground data such as soil resistivity and rainfall measurements, three linear regression models have been applied to a properly selected dataset, as well as an intelligent approach based on feed-forward neural networks. A feature selection process has also been successfully applied, showing that the features selected for estimation agree with experts' opinion on the importance of the variables considered. Experimental data consist of field measurements that have been performed in Greece during the last three years. The input variables used for analysis are related to soil resistivity at various depths and rainfall height during certain periods of time, such as the last week and the last month. Experiments produce high-quality results, as correlation exceeds 99% for specific experimental settings of all approaches tested.

Theopi Eleftheriadou, Nikos Ampazis, Vasilios P. Androvitsaneas, Ioannis F. Gonos, Georgios Dounias, Ioannis A. Stathopulos

Main Conference Short Papers

Developing a Game Server for Humans and Bots

The paper focuses on modern web development tools as a means of implementing a strategy board game in a web environment. We examine various Ajax frameworks in order to create the client side interface and the turn based interaction between players in the web. A rating system is implemented for measuring the skill of human and computer players alike, and a distributed architecture is proposed for aspiring AI researchers and enthusiasts, in order to integrate their game playing bots with the main server of the game.

Nikos Dikaros, Dimitris Kalles
Feature Evaluation Metrics for Population Genomic Data

Single Nucleotide Polymorphisms (SNPs) are nowadays considered one of the most important classes of genetic markers, with a wide range of applications of both scientific and economic interest. Although the advance of biotechnology has made the production of genome-wide SNP datasets feasible, the cost of production is still high. The transformation of the initial dataset into a smaller one with the same genetic information is a crucial task and is performed through feature selection. Biologists evaluate features using methods originating from the field of population genetics. Although several studies have been performed in order to compare the existing biological methods, there is a lack of comparison between methods originating from biology and those originating from machine learning. In this study we present some early results which suggest that biological methods perform slightly better than machine learning methods.

Ioannis Kavakiotis, Alexandros Triantafyllidis, Grigorios Tsoumakas, Ioannis Vlahavas
Online Seizure Detection from EEG and ECG Signals for Monitoring of Epileptic Patients

In this article, we investigate the performance of a seizure detection module for online monitoring of epileptic patients. The module uses as input data streams from electroencephalographic and electrocardiographic recordings. The architecture of the module consists of time- and frequency-domain feature extraction followed by classification. Four classification algorithms were evaluated on three epileptic subjects. The best performance was achieved by the support vector machine algorithm, with accuracy above 90% for two of the subjects and slightly below 90% for the third.

Iosif Mporas, Vasiliki Tsirka, Evangelia I. Zacharaki, Michalis Koutroumanidis, Vasileios Megalooikonomou
Towards Vagueness-Oriented Quality Assessment of Ontologies

Ontology evaluation has been recognized for a long time now as an important part of the ontology development lifecycle, and several methods, processes and metrics have been developed for that purpose. Nevertheless, vagueness is a quality dimension that has been neglected by most current approaches. Vagueness is a common human knowledge and linguistic phenomenon, typically manifested by terms and concepts that lack clear applicability conditions and boundaries, such as high, expert, bad or near. As such, the existence of vague terminology in an ontology may hamper the latter's quality, primarily in terms of shareability and meaning explicitness. With that in mind, in this short paper we argue for the need to include vagueness in the ontology evaluation activity and propose a set of metrics to be used towards that goal.

Panos Alexopoulos, Phivos Mylonas
Vertex Incremental Path Consistency for Qualitative Constraint Networks

The Interval Algebra (IA) and a subset of the Region Connection Calculus, namely RCC-8, are the dominant Artificial Intelligence approaches for representing and reasoning about qualitative temporal and topological relations, respectively. Such qualitative information can be formulated as a Qualitative Constraint Network (QCN). In this framework, one of the main tasks is to compute the path consistency of a given QCN. We propose a new algorithm that applies path consistency in a vertex-incremental manner. Our algorithm enforces path consistency on an initial path-consistent QCN augmented by a new temporal or spatial entity and a new set of constraints, and achieves better performance than the state-of-the-art approach. We evaluate our algorithm experimentally with QCNs of RCC-8 and show the efficiency of our approach.

Michael Sioutis, Jean-François Condotta
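The path-consistency operation the abstract above refers to can be illustrated on a much smaller calculus than RCC-8. The sketch below enforces C[i][j] ← C[i][j] ∩ (C[i][k] ∘ C[k][j]) to a fixpoint over the three-relation Point Algebra; it is the naive, non-incremental version, and the composition table and function names are ours, not the paper's.

```python
from itertools import product

# Point Algebra: base relations {'<', '=', '>'}; a constraint is a set of them.
COMP = {('<', '<'): {'<'}, ('<', '='): {'<'}, ('<', '>'): {'<', '=', '>'},
        ('=', '<'): {'<'}, ('=', '='): {'='}, ('=', '>'): {'>'},
        ('>', '<'): {'<', '=', '>'}, ('>', '='): {'>'}, ('>', '>'): {'>'}}

def compose(r1, r2):
    """Composition of two constraints: union over all base-relation pairs."""
    out = set()
    for a, b in product(r1, r2):
        out |= COMP[(a, b)]
    return out

def path_consistency(n, cons):
    """Enforce C[i][j] <- C[i][j] & compose(C[i][k], C[k][j]) to a fixpoint.
    Returns the refined network, or None if a constraint becomes empty."""
    full = {'<', '=', '>'}
    inv = {'<': '>', '>': '<', '=': '='}
    C = [[set(full) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        C[i][i] = {'='}
    for (i, j), rel in cons.items():
        C[i][j] = set(rel)
        C[j][i] = {inv[r] for r in rel}
    changed = True
    while changed:
        changed = False
        for i, k, j in product(range(n), repeat=3):
            refined = C[i][j] & compose(C[i][k], C[k][j])
            if refined != C[i][j]:
                if not refined:
                    return None  # inconsistent network
                C[i][j] = refined
                changed = True
    return C

# A < B and B < C: path consistency infers A < C
C = path_consistency(3, {(0, 1): {'<'}, (1, 2): {'<'}})
print(C[0][2])  # {'<'}
```

The vertex-incremental algorithm of the paper avoids re-running this full triple loop from scratch when a new entity is added; the fixpoint loop above is the baseline it improves upon.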

Special Session: Action Languages

Being Logical or Going with the Flow? A Comparison of Complex Event Processing Systems

Complex event processing (CEP) is a field that has drawn significant attention in recent years. CEP systems treat incoming information as flows of time-stamped events which may be structured according to some underlying pattern. Their goal is to extract in real time those patterns, or even to learn the patterns which could lead to certain outcomes. Many CEP systems have already been implemented, sometimes with significantly different approaches as to how they represent and handle events. In this paper, we compare the widely used Esper system, which employs an SQL-based language, and RTEC, which is based on a dialect of the Event Calculus.

Elias Alevizos, Alexander Artikis
Event Recognition for Unobtrusive Assisted Living

Developing intelligent systems towards automated clinical monitoring and assistance for the elderly is attracting growing attention. USEFIL is an FP7 project aiming to provide health-care assistance in a smart-home setting. We present the data fusion component of USEFIL which is based on a complex event recognition methodology. In particular, we present our knowledge-driven approach to the detection of Activities of Daily Living (ADL) and functional ability, based on a probabilistic version of the Event Calculus. To investigate the feasibility of our approach, we present an empirical evaluation on synthetic data.

Nikos Katzouris, Alexander Artikis, Georgios Paliouras
Declarative Reasoning Approaches for Agent Coordination

Reasoning about Action and Change (RAC) and Answer Set Programming (ASP) are two well-known fields in AI for logic-based reasoning. Each paradigm bears unique features, and a possible integration can lead to more effective ways to address hard AI problems. In this paper, we report on implementations that embed RAC formalisms and concepts in ASP and present the experimental results obtained, building on a graph-based problem setting that introduces causal and temporal requirements.

Filippos Gouidis, Theodore Patkos, Giorgos Flouris, Dimitris Plexousakis
Reasoning about Actions with Loops

Plans with loops (or loop-plans) are more general and compact than classical sequential plans, and are gaining increasing attention in AI. While many existing approaches focus on algorithmic issues, little work has been devoted to the semantic foundations of planning with loops. In this paper we develop a tailored action language $\mathcal{A}_K^L$ for handling domains with loop-plans and argue that it possesses a “better” semantics than existing work and could serve as a clean, solid semantic foundation for reasoning about actions with loops.

Jiankun He, Yuping Shen, Xishun Zhao

Special Session: Computational Intelligence Techniques for Biosignal Analysis and Evaluation

Computer Aided Classification of Mammographic Tissue Using Shapelets and Support Vector Machines

In this paper a robust regions-of-suspicion (ROS) diagnosis system on mammograms, recognizing all types of abnormalities, is presented and evaluated. A new type of descriptor, based on Shapelet decomposition, derives the source images that generate the observed ROS in mammograms. The Shapelet decomposition coefficients can be used efficiently to detect ROS areas using Support Vector Machines (SVMs) with radial basis function kernels. Extensive experiments using the Mammographic Image Analysis Society (MIAS) database have shown high recognition accuracy, above 86%, for all kinds of breast abnormalities, exceeding the performance of similar decomposition methods based on Zernike moments reported in the literature by more than 8%.

George Apostolopoulos, Athanasios Koutras, Ioanna Christoyianni, Evaggelos Dermatas
Discriminating Normal from “Abnormal” Pregnancy Cases Using an Automated FHR Evaluation Method

Electronic fetal monitoring has become the gold standard for fetal assessment both during pregnancy and during delivery. Even though electronic fetal monitoring was introduced to clinical practice more than forty years ago, there is still controversy about its usefulness, especially due to the high inter- and intra-observer variability. Therefore, the need for a more reliable and consistent interpretation has prompted the research community to investigate and propose various automated methodologies. In this work we propose the use of an automated method for the evaluation of fetal heart rate, the main monitored signal, which is based on a dataset whose labels/annotations are determined using a mixture model of clinical annotations. The successful results of the method suggest that it could be integrated into an assistive technology during delivery.

Jiří Spilka, George Georgoulas, Petros Karvelis, Václav Chudáček, Chrysostomos D. Stylios, Lenka Lhotská
Semi-Automated Annotation of Phasic Electromyographic Activity

Recent research on manual/visual identification of phasic muscle activity utilizing the phasic electromyographic metric (PEM) in human polysomnograms (PSGs) cites evidence that PEM is a potentially reliable quantitative metric to assist in distinguishing between neurodegenerative disorder populations and age-matched controls. However, visual scoring of PEM activity is time consuming, preventing feasible implementation within a clinical setting. Therefore, we propose an assistive, semi-supervised software platform designed and tested to automatically identify and characterize PEM events in a clinical setting, which will be extremely useful for sleep physicians and technicians. The proposed semi-automated approach consists of four levels: A) signal parsing, B) calculation of quantitative features on candidate PEM events, C) classification of PEM and non-PEM events using a linear classifier, and D) post-processing/expert feedback to correct or remove automated misclassifications of PEM and non-PEM events. Performance evaluation of the designed software compared to manual labeling is provided for electromyographic (EMG) activity from the PSG of a control subject. Results indicate that the semi-automated approach provides an excellent benchmark that could be embedded into a clinical decision support system to detect PEM events for use in neurological disorder identification and treatment.

Petros Karvelis, Jacqueline Fairley, George Georgoulas, Chrysostomos D. Stylios, David B. Rye, Donald L. Bliwise
Time Dependent Fuzzy Cognitive Maps for Medical Diagnosis

Time dependence in medical diagnosis is important since symptoms frequently evolve over time, changing with the progression of an illness. Taking into consideration that medical information may be vague, missing and/or conflicting during the diagnostic procedure, a new type of Fuzzy Cognitive Map (FCM), the soft computing technique that can handle uncertainty to infer a result, has been developed for medical diagnosis. Here, a method to enhance FCM behaviour is proposed, introducing time units that can follow disease progression. An example from the pulmonary field is described.

Evangelia Bourgani, Chrysostomos D. Stylios, George Manis, Voula C. Georgopoulos
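The baseline FCM iteration that the paper above extends with time units can be sketched as a sigmoid-squashed weighted update of concept activations. The two-concept map, the weights and the update rule variant below are hypothetical illustrations, not the paper's model.

```python
import math

def fcm_step(state, W, lam=1.0):
    """One FCM update: each concept's new activation is the sigmoid of its
    own current value plus the weighted sum of incoming causal influences."""
    n = len(state)
    return [1.0 / (1.0 + math.exp(-lam * (state[i] +
                sum(W[j][i] * state[j] for j in range(n)))))
            for i in range(n)]

def fcm_run(state, W, steps=20):
    """Iterate the map until (in practice) the activations settle."""
    for _ in range(steps):
        state = fcm_step(state, W)
    return state

# toy map: concept 0 (a symptom) positively influences concept 1 (a diagnosis)
W = [[0.0, 0.8],
     [0.0, 0.0]]
final = fcm_run([0.9, 0.1], W)
print([round(v, 2) for v in final])
```

With a positive causal weight from symptom to diagnosis, the diagnosis concept settles at a higher activation than the symptom itself; a time-dependent FCM would additionally delay when each such influence takes effect.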

Special Session: Game Artificial Intelligence

Flexible Behavior for Worker Units in Real-Time Strategy Games Using STRIPS Planning

In this paper we investigate how STRIPS planning techniques can be used to enhance the behavior of worker units that are common in real-time strategy (RTS) video games. Worker units are typically instructed to carry out simple tasks such as moving to destinations or mining for a type of resource. In this work we investigate how this interaction can be extended by providing the human player with the capability of instructing the worker unit to achieve simple goals. We introduce the “Smart Workers” STRIPS planning domain, and generate a series of planning problems of increasing difficulty and size. We use these problem sets to evaluate the conditions under which this idea can be used in practice in a real video game. The evaluation is performed using a STRIPS planner that is implemented inside a commercial video game development framework.

Ioannis Vlachopoulos, Stavros Vassos, Manolis Koubarakis
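The kind of goal-directed worker behaviour described above can be illustrated with a minimal STRIPS-style planner. The toy domain below (a worker that mines and delivers gold) and the breadth-first search are illustrative assumptions of ours, not the paper's “Smart Workers” domain or planner.

```python
from collections import deque

# Each action maps to (preconditions, add list, delete list) over a set of facts.
ACTIONS = {
    'move_to_mine': ({'at_base'}, {'at_mine'}, {'at_base'}),
    'mine_gold':    ({'at_mine'}, {'has_gold'}, set()),
    'move_to_base': ({'at_mine'}, {'at_base'}, {'at_mine'}),
    'deliver_gold': ({'at_base', 'has_gold'}, {'gold_stored'}, {'has_gold'}),
}

def plan(init, goal):
    """Breadth-first search over states; returns a shortest action sequence
    reaching a state that satisfies the goal, or None if none exists."""
    frontier = deque([(frozenset(init), [])])
    seen = {frozenset(init)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, add, dele) in ACTIONS.items():
            if pre <= state:
                nxt = frozenset((state - dele) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({'at_base'}, {'gold_stored'}))
# ['move_to_mine', 'mine_gold', 'move_to_base', 'deliver_gold']
```

Blind breadth-first search like this scales poorly with domain size, which is exactly why the paper evaluates under what conditions planning remains fast enough for a real video game.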
Opening Statistics and Match Play for Backgammon Games

Players of complex board games like backgammon, chess and go have always wondered what the best opening moves for their favourite game are. In the last decade, computer analysis has offered more insight into many opening variations. This is especially true for backgammon, where computer rollouts have radically changed the way human experts play the opening. In this paper we use Palamedes, the winner of the latest computer backgammon Olympiad, to make the first ever computer-assisted analysis of the opening rolls for the backgammon variants Portes, Plakoto and Fevga (collectively called Tavli in Greece). We then use these results to build effective match strategies for each game variant.

Nikolaos Papahristou, Ioannis Refanidis
Story Generation in PDDL Using Character Moods: A Case Study on Iliad’s First Book

In this paper we look into a simple approach for generating character-based stories using planning and the language of PDDL. A story often involves modalities over properties and objects, such as what the characters believe, desire, request, etc. We look into a practical approach that reifies such modalities into normal objects of the planning domain, and relies on a “mood” predicate to represent the disposition of characters based on these objects. A short story is then generated by specifying a goal for the planning problem expressed in terms of the moods of the characters of the story. As a case study of how such a domain for story generation is modeled, we investigate the story of the first book of Homer’s Iliad as a solution of an appropriate PDDL domain and problem description.

Andrea Marrella, Stavros Vassos

Special Session: Multimodal Recommendation Systems and their Application to Tourism

A Novel Probabilistic Framework to Broaden the Context in Query Recommendation Systems

This paper presents a novel probabilistic framework for broadening the notion of context in web search query recommendation systems. In the relevant literature, query suggestion is typically conducted based on past user actions of the current session, mostly related to query submission. Our proposed framework regards user context in a broader way, consisting of a series of further parameters that express it more thoroughly, such as spatial and temporal ones. Therefore, query recommendation is performed herein by considering the appropriateness of each candidate query suggestion given this broadened context. Experimental evaluation showed that our proposed framework, utilizing spatiotemporal contextual features, is capable of increasing query recommendation performance compared to state-of-the-art methods such as co-occurrence, adjacency and variable-length Markov models (VMM). Due to its generic nature, our framework can operate on the basis of features expressing the user context beyond the ones studied in the present work, e.g. affect-related ones, toward further advancing web search query recommendation.

Dimitris Giakoumis, Dimitrios Tzovaras
iGuide: Socially-Enriched Mobile Tourist Guide for Unexplored Sites

The paper presents iGuide, a system that aims at enabling a socially enriched mobile tourist guide service addressing a much wider range of sites and attractions than existing solutions cover, including historic and traditional settlements, sites of natural beauty, and unattended sites of cultural heritage where access to information is unavailable or not directly provided. The casual visitor will obtain information and guidance while personally contributing to the content enrichment of the visited places by uploading user-generated media (images, videos) along with personalised views about the acquired experience (comments, ratings). At the same time, users will receive supplementary location-based services and recommendations to enhance their visiting experience and facilitate their wandering in places of interest and their direct interaction with local provisions. iGuide aims to offer text-to-speech narration, rich multimedia content including real-time 3D graphics, augmented reality services, and a backend Web 2.0 informational portal and recommender tool.

Sofia Tsekeridou, Vassileios Tsetsos, Aimilios Chalamandaris, Christodoulos Chamzas, Thomas Filippou, Christos Pantzoglou
myVisitPlanner GR: Personalized Itinerary Planning System for Tourism

This application paper presents myVisitPlanner GR, an intelligent web-based system aiming at making recommendations that help visitors and residents of the region of Northern Greece to plan their leisure, cultural and other activities during their stay in this area. The system encompasses a rich ontology of activities, categorized across dimensions such as activity type, historical era, user profile and age group. Each activity is characterized by attributes describing its location, cost, availability and duration range. The system makes activity recommendations based on user-selected criteria, such as visit duration and timing, geographical areas of interest and visit profiling. The user edits the proposed list and the system creates a plan, taking into account temporal and geographical constraints imposed by the selected activities, as well as by other events in the user’s calendar. The user may edit the proposed plan or request alternative plans. A recommendation engine employs non-intrusive machine learning techniques to dynamically infer and update the user’s profile, concerning his preferences for both activities and resulting plans, while taking privacy concerns into account. The system is coupled with a module to semi-automatically feed its database with new activities in the area.

Ioannis Refanidis, Christos Emmanouilidis, Ilias Sakellariou, Anastasios Alexiadis, Remous-Aris Koutsiamanis, Konstantinos Agnantis, Aimilia Tasidou, Fotios Kokkoras, Pavlos S. Efraimidis
Simultaneous Image Clustering, Classification and Annotation for Tourism Recommendation

The exponential increase in the amount of data uploaded to the web has led to a surge of interest in multimedia recommendation and annotation. Due to the vast volume of data, efficient algorithms for recommendation and annotation are needed. Here, a novel two-step approach is proposed, which annotates an image received as input and recommends several tourist destinations strongly related to the image. It is based on probabilistic latent semantic analysis and hypergraph ranking enhanced with the visual attributes of the images. The proposed method is tested on a dataset of 30000 images bearing text information (e.g., title, tags) collected from Flickr. The experimental results are very promising, as they achieve a top rank precision of 80% for tourism recommendation.

Konstantinos Pliakos, Constantine Kotropoulos
Backmatter
Metadata
Title
Artificial Intelligence: Methods and Applications
Edited by
Aristidis Likas
Konstantinos Blekas
Dimitris Kalles
Copyright year
2014
Publisher
Springer International Publishing
Electronic ISBN
978-3-319-07064-3
Print ISBN
978-3-319-07063-6
DOI
https://doi.org/10.1007/978-3-319-07064-3