
About this Book

This book constitutes the refereed proceedings of the 18th International Conference on Artificial Intelligence: Methodology, Systems, and Applications, AIMSA 2018, held in Varna, Bulgaria, in September 2018.
The 22 revised full papers and 7 poster papers presented were carefully reviewed and selected from 72 submissions. They cover a wide range of topics in AI: from machine learning to natural language systems, from information extraction to text mining, from knowledge representation to soft computing, and from theoretical issues to real-world applications.



Natural Language Processing


A New Approach to the Supervised Word Sense Disambiguation

The paper presents a new supervised approach to the all-words word sense disambiguation (WSD) task that avoids the need to construct a separate specialized classifier for each target word. At the core of the approach lies a new interpretation of the notion of 'class', which relates each possible meaning of a word to the frequency with which it occurs in some corpora. In this way, all possible senses of different words can be classified uniformly into a restricted set of classes, from the most frequent to the least frequent. For representing target and context words, the approach uses word embeddings and information about their part-of-speech (POS) categories. The experiments have shown that classifiers trained on examples created by means of the approach outperform the standard baselines for all-words WSD classifiers.
Gennady Agre, Daniel Petrov, Simona Keskinova
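The reinterpretation of 'class' described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation; the sense identifiers and frequencies are hypothetical.

```python
# Illustrative sketch of the unified "class" notion: each sense of a word
# is mapped to a frequency-rank class, so senses of *different* words
# share one restricted label set (0 = most frequent sense).
def sense_rank_classes(sense_counts, num_classes):
    """sense_counts: {sense_id: corpus frequency} for one word.
    Returns {sense_id: class index}; ranks beyond num_classes - 1
    collapse into the last (least frequent) class."""
    ranked = sorted(sense_counts, key=sense_counts.get, reverse=True)
    return {s: min(i, num_classes - 1) for i, s in enumerate(ranked)}

# Hypothetical sense frequencies for two different words
bank = {"bank.n.01": 520, "bank.n.02": 310, "bank.n.03": 12}
plant = {"plant.n.01": 400, "plant.n.02": 90}
bank_classes = sense_rank_classes(bank, num_classes=3)
plant_classes = sense_rank_classes(plant, num_classes=3)
```

Because both words now map into the same label set {0, 1, 2}, one classifier can in principle serve all target words.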

Explorations in Sentiment Mining for Arabic and English Tweets

Assigning sentiment labels to documents is, at first sight, a standard multi-label classification task. As such, it seems likely that standard machine learning algorithms such as deep neural networks (DNNs) will provide an effective approach. We describe an alternative approach, involving the construction of a weighted lexicon of sentiment terms, which significantly outperforms the use of DNNs. The moral of the story is that DNNs are not a universal panacea, and that paying attention to the nature of the data that you are trying to learn from can be more important than trying out ever more powerful general purpose machine learning algorithms.
Tariq Ahmad, Allan Ramsay, Hanady Ahmed

Intent Detection System Based on Word Embeddings

Intent detection is one of the main tasks of a dialogue system. In this paper we present our intent detection system, which is based on FastText word embeddings and a neural network classifier. We find a significant improvement in the FastText sentence vectorization. The results show that our intent detection system provides state-of-the-art results on three English datasets, outperforming many popular services.
Kaspars Balodis, Daiga Deksne

Indirect Association Rules Mining in Clinical Texts

This paper presents a method for structured information extraction from patient status descriptions. The proposed approach is based on indirect association rules mining (IARM) in clinical text. The method is language independent and unsupervised, which makes it suitable for applications in low-resource languages. The experiments use data from the Bulgarian Diabetes Register, which is automatically generated from pseudonymized reimbursement requests (outpatient records) submitted to the Bulgarian National Health Insurance Fund in 2010–2016 for more than 5 million citizens yearly. Experiments were run on data collections with patient status data only. The great variety of possible values (conditions) makes this task challenging: the classical frequent itemset mining algorithms identify only a few frequent pairs, even for small minimal support. The results of the proposed IARM method show that attribute-value pairs of anatomical organs/systems and their conditions can be identified automatically. The IARM approach allows the extraction of indirect relations between item pairs with support below the minimal support.
Svetla Boytcheva
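The idea of an indirect association, i.e. a pair of items that rarely co-occur directly but frequently co-occur with a common mediator, can be sketched as follows. Definitions of indirect association in the literature vary, so this is an illustration of the general concept rather than the paper's exact method, and the toy patient-status records are hypothetical.

```python
from itertools import combinations

def indirect_pairs(transactions, min_sup, max_direct):
    """Find item pairs (a, b) whose direct co-occurrence support is at
    most max_direct but that each co-occur with some common 'mediator'
    item at least min_sup times (illustrative sketch only)."""
    def sup(items):
        return sum(1 for t in transactions if items <= t)
    all_items = set().union(*transactions)
    found = []
    for a, b in combinations(sorted(all_items), 2):
        if sup({a, b}) > max_direct:
            continue
        mediators = [m for m in sorted(all_items - {a, b})
                     if sup({a, m}) >= min_sup and sup({b, m}) >= min_sup]
        if mediators:
            found.append((a, b, mediators))
    return found

# Hypothetical patient-status items: two organs never co-occur directly
# in a record, but share the mediating conditions "enlarged" and "normal".
records = [{"liver", "enlarged"}, {"liver", "normal"},
           {"spleen", "enlarged"}, {"spleen", "normal"}]
```

Running `indirect_pairs(records, min_sup=1, max_direct=0)` surfaces the (liver, spleen) pair via its mediating conditions, even though its direct support is zero, which mirrors the low-direct-support situation described in the abstract.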

Towards Automated Customer Support

Recent years have seen growing interest in conversational agents, such as chatbots, which are a very good fit for automated customer support because the domain in which they need to operate is narrow. This interest was in part inspired by recent advances in neural machine translation, esp. the rise of sequence-to-sequence (seq2seq) and attention-based models such as the Transformer, which have been applied to various other tasks and have opened new research directions in question answering, chatbots, and conversational systems. Still, in many cases, it might be feasible and even preferable to use simple information retrieval techniques. Thus, here we compare three different models: (i) a retrieval model, (ii) a sequence-to-sequence model with attention, and (iii) Transformer. Our experiments with the Twitter Customer Support Dataset, which contains over two million posts from customer support services of twenty major brands, show that the seq2seq model outperforms the other two in terms of semantics and word overlap.
Momchil Hardalov, Ivan Koychev, Preslav Nakov

Evaluation of Automatic Tag Sense Disambiguation Using the MIRFLICKR Image Collection

Automatic identification of intended tag meanings is a challenge in large image collections, where human authors assign tags inspired by emotional or professional motivations. Algorithms for automatic tag disambiguation need “golden” collections of manually created tags to establish baselines for accuracy assessment. Here we show how to use the MIRFLICKR-25000 collection to evaluate the performance of our algorithm for tag sense disambiguation, which identifies the meanings of image tags based on WordNet or Wikipedia. We present three different types of observations on the disambiguated tags: (i) accuracy evaluation, (ii) evaluation of the semantic similarity of the individual tags to the image category, and (iii) evaluation of the semantic similarity of an image tagset to the image category, using different word embedding models for the latter two. We show how word embeddings create a specific baseline so the results can be compared. The accuracy we achieve is 78.6%.
Olga Kanishcheva, Ivelina Nikolova, Galia Angelova

Echo State Network for Word Sense Disambiguation

Recent developments in the area report numerous applications of recurrent neural networks to Word Sense Disambiguation (WSD), whose generalization properties have allowed increased prediction accuracy even in situations with sparse knowledge. Since the traditionally used LSTM networks demand enormous computational power and time to be trained, the aim of the present work is to investigate the possibility of applying a recently proposed fast-trainable RNN, namely the Echo State Network (ESN). The preliminary results reported here demonstrate the applicability of ESNs to WSD.
Petia Koprinkova-Hristova, Alexander Popov, Kiril Simov, Petya Osenova

Constrained Permutations for Computing Textual Similarity

A wide range of algorithms for computing textual similarity have been proposed. Much recent work has been aimed at calculating lexical similarity, but in general such calculations have to be treated as components in larger algorithms for computing similarity between sentences.
In the current paper we describe a refinement of the well-known dynamic time warping (DTW) algorithm for calculating the string edit distance between a pair of texts. The refined version of this algorithm allows for a range of constrained permutations without increasing the complexity of the underlying algorithm.
Allan Ramsay, Amal Alshahrani

A Study on Dialog Act Recognition Using Character-Level Tokenization

Dialog act recognition is an important step for dialog systems since it reveals the intention behind the uttered words. Most approaches on the task use word-level tokenization. In contrast, this paper explores the use of character-level tokenization. This is relevant since there is information at the sub-word level that is related to the function of the words and, thus, their intention. We also explore the use of different context windows around each token, which are able to capture important elements, such as affixes. Furthermore, we assess the importance of punctuation and capitalization. We performed experiments on both the Switchboard Dialog Act Corpus and the DIHANA Corpus. In both cases, the experiments not only show that character-level tokenization leads to better performance than the typical word-level approaches, but also that both approaches are able to capture complementary information. Thus, the best results are achieved by combining tokenization at both levels.
Eugénio Ribeiro, Ricardo Ribeiro, David Martins de Matos

Neural Methods for Cross-Lingual Sentence Compression

Sentence compression produces a shorter sentence by removing redundant information while preserving grammaticality and the important content. We propose an improvement to current neural deletion systems. These systems output a binary sequence of labels for an input sentence: one indicates that the token from the source sentence remains in the compression, whereas zero indicates that the token should be removed. Our main improvement is the use of a Conditional Random Field as the final layer, which benefits the decoding of the best global sequence of labels for a given input. In addition, we evaluate the incorporation of syntactic features, which can improve grammaticality. Finally, the task is extended to a cross-lingual setting in which the models are evaluated on English and Portuguese. The proposed architecture achieves results better than or equal to those of the current state-of-the-art systems, validating that the model benefits from the modification in both languages.
Frederico Rodrigues, Bruno Martins, Ricardo Ribeiro
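The benefit of a CRF final layer is precisely that decoding considers the whole label sequence rather than each token independently. A minimal Viterbi decoding sketch over keep/delete labels is shown below; the emission and transition scores are hypothetical stand-ins for what a trained network and CRF would produce.

```python
def viterbi(emissions, transitions):
    """Best global label sequence (the decoding a CRF layer performs).
    emissions[t][y]: score of label y at position t;
    transitions[p][y]: score of moving from label p to label y.
    For sentence compression: 0 = delete token, 1 = keep token."""
    n = len(emissions[0])
    score = list(emissions[0])
    back = []
    for em in emissions[1:]:
        prev = score
        score, ptr = [], []
        for y in range(n):
            best = max(range(n), key=lambda p: prev[p] + transitions[p][y])
            score.append(prev[best] + transitions[best][y] + em[y])
            ptr.append(best)
        back.append(ptr)
    y = max(range(n), key=lambda k: score[k])
    path = [y]
    for ptr in reversed(back):   # follow back-pointers to recover the path
        y = ptr[y]
        path.append(y)
    return path[::-1]

# Hypothetical scores: per-token keep/delete preferences plus a small
# bonus for keep -> keep transitions, encouraging contiguous kept spans.
emissions = [[0.0, 1.0], [0.5, 0.4], [0.0, 1.0]]
transitions = [[0.0, 0.0], [0.0, 0.2]]
labels = viterbi(emissions, transitions)
```

Note how the middle token is kept despite its slightly higher delete score: the transition bonus makes the globally best sequence differ from the per-token argmax, which is the point of decoding globally.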

Towards Constructing a Corpus for Studying the Effects of Treatments and Substances Reported in PubMed Abstracts

We present the construction of an annotated corpus of PubMed abstracts reporting positive, negative or neutral effects of treatments or substances. Our ultimate goal is to annotate one sentence (rationale) for each abstract and to use this resource as a training set for text classification of effects discussed in PubMed abstracts. Currently, the corpus consists of 750 abstracts. We describe the automatic processing that supports the corpus construction, the manual annotation activities, and some features of the medical language in the abstracts selected for the annotated corpus. It turns out that recognizing the terminology and the abbreviations is key to determining the rationale sentence. The corpus will be applied to improve our classifier, which currently has an accuracy of 78.80%, achieved with normalization of the abstract terms based on UMLS concepts from specific semantic groups and an SVM with a linear kernel. Finally, we discuss some other possible applications of this corpus.
Evgeni Stefchov, Galia Angelova, Preslav Nakov

Recursive Style Breach Detection with Multifaceted Ensemble Learning

We present a supervised approach for style change detection, which aims at predicting whether there are changes in the style in a given text document, as well as at finding the exact positions where such changes occur. In particular, we combine a TF.IDF representation of the document with features specifically engineered for the task, and we make predictions via an ensemble of diverse classifiers including SVM, Random Forest, AdaBoost, MLP, and LightGBM. Whenever the model detects that style change is present, we apply it recursively, looking to find the specific positions of the change. Our approach powered the winning system for the PAN@CLEF 2018 task on Style Change Detection.
Daniel Kopev, Dimitrina Zlatkova, Kristiyan Mitov, Atanas Atanasov, Momchil Hardalov, Ivan Koychev, Preslav Nakov

Machine Learning and Data Mining Applications


Improving Machine Learning Prediction Performance for Premature Termination of Psychotherapy

Premature termination of psychosocial treatments is one of the major challenges in psychotherapy, with negative consequences for the patient, the therapist and the healthcare system as a whole. The aim of this pilot study is to use machine learning approaches to identify parameters for dropout prediction based on either symptom questionnaires or open-ended diary questions in a specific population with borderline personality disorder (BPD). For this purpose, the Borderline Symptom Checklist (BSL-23) was used to create machine learning models to predict therapy dropout. We discovered that the BSL-23 data does not contain relevant predictors. Furthermore, we present the concept of a private digital therapy diary (PDTD), which we use to investigate and predict dropout parameters in order to alert therapists to the danger of premature termination for individual patients.
Martin Bohus, Stephan Gimbel, Nora Goerg, Bernhard G. Humm, Martin Schüller, Marc Steffens, Ruben Vonderlin

Exploring the Usability of the Dice CAPTCHA by Advanced Statistical Analysis

This paper introduces a new study of Dice CAPTCHA usability based on advanced statistical analysis. An experiment is performed on a population of 197 Internet users, characterised by age and Internet experience, who are asked to solve the Dice CAPTCHA on a laptop or tablet computer. The response time, i.e. the time needed to successfully solve the CAPTCHA, together with the number of tries, is registered for each user. The collected data are then subjected to association rule mining to analyse how the response time for solving the CAPTCHA in a given number of tries depends on the co-occurrence of the user's features. This analysis is very useful for understanding the co-occurrence of factors influencing the solution of the CAPTCHA and, accordingly, for realising which CAPTCHA is closer to the “ideal” CAPTCHA.
Darko Brodić, Alessia Amelio, Ivo R. Draganov, Radmila Janković

Time Series Analysis for Sales Prediction

In this paper, we present an approach to forecasting the number of paintings that will be sold daily by Vivre Deco S.A. Vivre is an online retailer for Home and Lifestyle in Central and Eastern Europe. One of its concerns relates to the stock it needs to keep at its own warehouse (considering its limited available space) to ensure a good product flow that maximizes both the company's profit and the users' satisfaction. Since stocks are directly connected to sales, the purpose is to predict the amount of sales for each category of products, given the selling history of these products. Thus, we have chosen one category of products (paintings) and used ARIMA to obtain the predictions. We present different considerations regarding how we chose the model, along with the solver and the optimization method for fitting ARIMA. We also discuss the influence of differencing on the obtained results, along with information about the runtime of the different models.
Costin-Gabriel Chiru, Vlad-Valentin Posea

Machine Learning-Driven Noise Separation in High Variation Genomics Sequencing Datasets

Genomics studies increasingly have to deal with datasets containing high variation between the sequenced nucleotide chains. This is most common in metagenomics and polyploid studies, where the biological nature of the studied samples requires the analysis of multiple variants of nearly identical sequences. The high variation makes it more difficult to determine the correct nucleotide sequences and to distinguish signal from noise, producing digital results with higher error rates than those achievable in samples with low variation. This paper presents an original, purely machine learning-based approach for detecting and potentially correcting those errors. It uses a generic machine learning model that can be applied to different types of sequencing data with minor modifications. As presented in a separate part of this work, these models can be combined with data-specific selection of error candidates for refined error discovery, but, as shown here, they can also be used independently.
Milko Krachunov, Maria Nisheva, Dimitar Vassilev

Machine Learning Techniques for Survival Time Prediction in Breast Cancer

The use of machine learning in disease prediction and prognosis is part of a growing trend of personalized and predictive medicine. Cancer studies are a domain of active machine learning application, in particular with respect to the accuracy of cancer prognosis and prediction. The accuracy of survival time prediction in breast cancer is the main object of the study. Two major features for survival time prediction based on clinical data are used: a tumor-integrated clinical feature created in the study and the Nottingham prognostic index. The applied machine learning methods, together with data normalisation and classification, provide promising results for the accuracy of survival time prediction. The results showed the superiority of the linear support vector regression models and the decision tree regression models for more accurate prediction of survival time in breast cancer. Cross-validation, based on four parameters for error evaluation, confirms the results of the model performance concerning the accuracy of survival time prediction in breast cancer.
Iliyan Mihaylov, Maria Nisheva, Dimitar Vassilev

Knowledge Representation, Reasoning and Search


Tractable Classes in Exactly-One-SAT

In this paper, we aim at proposing a new approach for defining tractable classes for the Exactly-One-SAT problem (EO-SAT for short). EO-SAT is the problem of deciding whether a given CNF formula has a model such that each clause has exactly one true literal. Our first tractable class is defined by a simple property that has to be satisfied by every three clauses sharing at least one literal. In a similar way, our second tractable class is obtained from a property that has to be satisfied by particular sequences of clauses. The proposed tractable classes can, in a sense, be seen as natural counterparts of tractable classes of the maximum independent set problem.
Yazid Boumarafi, Yakoub Salhi
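The EO-SAT decision problem can be stated compactly. Below is a naive exponential checker, for illustration only (the point of the paper is to identify classes solvable in polynomial time, which this sketch does not attempt).

```python
from itertools import product

def exactly_one_sat(clauses, n_vars):
    """Naive exponential EO-SAT check: is there an assignment under
    which every clause has exactly one true literal?
    A literal is +i (variable i) or -i (its negation), i being 1-based."""
    for bits in product([False, True], repeat=n_vars):
        def true(lit):
            return bits[abs(lit) - 1] == (lit > 0)
        if all(sum(true(l) for l in clause) == 1 for clause in clauses):
            return True
    return False

# (x1 OR x2) has a model with exactly one true literal (x1=True, x2=False),
# whereas requiring exactly one true literal in both [x1] and [-x1] fails.
```

This brute-force baseline is what a tractable class lets one avoid: for formulas inside such a class, the same question is decidable without enumerating all 2^n assignments.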

Semantic Meta-search Using Cohesion Network Analysis

Online searching is one of the most frequently performed actions, and search engines need to provide relevant results while maintaining scalability. In this paper we introduce a novel approach grounded in Cohesion Network Analysis, in the form of a semantic search engine incorporated into our Hub-Tech platform. Our aim is to help researchers and people unfamiliar with a domain find meaningful articles online that are relevant to their project scope. In addition, we integrate state-of-the-art technologies to ensure scalability and low response time, namely SOLR for data storage and full-text search functionalities, and Akka for parallel and distributed processing. Preliminary validations show promising search results, the software being capable of suggesting articles in approximately the same way as humans consider them most appropriate: 75% are close results and the top 20% are identical to user recommendations. Moreover, Hub-Tech recommended more suitable articles than Google Scholar for our specific task of searching for articles related to a detailed description given as an input query (50+ words).
Ionut Daniel Chelcioiu, Dragos Corlatescu, Ionut Cristian Paraschiv, Mihai Dascalu, Stefan Trausan-Matu

A Query Language for Cognitive Maps

A cognitive map is a graphical semantic model that represents influences between concepts. A cognitive map is easy to design and use, but the only query a user can make on it is to infer the propagated influence from one concept to another. This paper (this work was carried out in the context of “Analyse Cognitive de Savoirs”, a computer science project funded by the French region Pays de la Loire) proposes CMQL, a general query language for cognitive maps. This language provides a way to query the different items of the model, not only the propagated influence. The language is declarative and is inspired by the domain relational calculus.
Adrian Robert, David Genest, Stéphane Loiseau

Approaches for Enumerating All the Essential Prime Implicants

The aim of this paper is to study the problem of enumerating all the essential prime implicants (EPIes) of a CNF formula. We first provide some interesting computational complexity results. In particular, we show that the problem of checking whether a prime implicant of a CNF formula is essential is NP-complete. Then, we propose a simple characterization of the e-models of a CNF formula; an e-model is a model covered by a unique prime implicant, which is necessarily essential. This characterization is then used to define a linear-time algorithm for checking whether a model of a CNF formula is an e-model. Finally, using our characterization of the e-models, we propose two approaches for enumerating all the EPIes of a CNF formula.
Yakoub Salhi

Evolving a Team of Asymmetric Predator Agents That Do Not Compute in Predator-Prey Pursuit Problem

We herein revisit the predator-prey pursuit problem using very simple predator agents. The latter, intended to model the emerging micro- and nano-robots, are morphologically simple: they feature a single line-of-sight sensor and a simple control of their two thrusters. The agents are behaviorally simple as well: their decision-making involves no computing, but rather a direct mapping of the few perceived environmental states into the corresponding pairs of thrust values. We apply genetic algorithms to evolve such a mapping that results in successful behavior of the team of these predator agents. To enhance the generality of the evolved behavior, we propose an asymmetric morphology of the agents: an angular offset of their sensor. Our experimental results verify that offsets of both 20° and 30° yield efficient and consistent evolution of successful behaviors of the agents in all tested initial situations.
Ivan Tanev, Milen Georgiev, Katsunori Shimohara, Thomas Ray



Semantic Graph Based Automatic Summarization of Multiple Related Work Sections of Scientific Articles

The summarization of scientific articles and particularly their related work sections would support the researchers in their investigation by allowing them to summarize a large number of articles. Scientific articles differ from generic text due to their specific structure and inclusion of citation sentences. Related work sections of scientific articles generally describe the most important facts of prior related work. Automatically summarizing these sections would support research development by speeding up the research process and consequently enhancing research quality. However, these sections may overlap syntactically and semantically. This research proposes to explore the automatic summarization of multiple related work sections. More specifically, the research goals of this work are to reduce the redundancy of citation sentences and enhance the readability of the generated summary by investigating a semantic graph-based approach and cross-document structure theory. These approaches have proven successful in the field of abstractive document summarization.
Nouf Ibrahim Altmami, Mohamed El Bachir Menai

Collective Lévy Walk for Efficient Exploration in Unknown Environments

One of the key tasks of autonomous mobile robots is to explore unknown environments under limited energy and deadline conditions. In this paper, we focus on one of the most efficient random walks found in natural and biological systems, the Lévy walk. We show how Lévy properties disappear in larger robot swarms because of spatial interference, and we propose a novel behavioral algorithm to preserve Lévy properties at the collective level. Our initial findings hold the potential to accelerate target search in large unknown environments by parallelizing Lévy exploration using a group of robots.
Yara Khaluf, Stef Van Havermaet, Pieter Simoens
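A single-robot Lévy walk (not the collective algorithm proposed in the paper) can be sketched via inverse-transform sampling of a heavy-tailed step-length distribution; the parameter values below are illustrative.

```python
import math
import random

def levy_step(mu=2.0, l_min=1.0, rng=random):
    """Sample a step length from p(l) ~ l**(-mu) with l >= l_min via
    inverse-transform sampling; 1 < mu <= 3 gives the heavy tails
    characteristic of Levy walks."""
    u = rng.random()                      # u in [0, 1)
    return l_min * (1.0 - u) ** (1.0 / (1.0 - mu))

def levy_walk(n_steps, mu=2.0, seed=42):
    """2-D Levy walk: heavy-tailed step lengths, uniform headings."""
    rng = random.Random(seed)
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        l = levy_step(mu, rng=rng)
        x += l * math.cos(theta)
        y += l * math.sin(theta)
        path.append((x, y))
    return path
```

The occasional very long steps produced by the power-law tail are what make the walk efficient for sparse-target search; the paper's contribution concerns preserving this property when many such walkers interfere.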

A BLAST-Based Algorithm to Find Evenly Distributed Unique Subsequences

Detection of genomic rearrangements involves processing large amounts of DNA data, and the efficiency of the algorithms used is therefore crucial. We propose an algorithm based on evenly distributed unique subsequences. In this paper, BLAST-based pattern matching is examined in terms of computation time and detection quality. The experiments were carried out on real sequences with artificially introduced random rearrangements. The algorithm extension was implemented as part of the genomecmp web application, which provides a graphical user interface for ease and convenience of use.
Maciej Kulawik, Robert M. Nowak

Implementation of Multilayer Perceptron in Graphics Processing Unit

Computer technologies often rely on methods based on exact calculations. In search algorithms, for example, the goal is to find exactly a given element in a given set: a record in a database is retrieved by looking for an exact value in a given field, such as the identifier of the record or the values in a group of fields.
Ventsislav Nikolov

Semantic Annotation Modelling for Protein Functions Prediction

Functional protein annotation is a key phase in the analysis of de-novo sequenced genomes. Automatic annotation tools are often unable to remove wrong annotations associated with contradictions and non-compliance with biological terms. In this study, we introduce a semantic model for the representation of functional annotations based on the Resource Description Framework (RDF) standard.
We have integrated several databases containing information about protein sequences with ontologies describing the functional relationships of protein molecules. By using Web Ontology Language (OWL) axioms, RDF storage engines are able to decide which candidate annotations should be marked as biologically unviable because they do not withstand the reality checks associated with coexistence, subcellular location and species affiliation [1]. This approach reduces the number of false positives and the time spent in the curation of machine annotations. The presented semantic data model is designed to combine the semantic representation of annotations with examples designed for machine learning.
The current work is part of a large-scale project for the functional annotation of plant genomes.
Deyan Peychev, Irena Avdjieva

ReadME – Enhancing Automated Writing Evaluation

Writing is a central skill needed for learning that is tightly linked to text comprehension. Good writing skills are gained through practice and are characterized by clear and organized language, accurate grammar usage, strong text cohesion, and sophisticated wording. Providing constructive feedback can help learners improve their writing; however, providing feedback is a time-consuming process. The aim of this paper is to present an updated version of the tool ReadME, which generates automated and personalized feedback designed to help learners improve the quality of their writing. Sampling a corpus of over 15,000 essays, we used the ReaderBench framework to generate more than 1,200 textual complexity indices. These indices were then grouped into six writing components using a Principal Component Analysis. Based on the components generated by the PCA, as well as individual index values, we created an extensible rule-based engine to provide personalized feedback at four granularity levels: document, paragraph, sentence, and word levels. The ReadME tool consists of a multi-layered, interactive visualization interface capable of providing feedback to writers by highlighting sections of texts that may benefit from revision.
Maria-Dorinela Sirbu, Robert-Mihai Botarleanu, Mihai Dascalu, Scott A. Crossley, Stefan Trausan-Matu

Feature Selection Based on Logistic Regression for 2-Class Classification of Multidimensional Molecular Data

This paper describes a classification system that uses a feature selection method based on the logistic regression algorithm. As the feature elimination criterion, the variance inflation factor of the statistical logistic regression model is used. The experimental results show that this method can be successfully applied to feature selection in classification problems on multidimensional microarray data.
Sebastian Student, Alicja Płuciennik, Michał Jakubczak, Krzysztof Fujarewicz
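The variance inflation factor used as the elimination criterion can be sketched directly. This is an illustrative NumPy reconstruction of the standard VIF definition, not the authors' pipeline, and the threshold and data are hypothetical.

```python
import numpy as np

def vif(X, j):
    """Variance inflation factor of column j of design matrix X:
    VIF_j = 1 / (1 - R^2), where R^2 comes from regressing X[:, j]
    on the remaining columns (plus an intercept)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([others, np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 / (1.0 - r2)

def drop_high_vif(X, threshold=10.0):
    """Iteratively drop the column with the highest VIF until all
    remaining VIFs are below the threshold; returns kept column indices."""
    cols = list(range(X.shape[1]))
    while len(cols) > 1:
        vifs = [vif(X[:, cols], k) for k in range(len(cols))]
        worst = max(range(len(vifs)), key=vifs.__getitem__)
        if vifs[worst] < threshold:
            break
        cols.pop(worst)
    return cols
```

A near-collinear column (e.g. the sum of two others) receives a very large VIF and is eliminated first, which is the behaviour the abstract's criterion relies on.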

