
About this book

The two-volume set LNAI 12319 and 12320 constitutes the proceedings of the 9th Brazilian Conference on Intelligent Systems, BRACIS 2020, held in Rio Grande, Brazil, in October 2020.

The total of 90 papers presented in these two volumes was carefully reviewed and selected from 228 submissions.

The contributions are organized in the following topical sections:

Part I: Evolutionary computation, metaheuristics, constraints and search, combinatorial and numerical optimization; neural networks, deep learning and computer vision; and text mining and natural language processing.

Part II: Agent and multi-agent systems, planning and reinforcement learning; knowledge representation, logic and fuzzy systems; machine learning and data mining; and multidisciplinary artificial and computational intelligence and applications.

Due to the COVID-19 pandemic, BRACIS 2020 was held as a virtual event.

Table of Contents


Correction to: Semi-Supervised Sentiment Analysis of Portuguese Tweets with Random Walk in Feature Sample Networks

The authors inadvertently released this chapter without correcting an error in the title. This has now been fixed, and the corrected title reads: “Semi-Supervised Sentiment Analysis of Portuguese Tweets with Random Walk in Feature Sample Networks”.

Pedro Gengo, Filipe A. N. Verri

Evolutionary Computation, Metaheuristics, Constraints and Search, Combinatorial and Numerical Optimization


A New Hybridization of Evolutionary Algorithms, GRASP and Set-Partitioning Formulation for the Capacitated Vehicle Routing Problem

This work presents a new hybrid method based on the route-first-cluster-second approach, using Greedy Randomized Adaptive Search Procedure (GRASP), Differential Evolution (DE), Evolutionary Local Search (ELS) and a set-partitioning problem (SPP) formulation to solve well-known instances of the Capacitated Vehicle Routing Problem (CVRP). The CVRP consists of minimizing the cost of a fleet of vehicles serving a set of customers from a single depot, in which every vehicle has the same capacity. The DE heuristic is used to build an initial feasible solution, and ELS is applied until a local minimum is found during the local search phase of the GRASP. Finally, the SPP model provides a new optimal solution with regard to the solutions built during the GRASP. We performed computational experiments on benchmarks available in the literature, and the results show that our method was effective in solving CVRP instances with satisfactory performance. Moreover, a statistical test shows that there is no significant difference between the best known solutions of the benchmark instances and the solutions of the proposed method.

André Manhães Machado, Maria Claudia Silva Boeres, Rodrigo de Alvarenga Rosa, Geraldo Regis Mauri
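
The GRASP loop described above (greedy randomized construction followed by local search) can be illustrated on a toy symmetric TSP. This is a minimal skeleton under assumed parameter names, not the paper's CVRP implementation; the 2-opt neighborhood stands in for the ELS phase:

```python
import math
import random

def tour_length(tour, pts):
    """Total length of a closed tour over 2D points."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def greedy_randomized_tour(pts, alpha, rng):
    """Construction phase: at each step pick uniformly among the `alpha`
    nearest unvisited cities (the restricted candidate list)."""
    unvisited = set(range(1, len(pts)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        rcl = sorted(unvisited, key=lambda c: math.dist(pts[last], pts[c]))[:alpha]
        nxt = rng.choice(rcl)
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

def two_opt(tour, pts):
    """Local-search phase: first-improvement 2-opt until a local minimum."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                cand = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(cand, pts) < tour_length(tour, pts) - 1e-9:
                    tour, improved = cand, True
    return tour

def grasp(pts, iterations=20, alpha=3, seed=0):
    """Repeat construction + local search, keep the best tour found."""
    rng = random.Random(seed)
    best = None
    for _ in range(iterations):
        cand = two_opt(greedy_randomized_tour(pts, alpha, rng), pts)
        if best is None or tour_length(cand, pts) < tour_length(best, pts):
            best = cand
    return best
```

The `alpha` parameter sizes the restricted candidate list, trading greediness for diversification across restarts.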

An Evolutionary Algorithm for Learning Interpretable Ensembles of Classifiers

Ensembles of classifiers are a very popular type of method for performing classification, due to their usually high predictive accuracy. However, ensembles have two drawbacks. First, ensembles are usually considered a ‘black box’, non-interpretable type of classification model, mainly because typically there are a very large number of classifiers in the ensemble (and often each classifier in the ensemble is a black-box classifier by itself). This lack of interpretability is an important limitation in application domains where a model’s predictions should be carefully interpreted by users, like medicine, law, etc. Second, ensemble methods typically involve many hyper-parameters, and it is difficult for users to select the best settings for those hyper-parameters. In this work we propose an Evolutionary Algorithm (an Estimation of Distribution Algorithm) that addresses both these drawbacks. This algorithm optimizes the hyper-parameter settings of a small ensemble of 5 interpretable classifiers, which allows users to interpret each classifier. In our experiments, the ensembles learned by the proposed Evolutionary Algorithm achieved the same level of predictive accuracy as a well-known Random Forest ensemble, but with the benefit of learning interpretable models (unlike Random Forests).

Henry E. L. Cagnini, Alex A. Freitas, Rodrigo C. Barros
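
The Estimation of Distribution Algorithm family used above can be illustrated with a minimal univariate (PBIL-style) sketch over a binary hyper-parameter string; the OneMax objective below is just a stand-in for validation accuracy, and all names are illustrative:

```python
import random

def pbil(fitness, n_bits, pop_size=20, generations=100, lr=0.1, seed=42):
    """Univariate EDA (PBIL-style): sample a population from a vector of
    Bernoulli parameters, then shift that vector toward the best sample."""
    rng = random.Random(seed)
    probs = [0.5] * n_bits                 # one Bernoulli parameter per bit
    best, best_fit = None, float("-inf")
    for _ in range(generations):
        pop = [[1 if rng.random() < p else 0 for p in probs]
               for _ in range(pop_size)]
        elite = max(pop, key=fitness)
        if fitness(elite) > best_fit:
            best, best_fit = elite, fitness(elite)
        # move the probabilistic model toward the elite sample
        probs = [(1 - lr) * p + lr * b for p, b in zip(probs, elite)]
    return best, probs

def onemax(bits):
    """Toy objective standing in for a validation-accuracy measure."""
    return sum(bits)
```

In the paper's setting, each bit string would instead encode hyper-parameter settings for the small ensemble.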

An Evolutionary Analytic Center Classifier

Classification is an essential task in the field of Machine Learning, where developing a classifier that minimizes errors on unknown data is one of its central problems. It is known that the analytic center is a good approximation of the center of mass of the version space that is consistent with the Bayes-optimal decision surface. Therefore, in this work, we propose an evolutionary algorithm, relying on the convexity properties of the version space, that evolves a population of perceptron classifiers in order to find a solution that approximates its analytic center. Hyperspherical coordinates are used to guarantee feasibility when generating new individuals and to enable exploration that is uniformly distributed through the search space. To evaluate the individuals, we use a potential function that employs a logarithmic barrier penalty. Experiments were performed on real datasets, and the obtained results indicate concrete possibilities for applying the proposed algorithm to practical problems.

Renan Motta Goulart, Saulo Moraes Villela, Carlos Cristiano Hasenclever Borges, Raul Fonseca Neto

Applying Dynamic Evolutionary Optimization to the Multiobjective Knapsack Problem

Real-world discrete problems are often also dynamic, making them very challenging to optimize. Here we focus on the use of evolutionary algorithms to deal with such problems. In the last few years, many evolutionary optimization algorithms have been applied to dynamic problems, and some of the most recent papers investigate formulations with more than one objective to be optimized at the same time. Although evolutionary optimization has produced very competitive algorithms for different applications, both multiobjective formulations and dynamic problems require specific strategies to perform well. In this work, we investigate four algorithms proposed for dynamic multiobjective problems: DNSGA-II, MOEA/D-KF, MS-MOEA and DNSGA-III. The first three were previously proposed in the literature, where they were applied only to continuous problems. We aim to observe the behavior of these algorithms on a discrete problem: the Dynamic Multiobjective Knapsack Problem (DMKP). Our results show that some of them are also promising for problems with discrete search spaces.

Thiago Fialho de Queiroz Lafetá, Gina Maira Barbosa de Oliveira
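
The multiobjective setting above rests on Pareto dominance to compare knapsack solutions; a minimal sketch of the dominance test and nondominated filtering (for maximization over objective vectors) is:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (maximization):
    at least as good in every objective and strictly better in one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def nondominated(solutions):
    """Filter a list of objective vectors down to the Pareto front."""
    return [a for a in solutions if not any(dominates(b, a) for b in solutions)]
```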

Backtracking Group Search Optimization: A Hybrid Approach for Automatic Data Clustering

Data clustering is one of the most fundamental tasks in pattern recognition, although it is known to be an NP-hard grouping task. Given its complexity, standard clustering methods, such as partitional data clustering algorithms, are easily trapped in local minimum solutions due to their lack of good global search mechanisms. Evolutionary Algorithms (EAs) and Swarm Intelligence (SI) methods, such as Group Search Optimization (GSO) and Backtracking Search Optimization (BSA), are commonly employed to deal with the clustering task, given their ability to handle global search problems. In this work, a new hybrid evolutionary algorithm combining GSO and BSA, named BGSO, is presented to tackle the clustering problem; it combines the best features of GSO with the historical mechanisms of BSA. BGSO is also developed as an Automatic Clustering approach, meaning that it is able to predict the best number of final clusters, so no prior assumption about the data set at hand is required. The proposed approach is compared to standard GSO, BSA and three other EAs and SI methods from the literature on nine real-world problems, showing promising results on four clustering metrics.

Luciano Pacifico, Teresa Ludermir

Dynamic Software Project Scheduling Problem with PSO and Dynamic Strategies Based on Memory

The Software Project Scheduling Problem (SPSP) aims to allocate employees to tasks in the development of a software project such that the cost and duration, two conflicting goals, are minimized. The dynamic model of SPSP, called DSPSP, considers that unpredictable events may occur during the project life cycle, such as the arrival of new tasks, which requires updating the schedule throughout the project. In the context of Search-Based Software Engineering, this work proposes the use of memory-based dynamic optimization strategies together with the particle swarm optimization (PSO) algorithm to solve the DSPSP. The results suggest that adding these dynamic strategies improves the quality of the solutions in comparison with applying the PSO algorithm alone.

Gabriel Fontes da Silva, Leila Silva, André Britto

Evaluation of Metaheuristics in the Optimization of Laguerre-Volterra Networks for Nonlinear Dynamic System Identification

The main objective of nonlinear dynamic system identification is to model the behaviour of the systems under analysis from input-output signals. To approach this problem, the Laguerre-Volterra network architecture combines the connectionist approach with Volterra-based processing to achieve good performance when modeling high-order nonlinearities, while retaining interpretability of the system’s characteristics. In this research we assess the performances of three metaheuristics in the optimization of Laguerre-Volterra Networks using synthetic input-output data, a task in which only the simulated annealing metaheuristic was previously evaluated.

Victor O. Costa, Felipe M. Müller

EvoLogic: Intelligent Tutoring System to Teach Logic

This article presents the cognitive model of the EvoLogic Intelligent Tutoring System, developed to assist in the teaching and learning of Natural Deduction in Propositional Logic. EvoLogic consists of three agents, among which the Pedagogical agent (treated here as the student model) and the Specialist agent (based on a Genetic Algorithm) compose the cognitive model. This cognitive model allows an efficient model-tracing mechanism to be developed, which follows each step a student takes during a theorem proof. In addition to presenting EvoLogic, the article analyzes the efficiency of the ITS on a known exercise that has already been studied in the literature (applied to 57 students). The results show that EvoLogic generated all the solutions presented by the students, allowing it to follow the students' steps and provide real-time feedback, a capability known as model tracing.

Cristiano Galafassi, Fabiane F. P. Galafassi, Eliseo B. Reategui, Rosa M. Vicari

Impacts of Multiple Solutions on the Lackadaisical Quantum Walk Search Algorithm

The lackadaisical quantum walk is a graph search algorithm for 2D grids whose vertices have a self-loop of weight l. Since the technique depends considerably on this l, research efforts have been estimating its optimal value for different scenarios, including 2D grids with multiple solutions. However, two previous works used different stopping conditions for their simulations. Here, we first show that these stopping conditions are not interchangeable. After settling this pending question and defining the stopping condition properly, we analyze the impacts of multiple solutions on the final results achieved by the technique, which is the main contribution of this work. In doing so, we demonstrate that the success probability is inversely proportional to the density of vertices marked as solutions and directly proportional to the relative distance between solutions. The relations presented here are guaranteed only for high values of the input parameters because, from different points of view, we show that a disturbed transition range exists for small values.

Jonathan H. A. de Carvalho, Luciano S. de Souza, Fernando M. de Paula Neto, Tiago A. E. Ferreira

Multi-objective Quadratic Assignment Problem: An Approach Using a Hyper-Heuristic Based on the Choice Function

The Quadratic Assignment Problem (QAP) is a combinatorial optimization problem belonging to the NP-hard class. QAP assigns interconnected facilities to locations while minimizing the transportation cost of the flow of commodities between facilities. A Hyper-Heuristic (HH) is a high-level approach that automatically selects or generates heuristics for solving complex problems. This paper proposes the use of a selection HH to solve the multi-objective QAP (mQAP). The HH is based on MOEA/DD (an evolutionary many-objective optimization algorithm based on dominance and decomposition) and the Choice Function strategy. The heuristics selected by the HH correspond to the operators that generate new solutions in an iteration of the multi-objective evolutionary algorithm. The IGD metric and statistical tests are applied to evaluate algorithm performance on 22 mQAP instances. The effectiveness of the proposed method is shown, and it compares favorably with three other evolutionary multi-objective algorithms: IBEA, SMS-EMOA and MOEA/D-DRA.

Bianca N. K. Senzaki, Sandra M. Venske, Carolina P. Almeida
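
A Choice Function for heuristic selection typically blends recent individual performance, performance when following the previously applied heuristic, and the time since each heuristic was last used. A minimal sketch follows; the weights and score components are illustrative, not the paper's exact formulation:

```python
def choice_function(f1, f2, f3, prev, phi=0.5, delta=0.5):
    """Score each low-level heuristic h by recent solo performance f1[h],
    performance when applied right after the previous heuristic f2[(prev, h)],
    and elapsed time since h was last used f3[h] (which keeps rarely used
    heuristics in play); return the best-scoring heuristic."""
    scores = {h: phi * f1[h] + phi * f2[(prev, h)] + delta * f3[h] for h in f1}
    return max(scores, key=scores.get)
```

The `f3` term is what gives the selection its exploratory pressure: a heuristic left idle long enough eventually wins regardless of past performance.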

On Improving the Efficiency of Majorization-Minorization for the Inference of Rank Aggregation Models

The Plackett-Luce model represents discrete choices from a set of items and is often applied to rank aggregation problems. The iterative majorization-minorization method is among the most relevant approaches for finding the maximum likelihood estimate of the parameters of the Plackett-Luce model, but its convergence can be slow. A noninformative initialization is usually adopted, which assumes all items are equally relevant at the first step of the iterative inference process. This paper investigates the adoption of approximate inference methods that allow a better initialization, leading to a smaller number of iterations required for the convergence of majorization-minorization. Two alternatives are adopted: a spectral inference method from the literature and a novel approach based on a Poisson probabilistic model. Empirical evaluation is performed using synthetic and real-world datasets. The results reveal that the initialization provided by an approximate method can lead to statistically significant reductions in both the number of iterations required and the overall computational time, when compared to the scheme usually adopted for majorization-minorization.

Leonardo Ramos Emmendorfer
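
The majorization-minorization update for Plackett-Luce weights can be sketched in a few lines. This is a simplified Hunter-style iteration with the noninformative uniform initialization mentioned above, purely for illustration, not the paper's implementation:

```python
def mm_step(rankings, gamma):
    """One majorization-minorization update for Plackett-Luce weights:
    each item's win count divided by the accumulated inverse total weight
    of every 'choice pool' the item appeared in."""
    wins = {i: 0.0 for i in gamma}
    denom = {i: 0.0 for i in gamma}
    for r in rankings:                    # r lists items, best first
        for t in range(len(r) - 1):
            pool = r[t:]                  # items still available at step t
            total = sum(gamma[i] for i in pool)
            wins[r[t]] += 1.0             # r[t] was chosen from this pool
            for i in pool:
                denom[i] += 1.0 / total
    new = {i: (wins[i] / denom[i]) if denom[i] else gamma[i] for i in gamma}
    norm = sum(new.values())
    return {i: v / norm for i, v in new.items()}

def fit_plackett_luce(rankings, items, iters=100):
    """Uniform (noninformative) initialization, then repeated MM steps."""
    gamma = {i: 1.0 / len(items) for i in items}
    for _ in range(iters):
        gamma = mm_step(rankings, gamma)
    return gamma
```

An informed initialization, as the paper proposes, would replace the uniform `gamma` with weights produced by an approximate method.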

On the Multiple Possible Adaptive Mechanisms of the Continuous Ant Colony Optimization

Among the existing techniques to improve the performance of metaheuristics in optimization problems, adaptive parameter control consists in varying one or more parameters of a given metaheuristic according to some indicator of the search conditions. This approach allows metaheuristics to change algorithmic behaviour during the search, and is particularly relevant for the optimization of dynamic problems. In this research we theoretically analyse in which ways the parameters of the ant colony optimization for continuous domains metaheuristic can be adapted, regarding how each parameter influences exploration and exploitation characteristics of the algorithm. Our experimental contributions include validating the colony success rate as a search condition estimator and choosing suitable maps from this estimator to the parameters q and ξ of the algorithm. Beyond that, we compare the performances of three proposed adaptive versions of the base metaheuristic and show the benefits of simultaneously adapting multiple parameters.

Victor O. Costa, Felipe M. Müller

A Differential Evolution Algorithm for Contrast Optimization

Image enhancement is one of the most important phases of an image processing system, and contrast enhancement plays a key role in it. Histogram Equalization (HE) is one of the main tools used to improve the contrast of an image. However, HE increases the natural brightness of the image, which is undesirable in many applications, such as consumer electronics products. To address these limitations, this paper proposes a variation of the Differential Evolution metaheuristic for Contrast Optimization, called DECO. The results obtained were statistically compared with other techniques and metaheuristic algorithms and show that DECO is competitive with them.

Artur Leandro da Costa Oliveira, André Britto
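
For reference, the classic Histogram Equalization baseline that DECO-style methods aim to improve on can be sketched as follows, in a minimal version for a flat list of gray levels:

```python
def equalize(pixels, levels=256):
    """Classic histogram equalization: remap each gray level through the
    normalized cumulative histogram, stretching the used range."""
    hist = [0] * levels
    for v in pixels:
        hist[v] += 1
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(run)
    cdf_min = next(c for c in cdf if c > 0)   # first occupied bin
    n = len(pixels)
    lut = [round((c - cdf_min) / max(n - cdf_min, 1) * (levels - 1))
           for c in cdf]
    return [lut[v] for v in pixels]
```

The brightness shift criticized in the abstract is visible here: the output mean is pulled toward mid-gray regardless of the input's original mean.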

Neural Networks, Deep Learning and Computer Vision


A Deep Learning Approach for Pulmonary Lesion Identification in Chest Radiographs

Radiography is a primary examination used to diagnose chest conditions, as it is fast, low cost, and widely available. If the physician cannot conclude the diagnosis from the radiograph, a computed tomography scan may be required. However, this exam is expensive and has low availability, mainly in the public health systems of developing countries and low-income locations, which can delay treatment and cause complications to the patient's health condition. Computer-aided diagnosis systems provide more resources for medical diagnostic decision-making, increasing the accuracy of the assessment of the patient's clinical condition. The main objective of this work is to develop a deep-learning-based approach that performs an automatic analysis of digital images of chest radiographs to aid the detection of pulmonary nodules and masses, aiming to extract sufficient relevant information from the image and optimize the initial phase of the diagnosis of lung lesions. The developed approach uses neural networks on a dataset of 8,178 annotated chest radiographs extracted from a public dataset: half of the images are annotated with “nodule” or “mass”, and the other half with “no findings”. We implemented and tested convolutional neural networks and data preprocessing techniques to create a classification model. A model with five convolution layers achieved 0.72 accuracy, 0.75 sensitivity, and 0.68 specificity. The proposed approach achieved results comparable to the state of the art for lesion identification using limited computational power and can assist radiological practice as a second opinion, which can improve rates of early cancer diagnosis.

Eduardo Henrique Pais Pooch, Thatiane Alves Pianoschi Alva, Carla Diniz Lopes Becker

A Pipelined Approach to Deal with Image Distortion in Computer Vision

Image classification is a well-established problem in computer vision. Most state-of-the-art models rely on Convolutional Neural Networks to achieve near-human performance in that task. However, CNNs have been shown to be susceptible to image manipulation, which undermines the trustworthiness of perception systems. This property is critical, especially in unmanned systems, autonomous vehicles, and scenarios where lighting cannot be controlled. We investigate the robustness of several deep-learning-based image recognition models and how their accuracy is affected by several distinct image distortions. The distortions include ill-exposure, low-range image sensors, and common noise types. Furthermore, we also propose and evaluate an image pipeline designed to minimize image distortion before image classification is performed. Results show that most CNN models are only marginally affected by mild mis-exposure and shot noise. On the one hand, the proposed pipeline can provide significant gains on mis-exposed images. On the other hand, harsh mis-exposure, signal-dependent noise, and impulse noise have a high impact on all evaluated models.

Cristiano Rafael Steffens, Lucas Ricardo Vieira Messias, Paulo Lilles Jorge Drews-Jr, Silvia Silva da Costa Botelho
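
One of the distortions studied, ill-exposure, is often modeled as a gamma curve. The sketch below simulates mis-exposure and applies a naive correction step (a bisection on mean brightness); it is purely illustrative and not the paper's pipeline:

```python
def apply_gamma(pixels, g):
    """Simulate exposure distortion on 8-bit gray levels:
    g > 1 darkens (under-exposure), g < 1 brightens (over-exposure)."""
    return [round(255 * (v / 255) ** g) for v in pixels]

def estimate_gamma(pixels, target_mean=127.5):
    """Bisection for the gamma that brings the mean level back toward
    mid-gray; mean brightness decreases monotonically in g."""
    lo, hi = 0.1, 10.0
    for _ in range(60):
        mid = (lo + hi) / 2
        mean = sum(apply_gamma(pixels, mid)) / len(pixels)
        if mean > target_mean:
            lo = mid          # still too bright: push gamma up
        else:
            hi = mid
    return (lo + hi) / 2
```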

A Robust Automatic License Plate Recognition System for Embedded Devices

Automatic License Plate Recognition (ALPR) systems are used in many real-world applications, such as road traffic monitoring and traffic law enforcement, and the use of deep learning can result in efficient methods. In this work, we present an ALPR system suited to edge computing, using a combination of MobileNet-SSD for vehicle detection, Tiny YOLOv3 for license plate detection and OCR-net for character recognition. The method was evaluated on two datasets on an NVIDIA Jetson TX2 system, obtaining 96.87% accuracy at 8 FPS on a public real-world scenario dataset and 90.56% accuracy at 11 FPS on a private dataset of traffic monitoring images, considering the recognition of at least six characters. It is faster than related works with similar deep learning approaches, which achieved at most 2 FPS, and only slightly inferior in accuracy, with less than a 10% difference in the worst scenario. This shows the proposed method is well balanced between accuracy and speed and thus suitable for embedded devices.

Lucas S. Fernandes, Francisco H. S. Silva, Elene F. Ohata, Aldísio Medeiros, Aloísio V. Lira Neto, Yuri L. B. Nogueira, Paulo A. L. Rego, Pedro Pedrosa Rebouças Filho

Assessing Deep Learning Models for Human-Robot Collaboration Collision Detection in Industrial Environments

The increasing adoption of industrial robots to boost production efficiency is making human-robot collaborative scenarios much more frequent. In this context, technical factory workers need to be safe from collisions at all times and prepared for emergencies and potential accidents. Another trend in industrial automation is the use of machine learning techniques - specifically, deep learning algorithms - for image classification. Following these trends, this work evaluates the application of deep learning models to detect physical collisions in human-robot interactions. Security camera images are used as the primary information source for intelligent collision detection. Unlike other approaches in the literature that apply sensors such as Light Detection And Ranging (LIDAR), Laser Range Finders (LRF), or the robots' torque sensors, this work considers no extra sensors, using only 2D cameras. Results show more than 99% accuracy in the evaluated scenarios, revealing that approaches adopting deep learning algorithms are promising for human-robot collision avoidance in industrial scenarios. The proposed models may support safety in industrial environments and reduce the impact of collision accidents.

Iago R. R. Silva, Gibson B. N. Barbosa, Carolina C. D. Ledebour, Assis T. Oliveira Filho, Judith Kelner, Djamel Sadok, Silvia Lins, Ricardo Souza

Diagnosis of Apple Fruit Diseases in the Wild with Mask R-CNN

A major challenge in image classification tasks using machine learning, and in particular deep neural networks, is domain shift at deployment. This happens when images during usage are captured in conditions different from those used during training. In this paper, we show that although previous work on the diagnosis of apple tree diseases using standard Convolutional Neural Networks displays high accuracy, it does so only when no domain shift is present. When the trained model is asked to classify photos of apples taken in the wild, a 22% reduction in F1 score is observed. We propose to treat the task as a segmentation problem and test two different approaches, showing that using Mask R-CNN not only improves performance in the original domain by 3%, but also significantly reduces losses in the new domain (only a 6% reduction in F1 score). We establish segmentation as an important alternative for improving the diagnosis of apple tree diseases from photos.

Ramásio Ferreira de Melo, Gustavo Lameirão de Lima, Guilherme Ribeiro Corrêa, Bruno Zatt, Marilton Sanchotene de Aguiar, Gilmar Ribeiro Nachtigall, Ricardo Matsumura Araújo

Ensemble of Algorithms for Multifocal Cervical Cell Image Segmentation

One of the main challenges for cell segmentation is to separate overlapping cells, which is also a challenging task for cytologists. Here we propose a method that combines different algorithms for cervical cell segmentation of Pap smear images and searches for the best result by maximizing a similarity coefficient. We carried out experiments with three state-of-the-art segmentation algorithms on images with clumps of cervical cells. We extracted features such as the coefficient of variation and overlapping ratios for each cell grouping and selected the most appropriate algorithm to segment each cell clump. As the decision criterion, we identified the cell clumps of the training dataset and calculated the mentioned features. We segmented each clump with each of the algorithms and computed the Dice measure of each segmentation. Finally, we used the kNN classifier to predict the best algorithm among the k neighboring clumps by choosing the one with the largest number of wins. We validated our proposal on multifocal cervical cell images and obtained an average Dice of around 76.6% without using a threshold value. These results demonstrate that the proposed ensemble of segmentation algorithms is promising and suitable for cervical cell image segmentation.

Geovani L. Martins, Daniel S. Ferreira, Fátima N. S. Medeiros, Geraldo L. B. Ramalho
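
The Dice similarity used as the selection criterion, and the per-clump choice of the best algorithm, can be sketched as follows (an illustrative simplification of the method, with masks represented as pixel sets):

```python
def dice(mask_a, mask_b):
    """Dice similarity between binary masks given as sets of (row, col)
    foreground pixels: 2|A ∩ B| / (|A| + |B|)."""
    if not mask_a and not mask_b:
        return 1.0
    return 2 * len(mask_a & mask_b) / (len(mask_a) + len(mask_b))

def best_algorithm(segmentations, reference):
    """Choose the algorithm whose mask best matches the reference."""
    return max(segmentations,
               key=lambda name: dice(segmentations[name], reference))
```

In the full method, this per-clump choice produces the training labels that the kNN classifier later predicts from the clump features.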

Improving Face Recognition Accuracy for Brazilian Faces in a Criminal Investigation Department

This work addresses a critical problem in the use of Face Recognition (FR) by a state police department in Brazil. FR is a valuable crime-fighting tool that can help the police prevent and detect crime, preserve public safety, and bring offenders to justice. Although significant advances have been made in recent years, most works are based on large labeled datasets and supervised training. With this approach, the lack of a representative data distribution is an issue, known as data bias, mainly with respect to two aspects that make FR harder: gender and race. Recent works have suggested that these two aspects may cause a significant accuracy drop. Thus, this paper is concerned with the FR data bias problem for Brazilian faces. Using pre-trained models learned from public datasets, we demonstrate that, even with a small training dataset, it is possible to improve accuracy on Brazilian faces with simple yet effective fine-tuning tricks. Two important conclusions were obtained from this study using a non-public police dataset. First, there is a strong suggestion of data bias concerning ethnicity when evaluating models trained on public datasets against Brazilian faces; second, fine-tuning on the non-public police dataset yielded a relevant improvement, minimizing the dataset bias problem.

Jones José da Silva Júnior, Anderson Silva Soares

Neural Architecture Search in Graph Neural Networks

Performing analytical tasks over graph data has become increasingly interesting due to the ubiquity and large availability of relational information. However, unlike images or sentences, there is no notion of sequence in networks. Nodes (and edges) follow no absolute order, and it is hard for traditional machine learning (ML) algorithms to recognize a pattern and generalize their predictions to this type of data. Graph Neural Networks (GNNs) successfully tackled this problem and became popular after the generalization of the convolution concept to the graph domain. However, they possess a large number of hyper-parameters, and their design and optimization are currently done by hand, based on heuristics or empirical intuition. Neural Architecture Search (NAS) methods appear as an interesting solution to this problem. In this direction, this paper compares two NAS methods for optimizing GNNs: one based on reinforcement learning and a second based on evolutionary algorithms. Results consider 7 datasets over two search spaces and show that both methods obtain accuracies similar to a random search, raising the question of how many of the search space dimensions are actually relevant to the problem.

Matheus Nunes, Gisele L. Pappa

People Identification Based on Soft Biometrics Features Obtained from 2D Poses

An important challenge in the field of biometrics is real-time identification at a distance, in uncontrolled environments, using low-resolution cameras. In such circumstances, soft biometrics can be the only option. In this work, we propose two novel descriptor methods for biometric identification, based on an ensemble of anthropometric measurements and on joint heat-maps of the person's skeleton, captured from video frames through state-of-the-art 2D pose estimation methods. The proposed methods were assessed on a popular benchmark dataset, CASIA Gait Dataset B, and obtained good results (85% and 89% rank-1 identification rates, respectively) with the PifPaf 2D pose estimation method.

Henrique Leal Tavares, João Baptista Cardia Neto, João Paulo Papa, Danilo Colombo, Aparecido Nilceu Marana
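
An anthropometric descriptor of the kind described can be sketched as normalized limb lengths computed from 2D keypoints; the joint names and the normalization by torso length below are assumptions for illustration, not the paper's exact feature set:

```python
import math

# illustrative joint pairs; the paper's exact measurement set is not listed here
BONES = [("shoulder_l", "elbow_l"), ("elbow_l", "wrist_l"),
         ("hip_l", "knee_l"), ("knee_l", "ankle_l")]

def anthropometric_features(pose, torso=("shoulder_l", "hip_l")):
    """Limb lengths from 2D keypoints, normalized by torso length so the
    descriptor does not change with the subject's distance to the camera."""
    scale = math.dist(pose[torso[0]], pose[torso[1]])
    return [math.dist(pose[a], pose[b]) / scale for a, b in BONES]
```

Scale invariance is the key property for identification at a distance: the same person at different depths yields the same feature vector.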

Texture Analysis Based on Structural Co-occurrence Matrix Improves the Colorectal Tissue Characterization

Colorectal cancer causes the deaths of thousands of people worldwide, according to the World Health Organization. Automatic tissue recognition in histopathological images is essential for early disease diagnosis. Most research employs texture descriptors to capture features that identify tumor samples. However, accurate multi-class classification is a challenge due to the complexity of colorectal tissue images. Recently, researchers have shown that analyzing texture structural patterns degraded by image filtering provides valuable features for pre-diagnosis in several medical applications. Here we propose an approach to automatically classify eight types of colorectal tissue using the Structural Co-occurrence Matrix. We carried out experiments on 5000 tissue patches from a public dataset to evaluate our algorithm, considering two scenarios: structural differences as a single descriptor, and combined with other characteristics. We found that our strategy improves on the state of the art, achieving 91.30% accuracy, 91.41% precision, 91.31% sensitivity, 98.76% specificity and 91.31% F1-score.

Elias P. Medeiros, Daniel S. Ferreira, Geraldo L. B. Ramalho

Unsupervised Learning Method for Encoder-Decoder-Based Image Restoration

Restoring a corrupted image is a challenge for computer vision and image processing. For hazy, underwater and medical images, the lack of paired images leads the state of the art to synthesize datasets. Generative Adversarial Networks (GANs) are widely used in these cases; however, computational cost and training instability are current concerns. We present an unsupervised learning algorithm that does not require a paired dataset to train an encoder-decoder-like neural network for image restoration. An encoder-decoder learns to represent its input data in a latent representation and reconstruct it at the output. During the training stage, our algorithm feeds the encoder-decoder output image to a degradation block that reinforces its degradation. The degraded and input images are then matched using a loss function. After training, we obtain a restored image from the decoder. We used ill-exposed images to evaluate and validate our algorithm.

Claudio D. Mello, Lucas R. V. Messias, Paulo Lilles Jorge Drews-Jr, Silvia S. C. Botelho

A Computational Tool for Automated Detection of Genetic Syndrome Using Facial Images

Early diagnosis of genetic syndromes is vitally important for preventing potential related health problems. Down syndrome is the most common genetic syndrome. Patients with Down syndrome have a high probability of developmental disorders, like Congenital Heart Disease, which is best treated when discovered in its early stages. These patients also have particular facial characteristics that are identified by geneticists in a physical exam. However, there is subjectivity in this professional analysis, which can lead to a late diagnosis, aggravating the patient's health condition. This paper proposes a software framework for the automatic detection of Down syndrome using facial features extracted from digital images, which could be used as a tool to help in the early detection of genetic syndromes. To train the machine learning model, we created a dataset of 170 pictures of children available on the internet: 50% of the pictures were of children with Down syndrome and the other 50% of healthy children. We automatically identify faces and describe the images with facial landmarks. Next, we use two approaches for feature extraction. The first is a traditional computer vision approach using selected distances, angles and textures between the landmarks. The other is a deep learning approach using a Convolutional Neural Network to extract the features automatically. The feature vector is then fed to a Support Vector Machine with a linear kernel in both feature extraction approaches. We validate the results by measuring the accuracy, sensitivity, and specificity of both feature extraction approaches using 10-fold cross-validation. The deep learning method resulted in an accuracy of 0.94, while the traditional approach achieved 0.84 on our dataset. The results show that the deep learning approach has higher classification accuracy for this task, even with a small dataset.

Eduardo Henrique Pais Pooch, Thatiane Alves Pianoschi Alva, Carla Diniz Lopes Becker

Improving FIFA Player Agents Decision-Making Architectures Based on Convolutional Neural Networks Through Evolutionary Techniques

Convolutional Neural Network (CNN) is a fundamental tool in Deep Learning and Computer Vision due to its remarkable ability to extract relevant characteristics from raw data, which has enabled significant advances in image classification tasks. One of the great challenges in using CNNs is to define an architecture that is suitable for the problem for which they are being designed. Thus, many recent works propose approaches to automatically define appropriate CNN architectures. Among them, the Convolutional Neural Network designed by Genetic Algorithm (CNN-GA) method stands out. As CNN-GA has only been validated in static scenarios involving image classification datasets, the main contributions of the present paper are the following: implementing an improved version of CNN-GA, named Minimum CNN-GA (MCNN-GA), that automatically defines CNN architectures through a policy that minimizes both the weight vector dimensions and the classification error rate of the CNNs; and implementing a set of imitation-learning-based agents that operate in the complex and dynamic scenario of the FIFA game, exploring distinct raw image representations of the environment at the input of CNNs designed according to the MCNN-GA approach. The performance of these agents was evaluated through their in-game score in tournaments against FIFA’s engine. The results corroborate that the decision-making ability of such agents can be as good as human ability.

Matheus Prado Prandini Faria, Rita Maria Silva Julia, Lídia Bononi Paiva Tomaz

Text Mining and Natural Language Processing


Authorship Attribution of Brazilian Literary Texts Through Machine Learning Techniques

Authorship attribution is the process of identifying the author of a particular document. This task has traditionally been performed by experts in the field. However, with the advancement of natural language processing tools and machine learning techniques, this activity has also been performed by computer systems. Authorship attribution has applications ranging from the detection of plagiarism and copyright violations to the resolution of forensic problems. There are several works on this subject for the English language, but few consider texts in Portuguese. Therefore, this paper aims to study authorship attribution of texts of Brazilian literature. We carried out our experiments using the Naïve Bayes and Random Forest methods, and for feature extraction we considered the Term Frequency-Inverse Document Frequency (TF-IDF) and Part of Speech techniques. The results showed that Random Forest using the textual features extracted by Part of Speech as input presented the best cross-validation accuracy, although not the best runtime.
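The TF-IDF weighting mentioned above can be sketched in a few lines of plain Python (a smoothed variant using the natural logarithm; real experiments would typically use a library implementation such as scikit-learn's TfidfVectorizer):

```python
import math
from collections import Counter

def tfidf(docs):
    """Minimal TF-IDF: term frequency scaled by inverse document
    frequency (idf = ln(N/df) + 1). Returns one sparse dict per doc."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter()
    for toks in tokenized:
        df.update(set(toks))            # count each term once per document
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (tf[t] / len(toks)) * idf[t] for t in tf})
    return vectors

docs = ["o cortiço de aluísio azevedo",
        "dom casmurro de machado de assis"]
vecs = tfidf(docs)
# "de" occurs in both documents, so its idf is ln(2/2) + 1 = 1,
# giving weight tf * 1 = 1/5 = 0.2 in the first document
print(vecs[0]["de"])  # → 0.2
```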

Bianca da Rocha Bartolomei, Isabela Neves Drummond

BERTimbau: Pretrained BERT Models for Brazilian Portuguese

Recent advances in language representation using neural networks have made it viable to transfer the learned internal states of large pretrained language models (LMs) to downstream natural language processing (NLP) tasks. This transfer learning approach improves the overall performance on many tasks and is highly beneficial when labeled data is scarce, making pretrained LMs valuable resources, especially for languages with few annotated training examples. In this work, we train BERT (Bidirectional Encoder Representations from Transformers) models for Brazilian Portuguese, which we nickname BERTimbau. We evaluate our models on three downstream NLP tasks: sentence textual similarity, recognizing textual entailment, and named entity recognition. Our models improve the state of the art in all of these tasks, outperforming Multilingual BERT and confirming the effectiveness of large pretrained LMs for Portuguese. We release our models to the community, hoping to provide strong baselines for future NLP research.

Fábio Souza, Rodrigo Nogueira, Roberto Lotufo

Deep Learning Models for Representing Out-of-Vocabulary Words

Communication has become increasingly dynamic with the popularization of social networks and applications that allow people to express themselves and communicate instantly. In this scenario, the quality of distributed representation models is impacted by new words that appear frequently or that are derived from spelling errors. These words, unknown by the models and known as out-of-vocabulary (OOV) words, need to be properly handled so as not to degrade the quality of natural language processing (NLP) applications, which depend on an appropriate vector representation of the texts. To better understand this problem and find the best techniques for handling OOV words, in this study we present a comprehensive performance evaluation of deep learning models for representing OOV words. We performed an intrinsic evaluation using a benchmark dataset and an extrinsic evaluation using different NLP tasks: text categorization, named entity recognition, and part-of-speech tagging. Although the results indicated that the best technique for handling OOV words differs across tasks, Comick, a deep learning method that infers the embedding based on the context and the morphological structure of the OOV word, obtained promising results.
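A common simple baseline for OOV handling (not Comick itself, which also exploits the word's morphology) is to represent the unknown word as the mean of its in-vocabulary context vectors; a toy sketch:

```python
def infer_oov_vector(context_words, embeddings, dim=3):
    """Baseline OOV inference: average the embeddings of in-vocabulary
    context words. A stand-in for the context-based inference the
    abstract describes; morphological signals are omitted."""
    known = [embeddings[w] for w in context_words if w in embeddings]
    if not known:
        return [0.0] * dim  # nothing to infer from
    return [sum(vals) / len(known) for vals in zip(*known)]

# Toy 3-dimensional embedding table
emb = {"the": [1.0, 0.0, 0.0],
       "cat": [0.0, 1.0, 0.0],
       "sat": [0.0, 0.0, 1.0]}

# "catz" (a misspelling) is OOV; infer it from its context
print(infer_oov_vector(["the", "cat", "sat", "catz"], emb))
```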

Johannes V. Lochter, Renato M. Silva, Tiago A. Almeida

DeepBT and NLP Data Augmentation Techniques: A New Proposal and a Comprehensive Study

Data Augmentation methods – a family of techniques designed for synthetic generation of training data – have shown remarkable results in various Deep Learning and Machine Learning tasks. Despite their widespread and successful adoption within the computer vision community, data augmentation techniques designed for natural language processing (NLP) tasks have exhibited much slower advances and limited success in achieving performance gains. As a consequence, with the exception of applications of back-translation to machine translation tasks, these techniques have not been as thoroughly explored by the wider NLP community. Recent research on the subject also still lacks a proper practical understanding of the relationship between data augmentation and several important aspects of model design, such as hyperparameters and regularization parameters. In this paper, we perform a comprehensive study of NLP data augmentation techniques, comparing their relative performance under different settings. We also propose Deep Back-Translation, a novel NLP data augmentation technique, and apply it to benchmark datasets. We analyze the quality of the synthetic data generated, evaluate its performance gains, and compare all of these aspects with previously existing data augmentation procedures.

Taynan Maier Ferreira, Anna Helena Reali Costa

Dense Captioning Using Abstract Meaning Representation

The world around us is composed of images that often need to be translated into words. This translation can take place in parts, converting regions of the image into textual descriptions, a task also known as dense captioning. By doing so, the information present in a region is converted into words expressing the way objects relate to each other. Computational models have been proposed to perform this task in a way similar to human beings, mainly using deep neural networks. As the same region of an image can be described in several different ways, this study proposes using Abstract Meaning Representation (AMR) in the generation of descriptions for a given region. We hypothesize that by using AMR it is possible to extract the meaning of the text and, as a consequence, improve the quality of the sentences produced by the models. AMR was investigated as a semantic representation formalism, extending the previously proposed models, which are based purely on natural language. The results show that the models trained with sentences in the form of AMR produced better descriptions, with superior performance in almost all evaluations.

Antonio M. S. Almeida Neto, Helena M. Caseli, Tiago A. Almeida

Can Twitter Data Estimate Reality Show Outcomes?

People’s opinions can impact the real world in many different ways. The election of politicians, the sales of products, stock market prices, and consumer habits are just a few examples. However, exploring this relationship between people’s opinions and real-world events requires data from both sides, which is usually expensive and hard to obtain. In this study, we address this problem by extracting, on one side, data from Twitter, and on the other, the real-world outcomes of a reality show. We carefully selected a reality show that uses the audience’s opinion to define the elimination of participants. This brings an interesting case of a causal relationship between audience opinion and real-world events. From Twitter, we obtained simple features, such as the counts of likes, retweets, followers, and specific hashtags, along with sentiment analysis counts obtained from a fine-tuned BERT. From the TV show, we obtained the eliminated candidate and the percentage of audience rejection of that candidate. To answer the question posed in the title, we empirically evaluated eleven standard machine learning algorithms using the collected features. The models were able to achieve 88.23% accuracy in predicting the eliminated candidate in the reality show.

Kenzo Sakiyama, Lucas de Souza Rodrigues, Edson Takashi Matsubara

Domain Adaptation of Transformers for English Word Segmentation

Word segmentation can contribute to improving the results of natural language processing tasks in several problem domains, including social media sentiment analysis, source code summarization, and neural machine translation. Taking the English language as a case study, we fine-tune a Transformer architecture trained through the Pre-trained Distillation (PD) algorithm, comparing it to previous experiments with recurrent neural networks. We organize datasets and resources from multiple application domains under a unified format, and demonstrate that our proposed architecture has competitive performance and superior cross-domain generalization in comparison with previous approaches to word segmentation in Western languages.
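Independently of the Transformer model used in the paper, the word segmentation task itself can be illustrated with a classic dictionary-driven dynamic program that finds the split using the fewest words (a baseline sketch, not the paper's method):

```python
def segment(text, vocab):
    """Dictionary-based word segmentation by dynamic programming:
    best[i] holds the fewest-words split of text[:i], or None if
    text[:i] cannot be segmented with the given vocabulary."""
    best = [None] * (len(text) + 1)
    best[0] = []
    for i in range(1, len(text) + 1):
        for j in range(i):
            if best[j] is not None and text[j:i] in vocab:
                cand = best[j] + [text[j:i]]
                if best[i] is None or len(cand) < len(best[i]):
                    best[i] = cand
    return best[len(text)]

# e.g. segmenting a source-code identifier, one of the paper's domains
vocab = {"source", "code", "summarization", "sum"}
print(segment("sourcecodesummarization", vocab))
# → ['source', 'code', 'summarization']
```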

Ruan Chaves Rodrigues, Acquila Santos Rocha, Marcelo Akira Inuzuka, Hugo Alexandre Dantas do Nascimento

Entropy-Based Filter Selection in CNNs Applied to Text Classification

Filter selection in convolutional neural networks aims at finding the most important filters in a convolutional layer, with the goal of reducing computational costs and needed storage, as well as understanding the networks’ inner workings. In this paper, we propose an entropy-based filter selection method that ranks filters based on the mutual information between their activations and the output classes, computed on validation data. Our proposed method outperforms ranking by the sum of filters’ absolute weights by a large margin, allowing the network to regain better performance with fewer filters.
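The ranking criterion can be sketched as the empirical mutual information between a filter's discretized activations and the class labels (illustrative only; in practice the continuous activations would first be binned):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information I(X;Y) in bits between discretized
    filter activations xs and class labels ys."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# A filter whose binarized activation perfectly tracks a balanced
# binary label carries exactly 1 bit of information about the class.
acts = [1, 1, 0, 0]
labels = ["pos", "pos", "neg", "neg"]
print(mutual_information(acts, labels))  # → 1.0
```

Filters would then be sorted by this score, and the lowest-ranked ones pruned.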

Rafael Bezerra de Menezes Rodrigues, Wilson Estécio Marcílio Júnior, Danilo Medeiros Eler

Identifying Fine-Grained Opinion and Classifying Polarity on Coronavirus Pandemic

In this paper, we explore the fine-grained opinion identification and polarity classification tasks using Twitter data on the COVID-19 pandemic in Brazilian Portuguese. We trained machine learning-based classifiers using a few different methods and tested how well they performed on different tasks. For polarity classification, we tested a cross-domain strategy in order to measure the performance of the classifiers across different domains. For fine-grained opinion identification, we provide a taxonomy of opinion aspects and employ it in conjunction with machine learning methods. Based on the obtained results, we found that cross-domain data improved the results of polarity classification. For fine-grained opinion identification, the use of a domain taxonomy presented competitive results for the Portuguese language.

Francielle Alves Vargas, Rodolfo Sanches Saraiva Dos Santos, Pedro Regattieri Rocha

Impact of Text Specificity and Size on Word Embeddings Performance: An Empirical Evaluation in Brazilian Legal Domain

Word embeddings are a text representation technique capable of capturing syntactic and semantic linguistic patterns, representing each word as an n-dimensional dense vector. In the domain of legal texts, there are trained word embeddings for languages such as English, Polish, and Chinese. However, to the best of our knowledge, there are no embeddings based on Portuguese (Brazilian or European) legal texts. Given that, our research question is: do the specificity and size of the text corpus used for word embedding training contribute to a more successful classification? To answer this question, we train word embedding models in the legal domain with different levels of specificity and size, and then evaluate their impact on text classification. To deal with the different levels of specificity, we collect text documents from different courts of the Brazilian Judiciary, in hierarchical order. We used these text corpora to train a word embedding model (GloVe) and then evaluated it while classifying lawsuits with a deep learning model (CNN). From a specificity perspective, the results show that for word embeddings trained on smaller corpora, text specificity has a higher impact than for larger ones. From a corpus size perspective, the results demonstrate that the greater the corpus size in embedding training, the better the results. However, this impact decreases as the corpus size increases, up to a point where more words in the corpus have little impact on the results.

Thiago Raulino Dal Pont, Isabela Cristina Sabo, Jomi Fred Hübner, Aires José Rover

Machine Learning for Suicidal Ideation Identification on Twitter for the Portuguese Language

Suicidal ideation is one of the main predictors of the risk of suicide attempt and can be described as thoughts, ideas, planning, and desire to commit suicide. Fast detection of such ideation in its early stages is essential for effective treatment. Many expressions of suicidal ideation can be found in publications in social networks, especially by young people. Previous works explore the automatic detection of suicidal ideation in social networks for the English language using machine learning algorithms. In this work, we present the first exploration of machine learning algorithms for suicidal ideation detection for the Portuguese language. We compared three classifiers on Twitter data: SVM, LSTM, and BERT (multilingual and Portuguese). Results suggest that BERT is effective for suicidal ideation identification in Portuguese data, achieving an F1 score of 79% and a false negative rate below 9%.

Vinícios Faustino de Carvalho, Bianca Giacon, Carlos Nascimento, Bruno Magalhães Nogueira

Pre-trained Data Augmentation for Text Classification

Data augmentation is a widely adopted method for improving model performance in image classification tasks. Although it is still not as ubiquitous in the Natural Language Processing (NLP) community, some methods have already been proposed to increase the amount of training data using simple text transformations or text generation through language models. However, recent text classification tasks need to deal with domains characterized by small amounts of informally written text, e.g., Online Social Network content, reducing the capabilities of current methods. Facing these challenges by taking advantage of pre-trained language models, low computational resource consumption, and model compression, we propose the PRE-trained Data AugmenTOR (PREDATOR) method. Our data augmentation method is composed of two modules: the Generator, which synthesizes new samples grounded on a lightweight model, and the Filter, which selects only the high-quality ones. Experiments comparing Bidirectional Encoder Representations from Transformers (BERT), Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM) and Multinomial Naive Bayes (NB) on three datasets showed an effective improvement in accuracy: 28.5% with LSTM in the best scenario, and an average of 8% across all scenarios. PREDATOR was able to augment real-world social media datasets and other domains, surpassing recent text augmentation techniques.

Hugo Queiroz Abonizio, Sylvio Barbon Junior

Predicting Multiple ICD-10 Codes from Brazilian-Portuguese Clinical Notes

ICD coding from electronic clinical records is a manual, time-consuming, and expensive process. Code assignment is, however, an important task for billing purposes and database organization. While many works have studied the problem of automated ICD coding from free text using machine learning techniques, most use records in the English language, especially from the MIMIC-III public dataset. This work presents results for a dataset with Brazilian Portuguese clinical notes. We develop and optimize a Logistic Regression model, a Convolutional Neural Network (CNN), a Gated Recurrent Unit Neural Network and a CNN with Attention (CNN-Att) for the prediction of diagnosis ICD codes. We also report our results for the MIMIC-III dataset, which outperform previous work among models of the same families, as well as the state of the art. Compared to MIMIC-III, the Brazilian Portuguese dataset contains far fewer words per document when only discharge summaries are used. We experiment with concatenating additional documents available in this dataset, achieving a great boost in performance. The CNN-Att model achieves the best results on both datasets, with a micro-averaged F1 score of 0.537 on MIMIC-III and 0.485 on our dataset with additional documents.
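The micro-averaged F1 score reported above pools true positives, false positives, and false negatives over all documents before computing a single precision/recall pair; a self-contained sketch for multi-label code sets:

```python
def micro_f1(true_sets, pred_sets):
    """Micro-averaged F1 for multi-label prediction: counts are pooled
    across all documents, so frequent codes dominate the score."""
    tp = fp = fn = 0
    for truth, pred in zip(true_sets, pred_sets):
        tp += len(truth & pred)   # codes predicted and correct
        fp += len(pred - truth)   # codes predicted but wrong
        fn += len(truth - pred)   # codes missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical ICD-10 code sets for two documents
truth = [{"I10", "E11"}, {"J45"}]
pred = [{"I10"}, {"J45", "E11"}]
print(round(micro_f1(truth, pred), 3))  # → 0.667
```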

Arthur D. Reys, Danilo Silva, Daniel Severo, Saulo Pedro, Marcia M. de Sousa e Sá, Guilherme A. C. Salgado

Robust Ranking of Brazilian Supreme Court Decisions

This work studies quantitative measures for ranking judicial decisions by the Brazilian Supreme Court (STF). The measures are based on a network built over decisions whose cases were finalized in the Brazilian Supreme Court between 01/2001 and 12/2019, obtained by crawling publicly available STF records. Three ranking measures are proposed: two are adaptations of the PageRank algorithm, and one adapts Kleinberg’s algorithm. All are compared with respect to agreement on their top-100 rankings; we also analyze each measure’s robustness based on self-agreement under perturbation. We conclude that all algorithms show the network of citations is highly robust under perturbation. Both versions of PageRank, even if producing different rankings, achieved robustness results which are indistinguishable via statistical tests; Kleinberg’s algorithm achieves more promising results for ranking leading cases at the top, but more research is needed to achieve this goal.
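For reference, the standard PageRank that the proposed measures adapt can be sketched by power iteration over a toy citation graph (dangling decisions spread their mass uniformly; the paper's judicial adaptations are not reproduced here):

```python
def pagerank(links, damping=0.85, iters=50):
    """Plain PageRank by power iteration. `links` maps each node to the
    list of nodes it cites; dangling nodes distribute rank uniformly."""
    nodes = sorted(set(links) | {v for vs in links.values() for v in vs})
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - damping) / n for u in nodes}
        for u in nodes:
            outs = links.get(u, [])
            if outs:
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:  # dangling decision: spread its mass evenly
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

# Toy citation network: decisions A and C both cite B
r = pagerank({"A": ["B"], "C": ["B"]})
assert r["B"] == max(r.values())  # the most-cited decision ranks first
```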

Jackson José de Souza, Marcelo Finger

Semi-Supervised Sentiment Analysis of Portuguese Tweets with Random Walk in Feature Sample Networks

Nowadays, a huge amount of data is generated daily around the world, and many machine learning tasks require labeled data, which sometimes is not available. Manually labeling such an amount of data may consume a lot of time and resources. One way to overcome this limitation is to learn from both labeled and unlabeled data, which is known as semi-supervised learning. In this paper, we use a positive-unlabeled (PU) learning technique called Random Walk in Feature-Sample Networks (RWFSN) to perform semi-supervised sentiment analysis, an important machine learning task that can be achieved by classifying the polarity of texts, on Brazilian Portuguese tweets. Although RWFSN reaches excellent performance in many PU learning problems, it has two major limitations when applied to our problem: it assumes that samples are long texts (many features) and that the class prior probabilities are known. We improve the technique by augmenting the data representation in the feature space and by adding a validation set to better estimate the class priors. As a result, we identified unlabeled samples of the positive class with precision of around 70% at higher labeled ratios, but with high standard deviation, showing the impact of data variance on the results. Moreover, given the properties of the RWFSN method, we provide interpretability of the results by pointing out the most relevant features for the task.

Pedro Gengo, Filipe A. N. Verri

The Use of Machine Learning in the Classification of Electronic Lawsuits: An Application in the Court of Justice of Minas Gerais

With the abundance of electronic lawsuits already implemented throughout Brazil, courts have a valuable source of information in text format, constituting attractive bases for the application of Artificial Intelligence (AI) and machine learning (ML). In this research, supervised learning approaches were explored for the automatic classification of types of documents in electronic court proceedings of the Court of Justice of Minas Gerais (TJMG). The methodology is based on cross-validation within a corpus specific to the legal domain, comparing traditional classifiers with more recent methods based on neural networks and deep learning models, using GloVe word vectors generated for the Portuguese language and a Convolutional Neural Network (CNN). This work achieved high precision, and if implemented in the courts it could provide significant savings in financial and human resources, allowing lawsuit classification activities, currently done manually by employees, to be performed in seconds by the machine. The results show that the hit rates for the CNN and SVM classifiers exceed 93%, which is considered a high result. Based on the assumption that GloVe brings extra semantic resources that can help in classifying texts from court proceedings, this work demonstrates GloVe’s effectiveness by showing that a CNN with GloVe surpasses SVM.

Adriano Capanema Silva, Luiz Cláudio Gomes Maia

Towards a Free, Forced Phonetic Aligner for Brazilian Portuguese Using Kaldi Tools

Phonetic analysis of speech, in general, requires the alignment of audio samples to their phonetic transcription. This task could be performed manually for a couple of files, but as the corpus grows large it becomes prohibitively time-consuming, which emphasizes the need for computational tools that perform such speech-phoneme forced alignment automatically. Therefore, due to the scarce availability of phonetic alignment tools for Brazilian Portuguese (BP), this work describes the evolution process towards creating a free phonetic alignment tool for BP using Kaldi, a toolkit that has been the state of the art for open-source speech recognition. Five acoustic models were trained with Kaldi and tested in phonetic alignment, with evaluation in terms of the phone boundary metric. The results show that its performance is similar to some Kaldi-based aligners for other languages, and superior to an outdated phonetic aligner for BP based on the HTK toolkit.

Ana Larissa Dias, Cassio Batista, Daniel Santana, Nelson Neto

Twitter Moral Stance Classification Using Long Short-Term Memory Networks

In Natural Language Processing, stance detection is the computational task of deciding whether a piece of text expresses a favourable or unfavourable attitude (or stance) towards a given topic. Stance detection may be divided into two subtasks: deciding whether a piece of text conveys any stance towards the target topic and, once we have established that the text does convey a stance, determining its polarity (e.g., favourable or unfavourable) towards the target. Both tasks - hereby called stance recognition and (stance) polarity classification - are the focus of the present work. Taking as a basis a corpus of 13.7k tweets in the Brazilian Portuguese language, which conveys stances towards five moral issues (abortion legislation, death penalty, drug legalisation, lowering of the criminal age, and racial quotas at universities), we compare a number of long short-term memory (LSTM) and bidirectional LSTM (BiLSTM) models for stance recognition and polarity classification. In doing so, the two tasks are addressed both independently and as a joint model. Results suggest that BiLSTM models with an attention mechanism outperform the alternatives under consideration, and pave the way for more comprehensive studies in this domain.

Matheus Camasmie Pavan, Wesley Ramos dos Santos, Ivandré Paraboni

A Study on the Impact of Intradomain Finetuning of Deep Language Models for Legal Named Entity Recognition in Portuguese

Deep language models, like ELMo, BERT and GPT, have achieved impressive results on several natural language tasks. These models are pretrained on large corpora of unlabeled general-domain text and later trained in a supervised fashion on downstream tasks. An optional step consists of finetuning the language model on a large intradomain corpus of unlabeled text before training it on the final task. This aspect is not well explored in the current literature. In this work, we investigate the impact of this step on named entity recognition (NER) for Portuguese legal documents. We explore different scenarios considering two deep language architectures (ELMo and BERT), four unlabeled corpora and three legal NER tasks for the Portuguese language. Experimental findings show a significant improvement in performance due to language model finetuning on intradomain text. We also evaluate the finetuned models on two general-domain NER tasks, in order to understand whether the aforementioned improvements were really due to domain similarity or simply due to more training data. The achieved results also indicate that finetuning on a legal domain corpus hurts performance on the general-domain NER tasks. Additionally, our BERT model, finetuned on a legal corpus, significantly improves on the state-of-the-art performance on the LeNER-Br corpus, a Portuguese language NER corpus for the legal domain.

Luiz Henrique Bonifacio, Paulo Arantes Vilela, Gustavo Rocha Lobato, Eraldo Rezende Fernandes

