
2016 | Book

Computer and Information Science


About this book

This edited book presents scientific results of the 15th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2016), which was held on June 26–29 in Okayama, Japan. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results on all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.

The conference organizers selected the best papers from those accepted for presentation at the conference. The papers were chosen based on review scores submitted by members of the program committee and underwent further rigorous rounds of review. This publication captures 12 of the conference's most promising papers, and we eagerly await the important contributions that we know these authors will bring to the field of computer and information science.

Table of Contents

The Drill-Locate-Drill (DLD) Algorithm for Automated Medical Diagnostic Reasoning: Implementation and Evaluation in Psychiatry

The drill-locate-drill (DLD) algorithm models the expert clinician's top-down diagnostic reasoning process, which generates a set of diagnostic hypotheses using a set of screening symptoms and then tests them by eliciting specific clinical information for each differential diagnosis. The algorithm arrives at final diagnoses by matching the elicited clinical features with what is expected in each differential diagnosis using an efficient technique known as the orthogonal vector projection method. The DLD algorithm is compared with the rival select-test (ST) algorithm, and its design and implementation in psychiatry, along with an evaluation using actual patient data, are discussed.
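
The abstract does not give the mathematical details, but the orthogonal vector projection idea can be illustrated with a minimal sketch: project the patient's elicited feature vector onto a prototype feature vector for each differential diagnosis and rank the diagnoses by the projection coefficient. The feature encoding and diagnosis names below are purely illustrative assumptions, not taken from the paper.

```python
import numpy as np

def projection_score(patient: np.ndarray, prototype: np.ndarray) -> float:
    """Score how well a patient's feature vector matches a diagnosis prototype.

    The patient vector is projected orthogonally onto the prototype vector;
    the score is the scalar projection coefficient (p.d)/(d.d), so a patient
    exhibiting exactly the expected features scores close to 1.
    """
    denom = np.dot(prototype, prototype)
    if denom == 0:
        return 0.0
    return float(np.dot(patient, prototype) / denom)

# Illustrative binary feature vectors: 1 = clinical feature present/expected.
differentials = {
    "major_depression": np.array([1, 1, 1, 0, 0, 1], dtype=float),
    "bipolar_disorder": np.array([1, 1, 0, 1, 1, 0], dtype=float),
}
patient_features = np.array([1, 1, 1, 0, 0, 0], dtype=float)

ranked = sorted(differentials.items(),
                key=lambda kv: projection_score(patient_features, kv[1]),
                reverse=True)
for name, proto in ranked:
    print(name, round(projection_score(patient_features, proto), 3))
```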

Implementation of Artificial Neural Network and Multilevel of Discrete Wavelet Transform for Voice Recognition

This paper presents an implementation of a simple Artificial Neural Network model with a multilevel Discrete Wavelet Transform for feature extraction, which increases recognition rates to up to 95 %, compared with the Short-time Fourier Transform, under conversational background noise of up to 65 dB. The performance evaluation is reported in terms of correct recognition rate, maximum noise power of interfering sounds, hit rates, false alarm rates and miss rates. The proposed method offers a potential alternative for intelligent voice recognition systems in speech analysis-synthesis and recognition applications.
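
A minimal sketch of the general pipeline described above, assuming PyWavelets for the multilevel DWT and a small scikit-learn neural network as the classifier; the wavelet, decomposition level, feature statistics and placeholder data are assumptions, not the paper's actual configuration.

```python
import numpy as np
import pywt                              # PyWavelets
from sklearn.neural_network import MLPClassifier

def dwt_features(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Multilevel DWT: keep simple energy statistics of each sub-band as features."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for band in coeffs:
        feats.extend([np.mean(np.abs(band)), np.std(band), np.sum(band ** 2)])
    return np.array(feats)

# Illustrative training data: one raw audio frame per utterance with integer labels.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(40, 1024))      # placeholder signals
y = rng.integers(0, 2, size=40)          # placeholder word/speaker labels

X = np.vstack([dwt_features(s) for s in X_raw])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0).fit(X, y)
print("training accuracy:", clf.score(X, y))
```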

Parallel Dictionary Learning for Multimodal Voice Conversion Using Matrix Factorization

Parallel dictionary learning for multimodal voice conversion (VC) is proposed in this paper. Because of the noise robustness of visual features, multimodal features have attracted attention in the field of speech processing, and we have previously proposed multimodal VC using Non-negative Matrix Factorization (NMF). Experimental results showed that our conventional multimodal VC converts effectively in a noisy environment; however, the difference in conversion quality between audio-input VC and multimodal VC is not large in a clean environment. We assume this is because our exemplar dictionary is over-complete. Moreover, because of the non-negativity constraint on visual features, our conventional multimodal NMF-based VC cannot factorize visual features effectively. In order to enhance the conversion quality of our NMF-based multimodal VC, we propose parallel dictionary learning. The non-negativity constraint on visual features is removed so that we can handle visual features that include negative values. Experimental results showed that our proposed method effectively converts multimodal features in a clean environment.
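
The abstract builds on exemplar-based NMF voice conversion, in which input spectra are factorized against a source-speaker exemplar dictionary and the resulting activations are applied to a time-aligned target dictionary. The following is a minimal sketch of that baseline (not of the proposed parallel dictionary learning itself); the dictionary sizes and random placeholder data are assumptions.

```python
import numpy as np

def nmf_activations(X, D, n_iter=200, eps=1e-9):
    """Estimate non-negative activations H with the dictionary D held fixed,
    minimising ||X - D H|| via standard multiplicative updates."""
    H = np.abs(np.random.default_rng(0).normal(size=(D.shape[1], X.shape[1])))
    for _ in range(n_iter):
        H *= (D.T @ X) / (D.T @ D @ H + eps)
    return H

# Illustrative parallel exemplar dictionaries (columns are time-aligned exemplars).
rng = np.random.default_rng(1)
D_src = np.abs(rng.normal(size=(257, 500)))   # source-speaker spectral exemplars
D_tgt = np.abs(rng.normal(size=(257, 500)))   # target-speaker exemplars, time-aligned

X_src = np.abs(rng.normal(size=(257, 100)))   # input source spectra to convert
H = nmf_activations(X_src, D_src)             # activations estimated on the source side
X_converted = D_tgt @ H                       # same activations drive target exemplars
print(X_converted.shape)
```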

Unanticipated Context Awareness for Software Configuration Access Using the getenv API

Configuration files, command-line arguments and environment variables are the dominant tools for local configuration management today. When accessing such program execution environments, however, most applications do not take context, e.g. the system they run on, into account. The aim of this paper is to integrate unmodified applications into a coherent and context-aware system by instrumenting the getenv API. We propose a global database stored in configuration files that includes specifications for contextual interpretations and a novel matching algorithm. In a case study we analyze a complete Debian operating system where every getenv API call is intercepted. We evaluate usage patterns of 16 real-world applications and systems and report on limitations of unforeseen context changes. The results show that getenv is used extensively for variability. The tool has acceptable overhead and improves context-awareness of many applications.
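
The paper intercepts the C getenv API system-wide; as a language-neutral illustration of the idea, the sketch below answers an environment lookup from a context-guarded value database before falling back to the ordinary environment. The database format, the toy context sensor and the variable names are hypothetical, not the paper's actual tool or specification format.

```python
import os
import socket

# Hypothetical "global database": per-variable values guarded by context predicates.
CONTEXT_DB = {
    "http_proxy": [
        ({"network": "office"}, "http://proxy.example.local:3128"),
        ({"network": "home"}, ""),                  # no proxy at home
    ],
}

def current_context() -> dict:
    """A toy context sensor; a real system would query location, network, etc."""
    host = socket.gethostname()
    return {"network": "office" if host.endswith(".corp") else "home"}

def context_aware_getenv(name: str, default: str | None = None) -> str | None:
    """getenv replacement: prefer a context-matched value, fall back to the real env."""
    ctx = current_context()
    for guard, value in CONTEXT_DB.get(name, []):
        if all(ctx.get(k) == v for k, v in guard.items()):
            return value
    return os.environ.get(name, default)

print(context_aware_getenv("http_proxy", default="<unset>"))
```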

Stripes-Based Object Matching

We propose a novel and fast 3D object matching framework that is able to fully utilise the geometry of objects without any object reconstruction process. Traditionally, 3D object matching methods are mostly applied based on 3D models. In order to generate accurate and proper 3D models, object reconstruction methods are used on the data collected from laser or time-of-flight sensors. Although those methods are naturally appealing, heavy computations are required for segmentation as well as transformation estimation. Moreover, some useful features could be filtered out during the reconstruction process. In contrast, the proposed method requires no reconstruction process. Building on stripes generated from laser scanning lines, we represent an object by a set of stripes. To capture the full geometry, we describe each stripe by the proposed robust point context descriptor. After representing all stripes, we perform a flexible and fast matching over all collected stripes. We show that the proposed method achieves promising results on some challenging real-life objects.
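
The abstract does not define the point context descriptor, so the sketch below uses a shape-context-style log-radius/angle histogram over the points of a stripe (treated here as a 2D profile) as an analogous construction; the binning, the chi-squared comparison and the 2D simplification are assumptions, not the paper's descriptor.

```python
import numpy as np

def point_context(points: np.ndarray, ref_idx: int,
                  r_bins: int = 5, a_bins: int = 12) -> np.ndarray:
    """Shape-context-style descriptor for one stripe point: a normalised
    log-radius / angle histogram of all other points of the stripe."""
    ref = points[ref_idx]
    rel = np.delete(points, ref_idx, axis=0) - ref
    r = np.log1p(np.linalg.norm(rel, axis=1))
    a = np.arctan2(rel[:, 1], rel[:, 0])
    hist, _, _ = np.histogram2d(r, a, bins=[r_bins, a_bins],
                                range=[[0, r.max() + 1e-9], [-np.pi, np.pi]])
    return (hist / hist.sum()).ravel()

def stripe_descriptor(points: np.ndarray) -> np.ndarray:
    """Describe a whole stripe by stacking the descriptors of its points."""
    return np.vstack([point_context(points, i) for i in range(len(points))])

def chi2(d1: np.ndarray, d2: np.ndarray) -> float:
    """Chi-squared distance between two stripe descriptors (lower = more similar)."""
    return float(0.5 * np.sum((d1 - d2) ** 2 / (d1 + d2 + 1e-9)))

# Toy usage: a stripe compared with a slightly perturbed copy of itself.
rng = np.random.default_rng(0)
stripe_a = rng.random((60, 2))
stripe_b = stripe_a + rng.normal(scale=0.01, size=(60, 2))
print(chi2(stripe_descriptor(stripe_a), stripe_descriptor(stripe_b)))
```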

User Engagement Analytics Based on Web Contents

User engagement is a relation of emotion, cognition, and behavior between users and resources at a specific time or over a range of time. Measuring and analyzing web user engagement has been used by web developers as a means to gather feedback from web users in order to understand their behavior and find ways to improve websites. Many websites have been successful in using analytics tools, since the information acquired by the tools helps, for example, to increase sales and the rate of return visits. Most web analytics tools in the market focus on measuring engagement with whole webpages, whereas insight into user behavior with respect to particular contents or areas within webpages is missing. However, such knowledge of web user engagement based on the contents of webpages would provide a deeper perspective on user behavior than that based on whole webpages. To fill this gap, we propose a set of web-content-based user engagement metrics that are adapted from existing web-page-based engagement metrics. In addition, the proposed metrics are accompanied by an analytics tool which web developers can install on their websites to acquire deeper user engagement information.
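
As a rough illustration of content-level (rather than page-level) engagement measurement, the sketch below aggregates dwell time and interaction counts per content block from a hypothetical client-side event log; the event schema and the metric choices are assumptions, not the paper's proposed metrics.

```python
from collections import defaultdict

# Hypothetical client-side events: (content_id, event_type, timestamp_seconds).
events = [
    ("article-body", "enter_view", 0.0),
    ("article-body", "exit_view", 42.5),
    ("comments", "enter_view", 42.5),
    ("comments", "click", 50.0),
    ("comments", "exit_view", 61.0),
]

dwell = defaultdict(float)     # seconds each content block was in the viewport
clicks = defaultdict(int)      # interactions per content block
entered = {}

for content_id, kind, ts in events:
    if kind == "enter_view":
        entered[content_id] = ts
    elif kind == "exit_view" and content_id in entered:
        dwell[content_id] += ts - entered.pop(content_id)
    elif kind == "click":
        clicks[content_id] += 1

for cid in dwell:
    print(f"{cid}: dwell {dwell[cid]:.1f}s, clicks {clicks[cid]}")
```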

Business Process Verification and Restructuring LTL Formula Based on Machine Learning Approach

It is important to deal with rapidly changing environments (regulations, changes in customer behavior, process improvement, etc.) to keep achieving business goals. Therefore, verification of business processes in various phases is needed to ensure goal achievement. LTL (Linear Temporal Logic) verification is an important method for checking that a business process satisfies a specific property, but correctly writing a formal language such as LTL is difficult: a lack of domain knowledge and of knowledge of mathematical logic adversely affects the writing of LTL formulas. In this paper, we use LTL verification and prediction based on decision tree learning to verify specific properties. Furthermore, we help users write a proper LTL formula representing the correct desirable property by using decision tree construction. We conducted a case study for evaluation.
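
A minimal sketch of the decision-tree side of such an approach: train a tree on trace-level features labelled by whether each trace satisfies a desired LTL property, so the learned splits hint at how the property (and a corresponding formula) should be expressed. The features, the toy traces and the example property G(request -> F approve) are illustrative assumptions; scikit-learn is assumed.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical trace-level features extracted from business-process event logs:
# columns = [has_request, has_approval, approval_after_request]
X = np.array([
    [1, 1, 1],
    [1, 0, 0],
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
    [1, 0, 0],
])
# Label: 1 if the trace satisfies the desired LTL property
# G(request -> F approve) ("every request is eventually approved"), 0 otherwise.
y = np.array([1, 0, 0, 1, 1, 0])

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["has_request", "has_approval",
                                       "approval_after_request"]))
```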

Development of a LMS with Dynamic Support Functions for Active Learning

In this study, we propose a new type of Learning Management System (LMS), which gives teachers more freedom in planning active learning. We name it the “Dynamic LMS (DLMS)”. The DLMS generates a user interface, in which learners access the information necessary for their activities, by interpreting a “Learning Design Object (LDO) package”. It is based on a minimal expansion of the Instructional Management System-Learning Design (IMS-LD). It offers teachers many choices: they can choose self-learning, pair work or group work, and even change roles within a group later. The DLMS is constructed from the “LD editor” and “LD player”, which are compatible with the LDO package. These linked functions enable teachers to plan a variety of activities and, consequently, create more effective active learning in their classes.

Content-Based Microscopic Image Retrieval of Environmental Microorganisms Using Multiple Colour Channels Fusion

Environmental Microorganisms (EMs) are usually unicellular and cannot be seen with the naked eye. Though they are very small, they impact the entire biosphere through their omnipresence. Because traditional DeoxyriboNucleic Acid (DNA) analysis and manual investigation for EM search are very expensive and time-consuming, we develop an EM search system based on a Content-based Image Retrieval (CBIR) method using multiple colour channels fusion. The system searches over a database to find EM images that are relevant to the query EM image. Through the CBIR method, features are automatically extracted from EM images. We compute the similarity between a query image and EM database images in terms of each colour channel. Since many colour channels exist, a weighted fusion of the similarities in the different channels is required. We apply Particle Swarm Optimisation (PSO), the Fish Swarm Optimisation Algorithm (FSOA), Invasive Weed Optimization (IWO) and the Immunity Algorithm (IA) to optimise the fusion weights, and then obtain the re-weighted EM similarity and the final retrieval result. Experiments on our EM dataset show the advantage of the proposed multiple colour channels fusion method over each single-channel result.
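
As an illustration of the weight-fusion step, the sketch below runs a minimal Particle Swarm Optimisation over non-negative channel weights that sum to one, maximising a user-supplied retrieval-quality objective. The PSO parameters and the toy objective are assumptions; the paper also evaluates FSOA, IWO and IA, which are not sketched here.

```python
import numpy as np

def pso_fusion_weights(objective, n_channels, n_particles=20, n_iter=100, seed=0):
    """Minimal PSO over non-negative channel weights that sum to 1.
    `objective(weights)` should return a retrieval-quality score (higher is
    better), e.g. mean average precision on a validation set."""
    rng = np.random.default_rng(seed)
    pos = rng.random((n_particles, n_channels))
    pos /= pos.sum(axis=1, keepdims=True)
    vel = np.zeros_like(pos)

    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()

    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0, None)
        pos /= pos.sum(axis=1, keepdims=True) + 1e-12
        vals = np.array([objective(p) for p in pos])
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

# Toy objective: pretend the third colour channel carries the most useful similarity.
target = np.array([0.2, 0.1, 0.7])
weights = pso_fusion_weights(lambda w: -np.sum((w - target) ** 2), n_channels=3)
print(weights.round(2))
```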

Enhancing Spatial Data Warehouse Exploitation: A SOLAP Recommendation Approach

This paper presents a recommendation approach that proposes personalized queries to SOLAP users in order to enhance the exploitation of spatial data warehouses. The approach allows implicit extraction of the preferences and needs of SOLAP users using a spatial-semantic similarity measure between queries of different users. The proposal is defined theoretically and validated by experiments.

On the Prevalence of Function Side Effects in General Purpose Open Source Software Systems

A study that examines the prevalence and distribution of function side effects in general-purpose software systems is presented. The study is conducted on 19 open source systems comprising over 9.8 million lines of code (MLOC). Each system is analyzed and the number of function side effects is determined. The results show that modification of global variables and parameters passed by reference are the most prevalent side effect types. Thus, conducting accurate program analysis for many adaptive-change processes (e.g., automatic parallelization to better utilize multi-core architectures) becomes very costly or impractical. Analysis of the historical data over a 7-year period for 10 systems shows that there is a relatively large percentage of affected functions over the lifetime of the systems. The trend is flat in general, thereby posing further problems for inter-procedural analysis.
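
The abstract does not name the analysis tooling; as a small language-specific illustration of detecting one prevalent side-effect type (global-variable modification), the sketch below walks a Python AST and flags functions that declare `global` names. It is an analogy only, not the study's actual analysis.

```python
import ast

SOURCE = """
counter = 0

def bump(n):
    global counter          # side effect: the function writes a global
    counter += n

def pure_add(a, b):
    return a + b
"""

class SideEffectFinder(ast.NodeVisitor):
    """Flag functions containing `global` declarations, a simple proxy
    for global-variable modification."""
    def __init__(self):
        self.impure = {}

    def visit_FunctionDef(self, node):
        globals_declared = {
            name for stmt in ast.walk(node)
            if isinstance(stmt, ast.Global) for name in stmt.names
        }
        if globals_declared:
            self.impure[node.name] = sorted(globals_declared)
        self.generic_visit(node)

finder = SideEffectFinder()
finder.visit(ast.parse(SOURCE))
print(finder.impure)        # {'bump': ['counter']}
```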

aIME: A New Input Method Based on Chinese Characters Algebra

Chinese characters are the cement of Asia; they are found in numerous scripts and languages. Yet, the number of characters involved is huge, thus causing a memorisation issue. Both foreign learners and native speakers have to cope with this issue. Aiming at mitigating this issue, we have recently started to describe a novel way to approach Chinese characters: the algebraic way. Such a scientific approach to these characters is innovative in itself, and we propose in this paper a concrete implementation of an input method editor (IME) based on this algebra: aIME. Furthermore, we shall experimentally measure the relevance of our IME and its performance by comparing it to several other existing IMEs. From the results obtained, it is clear that the proposed input method brings significant improvement over conventional approaches.

Metadata
Title
Computer and Information Science
Copyright Year
2016
Electronic ISBN
978-3-319-40171-3
Print ISBN
978-3-319-40170-6
DOI
https://doi.org/10.1007/978-3-319-40171-3
