
2014 | Book

Advanced Computational Methods for Knowledge Engineering

Proceedings of the 2nd International Conference on Computer Science, Applied Mathematics and Applications (ICCSAMA 2014)


About this Book

The proceedings consist of 30 papers which have been selected and invited from the submissions to the 2nd International Conference on Computer Science, Applied Mathematics and Applications (ICCSAMA 2014), held on 8-9 May 2014 in Budapest, Hungary. The conference is organized into 7 sessions: Advanced Optimization Methods and Their Applications; Queueing Models and Performance Evaluation; Software Development and Testing; Computational Methods for Mobile and Wireless Networks; Computational Methods for Knowledge Engineering; Logic Based Methods for Decision Making and Data Mining; and Nonlinear Systems and Applications. All chapters in the book discuss theoretical and practical issues connected with computational and optimization methods for knowledge engineering. The editors hope that this volume will be useful for graduate and Ph.D. students and researchers in Computer Science and Applied Mathematics, and that readers will find many inspiring ideas and use them in their research. Many such challenges are suggested by the particular approaches and models presented in the individual chapters of this book.

Table of Contents

Frontmatter

Advanced Optimization Methods and Their Applications

Frontmatter
A Collaborative Metaheuristic Optimization Scheme: Methodological Issues

A so-called MetaStorming scheme is proposed to solve hard Combinatorial Optimization Problems (COPs). It is an innovative parallel-distributed collaborative approach based on metaheuristics. The idea is inspired by brainstorming, an efficient meeting format for collectively solving a company's problems. Different metaheuristic algorithms are run in parallel to collectively solve COPs. These algorithms collaborate by exchanging the best current solution obtained after each running cycle via an MPI (Message Passing Interface) library. Several modes of collaboration can be investigated within the generic scheme. As an illustrative example, we show how MetaStorming works on an instance of the well-known Traveling Salesman Problem (TSP).
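
A minimal sketch of the collaboration step (not the authors' implementation; `local_search_cycle` is a hypothetical placeholder for any metaheuristic cycle): workers exchange their best TSP tours after each cycle via MPI and keep the global best.

```python
from mpi4py import MPI

def collaborative_cycle(comm, tour, cost, local_search_cycle):
    # Each rank runs its own metaheuristic for one cycle ...
    tour, cost = local_search_cycle(tour, cost)
    # ... then all ranks share their best solutions and adopt the global best.
    candidates = comm.allgather((cost, tour))
    best_cost, best_tour = min(candidates, key=lambda c: c[0])
    return list(best_tour), best_cost

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    print(f"rank {comm.Get_rank()} of {comm.Get_size()} ready to collaborate")
```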

Mohammed Yagouni, Hoai An Le Thi
DC Programming and DCA for General DC Programs

We present a natural extension of DC programming and DCA for modeling and solving general DC programs with DC constraints. The two resulting approaches consist in reformulating those programs as standard DC programs in order to use standard DCAs for their solution. The first one is based on penalty techniques in DC programming, while the second linearizes concave functions in DC constraints to build convex inner approximations of the feasible set. Both are proved to converge to KKT points of general DC programs under usual constraint qualifications. Both designed algorithms can be viewed as a sequence of standard DCAs with updated penalty (resp. relaxation) parameters.
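
For orientation, the standard DC program and the generic DCA iteration that these chapters build on take the following well-known form (generic notation, not quoted from the chapter):

$$ \min_{x \in \mathbb{R}^n} \; f(x) = g(x) - h(x), \qquad g, h \text{ convex,} $$

$$ y^k \in \partial h(x^k), \qquad x^{k+1} \in \arg\min_{x \in \mathbb{R}^n} \bigl\{ g(x) - \langle x, y^k \rangle \bigr\}. $$

Each DCA iteration thus replaces the concave part $-h$ by its affine majorization at $x^k$ and solves the resulting convex subproblem.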

Hoai An Le Thi, Van Ngai Huynh, Tao Pham Dinh
DC Programming Approaches for BMI and QMI Feasibility Problems

We propose new DC (difference of convex functions) programming approaches for solving the Bilinear Matrix Inequality (BMI) and Quadratic Matrix Inequality (QMI) feasibility problems. Both are important NP-hard problems in the field of robust control and system theory. The inherent difficulty lies in the nonconvex set of feasible solutions. In this paper, we first reformulate these problems as a DC program (minimization of a concave function over a convex set). Then efficient approaches based on the DC Algorithm (DCA) are proposed for the numerical solution. A semidefinite program (SDP) must be solved in each iteration of our algorithm. Moreover, a hybrid method combining DCA with an adaptive Branch and Bound is established to guarantee the feasibility of the BMI and QMI. A concept of partial solution of SDPs via DCA is proposed to improve the convergence of our algorithm when handling larger-scale cases. Numerical simulations of the proposed approaches and comparisons with PENBMI are also reported.

Yi-Shuai Niu, Tao Pham Dinh
A DC Programming Approach for Sparse Linear Discriminant Analysis

We consider supervised pattern classification in the high-dimensional setting, in which the number of features is much larger than the number of observations. We present a novel approach to sparse linear discriminant analysis (LDA) using the zero-norm. The resulting optimization problem is non-convex, discontinuous and very hard to solve. We overcome the discontinuity by using an appropriate continuous approximation to the zero-norm such that the resulting problem can be formulated as a DC (Difference of Convex functions) program to which DC programming and DC Algorithms (DCA) can be applied. The computational results show the efficiency and superiority of our approach versus the $\ell_1$ regularization model on both feature selection and classification.
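
As a generic illustration (a standard surrogate, not necessarily the one used in this chapter), a common continuous approximation of the zero-norm that admits a DC decomposition is the exponential approximation

$$ \|x\|_0 \;\approx\; \sum_{i=1}^{n} \bigl(1 - e^{-\theta |x_i|}\bigr), \qquad \theta > 0, $$

which tightens as $\theta$ grows and can be written as the difference of the convex functions $\theta\|x\|_1$ and $\sum_{i=1}^{n}\bigl(\theta|x_i| - 1 + e^{-\theta|x_i|}\bigr)$, so DCA applies directly.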

Phan Duy Nhat, Manh Cuong Nguyen, Hoai An Le Thi
Minimum K-Adjacent Rectangles of Orthogonal Polygons and its Application

This paper presents the problem of partitioning a rectangle $\Re$, which contains several non-overlapping orthogonal polygons, into a minimum number of rectangles. By introducing maximally horizontal line segments of largest total length, the number of rectangles intersecting with any vertical scan line over the interior surface of $\Re$ is less than or equal to $k$, a positive integer. Our methods are based on a construction of the directed acyclic graph $G = (V, E)$ corresponding to the structures of the orthogonal polygons contained in $\Re$. According to this, it is easy to verify whether a horizontal segment can be introduced in the partitioning process. It is demonstrated that an optimal partition exists if and only if all path lengths from the source to the sink in $G$ are less than or equal to $k + 1$. Using this technique, we propose two integer programming formulations with a linear number of constraints to find an optimal partition. Our work is motivated by a problem involving the utilization of memory descriptors in a memory protection mechanism for embedded systems. We discuss our motivation in greater detail.
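
The core feasibility test described above reduces to checking the longest source-to-sink path in the DAG. A minimal sketch of that check (assumed interpretation of path length as number of edges, not the chapter's code) is:

```python
# Verify that every source-to-sink path in a DAG has at most k + 1 edges
# by computing longest paths along a topological order.
from collections import deque

def all_paths_within(succ, k):
    """succ: dict mapping each node to its list of successors (a DAG)."""
    nodes = set(succ) | {v for vs in succ.values() for v in vs}
    indeg = {v: 0 for v in nodes}
    for u in succ:
        for v in succ[u]:
            indeg[v] += 1
    longest = {v: 0 for v in nodes}          # longest number of edges ending at v
    queue = deque(v for v in nodes if indeg[v] == 0)
    while queue:
        u = queue.popleft()
        for v in succ.get(u, []):
            longest[v] = max(longest[v], longest[u] + 1)
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(longest.values()) <= k + 1

# Example: a chain with 3 edges satisfies the bound for k = 2 but not k = 1.
chain = {"s": ["a"], "a": ["b"], "b": ["t"]}
assert all_paths_within(chain, 2) and not all_paths_within(chain, 1)
```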

Thanh-Hai Nguyen
The Confrontation of Two Clustering Methods in Portfolio Management: Ward’s Method Versus DCA Method

This paper presents a new methodology for clustering assets in portfolio theory. The new methodology is compared with the classical Ward clustering in SAS software. The method is based on DCA (Difference of Convex functions Algorithm), an innovative approach in the nonconvex optimization framework which has been successfully applied to various complex industrial systems. The clustering is used in an empirical example in the context of multi-manager portfolio management, to identify the grouping that seems to best fit the objectives of portfolio management for a fund of funds. The clustering is useful to reduce the choice of asset classes and to facilitate the optimization of the Markowitz frontier.

Hoai An Le Thi, Pascal Damel, Nadège Peltre, Nguyen Trong Phuc
Approximating the Minimum Tour Cover with a Compact Linear Program

A tour cover of an edge-weighted graph is a set of edges which forms a closed walk and covers every other edge in the graph. The minimum tour cover problem is to find a minimum weight tour cover.

This problem was introduced by Arkin, Halldórsson and Hassin (Information Processing Letters 47:275-282, 1993), where the authors prove the NP-hardness of the problem and give a combinatorial 5.5-approximation algorithm. Later, Könemann, Konjevod, Parekh, and Sinha [7] improved the approximation factor to 3 by using a linear program of exponential size. The solution of this program involves the ellipsoid method with a separation oracle. In this paper, we present a new approximation algorithm achieving a slightly weaker approximation factor of 3.5 but dealing only with a compact linear program.

Viet Hung Nguyen

Queueing Models and Performance Evaluation

Frontmatter
A New Queueing Model for a Physician Office Accepting Scheduled Patients and Patients without Appointments

Family physicians play a significant role in public health care systems. Assigning appointments to patients in an educated and systematic way is necessary to minimize their waiting times and thus ensure a certain level of convenience for them. Furthermore, careful scheduling is very important for effective utilization of physicians and high system efficiency.

This paper presents a new queueing model that incorporates important aspects related to scheduled patients (patients and clients are synonymous throughout this paper), patients without appointments, and the no-show phenomenon of clients.

Ram Chakka, Dénes Papp, Thang Le-Nhat
Usability of Deterministic and Stochastic Petri Nets in the Wood Industry: A Case Study

Deterministic and stochastic Petri nets (DSPNs) are commonly used for modeling processes with either deterministically or exponentially distributed delays. However, DSPNs are not widespread in the wood industry, where the leaders of a company tend to make decisions based on common sense instead of using high-level performance evaluation methods.

In this paper, we present a case study, in which we demonstrate the usability of DSPN models in the wood industry. In the case study, we model the production of wooden windows of a Hungarian company [2]. Using the model, we can simply determine the bottleneck of the manufacturing process and show how to eliminate it.

Ádám Horváth
A New Approach for Buffering Space in Scheduling Unknown Service Time Jobs in a Computational Cluster with Awareness of Performance and Energy Consumption

In this paper, we present a new approach that focuses on buffering schemes along with scheduling policies for the distribution of compute-intensive jobs with unknown service times in a cluster of heterogeneous servers. We use two types of AMD Opteron processors whose parameters are taken from SPEC's benchmark results and the Green500 list. We investigate three cluster models according to the buffering scheme (server-level queue, class-level queue, and cluster-level queue). The simulation results show that the buffering schemes significantly influence the performance capacity of clusters with regard to the waiting time and response time experienced by incoming jobs, while retaining the energy efficiency of the system.

Xuan T. Tran, Binh T. Vu

Software Development and Testing

Frontmatter
Supporting Energy-Efficient Mobile Application Development with Model-Driven Code Generation

Energy efficiency is a critical attribute of mobile applications, but it is often difficult for developers to optimize energy consumption at the code level. In this work we explore how a model and code library based approach could assist the developer. Our vision is that developers specify the operation on a high level and the system automatically converts the model to an appropriate software pattern. In this way, the developer can focus on the actual functionality of the app. We exemplify our approach with several energy-efficient software patterns, focusing on wireless data communication, one of the biggest energy consumers in typical mobile applications. We discuss the pros and cons of different implementation alternatives and suggest open questions needing further exploration.

Imre Kelényi, Jukka K. Nurminen, Matti Siekkinen, László Lengyel
Problems of Mutation Testing and Higher Order Mutation Testing

Since Mutation Testing was proposed in the 1970s, it has been considered an effective technique in the software testing process for evaluating the quality of test data. In other words, Mutation Testing is used to evaluate the fault detection capability of the test data by inserting errors into the original program to generate mutants and then checking whether the tests are good enough to detect them. However, the problems of mutation testing, such as the large number of generated mutants or the existence of equivalent mutants, are real barriers to applying mutation testing. Many solutions have been proposed to address these problems. A new form of Mutation Testing, Higher Order Mutation Testing, was first proposed by Harman and Jia in 2009 and is one of the most promising solutions. In this paper, we consider the main limitations of Mutation Testing and previously proposed solutions to these problems. This paper also covers the development of Higher Order Mutation Testing and reviews methods for finding good Higher Order Mutants.
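
To make the terminology concrete, here is a hypothetical illustration (not taken from the chapter) of a first-order and a higher-order mutant of a small Python function:

```python
# Original program under test.
def is_in_range(x, low, high):
    return low <= x <= high            # original

# First-order mutant: a single operator is changed.
def is_in_range_fom(x, low, high):
    return low < x <= high             # <= replaced by <

# Second-order (higher order) mutant: two changes combined; such mutants can
# be harder to kill and may better resemble real, subtle faults.
def is_in_range_hom(x, low, high):
    return low < x < high              # both <= operators replaced by <

# A test suite that never exercises the boundaries (x == low or x == high)
# cannot distinguish the mutants from the original, i.e. it fails to kill them.
assert is_in_range(5, 1, 10) == is_in_range_fom(5, 1, 10) == is_in_range_hom(5, 1, 10)
```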

Quang Vu Nguyen, Lech Madeyski
Realization of a Test System Framework

This paper proposes an architecture for a test system framework in which the execution of the logic of test cases is decoupled from the communication between a system under test and the proposed test framework. A proof of concept is demonstrated through an example.

Tamás Krejczinger, Binh Thai Vu, Tien Van Do

Computational Methods for Knowledge Engineering

Frontmatter
Processing Collective Knowledge from Autonomous Individuals: A Literature Review

In recent years, Collective Intelligence has been one of the major research subjects in Computer Science. Determining the knowledge of a collective is one of the most important issues and has been considered a subfield of Collective Intelligence. In this paper, we present a literature review on the research progress related to determining collective knowledge from autonomous individuals. For this aim, we analyze related works on determining the representation of a collective, team or group knowledge, and collective intelligence. Finally, some conclusions and future directions are presented.

Van Du Nguyen, Ngoc Thanh Nguyen
Solving Conflicts in Video Semantic Annotation Using Consensus-Based Social Networking in a Smart TV Environment

Smart TV media content can be enriched with information available from the Internet and shared via the Social Web. An increasing number of social videos are now available alongside traditional digital videos such as TV programs and video on demand (VOD). However, it is difficult to find relevant content due to a lack of semantic descriptions, so there is a great need for multimedia content analysis techniques. In this work, we propose a framework for collaborative semantic video annotation using consensus-based social networking. The collaborative video annotation process is organized via social networking: the media content is shared with friends of friends, who collaboratively annotate it. We use ontologies to semantically describe the media content and to share it among the users. A consensus choice is applied to reconcile conflicts in annotation information among the participants. According to our experiments, the consensus method is an effective approach for solving conflicts in collaborative annotation.

Trong Hai Duong, Tran Hoang Chau Dao, Jason J. Jung, Ngoc Thanh Nguyen
An Overview of Fuzzy Ontology Integration Methods Based on Consensus Theory

Ontology plays an important role in the organization and management of knowledge in research and in various applications, and ontology research has attracted the attention of scientists worldwide. The traditional ontology concept lacks the ability to represent fuzzy information in domains with knowledge uncertainty, and fuzzy ontology has turned out to be a good approach to this matter. On the other hand, fuzzy ontology integration is still a problem with many challenges and requires research in both its theoretical and application aspects. This paper presents an overview of selected results of recent research on methods for resolving conflicts between ontologies in fuzzy ontology integration approaches.

Hai Bang Truong, Xuan Hung Quach
Novel Operations for FP-Tree Data Structure and Their Applications

The Frequent Pattern Tree (FP-tree) proposed by Han et al. is a data structure used for storing frequent patterns (or itemsets) in association rule mining. The FP-tree helps to reduce the number of database (DB) scans to only two and to shrink the number of candidate frequent patterns. This paper proposes to define some operations on the FP-tree in order to broaden its applications. With the devised operations, we can: a) incrementally build the FP-tree when only a subset of a DB is available at a time; b) construct the FP-tree in parallel with low communication cost; c) build local FP-trees independently from local databases and then use them to construct the global FP-tree in a distributed system; d) prune the FP-tree according to different values of the minimum support threshold for frequent pattern mining.
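
A minimal sketch of the underlying data structure (assumed layout, not the chapter's implementation), showing transaction insertion and the kind of tree-merge operation that makes incremental or distributed construction possible:

```python
class FPNode:
    def __init__(self, item=None):
        self.item = item
        self.count = 0
        self.children = {}              # item -> FPNode

    def insert(self, transaction):
        """Insert one transaction, given as items sorted by global frequency."""
        node = self
        for item in transaction:
            child = node.children.setdefault(item, FPNode(item))
            child.count += 1
            node = child

    def merge(self, other):
        """Merge another FP-tree (e.g., built from a local DB partition) into
        this one by adding counts along shared prefixes."""
        for item, other_child in other.children.items():
            child = self.children.setdefault(item, FPNode(item))
            child.count += other_child.count
            child.merge(other_child)

# Two local trees over partitions of the same database ...
left, right = FPNode(), FPNode()
left.insert(["a", "b"]); left.insert(["a", "c"])
right.insert(["a", "b", "c"])
# ... merged into one global tree: the shared prefix "a" now has count 3.
left.merge(right)
print(left.children["a"].count)         # -> 3
```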

Tri-Thanh Nguyen, Quang-Thuy Ha
Policy by Policy Analytical Approach to Develop GAINS-City Data Marts Based on Regional Federated Data Warehousing Framework

City-scale data marts are requested to help local policy makers identify viable and efficient solutions. In this context, the Regional Federated Data Warehousing Framework (RFDW), introduced in our previous publications, is extended and used to develop such city-scale data marts. Afterwards, a case study, namely GAINS-City China, is presented as a city-scale data mart that demonstrates our concepts.

Thanh Binh Nguyen
Next Improvement Towards Linear Named Entity Recognition Using Character Gazetteers

Natural Language Processing (NLP) is an important and interesting area of computer science that also affects other fields of science, e.g., geographical processing, social statistics, and molecular biology. A large amount of textual data is continuously produced in the media around us, and therefore there is a need to process it in order to extract the required information. One of the most important processing steps in NLP is Named Entity Recognition (NER), which recognizes occurrences of known entities in input texts. Recently, we presented our approach to linear NER using character gazetteers, namely the Hash-map Multi-way Tree (HMT) and the first-Child next-Sibling binary Tree (CST), with their strong and weak sides. In this paper, we present the Patricia Hash-map Tree (PHT) character gazetteer approach, which proves to be the best compromise between the two previous versions with respect to matching time and memory consumption.
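
As background, a character-level gazetteer can be organized as a trie; the sketch below (illustrative only, not the PHT structure from the chapter) returns the longest known entity starting at a given position of the input text:

```python
class TrieNode:
    __slots__ = ("children", "is_entry")
    def __init__(self):
        self.children = {}
        self.is_entry = False

def build_gazetteer(entries):
    root = TrieNode()
    for entry in entries:
        node = root
        for ch in entry:
            node = node.children.setdefault(ch, TrieNode())
        node.is_entry = True
    return root

def longest_match(root, text, start):
    """Return the longest gazetteer entry starting at `start`, or None."""
    node, end = root, None
    for i in range(start, len(text)):
        node = node.children.get(text[i])
        if node is None:
            break
        if node.is_entry:
            end = i + 1
    return text[start:end] if end else None

gaz = build_gazetteer(["New York", "New York City"])
print(longest_match(gaz, "He moved to New York City in 2010", 12))  # New York City
```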

Giang Nguyen, Štefan Dlugolinský, Michal Laclavík, Martin Šeleng, Viet Tran

Logic Based Methods for Decision Making and Data Mining

Frontmatter
Semantic Evaluation of Text Clustering

In this paper, we investigate the problem of quality analysis of clustering results using semantic annotations given by experts. We propose a novel approach to the construction of an evaluation measure, called SEE (Semantic Evaluation by Exploration), which improves on existing measures such as the Rand Index or Normalized Mutual Information. We illustrate the proposed evaluation method on freely accessible biomedical research articles from PubMed Central (PMC), many of which are annotated by experts using the Medical Subject Headings (MeSH) thesaurus. We compare different semantic techniques for search result clustering using the proposed measure.
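
For reference, the classical Rand Index that such semantic measures improve upon is defined (in standard notation, given here as background rather than quoted from the chapter) as

$$ \mathrm{RI}(C, C') = \frac{a + b}{\binom{n}{2}}, $$

where, over all $\binom{n}{2}$ pairs of the $n$ clustered documents, $a$ counts pairs placed in the same cluster in both clusterings $C$ and $C'$, and $b$ counts pairs placed in different clusters in both.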

Sinh Hoa Nguyen, Wojciech Świeboda, Hung Son Nguyen
An Improved Depth-First Control Strategy for Query-Subquery Nets in Evaluating Queries to Horn Knowledge Bases

The QSQN evaluation method uses query-subquery nets and allows any control strategy for processing queries to Horn knowledge bases. This paper proposes an improved depth-first control strategy for the QSQN evaluation method that reduces the number of accesses to the intermediate and extensional relations. We arrived at the improvement by using query-subquery nets to observe which relations are likely to grow or saturate and which ones are not yet affected by the computation. Our intention is to accumulate as many tuples or subqueries as possible at each node of the query-subquery net before processing it. The experimental results confirm that the improved version outperforms the original.

Son Thanh Cao, Linh Anh Nguyen
A Domain Partitioning Method for Bisimulation-Based Concept Learning in Description Logics

We have implemented a bisimulation-based concept learning method for description-logic-based information systems using information gain. In this paper, we present the domain partitioning method that was used for the implementation. Apart from basic selectors, we also use a new kind of selector, called extended selectors. Our evaluation results show that the concept learning method is valuable and that extended selectors support it significantly.

Thanh-Luong Tran, Linh Anh Nguyen, Thi-Lan-Giao Hoang
Measuring the Influence of Bloggers in Their Community Based on the H-index Family

Nowadays, people in social networks can have an impact on actual society; e.g., a post on a person's space can lead to real actions by other people in many areas of life. This is called social influence, and the task of evaluating this influence is called social influence analysis, which can be exploited in many fields such as typical marketing (object-oriented advertising), recommender systems, social network analysis, event detection, expert finding, link prediction, ranking, etc. The h-index, proposed by Hirsch in 2005, is now a widely used index for measuring both the productivity and impact of the published work of a scientist or scholar. This paper proposes to use the h-index to measure blogger influence in a social community. We also propose enhancing the information used for h-index (as well as its variants) calculation, and our experimental results are very promising.
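
A minimal sketch of the base measure (a hypothetical adaptation to bloggers, not the chapter's exact enhanced variant), where each post's citation count is replaced by an influence count such as the number of comments it received:

```python
def h_index(influence_counts):
    """Largest h such that at least h posts each have influence >= h."""
    counts = sorted(influence_counts, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A blogger whose posts attracted these comment counts has h-index 4:
# four posts each received at least 4 comments.
print(h_index([10, 8, 5, 4, 3, 0]))   # -> 4
```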

Dinh-Luyen Bui, Tri-Thanh Nguyen, Quang-Thuy Ha
Automatic Question Generation for Educational Applications – The State of Art

Recently, researchers from multiple disciplines have been showing a common interest in automatic question generation for educational purposes. In this paper, we review the state of the art of approaches to developing educational applications of question generation. We conclude that although a great variety of techniques for automatic question generation exists, only a small number of educational systems exploiting question generation have been developed and deployed in real classroom settings. We also propose research directions for deploying question generation technology in computer-supported educational systems.

Nguyen-Thinh Le, Tomoko Kojiri, Niels Pinkwart

Nonlinear Systems and Applications

Frontmatter
A New Approach Based on Interval Analysis and B-splines Properties for Solving Bivariate Nonlinear Equations Systems

This paper addresses the problem of solving nonlinear systems of equations. It presents a new algorithm based on the use of B-spline functions and the Interval Newton method. The algorithm generalizes the method designed by Grandine for solving univariate equations. First, recursive bisection is used to separate the roots; then a combination of bisection and the Interval Newton method is used to refine them. Bisection ensures robustness while Newton's iteration guarantees fast convergence. The algorithm makes great use of the geometric properties of B-spline functions to avoid unnecessary calculations in both the root-separating and root-refining steps. Since B-spline functions can provide an accurate approximation for a wide range of functions (provided they are smooth enough), the algorithm can be made available for those functions by prior conversion/approximation to the B-spline basis. It has successfully been used for solving various bivariate nonlinear equation systems.

Ahmed Zidna, Dominique Michel
Application of Sigmoid Models for Growth Investigations of Forest Trees

Radial growth of trees and its relationships with other factors has long been one of the most important research areas in forestry. For measuring intra-annual growth there are several widely used methods; one of them is measuring the girth of trees regularly, preferably on a weekly basis. However, the weekly measured growth data may be biased by the actual precipitation, temperature or other environmental conditions. This bias can be reduced, and using adequate growth functions the discrete growth data can be transformed into a continuous curve.

In our investigations, the widely used logistic, Gompertz and Richards sigmoid growth models were compared on intra-annual girth data of beech trees. To choose the best model, two statistical criteria, the Akaike weight and the modified coefficient of determination, were examined. Based on these investigations, and in view of applicability, the Gompertz model was chosen for later applications. However, we came to the conclusion that all three models can be applied to the annual growth curve with sufficient accuracy.

The modified form of the Gompertz function provides three basic curve parameters that can be used in further investigations: the time lag, the maximum specific growth rate and the upper asymptotic value of the curve. Based on the fitted growth curves, several other completely objective curve parameters can be defined; for example, the intersection of the tangent drawn at the inflection point with the upper asymptote, the distance between this upper intersection point and the time lag, the time and value of the inflection point, etc., and even different ratios of these parameters can be identified. The main advantage of these parameters is that they can be derived uniformly and objectively from the fitted growth curves. This paper demonstrates application opportunities of the curve parameters in a particular study of tree growth.
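
For context, the modified (reparameterized) Gompertz function that exposes exactly these three parameters is commonly written in the Zwietering form (given here as background, not quoted from the chapter):

$$ y(t) = A \exp\!\left\{ -\exp\!\left[ \frac{\mu_m e}{A}\,(\lambda - t) + 1 \right] \right\}, $$

where $A$ is the upper asymptote, $\mu_m$ the maximum specific growth rate (the slope of the tangent at the inflection point), and $\lambda$ the time lag (the intersection of that tangent with the time axis).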

Zoltán Pödör, Miklós Manninger, László Jereb

Computational Methods for Mobile and Wireless Networks

Frontmatter
GPS Tracklog Compression by Heading-Based Filtering

In the era of ubiquitous computing, a great amount of location data is recorded by a wide variety of devices such as GPS trackers or smartphones. The available datasets have proven to be very good ground for data mining and information extraction in general. Since the raw data usually carries a high degree of redundancy, the first step of the process is to reduce the dataset size. Our research focuses on mining valuable information from GPS tracklogs on smartphones, an environment well known to be extremely sensitive to resource-demanding operations; therefore it is crucial to reduce the input data size as much as possible. In this paper we introduce a heading-based filtering method, which is able to drastically decrease the number of GPS points needed to represent a trajectory. We evaluate and test it by simulations on a real-world dataset, Geolife Trajectories. We show that using this filter, data is reduced to an average of 39 percent of the original size per trajectory, which results in a 250% speedup in the total runtime of the Douglas-Peucker line generalization algorithm.
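
A minimal sketch of a heading-based filter (assumed behaviour, not the authors' exact algorithm): a point is kept only when the heading has changed by more than a threshold since the last kept point, so straight runs collapse to their endpoints.

```python
import math

def bearing(p, q):
    """Approximate heading in degrees from p to q; points given as (lat, lon)."""
    dlat, dlon = q[0] - p[0], q[1] - p[1]
    return math.degrees(math.atan2(dlon, dlat)) % 360.0

def heading_filter(points, threshold_deg=15.0):
    if len(points) < 3:
        return list(points)
    kept = [points[0]]
    last_heading = bearing(points[0], points[1])
    for prev, cur in zip(points[1:], points[2:]):
        h = bearing(prev, cur)
        turn = abs((h - last_heading + 180.0) % 360.0 - 180.0)
        if turn > threshold_deg:
            kept.append(prev)        # direction changed: keep the turning point
            last_heading = h
    kept.append(points[-1])
    return kept

track = [(0, 0), (0, 1), (0, 2), (1, 3), (2, 4), (2, 5)]
print(heading_filter(track))         # straight runs dropped, turns kept
```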

Marcell Fehér, Bertalan Forstner
Optimal Path Planning for Information Based Localization

This paper addresses the problem of optimizing the navigation of an intelligent mobile in a real-world environment described by a map. The map is composed of features representing natural landmarks in the environment. The vehicle is equipped with a sensor which provides range and bearing measurements from observed landmarks. These measurements are correlated with the map to estimate the mobile's localization through a filtering algorithm. The optimal trajectory can be designed by adjusting a measure of performance of the filtering algorithm used for the localization task. As the state of the mobile and the measurements provided by the sensors are random data, a criterion based on the estimation of the Posterior Cramer-Rao Bound (PCRB) is a well-suited measure. A natural way to perform optimal path planning is to use this measure of performance within a (constrained) Markovian Decision Process framework and to use the Dynamic Programming method for optimizing the trajectory. However, due to the functional characteristics of the PCRB, the Dynamic Programming method is generally irrelevant. We investigate two different approaches in order to provide a solution to this problem. The first one exploits the Dynamic Programming algorithm for generating feasible trajectories and then uses Extreme Value Theory (EV) to extrapolate the optimum. The second one is a rare event simulation approach, the Cross-Entropy (CE) method introduced by Rubinstein et al. [9]. As a result of our implementation, the CE optimization is assessed against the estimated optimum derived from EV.

Francis Celeste, Frédéric Dambreville
Formal Security Verification of Transport Protocols for Wireless Sensor Networks

In this paper, we address the problem of formal security verification of transport protocols for wireless sensor networks (WSNs) that perform cryptographic operations. Analyzing this class of protocols is a difficult task because they typically exhibit complex behavioral characteristics, such as launching timers, probabilistic behavior, and cryptographic operations. Among the recently published WSN transport protocols are DTSN, which does not include a cryptographic security mechanism, and two of its secured versions, SDTP and STWSN. In our previous work, we formally analyzed the security of the Distributed Transport for Sensor Networks (DTSN) and the Distributed Transport Protocol for Wireless Sensor Networks (SDTP) and showed that they are vulnerable to packet modification attacks. In another work we proposed a new Secure Transport Protocol for WSNs (STWSN) with the goal of eliminating the vulnerabilities of DTSN and SDTP; however, its security properties have only been argued informally. In this paper, we apply formal methods to analyze the security of STWSN.

Vinh-Thong Ta, Amit Dvir, Levente Buttyán
How to Apply Large Deviation Theory to Routing in WSNs

This paper deals with optimizing energy-efficient communication subject to reliability constraints in Wireless Sensor Networks (WSNs). Reliability is measured by the number of packets that need to be sent from a node to the base station via multi-hop communication in order to receive a fixed amount of data. Calculating reliability and efficiency as a function of the transmission energies proves to be of exponential complexity. By using the statistical bounds of large deviation theory, one can evaluate the necessary number of transmitted packets in real time, which can later be used for optimal, energy-aware routing in WSNs. The paper provides these estimates of efficiency and tests their performance by extensive numerical simulations.
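
As background (the generic form, not the specific bound derived in the chapter), the Chernoff bound from large deviation theory controls the tail of a sum of per-hop transmission outcomes $S_n = \sum_{i=1}^{n} X_i$ as

$$ \Pr(S_n \ge a) \;\le\; \inf_{s > 0} \, e^{-s a}\, \mathbb{E}\!\left[e^{s S_n}\right], $$

which, for independent per-hop outcomes, factorizes into a product of per-hop moment generating functions and can therefore be evaluated in real time instead of enumerating exponentially many packet-loss patterns.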

János Levendovszky, Hoc Nguyen Thai
Efficient Core Selection for Multicast Routing in Mobile Ad Hoc Networks

PUMA (Protocol for Unified Multicasting through Announcements) is a mesh-based multicast routing protocol for mobile ad hoc networks, distinguished from others in its class by the use of only multicast control packets for mesh establishment and maintenance, allowing it to achieve impressive performance in terms of packet delivery ratio and control overhead. However, one of the main drawbacks of the PUMA protocol is that the core of the mesh remains fixed during the whole execution process. In this paper, we present an improvement of PUMA by introducing an adaptive core selection mechanism. The improved protocol produces higher packet delivery ratios and lower delivery times while incurring only a little additional control overhead.

Dai Tho Nguyen, Thanh Le Dinh, Binh Minh Nguyen
Backmatter
Metadata
Title
Advanced Computational Methods for Knowledge Engineering
Edited by
Tien van Do
Hoai An Le Thi
Ngoc Thanh Nguyen
Copyright Year
2014
Electronic ISBN
978-3-319-06569-4
Print ISBN
978-3-319-06568-7
DOI
https://doi.org/10.1007/978-3-319-06569-4
