
About this Book

This book constitutes the thoroughly refereed conference proceedings of the 7th International Conference on Multi-disciplinary Trends in Artificial Intelligence, MIWAI 2013, held in Krabi, Thailand, in December 2013. The 30 full papers were carefully reviewed and selected from 65 submissions and cover topics such as cognitive science, computational intelligence, computational philosophy, game theory, machine learning, multi-agent systems, natural language, representation and reasoning, speech, vision and the web.



An ET-Based Low-Level Solution for Query-Answering Problems

Query-answering (QA) problems have attracted wide attention in recent years. Methods for solving QA problems based on the equivalent transformation (ET) principle have recently been developed. Meanwhile, efficient satisfiability solvers (SAT solvers) have been invented and successfully applied to many kinds of problems. In this paper, we propose an ET-based low-level solution for QA problems. By slightly modifying it, we also propose a low-level solution using an all-solution SAT solver. We show that the obtained SAT-solver-based solution can also be seen as another ET-based low-level solution. Our findings clarify that the ET principle supports not only high-level computation but also low-level computation, and it provides a formal basis for correctness verification of computation at both levels.

Kiyoshi Akama, Ekawit Nantajeewarawat

Incremental Rough Possibilistic K-Modes

In this paper, we propose a novel version of the k-modes method for incremental clustering under an uncertain framework, called the incremental rough possibilistic k-modes (I-RPKM). First, possibility theory is used to handle uncertain attribute values in databases and to compute the membership values of objects in the resulting clusters. Rough set theory is then applied to detect boundary regions. After obtaining the final partition, I-RPKM adopts an incremental clustering strategy to take new information into account and update the number of clusters without re-clustering objects. I-RPKM is shown to perform better than other certain and uncertain approaches.

Asma Ammar, Zied Elouedi, Pawan Lingras
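The abstract above combines categorical (k-modes) clustering with possibility-theoretic memberships. The sketch below illustrates the two ingredients in isolation: the simple matching dissimilarity used by k-modes for categorical data, and one common possibilistic membership form (possibilistic c-means style, with a hypothetical scale parameter `eta`). Both formulas are illustrative assumptions, not the authors' exact I-RPKM equations.

```python
def matching_dissimilarity(obj, mode):
    """Number of attributes on which a categorical object and a cluster mode disagree."""
    return sum(1 for a, b in zip(obj, mode) if a != b)

def possibilistic_membership(obj, modes, eta=1.0, m=2.0):
    """Membership of `obj` to each mode; values in (0, 1], independent across clusters."""
    return [1.0 / (1.0 + (matching_dissimilarity(obj, mo) / eta) ** (1.0 / (m - 1.0)))
            for mo in modes]

obj = ("red", "round", "small")
modes = [("red", "round", "large"), ("blue", "square", "small")]
print(possibilistic_membership(obj, modes))  # higher membership for the closer mode
```

Unlike probabilistic memberships, these values need not sum to one across clusters, which is what lets possibilistic methods express uncertainty about whether an object belongs to any cluster at all.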

Probabilistic Neural Network for the Automated Identification of the Harlequin Ladybird (Harmonia Axyridis)

This paper describes recent work in the UK to automate the identification of the Harlequin ladybird (Harmonia axyridis) using color images. The automation process involves image processing and the use of a probabilistic neural network (PNN) as classifier, with the aim of reducing the number of color images to be examined by entomologists by pre-sorting the images into correct, questionable and incorrect species. Two major sets of features have been extracted: color and geometrical measurements. Experimental results revealed more than 75% class match for the identification of taxa with similar-colored spots.

Mohd Zaki Ayob, E. D. Chesmore

Unifying Multi-level Business Process Discovered by Heuristic Miner Algorithm

Process mining techniques are commonly used in business management, and conformance checking has become an important issue in business process management. Researchers derive the actual business process from event logs to compare it with the business process model. When the two are inconsistent, a lack of internal controls is indicated.

This research proposes consistency checking over event logs. Because the same event logs can be viewed at different granularities, the process can be represented at grain 1 through grain n, where a smaller grain means a finer granularity. When a process log is used to retrace the business process, different business processes may emerge at different granularities of the same event logs. The dependency threshold of the Heuristic Miner algorithm is used to handle these consistency differences automatically.

This research uses event logs from a marble processing industry for the conformance case study. Focusing on the fine and coarse granularities of the business process matrix, conformance checking is applied to obtain a consistent business process via the setting of the dependency threshold and the consistency ratio. The results provide valuable information for auditing or re-designing the business process model.

Yu-Cheng Chuang, PingYu Hsu, Hung-Hao Chen

User Distributions in N-Tier Platform with Effective Memory Reusability

Due to the vigorous development of networks, a wide variety of application software is in use. To serve more users, it is necessary to build middleware servers (application servers). An overloaded middleware server results in poor performance, while an idle one is simply a waste of resources, so load balancing across servers becomes an important issue. The LAPO algorithm proposed in this paper can dynamically allocate a middleware server to each user. First, it improves on the POCA algorithm, which spends a great deal of computation time determining the optimal combination of solutions. Second, it distributes users uniformly across the servers. Finally, it proposes a model for the best combination of load-balancing solutions. Implementing the approach on SAP ERP ECC 6.0, this study verifies that LAPO is not only more efficient in computation time than POCA, but also more in line with actual enterprise usage. We also discuss the experimental results and some limitations.

Yu-Cheng Chuang, Pingyu Hsu, Mintzu Wang, Ming-Te Lin, Ming Shien Cheng

Bat Algorithm, Genetic Algorithm and Shuffled Frog Leaping Algorithm for Designing Machine Layout

Arranging non-identical machines within the limited area of a manufacturing shop floor is an essential part of plant design. Material handling distance is one of the key performance indexes of internal logistics activities within manufacturing companies, directly affecting productivity and related costs. Machine layout design, known as the facility layout problem, is classified as non-deterministic polynomial-time hard. The objective of this paper was to compare the performance of the Bat Algorithm (BA), Genetic Algorithm (GA) and Shuffled Frog Leaping Algorithm (SFLA) for designing machine layouts in a multiple-row environment with the aim of minimising the total material handling distance. An automated machine layout design tool has been coded in modular style using a general-purpose programming language, Tcl/Tk. The computational experiments were designed and conducted using four machine layout design benchmark datasets adopted from the literature. It was found that the proposed algorithms performed well in different aspects.

Kittipong Dapa, Pornpat Loreungthup, Srisatja Vitayasak, Pupong Pongcharoen
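The objective the three metaheuristics above share is the total material handling distance of a candidate layout. A minimal sketch of that cost function follows; the machine coordinates and part-flow matrix are made-up illustrative data, and rectilinear (Manhattan) distance is one common choice for shop floors, assumed here rather than taken from the paper.

```python
def total_handling_distance(positions, flow):
    """Sum over machine pairs of flow volume times rectilinear distance."""
    total = 0.0
    n = len(positions)
    for i in range(n):
        for j in range(n):
            (xi, yi), (xj, yj) = positions[i], positions[j]
            total += flow[i][j] * (abs(xi - xj) + abs(yi - yj))
    return total

positions = [(0, 0), (3, 0), (3, 4)]      # candidate layout: (x, y) per machine
flow = [[0, 5, 2], [0, 0, 7], [1, 0, 0]]  # trips between machine pairs per period
print(total_handling_distance(positions, flow))  # 64.0
```

Each metaheuristic (BA, GA, SFLA) would then search over permutations and placements of the machines, using a function like this as the fitness to minimise.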

Hue Modeling for Object Tracking in Multiple Non-overlapping Cameras

Images collected through CCTV can be expressed using RGB color channels. As a result, studies on object tracking using color histograms built from RGB channels are currently underway. However, the color histogram of the same object may appear different in different cameras installed under varying lighting conditions. In such an environment, object tracking using a color histogram cannot be applied. To resolve this problem, we propose the use of hue modeling to find the same object in multiple non-overlapping cameras located at different physical sites.

Minho Han, Ikkyun Kim
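The motivation above is that hue is far less sensitive than raw RGB to per-camera brightness differences. A minimal sketch of a hue histogram, using the standard library's `colorsys`; the pixel data, bin count, and the idea of histogramming hue directly are illustrative assumptions rather than the authors' exact model.

```python
import colorsys

def hue_histogram(pixels, bins=12):
    """Histogram of hue values (0..1) for a list of (r, g, b) pixels in 0..255."""
    hist = [0] * bins
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        hist[min(int(h * bins), bins - 1)] += 1
    return hist

bright = [(200, 40, 40)] * 4  # reddish object under strong lighting
dim = [(100, 20, 20)] * 4     # same object under weak lighting
# The hue bins match even though the RGB values differ substantially:
print(hue_histogram(bright), hue_histogram(dim))
```

Because brightness changes scale all three RGB channels roughly together, the hue (a ratio-based quantity) stays stable, which is what makes it usable across non-overlapping cameras.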

Using HMMs and Depth Information for Signer-Independent Sign Language Recognition

In this paper, we add depth information to effectively locate the 3D position of the hands in a sign language recognition system. However, this information varies across different signers, which degrades recognition. We therefore use the incremental changes of the three-dimensional coordinates per unit time as the feature parameters to address this problem. We record the changes of the three-dimensional coordinates over time, then use hidden Markov models to recognize the variety of sign language movements in the time domain. Experiments verify that the proposed method is superior to traditional ones.

Yeh-Kuang Wu, Hui-Chun Wang, Liung-Chun Chang, Ke-Chun Li

Unscented Kalman Filter for Noisy Multivariate Financial Time-Series Data

The Kalman filter is a well-established technique in statistical estimation theory and is now widely used in many practical applications. In this paper, we apply the unscented Kalman filtering algorithm to multivariate financial time series data to determine whether the algorithm can smooth the direction of KLCI stock price movements, using five different measurement variance values. Financial data are characterized by non-linearity, noise, chaotic behavior and volatility, and the biggest impediment is the colossal volume of data transmitted from the trading market. The unscented Kalman filter employs the unscented transformation, commonly referred to as sigma points, from which estimates are recovered. The filtered output closely captures the covariance of the noisy input data, producing smoothed and less noisy estimates.

Said Jadid Abdulkadir, Suet-Peng Yong
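The unscented transformation mentioned above can be sketched in a few lines: sigma points are drawn from the current Gaussian, pushed through a nonlinearity, and the mean and variance are recovered from the transformed points. The scalar state and the `kappa` scaling below are illustrative simplifications of the multivariate filter used in the paper.

```python
import math

def unscented_transform_1d(mean, var, f, kappa=2.0):
    """Propagate a 1-D Gaussian (mean, var) through f via sigma points."""
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mean, mean + spread, mean - spread]
    weights = [kappa / (n + kappa)] + [1.0 / (2 * (n + kappa))] * 2
    ys = [f(p) for p in points]
    y_mean = sum(w * y for w, y in zip(weights, ys))
    y_var = sum(w * (y - y_mean) ** 2 for w, y in zip(weights, ys))
    return y_mean, y_var

# Through a linear map the transform is exact: mean doubles, variance quadruples.
print(unscented_transform_1d(1.0, 0.5, lambda x: 2 * x))  # (2.0, 2.0)
```

For nonlinear maps the sigma points capture the first two moments without computing Jacobians, which is the UKF's advantage over the extended Kalman filter on noisy, nonlinear financial series.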

Reconstructing Gene Regulatory Network Using Heterogeneous Biological Data

A gene regulatory network (GRN) is a model that describes the relationships among genes in a given condition. However, constructing a gene regulatory network is a complicated task, as high-throughput technologies generate large-scale data relative to the number of samples. In addition, the data involve a substantial amount of noise and false positives that hinder downstream analysis. To address these problems, the Bayesian network model has attracted the most attention. However, the key challenge in using Bayesian networks to model GRNs lies in structure learning, which is NP-hard and computationally complex. Therefore, this research addresses Bayesian network structure learning by proposing a low-order conditional independence method. In addition, we revise the gene regulatory relationships by integrating heterogeneous biological datasets to extract transcription factors for regulator and target genes. The empirical results indicate that the proposed method works better with biological knowledge, achieving a precision of 83.3% compared to 80.85% for a network that relies on microarray data only.

Farzana Kabir Ahmad, Nooraini Yusoff

Design of a Multi-day Tour-and-Charging Scheduler for Electric Vehicles

Aiming at alleviating the range anxiety problem in electric vehicles by taking advantage of well-developed computational intelligence, this paper designs a multi-day tour-and-charging scheduler and measures its performance. Some tour spots have charging facilities, allowing the vehicle battery to be charged during the tour. Our scheduler finds a multi-day visiting sequence, permitting different day-by-day start and end points. To exploit genetic algorithms over the extremely vast search space, a feasible schedule is encoded as an integer-valued vector having (n + m − 1) elements, where n is the number of places to visit and m is the number of tour days. The cost function evaluates the waiting time, namely, the amount of time the tourist must wait for the battery to be charged enough to reach the next place. It also integrates the time budget constraint and quantizes the tour length. The performance measurement results obtained from a prototype implementation show that our scheme achieves 100% schedulability up to 13 places for a 2-day trip and 17 places for a 3-day trip under the given parameter setting.

Junghoon Lee, Gyung-Leen Park
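One plausible reading of the integer-vector encoding described above is a visiting order of the n places interleaved with m − 1 day-boundary markers, giving n + m − 1 genes. The sketch below decodes such a chromosome into per-day visiting sequences; the sentinel value and the decoding scheme are illustrative assumptions, not the authors' exact representation.

```python
DAY_BREAK = -1  # hypothetical sentinel separating consecutive tour days

def decode(chromosome):
    """Split a flat GA chromosome into a list of per-day visiting sequences."""
    days, current = [], []
    for gene in chromosome:
        if gene == DAY_BREAK:
            days.append(current)
            current = []
        else:
            current.append(gene)
    days.append(current)
    return days

# 5 places over a 2-day trip -> 5 + 2 - 1 = 6 genes
chromosome = [3, 0, 4, DAY_BREAK, 1, 2]
print(decode(chromosome))  # [[3, 0, 4], [1, 2]]
```

A GA would then mutate and recombine such vectors, with the waiting-time cost function evaluating each decoded multi-day schedule.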

Using a Normalized Score Centroid-Based Classifier to Classify Multi-label Herbal Formulae

The popularity of herbal medicines has greatly increased worldwide over recent years. A herbal formula is a form of traditional medicine in which herbs are combined so that the patient heals faster and more efficiently. Herbal formulae can be divided into one or more therapeutic categories, usually based on the decision of a group of experts. To support experts in classifying a formula, a normalized score centroid-based classifier is proposed for multi-label herbal formula classification. The centroid-based classifier with a more advanced term-weighting scheme is used, and the normalized scores are calculated. A maximum number of categories and a cutoff point are set to adjust the decision for multi-label herbal formulae. The experiment is done using a mixed dataset of herbal formulae collected from the National List of Essential Medicines and the list of common household remedies for traditional medicine. Moreover, a set of well-known commercial products is used to evaluate the effectiveness of the proposed method. The results show that the normalized score centroid-based classifier is an efficient method to classify multi-label herbal formulae; its performance depends on the chosen values of the maximum number of categories and the cutoff point.

Verayuth Lertnattee, Sinthop Chomya, Virach Sornlertlamvanich

On the Computation of Choquet Optimal Solutions in Multicriteria Decision Contexts

We study in this paper the computation of Choquet optimal solutions in decision contexts involving multiple criteria or multiple agents. Choquet optimal solutions are solutions that optimize a Choquet integral, one of the most powerful tools in multicriteria decision making. We develop a new property that characterizes Choquet optimal solutions, and from this property derive a general method to generate them in the case of several criteria. We apply the method to different Pareto non-dominated sets coming from different knapsack instances with between two and seven criteria. We show that the method is effective for fewer than five criteria or for large Pareto non-dominated sets. We also observe that the percentage of Choquet optimal solutions increases with the number of criteria.

Thibaut Lust, Antoine Rolland
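The discrete Choquet integral being optimized above can be computed by sorting the criterion values and weighting each increment by the capacity of the coalition of criteria that reach at least that value. A minimal sketch, with a toy capacity given as a dict over frozensets of criterion indices:

```python
def choquet(values, capacity):
    """Choquet integral of a value vector w.r.t. a set function `capacity`."""
    idx = sorted(range(len(values)), key=lambda i: values[i])
    total, prev = 0.0, 0.0
    for k, i in enumerate(idx):
        coalition = frozenset(idx[k:])  # criteria whose value is >= values[i]
        total += (values[i] - prev) * capacity[coalition]
        prev = values[i]
    return total

capacity = {
    frozenset({0, 1}): 1.0,  # capacity of the full criterion set
    frozenset({0}): 0.3,
    frozenset({1}): 0.5,
}
print(choquet([4.0, 2.0], capacity))  # 2.6
```

When the capacity is additive the Choquet integral reduces to a weighted sum; non-additive capacities are what let it model interaction between criteria.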

Test-Cost-Sensitive Attribute Reduction in Decision-Theoretic Rough Sets

Decision-theoretic rough sets (DTRS) can be seen as a kind of misclassification-cost-sensitive learning model. In DTRS, attribute reduction is the process of minimizing misclassification costs. In practice, however, data are not free, and there are test costs to obtain the feature values of objects. Hence, attribute reduction should minimize both misclassification costs and test costs. In this paper, the minimal test cost attribute reduct (MTCAR) problem is defined in DTRS, with the objective of minimizing misclassification costs and test costs. A genetic algorithm (GA) is used to solve this problem, and experiments on UCI data sets validate its effectiveness for the MTCAR problem.

Xi’ao Ma, Guoyin Wang, Hong Yu, Feng Hu

Spatial Anisotropic Interpolation Approach for Text Removal from an Image

We propose a Spatial Anisotropic Interpolation (SAI) based Design and Analysis of Computer Experiments (DACE) model for inpainting the gaps induced by the removal of text from images. The spatial correlation among the design data points is exploited, leading to a model that produces estimates with zero variance at all design points. Incorporating such a feature turns the model into a surrogate for predicting the response at desired points where no experiment is carried out. This property has been tuned for the purpose of gap filling in images, also called image inpainting, treating the pixel values as responses. The proposed methodology restores both the structural and textural characteristics of the input image. Experiments are carried out with this methodology and results are demonstrated using quality metrics such as SSIM and PSNR.

Morusupalli Raghava, Arun Agarwal, Ch. Raghavendra Rao

A Geometric Evaluation of Self-Organizing Map and Application to City Data Analysis

Kohonen’s Self-Organizing Map (SOM) is useful for constructing a low-dimensional manifold embedded in a high-dimensional data space, and is therefore widely used in visualization, analysis, and knowledge discovery for complex data. However, for a given set of training samples, a trained SOM is not uniquely determined because of many local minima, so we cannot evaluate whether it is appropriately embedded or not. In this paper, we propose a new method to evaluate a trained SOM from the viewpoint of geometric naturalness. The new criterion is defined as the average correspondence gap between differently trained SOMs. We show the effectiveness of the proposed method through experiments on artificial data, and then introduce its application to a real-world problem: the analysis of cities and villages in Japan.

Shigehiro Ohara, Keisuke Yamazaki, Sumio Watanabe
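The "average correspondence gap" idea above compares two independently trained maps and averages the gaps between corresponding units. A heavily simplified sketch follows, in which units are paired by grid position; the paper's correspondence between maps is more elaborate, and the toy weight vectors are illustrative.

```python
import math

def average_gap(map_a, map_b):
    """Mean Euclidean distance between weight vectors of paired SOM units."""
    gaps = [math.dist(wa, wb) for wa, wb in zip(map_a, map_b)]
    return sum(gaps) / len(gaps)

# Two 2x2 maps of 2-D weight vectors, flattened row by row.
map_a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
map_b = [(0.1, 0.0), (1.0, 0.1), (0.0, 0.9), (1.1, 1.0)]
print(average_gap(map_a, map_b))  # small gap: the two maps embed similarly
```

A small average gap across retrainings suggests the embedding is stable and geometrically natural; a large gap signals that local minima are producing materially different maps.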

AOF-Based Algorithm for Dynamic Multi-Objective Distributed Constraint Optimization

Many real world problems involve multiple criteria that should be considered separately and optimized simultaneously. A Multi-Objective Distributed Constraint Optimization Problem (MO-DCOP) is the extension of a mono-objective Distributed Constraint Optimization Problem (DCOP). A DCOP is a fundamental problem that can formalize various applications related to multi-agent cooperation. This problem consists of a set of agents, each of which needs to decide the value assignment of its variables so that the sum of the resulting rewards is maximized. An MO-DCOP is a DCOP which involves multiple criteria. Most research has focused on developing algorithms for solving static problems; however, many real world problems are dynamic. In this paper, we focus on a change of criteria/objectives and model a Dynamic MO-DCOP (DMO-DCOP), which is defined by a sequence of static MO-DCOPs. Furthermore, we develop a novel algorithm for DMO-DCOPs. The characteristics of this algorithm are as follows: (i) it reuses the information of previous solutions to find Pareto optimal solutions for all MO-DCOPs in a sequence, (ii) it utilizes the Aggregate Objective Function (AOF) technique, the widely used classical method for finding Pareto optimal solutions, and (iii) its complexity is determined by the induced width of problem instances.

Tenda Okimoto, Maxime Clement, Katsumi Inoue

Relational Change Pattern Mining Based on Modularity Difference

This paper is concerned with the problem of detecting relational changes. Many kinds of graph data, including social networks, are increasing nowadays. In such graphs, the relationships among vertices change day by day. Therefore, it is worth investigating a data mining method for detecting significant patterns that inform us about what changes. We present in this paper a general framework for detecting relational changes over two graphs to be contrasted. Our target pattern with relational change is defined as a set of vertices common to both graphs in which the vertices are almost disconnected in one graph, while densely connected in the other. We formalize such a target pattern based on the notion of modularity difference. A depth-first algorithm for the mining task is designed as an extension of k-plex enumerators with some pruning mechanisms. Our experimental results show the usefulness of the proposed method for two pairs of graphs representing actual reply-communications among Twitter users and word co-occurrence relations in Japanese news articles.

Yoshiaki Okubo, Makoto Haraguchi, Etsuji Tomita
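Contrasting one vertex set across two graphs can be sketched with the standard Newman modularity of a single community: the fraction of edges inside the set minus the fraction expected from its degrees. The toy graphs below are illustrative, and this single-community formula is an assumption standing in for the paper's exact modularity-difference measure.

```python
def modularity(edges, vertex_set):
    """Newman modularity of one vertex set: within-edge fraction minus expected fraction."""
    m = len(edges)
    within = sum(1 for u, v in edges if u in vertex_set and v in vertex_set)
    degree = sum(1 for u, v in edges for w in (u, v) if w in vertex_set)
    return within / m - (degree / (2 * m)) ** 2

s = {0, 1, 2}
graph_old = [(0, 3), (1, 4), (2, 5), (3, 4)]  # s almost disconnected here
graph_new = [(0, 1), (1, 2), (0, 2), (3, 4)]  # s densely connected here
print(modularity(graph_new, s) - modularity(graph_old, s))  # positive change
```

A large positive difference flags vertex sets like `s` whose internal connectivity has strengthened between the two snapshots, which is exactly the kind of pattern the paper mines.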

Reasoning with Near Set-Based Digital Image Flow Graphs

This paper introduces sufficiently near points in pairs of digital image flow graphs (DIFGs). This work is an extension of earlier work on a framework for layered perceptual flow graphs, where analysis of such graphs was performed in terms of flow graph nodes, branches, and paths using near set theory. A description-based method for determining nearness between flow graphs is given in terms of a practical application to digital image analysis.

James F. Peters, Doungrat Chitcharoen, Sheela Ramanna

An Efficient Interval-Based Approach to Mining Frequent Patterns in a Time Series Database

In this paper, we introduce an interval-based approach to mining frequent patterns in a time series database. Compared to frequent patterns in existing approaches, frequent patterns in our approach are more informative, with explicit time gaps automatically discovered along with the temporal relationships between the components of each pattern. In addition, our interval-based frequent pattern mining algorithm for time series databases, called IFPATS, is more efficient, with a single database scan and a looking-ahead mechanism that reduces non-potential candidates for frequent patterns. Experiments have been conducted and confirm that our IFPATS algorithm outperforms both the existing interval-based algorithm on sequential databases and the straightforward approach with post-processing for explicit time gaps in the temporal relationships of the resulting patterns. Especially as a time series database gets larger and time series get longer in a higher dimensional space, our approach is much more efficient.

Phan Thi Bao Tran, Vo Thi Ngoc Chau, Duong Tuan Anh

A Secure and Privacy-Aware Cloud-Based Architecture for Online Social Networks

The use of social networks has grown exponentially in recent years, and these social networks continue to have an ever-increasing impact on human lives. There are many concerns regarding the privacy of users in these environments, such as how trustworthy the social network operators are, in addition to the external adversaries. In this paper we propose a new architecture for online social networking, based on distributed cloud-based datacenters and using secret sharing as the method of encrypting user profile data, for enhanced privacy and availability. This proposed architecture is theoretically analyzed for its security and performance along with some experimental analysis. We show that the proposed architecture is highly secure at an acceptable level of time complexity overhead in comparison to existing online social networks, as well as the models proposed in previous studies targeting the same research problem.

Kasun Senevirathna, Pradeep K. Atrey

A Hybrid Algorithm for Image Watermarking against Signal Processing Attacks

In this paper, we present a hybrid image watermarking technique and develop an algorithm based on the three most popular transform techniques, namely the discrete wavelet transform (DWT), the discrete cosine transform (DCT), and singular value decomposition (SVD), to resist signal processing attacks. The experimental results demonstrate that this algorithm combines the advantages and removes the disadvantages of these three transforms. The proposed hybrid algorithm provides better imperceptibility and robustness against various attacks such as Gaussian noise, salt-and-pepper noise, motion blur, speckle noise, and Poisson noise.

Amit Kumar Singh, Mayank Dave, Anand Mohan

Hindi Word Sense Disambiguation Using Semantic Relatedness Measure

In this paper we propose and evaluate a method for Hindi word sense disambiguation that computes similarity based on semantics. We adapt an existing measure of semantic relatedness between two lexically expressed concepts of Hindi WordNet. This measure is based on the length of paths between noun concepts in an is-a hierarchy. Instead of relying on direct overlap, the algorithm uses the Hindi WordNet hierarchy to learn the semantics of words and exploits it in the disambiguation process. Evaluation is performed on a sense-tagged dataset consisting of 20 polysemous Hindi nouns. We obtained an overall average accuracy of 60.65% using this measure.

Satyendr Singh, Vivek Kumar Singh, Tanveer J. Siddiqui
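Path-length relatedness in an is-a hierarchy, as used above for Hindi WordNet, can be sketched with a breadth-first search over the hierarchy treated as an undirected graph. The toy English hierarchy and the inverse path-length scoring are illustrative assumptions (several path-based scoring variants exist; the paper's exact measure may differ).

```python
from collections import deque

def path_length(hierarchy, a, b):
    """Shortest number of is-a edges between two concepts (undirected BFS)."""
    graph = {}
    for child, parent in hierarchy:
        graph.setdefault(child, set()).add(parent)
        graph.setdefault(parent, set()).add(child)
    queue, seen = deque([(a, 0)]), {a}
    while queue:
        node, d = queue.popleft()
        if node == b:
            return d
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None  # no path: the concepts are unrelated in this hierarchy

def relatedness(hierarchy, a, b):
    """Inverse path-length score in (0, 1]; shorter paths mean higher relatedness."""
    return 1.0 / (1 + path_length(hierarchy, a, b))

hierarchy = [("dog", "mammal"), ("cat", "mammal"), ("mammal", "animal"), ("bird", "animal")]
print(relatedness(hierarchy, "dog", "cat"), relatedness(hierarchy, "dog", "bird"))
```

Disambiguation then picks, for an ambiguous noun, the sense whose concept is most related to the concepts of the surrounding context words.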

A Content-Based eResource Recommender System to Augment eBook-Based Learning

This paper presents our experimental work to design a content-based recommendation system for eBook readers. The system automatically identifies a set of relevant eResources for a reader, reading a particular eBook, and presents them to the user through an integrated interface. The system involves two different phases. In the first phase, we parse the textual content of the eBook currently read by the user to identify learning concepts being pursued. This requires analysing the text of relevant part(s) of the eBook to extract concepts and subsequently filter them to identify learning concepts of interest to Computer Science domain. In the second phase, we identify a set of relevant eResources from the World Wide Web. This involves invoking publicly available APIs from Slideshare, LinkedIn, YouTube etc. to retrieve relevant eResources for the learning concepts identified in the first part. The system is evaluated through a multi-faceted process involving tasks like sentiment analysis of user reviews of the retrieved set of eResources for recommendations. We strive to obtain an additional wisdom-of-crowd kind of evaluation of our system by hosting it on a public Web platform.

Vivek Kumar Singh, Rajesh Piryani, Ashraf Uddin, David Pinto

Markov Decision Processes with Functional Rewards

Markov decision processes (MDP) have become one of the standard models for decision-theoretic planning problems under uncertainty. In its standard form, rewards are assumed to be numerical additive scalars. In this paper, we propose a generalization of this model allowing rewards to be functional. The value of a history is recursively computed by composing the reward functions. We show that several variants of MDPs presented in the literature can be instantiated in this setting. We then identify sufficient conditions on these reward functions for dynamic programming to be valid. In order to show the potential of our framework, we conclude the paper by presenting several illustrative examples.

Olivier Spanjaard, Paul Weng

Hand Gesture Segmentation from Complex Color-Texture Background Image

Gestures provide a rich, intuitive and natural form of interaction between humans and devices. In this paper, an automatic hand gesture segmentation technique for complex color-texture images is developed, segmenting hand gestures with a low false positive rate (FPR). In this approach we propose a model for skin color characterization and define a potential of a pixel (PoP), which are then used to segment the hand gesture. This new skin segmentation technique takes both color and texture features into account for efficient segmentation. The classifier is observed to be robust with respect to which hand is used and its orientation, such as the front or back side of the hand. To evaluate the system, hand gesture images were acquired from a set of students against various complex backgrounds. The gesture segmentation technique has a false positive rate of nearly 5.7% and a true positive rate of nearly 98.93%.

Vinay Kumar Verma, Rajeev Wankar, C. R. Rao, Arun Agarwal

Distributed Query Plan Generation Using HBMO

Processing a distributed query entails accessing data from multiple sites. The inter-site communication cost, being the dominant cost, needs to be reduced in order to improve query response time. This requires the query optimizer to devise a distributed query processing strategy that, for a given distributed query, generates query plans involving fewer sites. In this paper, a distributed query plan generation (DQPG) algorithm based on the honey bee mating optimization (HBMO) technique is presented; it generates query plans for a distributed query that involve fewer sites and higher relation concentration in the participating sites. Further, experimental comparison of the proposed HBMO-based DQPG algorithm with the GA-based DQPG algorithm shows that the former generates distributed query plans at a comparatively lower total query processing cost, which in turn leads to more efficient processing of a distributed query.

T. V. Vijay Kumar, Biri Arun, Lokendra Kumar

Axiomatic Foundations of Generalized Qualitative Utility

The aim of this paper is to provide a unifying axiomatic justification for a class of qualitative decision models comprising, among others, optimistic/pessimistic qualitative utilities, binary possibilistic utility, likelihood-based utility, and Spohn's disbelief-function-based utility. All these criteria are instances of Algebraic Expected Utility and have been shown to be counterparts of Expected Utility thanks to a unifying axiomatization in a von Neumann-Morgenstern setting when non-probabilistic decomposable uncertainty measures are used. These criteria are based on (⊕, ⊗) operators, counterparts of the (+, ×) used by Expected Utility, where ⊕ is an idempotent operator and ⊗ is a triangular norm. Here the axiomatization is led in the Savage setting, which is more general than that of von Neumann-Morgenstern, as we do not assume that the uncertainty representation of the decision-maker is known.

Paul Weng

Computing Semantic Association: Comparing Spreading Activation and Spectral Association for Ontology Learning

Spreading activation is a common method for searching semantic or neural networks; it iteratively propagates activation from one or more sources through a network, a process that is computationally intensive. Spectral association is a recent technique that approximates spreading activation in one go, and therefore provides very fast computation of activation levels. In this paper we evaluate the characteristics of spectral association as a replacement for classic spreading activation in the domain of ontology learning. The evaluation focuses on run-time performance measures of our implementation of both methods for various network sizes. Furthermore, we investigate differences in output, i.e. the resulting ontologies, between spreading activation and spectral association. The experiments confirm an enormous speedup in the computation of activation levels, and also a fast calculation of the spectral association operator when using a variant we call brute force. The paper concludes with pros and cons and usage recommendations for the two methods.

Gerhard Wohlgenannt, Stefan Belk, Matthias Schett
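The classic spreading activation that spectral association approximates can be sketched as a few rounds of propagation: activation flows from source terms along weighted links, attenuated by a decay factor. The toy term network, the symmetric propagation, and the parameter values below are illustrative assumptions, not the paper's implementation.

```python
def spread(links, sources, decay=0.5, iterations=3):
    """Iteratively propagate activation from source nodes through weighted links."""
    activation = {node: 0.0 for edge in links for node in edge[:2]}
    for s in sources:
        activation[s] = 1.0
    for _ in range(iterations):
        incoming = {node: 0.0 for node in activation}
        for u, v, w in links:  # propagate symmetrically along each weighted link
            incoming[v] += decay * w * activation[u]
            incoming[u] += decay * w * activation[v]
        for node in activation:
            activation[node] += incoming[node]
    return activation

links = [("ontology", "concept", 1.0), ("concept", "class", 0.8), ("class", "instance", 0.5)]
print(spread(links, sources=["ontology"]))  # activation decays with distance from the source
```

Each iteration touches every link, which is why the cost grows with network size and iteration count; spectral association replaces this loop with a one-shot operator on the link matrix.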

Evolution of Self-interested Agents: An Experimental Study

In this paper, we perform an experimental study to examine the evolution of self-interested agents in cooperative agent societies. To this end, we realize a multiagent system in which agents initially behave altruistically by sharing information of food. After generations of a genetic algorithm, we observe the emergence of selfish agents who do not share food information. The experimental results show the process of evolving self-interested agents in resource-restrictive environments, which is observed in nature and in human society.

Naoki Yamada, Chiaki Sakama

