
About this Book

This book constitutes the thoroughly refereed proceedings of the 7th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management, IC3K 2015, held in Lisbon, Portugal, in November 2015.

The 25 full papers presented together with 2 invited papers were carefully reviewed and selected from 280 submissions. The papers are organized in topical sections on knowledge discovery and information retrieval; knowledge engineering and ontology development; and knowledge management and information sharing.

Table of Contents

Frontmatter

Invited Papers

Frontmatter

Business Ethics as Personal Ethics

Business ethics is a major issue for most, if not all, companies. This paper attempts to clarify the composite of forces and influences involved in the world of business ethics. Its performance is then assessed, revealing the relative failure of the business world's huge efforts in recent decades, as the public doubts the moral sincerity of managers. The diagnosis of this credibility disease points to a flawed trust in a mechanical and judicial approach to ethics, one that prefers rules and measurements to character and intentions. The solution presented is a virtue-based ethics, returning to the personal elements of morality.

João César das Neves

Automatic Generation of Poetry Inspired by Twitter Trends

This paper revisits PoeTryMe, a poetry generation platform, and presents its most recent instantiation for producing poetry inspired by trends in the Twitter social network. The presented system searches for tweets that mention a given topic, extracts the most frequent words in those tweets, and uses them as seeds for the generation of new poems. The set of seeds may be further expanded with semantically relevant words. Generation is performed by the classic PoeTryMe system, based on a semantic network and a grammar, with a previously used generate-and-test strategy. Illustrative results are presented using different seed-expansion settings. They show that the produced poems use semantically coherent lines with words that, at the time of generation, were associated with the topic. The resulting poems are not really about the topic, but they are a way of expressing, poetically, what the system knows about the semantic domain set by the topic.

Hugo Gonçalo Oliveira
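
For a sense of the seed-extraction step the abstract describes, here is a minimal sketch: it keeps the most frequent content words from tweets that mention a topic. The tweet list, the stopword set, and all values are illustrative placeholders, not part of PoeTryMe.

```python
# Sketch of the seed-extraction step: from tweets mentioning a topic,
# keep the most frequent content words as seeds for poem generation.
# `tweets` would come from the Twitter API; here it is a plain list of strings.
from collections import Counter
import re

STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it", "was"}  # illustrative only

def extract_seeds(tweets, topic, k=10):
    counts = Counter()
    for tweet in tweets:
        if topic.lower() not in tweet.lower():
            continue  # only tweets that mention the topic
        for word in re.findall(r"[a-z']+", tweet.lower()):
            if word not in STOPWORDS and word != topic.lower():
                counts[word] += 1
    return [word for word, _ in counts.most_common(k)]

tweets = ["The eclipse tonight was stunning", "Clouds hid the eclipse again..."]
print(extract_seeds(tweets, "eclipse", k=5))
```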

Knowledge Discovery and Information Retrieval

Frontmatter

Exploiting Guest Preferences with Aspect-Based Sentiment Analysis for Hotel Recommendation

This paper presents a collaborative filtering method for hotel recommendation that incorporates guest preferences. We used the results of aspect-based sentiment analysis to recommend hotels, because whether or not a hotel can be recommended depends on the guest preferences related to its aspects. For each aspect of a hotel, we identified the guest preference by using dependency triples extracted from the guest reviews; the triples represent the relationship between an aspect and its preference. We calculated transitive associations between hotels by using the positive/negative preferences on shared aspects. Finally, we scored hotels with a Markov Random Walk model to explore the transitive associations between them. The empirical evaluation showed that aspect-based sentiment analysis improves overall performance. Moreover, we found the method effective for finding hotels that a guest has never stayed at but that share the same neighborhoods.

Fumiyo Fukumoto, Hiroki Sugiyama, Yoshimi Suzuki, Suguru Matsuyoshi
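
A rough sketch of how transitive associations between hotels might be explored with a random walk, in the spirit of the Markov Random Walk scoring the abstract mentions. The association weights below are invented, and the damping scheme is a common PageRank-style choice, not necessarily the authors' exact formulation.

```python
# Sketch: score hotels by a random walk over hotel-to-hotel associations.
# W[i][j] holds the strength of the association between hotels i and j,
# e.g. agreement of positive sentiment on shared aspects (values invented here).
import numpy as np

W = np.array([[0.0, 0.8, 0.1],
              [0.8, 0.0, 0.5],
              [0.1, 0.5, 0.0]])

P = W / W.sum(axis=1, keepdims=True)      # row-normalize into transition probabilities
score = np.full(3, 1.0 / 3)               # uniform initial distribution
for _ in range(50):                       # power iteration until (approximate) convergence
    score = 0.85 * score @ P + 0.15 / 3   # damped, PageRank-style walk

print(score)  # stationary scores; higher = more strongly recommended
```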

Searching the Web by Meaning: A Case Study of Lithuanian News Websites

The daily growth of unstructured textual information created on the Web raises significant challenges for serving user information needs. At the same time, evolving Semantic Web technology has influenced a wide body of research on meaning-based text processing and information retrieval methods that go beyond classical keyword-driven approaches. However, most of the work in the field targets English as the primary language of interest. In this paper we therefore present a first attempt to process unstructured Lithuanian text at the level of ontological semantics. We introduce an ontology-based semantic search framework capable of answering structured questions posed in natural Lithuanian, discuss its language-dependent design decisions, and draw some observations from the results of a recent case study carried out over a domain-specific corpus of Lithuanian web news.

Tomas Vileiniškis, Algirdas Šukys, Rita Butkienė

Piecewise Factorization for Time Series Classification

In the research field of time series analysis and mining, the nearest neighbor classifier (1NN) based on the dynamic time warping (DTW) distance is well known for its high accuracy. However, the high computational complexity of DTW makes the classifier expensive to run. An effective solution is to compute DTW in a piecewise approximation space (PA-DTW). Most existing piecewise approximation methods, however, must predefine the segment length and focus on simple statistical features, which limits the precision of PA-DTW. To address this problem, we propose a novel piecewise factorization model (PCHA) for time series, with an adaptive segmentation method, in which the Chebyshev coefficients of subsequences are extracted as features. Based on PCHA, we propose the corresponding PA-DTW measure, named ChebyDTW, for the 1NN classifier; it captures the fluctuation information of time series for the similarity measure. A comprehensive experimental evaluation shows that ChebyDTW supports both accurate and fast 1NN classification.

Qinglin Cai, Ling Chen, Jianling Sun
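
The two ingredients of ChebyDTW can be illustrated in a few lines: Chebyshev coefficients per segment as features, then DTW over the coefficient vectors. This sketch uses fixed-length segments for brevity; the paper's adaptive segmentation is not reproduced.

```python
# Sketch: approximate each segment of a series by Chebyshev coefficients,
# then run DTW over the per-segment coefficient vectors (the PA-DTW idea).
import numpy as np
from numpy.polynomial import chebyshev

def cheby_features(series, seg_len=16, degree=3):
    x = np.linspace(-1, 1, seg_len)
    segs = [series[i:i + seg_len] for i in range(0, len(series) - seg_len + 1, seg_len)]
    return np.array([chebyshev.chebfit(x, s, degree) for s in segs])

def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])   # distance in coefficient space
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

s1, s2 = np.sin(np.linspace(0, 6, 128)), np.sin(np.linspace(0.5, 6.5, 128))
print(dtw(cheby_features(s1), cheby_features(s2)))
```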

Experiences in WordNet Visualization with Labeled Graph Databases

Data and information visualization is becoming strategic for the exploration and explanation of large data sets, due to the great impact that data have from a human perspective. Visualization is the phase of the data life cycle closest to the users, so an effective, efficient, and impressive representation of the analyzed data may prove as important as the analytic process itself. In this paper, we present our experiences in importing, querying, and visualizing graph databases, taking one of the most widespread lexical databases as a case study: WordNet. After defining a meta-model to translate WordNet entities into nodes and arcs inside a labeled oriented graph, we define some criteria to simplify the large-scale visualization of the WordNet graph, providing examples and the considerations that arise. Finally, we suggest a new visualization strategy for WordNet synonym rings that exploits the features and concepts behind tag clouds.

Enrico Giacinto Caldarola, Antonio Picariello, Antonio M. Rinaldi
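
A minimal sketch of the kind of meta-model the abstract describes, mapping a WordNet fragment onto a labeled graph with NLTK; the relation labels `MEMBER_OF` and `IS_A` are illustrative names, not necessarily the paper's.

```python
# Sketch: translate a fragment of WordNet into a labeled graph
# (synsets and lemmas as nodes, relations as typed edges), emitted as edge
# tuples that a property-graph store could ingest.
# First run: import nltk; nltk.download('wordnet')
from nltk.corpus import wordnet as wn

def wordnet_edges(word):
    edges = []
    for syn in wn.synsets(word):
        for lemma in syn.lemmas():
            edges.append((lemma.name(), "MEMBER_OF", syn.name()))   # lemma -> synset
        for hyper in syn.hypernyms():
            edges.append((syn.name(), "IS_A", hyper.name()))        # synset -> hypernym
    return edges

for src, label, dst in wordnet_edges("dog")[:8]:
    print(f"({src}) -[{label}]-> ({dst})")
```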

Open-Source Search Engines in the Cloud

The key to the success of analyzing the petabytes of textual data available at our fingertips is to do it in the cloud. Today, several extensions exist that bring Lucene, the open-source de facto standard among textual search engine libraries, to the cloud. These extensions follow three main directions: implementing scalable distribution of the indices over the file system, storing them in NoSQL databases, and porting them to inherently distributed ecosystems. In this work, we evaluate the existing efforts in terms of distribution, high availability, fault tolerance, manageability, and high performance. We are committed to using common open-source technology only, so we restrict our evaluation to publicly available open-source libraries and, where necessary, fix their bugs. For each system under investigation, we build a benchmarking system by indexing the whole Wikipedia content and submitting hundreds of simultaneous search requests. By measuring the performance of both indexing and searching operations, we report on the most favorable constellation of open-source libraries that can be installed in the cloud.

Khaled Nagi

Cross-Domain Sentiment Classification via Polarity-Driven State Transitions in a Markov Model

Nowadays, understanding people's opinions is the way to success, whatever the goal. Sentiment classification automates this task, assigning a positive, negative, or neutral polarity to free text about services, products, TV programs, and so on. Learning accurate models requires a considerable effort from human experts, who have to properly label text data. To reduce this burden, cross-domain approaches are advisable in real cases, and transfer learning between source and target domains is usually required due to language heterogeneity. This paper introduces some variants of our previous work [1], where both transfer learning and sentiment classification are performed by means of a Markov model. While splitting documents into sentences does not perform well on common benchmarks, using polarity-bearing terms to drive the classification process shows encouraging results, given that our Markov model only considers single terms without further context information.

Giacomo Domeniconi, Gianluca Moro, Andrea Pagliarani, Roberto Pasolini
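
As a loose illustration (not the authors' exact model), one way to let a Markov model over single terms perform sentiment classification is to learn one term-transition chain per polarity class and pick the class whose chain best explains a new document:

```python
# Loose sketch: class-conditional Markov chains over terms; a document is
# labeled with the class whose chain assigns it the higher log-likelihood.
from collections import defaultdict
import math

def train_chain(docs):
    trans = defaultdict(lambda: defaultdict(int))
    for doc in docs:
        for a, b in zip(doc, doc[1:]):
            trans[a][b] += 1            # count term-to-term transitions
    return trans

def log_likelihood(doc, trans, vocab_size):
    ll = 0.0
    for a, b in zip(doc, doc[1:]):
        # Laplace smoothing so unseen transitions do not zero out the score
        ll += math.log((trans[a][b] + 1) / (sum(trans[a].values()) + vocab_size))
    return ll

pos = [["great", "movie"], ["love", "this", "movie"]]     # toy training data
neg = [["poor", "plot"], ["awful", "boring", "plot"]]
vocab = {w for d in pos + neg for w in d}
chains = {"positive": train_chain(pos), "negative": train_chain(neg)}
doc = ["great", "movie"]
print(max(chains, key=lambda c: log_likelihood(doc, chains[c], len(vocab))))
```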

A Word Prediction Methodology Based on Posgrams

This work introduces a two-step methodology for predicting missing words in incomplete sentences. In the first step, the number of candidate words is restricted to those matching the predicted part of speech; to this aim, a novel algorithm based on "posgrams" analysis is also proposed. In the second step, a word prediction algorithm is applied to the reduced word set. The work quantifies the advantages, in terms of accuracy and execution time, of predicting a word's part of speech before predicting the word itself. The methodology can be applied in several tasks, such as Text Autocompletion, Speech Recognition, and Optical Text Recognition. A sketch of the two steps follows the author line below.

Carmelo Spiccia, Agnese Augello, Giovanni Pilato
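
In this sketch of the two-step idea, toy posgram counts first predict the missing word's part of speech, and only words carrying that POS are then scored; all counts and the tiny lexicon are invented for illustration.

```python
# Sketch of the two-step methodology: (1) predict the missing word's part of
# speech from "posgram" (POS n-gram) statistics, (2) score only candidates
# carrying that POS. Counts below are toy values, not learned ones.
POSGRAMS = {("DET", "ADJ"): {"NOUN": 90, "VERB": 2},    # next-POS counts per POS context
            ("DET", "NOUN"): {"VERB": 60, "NOUN": 5}}
LEXICON = {"cat": ("NOUN", 120), "dog": ("NOUN", 100), "runs": ("VERB", 80)}

def predict_word(prev_pos):
    pos_counts = POSGRAMS[prev_pos]
    best_pos = max(pos_counts, key=pos_counts.get)             # step 1: predict POS
    candidates = {w: f for w, (p, f) in LEXICON.items() if p == best_pos}
    return max(candidates, key=candidates.get)                 # step 2: predict word

print(predict_word(("DET", "ADJ")))  # -> "cat"
```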

Is Bitcoin’s Market Predictable? Analysis of Web Search and Social Media

In recent years, the Internet has completely changed the way real life works. In particular, we have witnessed the emergence of Web 2.0 services that are widely used as communication media. On the one hand, services such as blogs, tweets, forums, chats, and email have gained wide popularity. On the other hand, due to the huge amount of available information, searching has become dominant in the use of the Internet. Millions of users interact daily with search engines, producing valuable sources of interesting data regarding several aspects of the world. Bitcoin, a decentralized electronic currency, represents a radical change in financial systems, attracting a large number of users and a lot of media attention. In this work we studied whether Bitcoin's trading volume is related to the web search and social volumes about Bitcoin. We also investigated whether public sentiment, expressed in large-scale collections of daily Twitter posts, can be used to predict the Bitcoin market. We achieved significant cross-correlation outcomes, demonstrating the power of search and social volumes to anticipate trading volumes of the Bitcoin currency.

Martina Matta, Ilaria Lunesu, Michele Marchesi
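
The core measurement here is a lagged cross-correlation between a search-volume series and a trading-volume series. A sketch with synthetic data standing in for the daily series the study analyzes:

```python
# Sketch: lagged cross-correlation between a web-search-volume series and a
# trading-volume series; synthetic arrays stand in for the daily data.
import numpy as np

def lagged_corr(x, y, lag):
    # correlate x (leading) with y shifted `lag` days into the future
    if lag > 0:
        x, y = x[:-lag], y[lag:]
    return np.corrcoef(x, y)[0, 1]

rng = np.random.default_rng(0)
search = rng.normal(size=200).cumsum()
trading = np.roll(search, 3) + rng.normal(scale=0.5, size=200)  # volume trails search by ~3 days

for lag in range(0, 6):
    print(lag, round(lagged_corr(search, trading, lag), 3))     # peak near lag 3
```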

Knowledge Engineering and Ontology Development

Frontmatter

A Visual Similarity Metric for Ontology Alignment

Ontology alignment is the process whereby two different ontologies that usually describe similar domains are 'aligned', i.e., a set of correspondences between their entities, regarding semantic equivalence, is determined. Several methods have been proposed in the literature to identify these correspondences, most commonly employing string-, lexical-, structure-, and semantic-based features, for which several approaches have been developed. What has not been investigated, however, is the use of visual features for determining entity similarity. Nowadays, the existence of several resources that map lexical concepts onto images allows visual features to be exploited for this purpose. In this paper, a novel method defining a visual similarity metric for ontology matching is presented. Each ontological entity is associated with sets of images, and state-of-the-art visual feature extraction, clustering, and indexing are employed to compute the visual similarity between entities. An adaptation of a WordNet-based matching algorithm to exploit the visual similarity is also proposed. The proposed visual similarity approach is compared with standard metrics and demonstrates promising results.

Charalampos Doulaverakis, Stefanos Vrochidis, Ioannis Kompatsiaris
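
One simple way to realize a visual similarity between entities associated with image sets is to pool per-image descriptors into a centroid per entity and compare centroids. This is a sketch under stated assumptions: random vectors stand in for real extracted features, and centroid pooling is an illustrative choice, not necessarily the paper's aggregation.

```python
# Sketch: visual similarity between two ontology entities, each associated
# with a set of images, via cosine similarity of pooled image descriptors.
import numpy as np

def entity_centroid(image_features):
    return np.mean(image_features, axis=0)       # pool all descriptors of an entity

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(1)
imgs_a = rng.normal(size=(12, 128))   # 12 images, 128-d descriptors (placeholder)
imgs_b = rng.normal(size=(9, 128))
print(cosine(entity_centroid(imgs_a), entity_centroid(imgs_b)))
```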

Model-Intersection Problems and Their Solution Schema Based on Equivalent Transformation

Model-intersection (MI) problems are a very large class of logical problems that includes many useful problem classes, such as proof problems on first-order logic and query-answering (QA) problems in pure Prolog and deductive databases. We propose a general schema for solving MI problems by equivalent transformation (ET), where problems are solved by repeated simplification, and we show the correctness of this solution schema. The general schema is then specialized to formalize solution schemas for QA problems and proof problems. The notion of a target mapping is introduced for the generation of ET rules, allowing many possible computation procedures, for instance procedures based on resolution and unfolding. This theory is useful for inventing solutions for many classes of logical problems.

Kiyoshi Akama, Ekawit Nantajeewarawat

Representing and Managing Unbalanced Multi-sets

In knowledge-based systems, experts should model human knowledge as faithfully as possible to reality, which makes it essential to account for knowledge imperfection. Several approaches have dealt with this kind of data, the best known being fuzzy logic and multi-valued logic. The latter proposes a linguistic modeling that uses linguistic terms uniformly distributed on a scale. In some cases, however, we need to assess qualitative aspects by means of variables using linguistic term sets that are not uniformly distributed. We have noticed in the literature that many researchers have dealt with such term sets in the context of fuzzy logic, but this is not the case for multi-valued logic. In our work, we therefore aim to establish a methodology for representing and managing this kind of data in the context of multi-valued logic. Two aspects are treated: the first concerns the representation of terms within an unbalanced multi-set; the second deals with the use of symbolic modifiers on such imperfect knowledge.

Nouha Chaoued, Amel Borgi, Anne Laurent

Ranking with Ties of OWL Ontology Reasoners Based on Learned Performances

Over the last decade, several ontology reasoners have been proposed to overcome the computational complexity of inference tasks on expressive ontology languages such as OWL 2 DL. Nevertheless, it is well accepted that no single reasoner outperforms all others on all input ontologies. Thus, deciding on the most suitable reasoner for an ontology-based application is still a time- and effort-consuming task. In this paper, we propose a new system that offers guidance to users choosing among ontology reasoners. First, we automatically predict a single reasoner's empirical performances, in particular its robustness and efficiency, over any given ontology. Then, we rank a set of candidate reasoners in order of preference by taking into account their predicted performances. We conducted extensive experiments covering over 2500 well-selected real-world ontologies and six of the best-performing state-of-the-art reasoners. Our initial prediction and ranking results are encouraging and witness the potential benefits of our approach.

Nourhène Alaya, Sadok Ben Yahia, Myriam Lamolle
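
A sketch of the prediction-then-ranking pipeline under stated assumptions: per-reasoner regressors learned from ontology features, and a ranking that groups reasoners with equal predicted scores into tiers. The features, runtimes, and the choice of random forests are placeholders, not the paper's setup.

```python
# Sketch: predict each reasoner's performance from ontology features, then
# rank reasoners by prediction, keeping ties as shared tiers.
from itertools import groupby
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                      # ontology metrics (size, depth, ...)
models = {}
for reasoner in ["HermiT", "Pellet", "FaCT++"]:
    y = rng.normal(size=200)                       # observed runtimes (placeholder)
    models[reasoner] = RandomForestRegressor(n_estimators=50).fit(X, y)

onto = rng.normal(size=(1, 5))                     # features of a new ontology
pred = {r: round(float(m.predict(onto)[0]), 1) for r, m in models.items()}
tiers = [list(g) for _, g in groupby(sorted(pred, key=pred.get), key=pred.get)]
print(pred)
print(tiers)   # reasoners with equal predicted scores share a tier (a tie)
```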

Refinement by Filtering Translation Candidates and Similarity Based Approach to Expand Emotion Tagged Corpus

Research on emotion estimation from text mostly uses machine learning methods. Because machine learning requires a large amount of example corpora, how to acquire high-quality training data has been discussed as one of its major problems. Existing language resources include emotion corpora, but they are not usable if the language differs, and constructing a bilingual corpus manually is financially difficult. We propose a method to convert training data into a different language using an existing Japanese-English parallel emotion corpus. With a bilingual dictionary, translation candidates are extracted for every word of each sentence in the corpus; the extracted candidates are then narrowed down to a set of words that highly contribute to emotion estimation, and this set is used as training data. Moreover, when a large amount of unannotated linguistic resources can be obtained in one language, their words can be expanded based on distributed word representations; by using these representations, we can improve accuracy without decreasing the information content of a sentence. We thus also attempted to expand the corpus without translating the target linguistic resource. The results of an evaluation experiment using machine learning algorithms showed the effectiveness of the emotion corpus expanded from the original language's unannotated sentences and their similar sentences.

Kazuyuki Matsumoto, Fuji Ren, Minoru Yoshida, Kenji Kita

Employing Knowledge Artifacts to Develop Time-Depending Expert Systems

The integration of heterogeneous information gathered from wearable devices, like watches, bracelets, and smartphones, is becoming a very important research trend. Expert systems technology should be able to face this interesting challenge by promoting the development of innovative frameworks runnable on wearable and mobile operating systems. In particular, users and domain experts could interact directly, minimizing the role of the knowledge engineer and enabling real-time updating of knowledge bases when necessary. This paper presents the KAFKA approach to this topic, based on an implementation of the Knowledge Artifact conceptual model on Android OS devices.

Fabio Sartori, Riccardo Melen

A Property Grammar-Based Method to Enrich the Arabic Treebank ATB

We present a method based on the formalism of Property Grammars to enrich the Arabic treebank ATB with syntactic constraints (so-called properties). The Property Grammar formalism is an effective constraint-based approach that directly specifies constraints on information categories, which facilitates the enrichment process. The latter proceeds in three phases: formalizing the problem, inducing a Property Grammar from the ATB, and regenerating the treebank with a new syntactic, property-based representation. The enrichment of the ATB can make it more useful for many NLP applications, such as ambiguity resolution; it also allows the acquisition of new linguistic resources and eases probabilistic parsing. The enrichment process is fully automatic and independent of the language and the source corpus formalism, which motivates its reuse. Our experiments yielded good, encouraging results and various properties of different types.

Raja Bensalem Bahloul, Kais Haddar, Philippe Blache

Driving Innovation in Youth Policies with Open Data

In December 2007, thirty activists held a meeting in California to define the concept of open public data. For the first time, eight Open Government Data (OGD) principles were settled: OGD should be complete, primary (reporting data at a high level of granularity), timely, accessible, machine-processable, non-discriminatory, non-proprietary, and license-free. Since the inception of the Open Data philosophy there has been a constant increase in the information released, improving the communication channel between public administrations and their citizens. Open data offers governments, companies, and citizens information to make better decisions. We claim that public administrations, which are the main producers and among the consumers of Open Data, can effectively extract important information by integrating their own data with open data sources. This paper reports the activities carried out during a research project on Open Data for youth policies. The project explored the situation of young people in the municipalities and provinces of the Emilia Romagna region (Italy), in particular by examining data on population, education, and work. We identified interesting data sources related to youth policies, both from the open data community and from the private repositories of local governments. The selected sources were integrated, and the result of the integration is exposed through a navigator tool. Finally, we published new information on the web as Linked Open Data. Since the process applied and the tools used are generic, we trust this paper can serve as an example and a guide for new projects that aim to create new knowledge through Open Data.

Domenico Beneventano, Sonia Bergamaschi, Luca Gagliardelli, Laura Po

OntologyLine: A New Framework for Learning Non-taxonomic Relations of Domain Ontology

Domain ontology learning has been introduced as a technology that aims at reducing the knowledge-acquisition bottleneck in the construction of domain ontologies. However, the discovery and labelling of non-taxonomic relations has been identified as one of the most difficult problems in this learning process. In this paper, we propose OntologyLine, a new system for discovering non-taxonomic relations and building domain ontologies from scratch. The proposed system adapts Open Information Extraction algorithms to extract and label relations between domain concepts. OntologyLine was tested in two different domains: finance and cancer. It was evaluated against a gold-standard ontology and compared to a state-of-the-art ontology learning algorithm. The experimental results show that OntologyLine is more effective at acquiring non-taxonomic relations and gives better results in terms of precision, recall, and F-measure.

Omar El idrissi esserhrouchni, Bouchra Frikh, Brahim Ouhbi

Software Is Part Poetry, Part Prose

Software is part Poetry, part Prose, but it has much more in common with both forms of natural language than usually admitted: software concepts, rather than being defined by syntax-oriented computer programming languages, are characterized by the semantics of natural language. This paper exploits these similarities in both directions. In one direction, the software perspective is relevant to the analysis of natural language forms, such as poems; in the other, the paper uses properties of both Poetry and Prose to facilitate a deeper understanding of the highest-level software abstractions. Running software or poetry leads to an understanding of the meaning conveyed by the conceptual structure; refactoring embeds the understanding obtained by running software or poetry into their modified conceptual structure.

Iaakov Exman, Alessio Plebe

Automatic Pattern Generator of Natural Language Text Applied in Public Health

A huge number of scientific articles is now available, covering a wide variety of topics such as medicine, technology, economics, and finance. Scientific papers present results of scientific interest together with the evaluation and interpretation of relevant arguments. Because these papers are produced at a high frequency, it is feasible to analyze how people write in a given domain. Within the discipline of natural language processing there are different approaches to analyzing large text corpora. Identifying patterns with semantic elements in a text lets us classify and examine the corpus, facilitating the interpretation and management of information by computers. A semiautomatic or automatic way to generate natural language patterns is, at present, either unavailable or quite complicated. The paper shows how a tool developed for this research is tested in a public health domain. The results obtained, by means of the tool and aided by graphs, provide the groups of words that are used (to determine whether they come from a specific vocabulary), the most common grammatical categories, the most repeated words in the domain, the patterns found, and the frequency of those patterns. The selected public health domain contains 800 papers on different topics in genetics, including mutations, genetic deafness, DNA, trinucleotides, and suppressor genes, among others. An ontology of public health has been used to provide the basis of the study.

Anabel Fraga, Juan Llorens, Eugenio Parra, Valentín Moreno

Knowledge Management and Information Sharing

Frontmatter

Active Integrity Constraints: From Theory to Implementation

The problem of database consistency relative to a set of integrity constraints has been extensively studied since the 1980s, and is still recognized as one of the most important and complex in the field. In recent years, with the proliferation of knowledge repositories (not only databases) in practical applications, there has also been an effort to develop implementations of consistency maintenance algorithms that have a solid theoretical basis. The framework of active integrity constraints (AICs) is one example of such an effort, providing theoretical grounds for rule-based algorithms that ensure database consistency. An AIC consists of an integrity constraint together with a specification of actions that may be taken to repair a database that does not satisfy it. Both denotational and operational semantics have been proposed for AICs. In this paper, we describe repAIrC, a prototype implementation of the previously proposed algorithms targeting SQL databases, i.e., the most prevalent type of database. Using repAIrC, we can both validate an SQL database with respect to a given set of AICs and compute possible repairs in case the database is inconsistent; the tool works with the different kinds of repairs that have been considered and achieves optimal asymptotic complexity in their computation. It also implements strategies for parallelizing the search for repairs, which in many cases can make intractable problems easily solvable.

Luís Cruz-Filipe, Michael Franz, Artavazd Hakhverdyan, Marta Ludovico, Isabel Nunes, Peter Schneider-Kamp
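
A toy illustration of what an AIC pairs together, over an in-memory stand-in for a database: a constraint body that detects violations and a repair action that fixes them. repAIrC itself targets real SQL databases; this sketch only mirrors the concept.

```python
# Sketch of an active integrity constraint: every employee's department must
# exist in `dept`; the associated repair action inserts the missing department.
db = {"emp": {("ann", "sales"), ("bob", "ghost")},
      "dept": {("sales",)}}

def violations(db):
    return [(name, d) for name, d in db["emp"] if (d,) not in db["dept"]]

def repair(db):
    for name, d in violations(db):
        # one allowed repair action: insert the missing department.
        # An alternative action would be deleting the employee tuple.
        db["dept"].add((d,))
    return db

print(violations(db))       # [('bob', 'ghost')]
repair(db)
print(violations(db))       # []
```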

Predictive Analytics in Business Intelligence Systems via Gaussian Processes for Regression

A Business Intelligence (BI) system employs tools from several areas of knowledge to deliver information that supports the decision-making process. In the present work, we aim to enhance the predictive stage of the BI system maintained by the Brazilian Federal Patrimony Department. The proposal is to use Gaussian Processes for Regression (GPR) to model the intrinsic characteristics of the tax-collection financial time series kept by this BI system, improving its error metrics. GPR natively returns a full statistical description of the estimated variable, which can be treated as a measure of confidence and also used as a trigger to classify trusted and untrusted data. In our approach, a two-dimensional dataset reshaping model takes into account the multidimensional structure of the input data. The resulting algorithm, with GPR at its core, outperforms classical predictive schemes in this scenario, such as financial indicators and artificial neural networks.

Bruno H. A. Pilon, Juan J. Murillo-Fuentes, João Paulo C. L. da Costa, Rafael T. de Sousa Júnior, Antonio M. R. Serrano
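
A minimal GPR sketch in the spirit of the approach, using scikit-learn on a synthetic monthly series; the predictive standard deviation plays the role of the confidence measure the abstract mentions. The kernel choice and the data are illustrative assumptions.

```python
# Sketch: Gaussian Process regression on a toy monthly series; the predictive
# standard deviation provides the confidence band.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

t = np.arange(60).reshape(-1, 1)                         # 60 months of history
trend = 100 + 0.5 * t.ravel() + 10 * np.sin(t.ravel() / 6)
y = trend + np.random.default_rng(3).normal(scale=2, size=60)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=10.0) + WhiteKernel()).fit(t, y)
t_new = np.arange(60, 66).reshape(-1, 1)                 # forecast 6 months ahead
mean, std = gpr.predict(t_new, return_std=True)
for m, s in zip(mean, std):
    print(f"forecast {m:7.2f}  +/- {1.96 * s:5.2f}")     # 95% confidence band
```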

Semantically Enhancing Recommender Systems

As the amount of content and the number of users in social relationships on the Internet keep growing, resource sharing and access-policy management are difficult, time-consuming, and error-prone. Cross-domain recommendation of private or protected resources managed and secured by each domain's specific access rules is impracticable due to private security policies and poor sharing mechanisms. This work focuses on exploiting resource content, user preferences, users' social networks, and semantic information to cross-relate different resources through their meta-information, using recommendation techniques that combine collaborative filtering with semantic annotations by generating associations between resources. The semantic similarities established between resources feed a hybrid recommendation engine that interprets user and resource semantic information. The recommendation engine allows the promotion and discovery of unknown-unknown resources to users who may not even know that those resources exist, thus providing a means to solve the cross-domain recommendation of private or protected resources.

Nuno Bettencourt, Nuno Silva, João Barroso

Analyzing Social Learning Management Systems for Educational Environments

A Social Learning Management System (Social LMS) is an instantiation of an LMS with the inclusion of, and a strong emphasis on, the social aspects mediated via ICT. The amount of data produced within the social educational network (since all students are potential creators of material) outnumbers the information of a normal LMS and calls for novel analysis methods. We first introduce the architecture of the social learning analytics required to manage the knowledge of a Social LMS. We then adapt the Kirkpatrick-Phillips model to scholastic environments in order to provide assessment and control tools for a Social LMS. This requires defining new metrics that clarify aspects related to the single student but also provide global views of the network as a whole. To manage and visualize these metrics, we suggest modular dashboards that accommodate the different roles present in a learning institution.

Paolo Avogadro, Silvia Calegari, Matteo Dominoni

Extending the BPMN Specification to Support Cost-Centric Simulations of Business Processes

Business Process Simulation is considered by many to be a very useful technique for analyzing the impact of important choices that designers make at process design or optimization time, right before processes are actually implemented and deployed. For the simulation to provide accurate and reliable results, process models need to take into account not just the workflow dynamics, but also many other factors that may impact the overall performance of process execution and that form what we refer to as the Context of a process. In this paper we formalize a new Business Process Model that encompasses all the features of a business process in terms of workflow and execution Context, respectively. The model allows designers to build a cost-centric perspective of a business process. We also propose an extension to the Business Process Model and Notation (BPMN) specification aimed at enhancing the power of BPMN to also model resources and the process execution environment. The paper provides details of the implementation of a novel Business Process Simulator capable of simulating the newly introduced process model, and a case study is finally discussed to prove the overall approach's viability.

Vincenzo Cartelli, Giuseppe Di Modica, Orazio Tomarchio

Supporting Semantic Annotation in Collaborative Workspaces with Knowledge Based on Linked Open Data

The management of shared resources on the Web has become one of the most pervasive activities in everyday life, but the heterogeneity of tools and resource types (documents, emails, Web sites, etc.) often causes users to get lost and to spend a lot of time organizing resources and tasks. Structured semantic annotation can provide smart support for collaborative resource organization but, as demonstrated by our user studies, users often have to deal with ambiguous or unknown expressions suggested by the system or by other users. It is therefore important to provide them with an "explanation" of unclear annotations, which can be based on formally encoded domain knowledge retrieved from the LOD Cloud. We chose commonsense geospatial knowledge to implement a proof-of-concept prototype providing such "explanations". After a brief presentation of the background, represented by the SemT++ project, we describe the approach and present a user evaluation of it.

Anna Goy, Diego Magro, Giovanna Petrone, Marco Rovera, Marino Segnan
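
As a flavor of how an "explanation" might be retrieved from the LOD Cloud, here is a sketch that fetches a DBpedia abstract for a geographic concept via SPARQL; the endpoint, resource, and query are illustrative choices, not SemT++ internals.

```python
# Sketch: retrieve a short textual "explanation" for a geospatial concept
# from the LOD Cloud (DBpedia). Requires the SPARQLWrapper package.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    SELECT ?abstract WHERE {
      <http://dbpedia.org/resource/Lisbon> dbo:abstract ?abstract .
      FILTER (lang(?abstract) = "en")
    }
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["abstract"]["value"][:200], "...")
```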

Backmatter
