
2010 | Book

Natural Language Processing and Information Systems

15th International Conference on Applications of Natural Language to Information Systems, NLDB 2010, Cardiff, UK, June 23-25, 2010. Proceedings

Edited by: Christina J. Hopfe, Yacine Rezgui, Elisabeth Métais, Alun Preece, Haijiang Li

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The 15th International Conference on Applications of Natural Language to Information Systems (NLDB 2010) took place during June 23–25 in Cardiff (UK). Since the first edition in 1995, the NLDB conference has aimed at bringing together researchers, people working in industry and potential users interested in various applications of natural language in the database and information system area. However, in order to reflect the growing importance of accessing information from a diverse collection of sources (Web, databases, sensors, cloud) in an equally wide range of contexts (including mobile and tethered), the theme of the 15th International Conference on Applications of Natural Language to Information Systems 2010 was "Communicating with Anything, Anywhere in Natural Language." Natural languages and databases are core components in the development of information systems. Natural language processing (NLP) techniques may substantially enhance most phases of the information system lifecycle, starting with requirements analysis, specification and validation, and going up to conflict resolution, result processing and presentation. Furthermore, natural language-based query languages and user interfaces facilitate access to information for all and allow for new paradigms in the usage of computerized services. Hot topics such as information retrieval (IR), software engineering applications, hidden Markov models, natural language interfaces and semantic networks and graphs imply a complete fusion of databases, IR and NLP techniques.

Table of Contents

Frontmatter

Information Retrieval

An Approach for Adding Noise-Tolerance to Restricted-Domain Information Retrieval

The corpora of Information Retrieval (IR) systems are formed by text documents that often come from rather heterogeneous sources, such as Web sites or OCR (Optical Character Recognition) systems. Faithfully converting these sources into flat text files is not a trivial task, since noise can easily be introduced due to spelling or typesetting errors. Importantly, if the size of the corpus is large enough, then redundancy helps in controlling the effects of noise, because the same text often appears with and without noise throughout the corpus. Conversely, noise becomes a serious problem in restricted-domain IR, where the corpus is usually small and has little or no redundancy. Therefore, noise hinders the retrieval task in restricted domains, and erroneous results are likely to be obtained. In order to overcome this situation, this paper presents an approach for using restricted-domain resources, such as Knowledge Organization Systems (KOS), to add noise-tolerance to existing IR systems. To show the suitability of our approach in one real restricted-domain case study, a set of experiments has been carried out for the agricultural domain.

Katia Vila, Josval Díaz, Antonio Fernández, Antonio Ferrández
Measuring Tree Similarity for Natural Language Processing Based Information Retrieval

Natural language processing based information retrieval (NIR) aims to go beyond conventional keyword-based (bag-of-words) information retrieval (KIR) by considering syntactic and even semantic information in documents. NIR is a conceptually appealing approach to IR, but is hard due to the need to measure the distance/similarity between structures. We aim to move beyond the state of the art in measuring structure similarity for NIR.

In this paper, a novel tree similarity measurement, dtwAcs, is proposed in terms of a novel interpretation of trees as multi-dimensional sequences. We calculate the distance between trees by computing the distance between multi-dimensional sequences, which is done by integrating the all common subsequences into the dynamic time warping method. Experimental results show that dtwAcs outperforms the state of the art.

Zhiwei Lin, Hui Wang, Sally McClean
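The building block of such a measure, dynamic time warping (DTW) over multi-dimensional sequences, can be sketched as follows. This is a minimal illustration of plain DTW only, not of dtwAcs itself (which additionally integrates all common subsequences); the toy sequences and the `euclidean` element distance are inventions of the example.

```python
import math

def dtw(a, b, dist):
    """Dynamic time warping distance between sequences a and b,
    given a pairwise element distance `dist`."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = dist(a[i - 1], b[j - 1])
            # extend the cheapest of: insertion, deletion, match
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def euclidean(x, y):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(x, y)))

# Toy multi-dimensional sequences (e.g. per-node encodings of two trees)
s1 = [(0.0, 1.0), (1.0, 1.0), (2.0, 0.0)]
s2 = [(0.0, 1.0), (2.0, 0.0)]
print(dtw(s1, s2, euclidean))
```

Identical sequences get distance zero; the warping allows sequences of different lengths to be aligned, which is what makes the measure usable on linearized trees.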
Sense-Based Biomedical Indexing and Retrieval

This paper tackles the problem of term ambiguity, especially in biomedical literature. We propose and evaluate two methods of Word Sense Disambiguation (WSD) for biomedical terms and integrate them into a sense-based document indexing and retrieval framework. Ambiguous biomedical terms in documents and queries are disambiguated using the Medical Subject Headings (MeSH) thesaurus and semantically indexed with their associated correct sense. The experimental evaluation carried out on the TREC9-FT 2000 collection shows that our approach to WSD and sense-based indexing and retrieval outperforms the baseline.

Duy Dinh, Lynda Tamine

Natural Language Processing

Semantic Content Access Using Domain-Independent NLP Ontologies

We present a lightweight, user-centred approach for document navigation and analysis that is based on an ontology of text mining results. This allows us to bring the result of existing text mining pipelines directly to end users. Our approach is domain-independent and relies on existing NLP analysis tasks such as automatic multi-document summarization, clustering, question-answering, and opinion mining. Users can interactively trigger semantic processing services for tasks such as analyzing product reviews, daily news, or other document sets.

René Witte, Ralf Krestel
Extracting Meronymy Relationships from Domain-Specific, Textual Corporate Databases

Various techniques exist for learning meronymy relationships from open-domain corpora. However, extracting meronymy relationships from domain-specific, textual corporate databases has been overlooked, despite numerous application opportunities, particularly in domains like product development and/or customer service. These domains also pose new scientific challenges, such as the absence of elaborate knowledge resources, which compromises the performance of supervised meronymy-learning algorithms. Furthermore, the domain-specific terminology of corporate texts makes it difficult to select appropriate seeds for minimally supervised meronymy-learning algorithms. To address these issues, we develop and present a principled approach to extract accurate meronymy relationships from the textual databases of product development and/or customer service organizations by leveraging reliable meronymy lexico-syntactic patterns harvested from an open-domain corpus. Evaluations on real-life corporate databases indicate that our technique extracts precise meronymy relationships that provide valuable operational insights into the causes of product failures and customer dissatisfaction. Our results also reveal that the types of some of the domain-specific meronymy relationships extracted from the corporate data cannot be conclusively and unambiguously classified under well-known taxonomies of relationships.

Ashwin Ittoo, Gosse Bouma, Laura Maruster, Hans Wortmann
Automatic Word Sense Disambiguation Using Cooccurrence and Hierarchical Information

We review in detail here a polished version of the systems with which we participated in the Senseval-2 competition English tasks (all words and lexical sample). It is based on a combination of selectional preference measured over a large corpus and hierarchical information taken from WordNet, as well as some additional heuristics. We use that information to expand the sense glosses of the senses in WordNet and compare the similarity between the context vectors and the word sense vectors in a way similar to that used by Yarowsky and Schuetze. A supervised extension of the system is also discussed. We provide new and previously unpublished evaluation over the SemCor collection, which is two orders of magnitude larger than the Senseval-2 collections, as well as comparison with baselines. Our systems scored first among unsupervised systems in both tasks. We note that the method is very sensitive to the quality of the characterizations of word senses, glosses being much better than training examples.

David Fernandez-Amoros, Ruben Heradio Gil, Jose Antonio Cerrada Somolinos, Carlos Cerrada Somolinos

Software Engineering Applications

Automatic Quality Assessment of Source Code Comments: The JavadocMiner

An important software engineering artefact used by developers and maintainers to assist in software comprehension and maintenance is source code documentation. It provides insights that help software engineers to effectively perform their tasks, and therefore ensuring the quality of the documentation is extremely important. Inline documentation is at the forefront of explaining a programmer's original intentions for a given implementation. Since this documentation is written in natural language, ensuring its quality needs to be performed manually. In this paper, we present an effective and automated approach for assessing the quality of inline documentation using a set of heuristics, targeting both the quality of language and the consistency between source code and its comments. We apply our tool to the different modules of two open source applications (ArgoUML and Eclipse), and correlate the results returned by the analysis with bug defects reported for the individual modules in order to determine connections between documentation and code quality.

Ninus Khamis, René Witte, Juergen Rilling
Towards Approximating COSMIC Functional Size from User Requirements in Agile Development Processes Using Text Mining

Measurement of software size from user requirements is crucial for the estimation of development time and effort. COSMIC, an ISO/IEC international standard for functional size measurement, provides an objective method of measuring the functional size of software from user requirements. COSMIC requires the user requirements to be written at a level of granularity where interactions between the internal and external environments of the system are visible to the human measurer, in a form similar to use case descriptions. On the other hand, requirements during an agile software development iteration are written in a less formal way than use case descriptions, often in the form of user stories, in keeping with the goal of delivering a planned release as quickly as possible. Therefore, size measurement in agile processes uses methods (e.g. story points, smart estimation) that strictly depend on the subjective judgment of experts, and avoids objective measurement methods like COSMIC. In this paper, we present an innovative concept showing that, using a supervised text mining approach, COSMIC functional size can be automatically approximated from informally written textual requirements, demonstrating its applicability in popular agile software development processes such as Scrum.

Ishrar Hussain, Leila Kosseim, Olga Ormandjieva
Semantic Enriching of Natural Language Texts with Automatic Thematic Role Annotation

This paper proposes an approach which utilizes natural language processing (NLP) and ontology knowledge to automatically denote the implicit semantics of textual requirements. Requirements documents include the syntax of natural language but not the semantics; the semantics are usually interpreted by the human user. In earlier work, Gelhausen and Tichy showed that SalE automatically creates UML domain models from (semantically) annotated textual specifications [1]. This manual annotation process is very time consuming and can only be carried out by annotation experts. We automate semantic annotation so that SalE can be completely automated. With our approach, the analyst receives the domain model of a requirements specification in a very fast and easy manner. Using these concepts is the first step towards further automation of requirements engineering and software development.

Sven J. Körner, Mathias Landhäußer

Classification

Adaptive Topic Modeling with Probabilistic Pseudo Feedback in Online Topic Detection

An online topic detection (OTD) system seeks to analyze sequential stories in a real-time manner so as to detect new topics or to associate stories with existing topics. To handle new stories more precisely, an adaptive topic modeling method that incorporates probabilistic pseudo feedback is proposed in this paper to tune every topic model to a changing environment. Unlike conventional pseudo feedback, this method considers every incoming story as pseudo feedback with a certain probability, namely the similarity between the story and the topic. Experimental results show that probabilistic pseudo feedback brings promising improvement to online topic detection.

Guoyu Tang, Yunqing Xia
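The pseudo-feedback idea described above can be sketched as a similarity-weighted update of a topic's term vector. This is a hedged illustration, not the authors' implementation; the sparse term-vector representation, the `threshold` parameter, and the toy data are assumptions of the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse term vectors (dicts)."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def pseudo_feedback_update(topic, story, threshold=0.3):
    """Treat an incoming story as pseudo feedback with a probability
    equal to its similarity to the topic: blend the story's terms into
    the topic vector, weighted by that similarity."""
    sim = cosine(topic, story)
    if sim < threshold:          # too dissimilar to serve as feedback
        return topic, sim
    updated = dict(topic)
    for term, w in story.items():
        updated[term] = updated.get(term, 0.0) + sim * w
    return updated, sim

topic = {"election": 3.0, "vote": 2.0}
story = {"election": 1.0, "ballot": 1.0}
new_topic, sim = pseudo_feedback_update(topic, story)
```

Because the update weight is the similarity itself, stories that barely match a topic drift it only slightly, while near-duplicates reinforce it strongly.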
An Approach to Indexing and Clustering News Stories Using Continuous Language Models

Within the vocabulary used in a set of news stories a minority of terms will be topic-specific in that they occur largely or solely within those stories belonging to a common event. When applying unsupervised learning techniques such as clustering it is useful to determine which words are event-specific and which topic they relate to. Continuous language models are used to model the generation of news stories over time and from these models two measures are derived: bendiness which indicates whether a word is event specific and shape distance which indicates whether two terms are likely to relate to the same topic. These are used to construct a new clustering technique which identifies and characterises the underlying events within the news stream.

Richard Bache, Fabio Crestani
Spoken Language Understanding via Supervised Learning and Linguistically Motivated Features

In this paper, we reduce the rescoring problem in a spoken dialogue understanding task to a classification problem by using the semantic error rate as the reranking target value. The classifiers we consider here are trained with linguistically motivated features. We present comparative experimental evaluation results for four supervised machine learning methods: Support Vector Machines, Weighted K-Nearest Neighbors, Naïve Bayes and Conditional Inference Trees. We provide a quantitative evaluation of learning and generalization during supervised classification training, using cross-validation and ROC analysis procedures. The reranking is derived using the posterior knowledge given by the classification algorithms.

Maria Georgescul, Manny Rayner, Pierrette Bouillon

Hidden Markov Models

Topology Estimation of Hierarchical Hidden Markov Models for Language Models

Estimation of the topology of probabilistic models provides us with an important technique for many statistical language processing tasks. In this investigation, we propose a new topology estimation method for the Hierarchical Hidden Markov Model (HHMM), which generalizes the Hidden Markov Model (HMM) in a hierarchical manner. The HHMM is a stochastic model with more powerful descriptive capability than the HMM, but it is hard to estimate HHMM topology because we have to give an initial hierarchy structure in advance, on which the HHMM depends. In this paper we propose a recursive estimation method for HHMM submodels using frequent similar subsequence sets. We show some experimental results to demonstrate the effectiveness of our method.

Kei Wakabayashi, Takao Miura
Speaker Independent Urdu Speech Recognition Using HMM

Automatic Speech Recognition (ASR) is one of the advanced fields of Natural Language Processing (NLP). The recent past has witnessed valuable research activity in ASR for English, European and East Asian languages. Unfortunately, South Asian languages in general, and Urdu in particular, have received very little attention. In this paper we present an approach to developing an ASR system for the Urdu language. The proposed system is based on an open source speech recognition framework called Sphinx4, which uses a statistical approach (HMM: Hidden Markov Model) for developing ASR systems. We present a speaker-independent ASR system for a small vocabulary, i.e. the fifty-two most frequently spoken isolated Urdu words, and suggest that this research will form the basis for developing medium and large vocabulary Urdu speech recognition systems.

Javed Ashraf, Naveed Iqbal, Naveed Sarfraz Khattak, Ather Mohsin Zaidi
Second-Order HMM for Event Extraction from Short Message

This paper presents a novel integrated second-order Hidden Markov Model (HMM) to extract event-related named entities (NEs) and activities from short messages simultaneously. It uses a second-order Markov chain to better model the context dependency in the string sequence. For decoding the second-order HMM, a second-order Viterbi algorithm is used. The experiments demonstrate that combining NEs and activities in an integrated model achieves better results than processing them separately with NER for NEs and POS decoding for activities. The experimental results also show that the second-order HMM outperforms the first-order HMM. Furthermore, the proposed algorithm significantly reduces the complexity so that it can run on a handheld device in real time.

Huixing Jiang, Xiaojie Wang, Jilei Tian
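Decoding a second-order HMM can be illustrated by running Viterbi over pairs of states, so that transitions condition on the two previous states. The sketch below is a generic second-order Viterbi, not the paper's implementation; all parameter names and the toy two-state model are invented for the example.

```python
import itertools
import math

def viterbi2(obs, states, pi, pi2, A2, B):
    """Second-order Viterbi. V[(u, v)] is the best log-probability of a
    path whose last two states are u, v.
    pi[s]       : P(first state = s)
    pi2[u][v]   : P(second state = v | first state = u)
    A2[u][v][w] : P(next state = w | previous two states = u, v)
    B[s][o]     : P(observation o | state s)
    """
    lg = lambda p: math.log(p) if p > 0 else float("-inf")
    # initialise with the first two observations
    V = {(u, v): lg(pi[u]) + lg(B[u][obs[0]]) + lg(pi2[u][v]) + lg(B[v][obs[1]])
         for u in states for v in states}
    back = []
    for o in obs[2:]:
        newV, bp = {}, {}
        for v, w in itertools.product(states, states):
            score, best_u = max((V[(u, v)] + lg(A2[u][v][w]), u) for u in states)
            newV[(v, w)] = score + lg(B[w][o])
            bp[(v, w)] = best_u
        V, back = newV, back + [bp]
    # backtrack from the best final state pair
    (v, w), _ = max(V.items(), key=lambda kv: kv[1])
    path = [v, w]
    for bp in reversed(back):
        path.insert(0, bp[(path[0], path[1])])
    return path

states = ["A", "B"]
pi = {"A": 0.5, "B": 0.5}
pi2 = {u: {v: 0.5 for v in states} for u in states}
A2 = {u: {v: {w: 0.5 for w in states} for v in states} for u in states}
B = {"A": {"x": 0.9, "y": 0.1}, "B": {"x": 0.1, "y": 0.9}}
print(viterbi2("xxy", states, pi, pi2, A2, B))
```

Running over state pairs keeps the decoder exact while the state space grows only quadratically, which is why second-order decoding remains feasible on constrained devices.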

Querying

Goal Detection from Natural Language Queries

This paper aims to identify the communication goal(s) of a user's information-seeking query out of a finite set of within-domain goals in natural language queries. It proposes using Tree-Augmented Naive Bayes networks (TANs) for goal detection. The problem is formulated as N binary decisions, each performed by a TAN. A comparative study has been carried out to compare the performance with Naive Bayes, fully-connected TANs, and multi-layer neural networks. Experimental results show that TANs consistently give better results when tested on the ATIS and DARPA Communicator corpora.

Yulan He
Parsing Natural Language into Content for Storage and Retrieval in a Content-Addressable Memory

This paper explores the possibility of applying Database Semantics (DBS) to textual databases and the WWW. The DBS model of natural language communication is designed as an artificial cognitive agent with a hearer mode, a think mode, and a speaker mode. For the application at hand, the hearer mode is used (i) for parsing language data into sets of proplets, defined as non-recursive feature structures, which are stored in a content-addressable memory called a Word Bank, and (ii) for parsing the user query into a DBS schema employed for retrieval. The think mode is used to expand the primary data activated by the query schema to a wider range of relevant secondary and tertiary data. The speaker mode is used to realize the data retrieved in the natural language of the query.

Roland Hausser

Natural Language Interfaces

Vague Relations in Spatial Databases

While qualitative relations (e.g. RCC8 relations) can readily be derived from spatial databases, a more difficult matter is the representation of vague spatial relations such as ‘near-to’, ‘next-to’, ‘between’, etc. After surveying earlier approaches, this paper proposes a method that is tractable, learnable and directly suitable for use in natural language interfaces to spatial databases. The approach is based on definite logic programs with contexts represented as first-class objects and supervaluation over a set of threshold parameters. Given an initial hand-built program with open threshold parameters, a polynomial-time algorithm finds a setting of the threshold parameters that is consistent with a training corpus of vague descriptions of scenes. The results of this algorithm may then be compiled into view definitions which are accessed in real time by natural language interfaces employing normal, non-exotic query answering mechanisms.

Michael J. Minock
Conceptual Modeling of Online Entertainment Programming Guide for Natural Language Interface

This paper describes a novel approach to the conceptual modeling of a text-based electronic programming guide (EPG) for broadcast TV programs using a large text corpus constructed from the EPG metadata source. Two empirical experiments are carried out to evaluate the EPG-specific language models created using the new algorithm in the context of natural language (NL) based information retrieval systems. The experimental results show the effectiveness of the algorithm for developing low-complexity concept models with high coverage of the users' language models associated with both typed and spoken queries when interacting with an NL-based EPG search interface.

Harry Chang
Integration of Natural Language Dialogues into the Conceptual Model of Storyboard Design

Web information systems are growing in complexity. Speech is a way of increasing usability and facilitating interaction; moreover, it is nowadays an important factor concerning customer needs and wishes. This paper focuses on the extension of an existing conceptual model, called the storyboard, to support the design of natural language dialogues. It includes a short description of storyboarding and natural language dialogues. Subsequently, the outcomes are evaluated, condensed and then integrated into the storyboard model. The results of this work show that only small adaptations to the storyboard concept are necessary, and that extending the presentation layer with a channel-dependent renderer is sufficient to model natural language dialogues.

Markus Berg, Antje Düsterhöft, Bernhard Thalheim

Domain Modelling

Autonomous Malicious Activity Inspector – AMAI

Computer networks today are far more complex, and managing such networks is a job for experts. Monitoring systems help network administrators in monitoring and protecting the network by not allowing users to run illegal applications or change the configuration of network nodes. In this paper, we propose the Autonomous Malicious Activity Inspector (AMAI), which uses an ontology-based knowledge base to predict unknown illegal applications based on known illegal application behaviors. AMAI is an intelligent multi-agent system used to detect known and unknown malicious activities carried out by users over the network. We compared ABSAMN and AMAI concurrently on a university campus with seven labs equipped with 20 to 300 PCs each; the results show that AMAI outperforms ABSAMN in every respect.

Umar Manzoor, Samia Nefti, Yacine Rezgui
Towards Geographic Databases Enrichment

The geographic database (GDB) is the backbone of the geographic information system (GIS). Indeed, all kinds of data management are based on, and strongly affected by, the type, relevance and scope of the stored data. Nevertheless, this dataset is sometimes insufficient to make an adequate decision. It is therefore of primary interest to provide other data sources to complement the inherent GDB. In this context, we propose a four-stage semantic data enrichment approach consisting of: text segmentation, theme identification, delegation and text filtering. Finally, a refinement step is executed to enhance the data enrichment results.

Khaoula Mahmoudi, Sami Faïz
On-Demand Extraction of Domain Concepts and Relationships from Social Tagging Websites

Much content on the World Wide Web is becoming tagged with simple words or phrases in natural language as web citizens create tags that organize information primarily to facilitate their personal retrieval and use. These tags represent, often incomplete, pieces of knowledge about concepts in a domain. Aggregated across a large number of contributors, these tags provide the potential to identify, in a bottom-up manner, key constructs in a domain. This research develops a set of heuristics that aggregate and analyze tags contributed by individual users on the web to extract and generate domain-level constructs. The heuristics infer the existence of constructs, and distinguish entities, attributes, and relationships.

Vijayan Sugumaran, Sandeep Purao, Veda C. Storey, Jordi Conesa

Information Extraction

Analysis of Definitions of Verbs in an Explanatory Dictionary for Automatic Extraction of Actants Based on Detection of Patterns

Due to the importance that verbs have in language, the identification of their actants (obligatory complements) is important for understanding the meaning of sentences. Usually, the solution to this problem in natural language processing is based on machine learning approaches, which are trained on large sets of tagged texts. We show that it is possible to work with another kind of source, namely explanatory dictionaries. Dictionary definitions have patterns that provide enough information for identifying actants. We develop a heuristic approach to obtain this information, as well as an algorithm for the detection of actants in texts.

Noé Alejandro Castro-Sánchez, Grigori Sidorov
An Automatic Definition Extraction in Arabic Language

During the last few years, much research has focused on automatic definition extraction in the context of question answering systems. Although this research has been conducted for various languages, none has addressed Arabic. In this paper, we tackle automatic definition extraction in the context of question answering systems. We propose a pattern-based method to automatically identify a definitional answer to a definition question. The proposed method is implemented in an Arabic definitional question answering system. We evaluated this system using a set of 50 definition questions and a corpus of 2000 snippets collected from the Web. The obtained results are very encouraging: 94% of the definition questions have complete definitions among their first 5 answers.

Omar Trigui, Lamia Hadrich Belguith, Paolo Rosso
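Pattern-based definition extraction of the kind described above can be sketched as matching snippets against a small set of definitional templates. The patterns below are English analogues invented purely for illustration; the paper's system naturally uses Arabic-specific patterns.

```python
import re

# Illustrative English definitional templates; a real Arabic system
# would use Arabic-specific lexico-syntactic patterns instead.
PATTERNS = [
    re.compile(r"^(?P<term>[\w ]+?) is (?:a|an|the) (?P<definition>.+)$", re.I),
    re.compile(r"^(?P<term>[\w ]+?), (?:also )?known as (?P<definition>.+)$", re.I),
    re.compile(r"^(?P<term>[\w ]+?) refers to (?P<definition>.+)$", re.I),
]

def extract_definitions(snippets, term):
    """Return candidate definitions of `term` found in the snippets."""
    found = []
    for s in snippets:
        for pat in PATTERNS:
            m = pat.match(s.strip())
            if m and m.group("term").strip().lower() == term.lower():
                found.append(m.group("definition").strip())
    return found

snippets = [
    "A lexicon is a vocabulary of a language or a domain.",
    "Tokenization refers to splitting text into words.",
]
print(extract_definitions(snippets, "tokenization"))
```

Candidate answers matched this way would then be ranked, e.g. by how many independent snippets support the same definition.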
Automatic Term Extraction Using Log-Likelihood Based Comparison with General Reference Corpus

In this paper we present a method for the extraction of single-word terms for a specific domain. At the next stage, these terms can be used as candidates for multi-word term extraction. The proposed method is based on comparison with a general reference corpus using log-likelihood similarity. We also perform clustering of the extracted terms using the k-means algorithm and the cosine similarity measure. We carried out experiments using texts from the computer science domain. The obtained term list is analyzed in detail.

Alexander Gelbukh, Grigori Sidorov, Eduardo Lavin-Villa, Liliana Chanona-Hernandez
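The comparison with a general reference corpus can be sketched with Dunning's log-likelihood (G2) statistic, a standard choice for this kind of term extraction. This is an illustrative sketch under that assumption, not the authors' exact formulation; the toy frequency tables and corpus sizes are invented.

```python
import math

def log_likelihood(a, b, c, d):
    """Dunning log-likelihood (G2) for a term occurring `a` times in a
    domain corpus of size `c` and `b` times in a reference corpus of
    size `d`. Higher scores indicate stronger domain-specificity."""
    e1 = c * (a + b) / (c + d)   # expected frequency in domain corpus
    e2 = d * (a + b) / (c + d)   # expected frequency in reference corpus
    ll = 0.0
    if a > 0:
        ll += a * math.log(a / e1)
    if b > 0:
        ll += b * math.log(b / e2)
    return 2.0 * ll

def extract_terms(domain_freq, ref_freq, domain_size, ref_size, top=5):
    """Rank single-word term candidates by their G2 score."""
    scored = [(log_likelihood(f, ref_freq.get(t, 0), domain_size, ref_size), t)
              for t, f in domain_freq.items()]
    return [t for _, t in sorted(scored, reverse=True)[:top]]

domain = {"compiler": 50, "the": 400, "parser": 30}
ref    = {"the": 50000, "compiler": 5, "parser": 2}
print(extract_terms(domain, ref, 10_000, 1_000_000))
```

Function words like "the" occur at similar rates in both corpora and score near zero, while genuine domain terms are heavily over-represented and rise to the top of the list.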
Weighted Vote Based Classifier Ensemble Selection Using Genetic Algorithm for Named Entity Recognition

In this paper, we report on the search capability of a genetic algorithm (GA) for constructing a weighted vote based classifier ensemble for Named Entity Recognition (NER). Our underlying assumption is that the reliability of the predictions of each classifier differs among the various named entity (NE) classes. Voting weights should be high for the NE classes for which the classifier is most reliable and low for the NE classes for which it is least reliable. Here, an attempt is made to quantify the amount of voting for each class in each classifier using a GA. We use the Maximum Entropy (ME) framework to build a number of classifiers depending upon various representations of a set of features that are language independent in nature. The proposed technique is evaluated on two resource-constrained languages, namely Bengali and Hindi. Evaluation yields recall, precision and F-measure values of 73.81%, 84.92% and 78.98%, respectively, for Bengali and 65.12%, 82.03% and 72.60%, respectively, for Hindi. Results also show that the proposed weighted vote based classifier ensemble identified by the GA outperforms all the individual classifiers and three conventional baseline ensemble techniques for both languages.

Asif Ekbal, Sriparna Saha
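The combination step, once a GA has produced per-class voting weights, can be sketched as follows. This is a minimal illustration; the labels and weight values are invented, and the GA search itself (which would evolve the weight tables against F-measure on held-out data) is omitted.

```python
def weighted_vote(predictions, weights):
    """Combine per-classifier NE predictions using class-specific weights.
    predictions[i]    : label predicted by classifier i for a token
    weights[i][label] : vote weight of classifier i for that label
                        (in the paper's setting, tuned by a GA)
    """
    scores = {}
    for i, label in enumerate(predictions):
        scores[label] = scores.get(label, 0.0) + weights[i].get(label, 0.0)
    return max(scores, key=scores.get)

# One classifier that is very reliable on LOC outvotes two weak PER votes.
print(weighted_vote(["PER", "LOC", "PER"],
                    [{"PER": 0.2}, {"LOC": 0.9}, {"PER": 0.3}]))
```

The key point is that a single classifier with a high class-specific weight can override a numerical majority, which plain majority voting cannot express.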
Refactoring of Process Model Activity Labels

Recently many companies have expanded their business process modeling projects such that thousands of process models are designed and maintained. Activity labels in these models follow different styles according to their grammatical structure. Several guidelines suggest using a verb-object labeling style; meanwhile, real-world process models often include labels that do not follow this style. In this paper we investigate the potential to improve label quality automatically. We define and implement an approach for the automatic refactoring of labels following the action-noun style into verb-object labels. We evaluate the proposed techniques using a collection of real-world process models: the SAP Reference Model.

Henrik Leopold, Sergey Smirnov, Jan Mendling
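The action-noun to verb-object refactoring can be sketched as a lookup of nominalizations plus a simple rewrite rule. This is a toy illustration; a real system would derive verbs from proper linguistic resources rather than the small hypothetical table below.

```python
# Hypothetical nominalization table; a real system would consult a
# lexicon such as WordNet to map action nouns to their base verbs.
NOMINALIZATIONS = {
    "creation": "create", "verification": "verify", "approval": "approve",
    "notification": "notify", "analysis": "analyze",
}

def refactor_label(label):
    """Rewrite an action-noun label like 'Creation of invoice' into the
    verb-object style 'create invoice'. Returns the label unchanged when
    no known nominalization pattern is found."""
    words = label.lower().split()
    if len(words) >= 3 and words[1] == "of" and words[0] in NOMINALIZATIONS:
        verb = NOMINALIZATIONS[words[0]]
        return verb + " " + " ".join(words[2:])
    return label

print(refactor_label("Creation of invoice"))   # -> "create invoice"
print(refactor_label("Check stock levels"))    # already verb-object: unchanged
```

The hard part in practice, and the focus of the paper, is recognizing which grammatical style a label follows before any rewriting can be attempted.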
Unsupervised Ontology Acquisition from Plain Texts: The OntoGain System

We propose OntoGain, a system for unsupervised ontology acquisition from unstructured text which relies on multi-word term extraction. For the acquisition of taxonomic relations, we exploit the lexical information inherent in multi-word terms in a comparative implementation of agglomerative hierarchical clustering and formal concept analysis methods. For the detection of non-taxonomic relations, we comparatively investigate in OntoGain an association-rules-based algorithm and a probabilistic algorithm. The OntoGain system allows for the transformation of the derived ontology into standard OWL statements. OntoGain results are compared to both hand-crafted ontologies and a state-of-the-art system, in two different domains: the medical and computer science domains.

Euthymios Drymonas, Kalliopi Zervanou, Euripides G. M. Petrakis

Semantic Networks & Graphs

Identifying Writers’ Background by Comparing Personal Sense Thesauri

Analysis of blog post writing is an important and growing research area, in which both objective and subjective characteristics of a writer are detected. Words have a meaning that is common in the language and that is represented in their usage. Another component of word meaning, the "personal sense", is not inherent in the language but differs for each person; it reflects the meaning of words in terms of unique personal experience and carries personal characteristics.

In our research, word meaning techniques are applied to represent the personal sense of words in texts by different authors. Personalized concept structures are constructed and used to infer the authors' perspective from text: various notions of context combined with different thesaurus similarity scales are applied to confirm that, from a certain perspective, similarity in the personalized thesauri can, with some restrictions, correspond to similarities in the occupation of the authors.

Polina Panicheva, John Cardiff, Paolo Rosso
Retrieval of Similar Electronic Health Records Using UMLS Concept Graphs

Physicians often use information from previous clinical cases in their decision-making process. However, the large amount of patient records available in hospitals makes an exhaustive search unfeasible. We propose a method for the retrieval of similar clinical cases, based on mapping the text onto UMLS concepts and representing the patient records as semantic graphs. The method also deals with the problems of negation detection and concept identification in clinical free text. To evaluate the approach, an evaluation collection has been developed. The results show that our method correlates well with the expert judgments and remarkably outperforms the traditional term vector space model.

Laura Plaza, Alberto Díaz
Supporting the Abstraction of Clinical Practice Guidelines Using Information Extraction

Modelling clinical practice guidelines in a computer-interpretable format is a challenging and complex task. The modelling process involves both medical experts and computer scientists, who have to interact and communicate with each other. In order to support both modeller groups, we propose to provide them with helpful information automatically generated using NLP methods. We identify this information using rules based on both syntactic and semantic information. The majority of the defined information extraction rules are based on semantic relationships derived from the UMLS Semantic Network. Findings in the evaluation indicate that rules based on semantic and syntactic information provide valuable and helpful results.

Katharina Kaiser, Silvia Miksch
Backmatter
Metadata
Title
Natural Language Processing and Information Systems
Edited by
Christina J. Hopfe
Yacine Rezgui
Elisabeth Métais
Alun Preece
Haijiang Li
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-13881-2
Print ISBN
978-3-642-13880-5
DOI
https://doi.org/10.1007/978-3-642-13881-2
