
2010 | Book

The Semantic Web – ISWC 2010

9th International Semantic Web Conference, ISWC 2010, Shanghai, China, November 7-11, 2010, Revised Selected Papers, Part II

Edited by: Peter F. Patel-Schneider, Yue Pan, Pascal Hitzler, Peter Mika, Lei Zhang, Jeff Z. Pan, Ian Horrocks, Birte Glimm

Publisher: Springer Berlin Heidelberg

Book series: Lecture Notes in Computer Science


About this book

The two-volume set LNCS 6496 and 6497 constitutes the refereed proceedings of the 9th International Semantic Web Conference, ISWC 2010, held in Shanghai, China, during November 7-11, 2010. Part I contains 51 papers out of 578 submissions to the research track. Part II contains 18 papers out of 66 submissions to the Semantic Web in-use track, 6 papers out of 26 submissions to the doctoral consortium track, as well as 4 invited talks. Each submitted paper was carefully reviewed. The International Semantic Web Conferences (ISWC) constitute the major international venue where the latest research results and technical innovations on all aspects of the Semantic Web are presented. ISWC brings together researchers, practitioners, and users from the areas of artificial intelligence, databases, social networks, distributed computing, Web engineering, information systems, natural language processing, soft computing, and human-computer interaction to discuss the major challenges and proposed solutions, the success stories and failures, as well as the visions that can advance research and drive innovation in the Semantic Web.

Table of Contents

Frontmatter

Semantic Web In-Use Track

I18n of Semantic Web Applications
Abstract
Recently, the use of semantic technologies has gained quite some traction. With the increased use of these technologies, their maturation not only in terms of performance and robustness, but also with regard to support for non-Latin-based languages and regional differences, is of paramount importance. In this paper, we provide a comprehensive review of the current state of the internationalization (I18n) of Semantic Web technologies. Since resource identifiers play a crucial role for the Semantic Web, the internationalization of resource identifiers is of high importance. It turns out that the prevalent resource identification mechanism on the Semantic Web, i.e. URIs, is not sufficient for an efficient internationalization of knowledge bases. Fortunately, with IRIs a standard for international resource identifiers is available, but its support needs much more penetration and homogenization in the various Semantic Web technology stacks. In addition, we review various RDF serializations with regard to their support for internationalized knowledge bases. The paper also contains an in-depth review of popular Semantic Web tools and APIs with regard to their support for internationalization.
Sören Auer, Matthias Weidl, Jens Lehmann, Amrapali J. Zaveri, Key-Sun Choi
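The core distinction flagged in the abstract above - that URIs cannot carry non-Latin characters directly while IRIs can - can be illustrated with a small, hypothetical sketch (not taken from the paper) using only the Python standard library:

```python
from urllib.parse import quote, unquote

# An IRI may contain non-Latin characters directly (RFC 3987);
# a URI (RFC 3986) must percent-encode them as UTF-8 octets.
# The identifier below is invented for illustration.
iri = "http://example.org/resource/北京"

# Map the IRI to a URI by percent-encoding non-ASCII characters.
# safe= keeps the URI delimiters intact.
uri = quote(iri, safe=":/#?&=")

# The mapping is reversible, so tools that only accept URIs
# can still round-trip internationalized identifiers.
recovered = unquote(uri)

print(uri)                # http://example.org/resource/%E5%8C%97%E4%BA%AC
print(recovered == iri)   # True
```

This round trip is lossless, but as the paper observes, the burden falls on each tool in the stack to apply it consistently, which is why native IRI support matters.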
Social Dynamics in Conferences: Analyses of Data from the Live Social Semantics Application
Abstract
Popularity and spread of online social networking in recent years has given a great momentum to the study of dynamics and patterns of social interactions. However, these studies have often been confined to the online world, neglecting its interdependencies with the offline world. This is mainly due to the lack of real data that spans across this divide. The Live Social Semantics application is a novel platform that dissolves this divide, by collecting and integrating data about people from (a) their online social networks and tagging activities from popular social networking sites, (b) their publications and co-authorship networks from semantic repositories, and (c) their real-world face-to-face contacts with other attendees collected via a network of wearable active sensors. This paper investigates the data collected by this application during its deployment at three major conferences, where it was used by more than 400 people. Our analyses show the robustness of the patterns of contacts at various conferences, and the influence of various personal properties (e.g. seniority, conference attendance) on social networking patterns.
Alain Barrat, Ciro Cattuto, Martin Szomszor, Wouter Van den Broeck, Harith Alani
Using Semantic Web Technologies for Clinical Trial Recruitment
Abstract
Clinical trials are fundamental for medical science: they provide the evaluation for new treatments and new diagnostic approaches. One of the most difficult parts of clinical trials is the recruitment of patients: many trials fail due to lack of participants. Recruitment is done by matching the eligibility criteria of trials to patient conditions. This is usually done manually, but both the large number of active trials and the lack of time available for matching keep the recruitment ratio low.
In this paper we present a method, based entirely on standard Semantic Web technologies and tools, that allows the automatic recruitment of patients to the available clinical trials. We use a domain-specific ontology to represent data from patients’ health records and we use SWRL to verify the eligibility of patients for clinical trials.
Paolo Besana, Marc Cuggia, Oussama Zekri, Annabel Bourde, Anita Burgun
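The eligibility-matching idea described above can be suggested by a plain-Python sketch (with invented condition codes and trial identifiers; the paper's actual implementation uses an ontology and SWRL rules, not Python sets):

```python
# Hypothetical data model: each trial lists inclusion and exclusion
# criteria as sets of condition codes; a patient record is a set of codes.
def eligible(patient_conditions, trial):
    # All inclusion criteria must hold, and no exclusion criterion may.
    return (trial["include"] <= patient_conditions
            and not (trial["exclude"] & patient_conditions))

trials = {
    "NCT-001": {"include": {"diabetes_t2"}, "exclude": {"pregnancy"}},
    "NCT-002": {"include": {"hypertension", "age_over_65"}, "exclude": set()},
}

patient = {"diabetes_t2", "hypertension"}
matches = [tid for tid, t in trials.items() if eligible(patient, t)]
print(matches)  # ['NCT-001']
```

Expressing the same criteria as SWRL rules over a patient ontology, as the paper does, lets a reasoner perform this matching declaratively across all trials at once.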
Experience of Using OWL Ontologies for Automated Inference of Routine Pre-operative Screening Tests
Abstract
We describe our experience of designing and implementing a knowledge-based pre-operative assessment decision support system. We developed the system using Semantic Web technology, including modular ontologies developed in the OWL Web Ontology Language, the OWL Java Application Programming Interface and an automated logic reasoner. Using ontologies at the core of the system’s architecture makes it possible to efficiently manage a vast repository of pre-operative assessment domain knowledge, including a classification of surgical procedures, a classification of morbidities, and guidelines for routine pre-operative screening tests. Logical inference on the domain knowledge, according to an individual patient’s medical context (medical history combined with the planned surgical procedure), makes it possible to generate personalised patient reports, consisting of a risk assessment and clinical recommendations, including relevant pre-operative screening tests.
Matt-Mouley Bouamrane, Alan Rector, Martin Hurrell
Enterprise Data Classification Using Semantic Web Technologies
Abstract
Organizations today collect and store large amounts of data in various formats and locations. However, they are sometimes required to locate all instances of a certain type of data. Good data classification allows marking enterprise data in a way that enables quick and efficient retrieval of information when needed. We introduce a generic, automatic classification method that exploits Semantic Web technologies to assist in several phases of the classification process: defining the classification requirements, performing the classification, and representing the results. Using Semantic Web technologies enables flexible and extensible configuration, centralized management and uniform results. This approach creates general and maintainable classifications, and enables applying semantic queries, rule languages and inference to the results.
David Ben-David, Tamar Domany, Abigail Tarem
Semantic Techniques for Enabling Knowledge Reuse in Conceptual Modelling
Abstract
Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich the modelling process. We propose to apply Semantic Web techniques to the context of conceptual modelling (more particularly to the domain of qualitative reasoning), to smoothly interconnect conceptual models created by different users, thus facilitating the global sharing of scientific data contained in such models and creating new learning opportunities for people who start modelling. This paper describes how semantic grounding techniques can be used during the creation of qualitative reasoning models, to bridge the gap between imprecise user terminology and a well-defined external common vocabulary. We also explore the application of ontology matching techniques between models, which can provide valuable feedback during the model construction process.
Jorge Gracia, Jochem Liem, Esther Lozano, Oscar Corcho, Michal Trna, Asunción Gómez-Pérez, Bert Bredeweg
Semantic Technologies for Enterprise Cloud Management
Abstract
Enterprise clouds apply the paradigm of cloud computing to enterprise IT infrastructures, with the goal of providing easy, flexible, and scalable access to both computing resources and IT services. Realizing the vision of the fully automated enterprise cloud involves addressing a range of technological challenges. In this paper, we focus on the challenges related to intelligent information management in enterprise clouds and discuss how semantic technologies can help to address them. In particular, we address the topics of data integration; collaborative documentation and annotation; and intelligent information access and analytics, and present solutions that are implemented in the newest addition to our eCloudManager product suite: the Intelligence Edition.
Peter Haase, Tobias Mathäß, Michael Schmidt, Andreas Eberhart, Ulrich Walther
Semantic MediaWiki in Operation: Experiences with Building a Semantic Portal
Abstract
Wikis allow users to collaboratively create and maintain content. Semantic wikis, which additionally provide the means to annotate the content semantically and thereby structure it, are experiencing an enormous increase in popularity, because structured data is more usable and thus more valuable than unstructured data. As an illustration of leveraging the advantages of semantic wikis for semantic portals, we report on the experience of building the AIFB portal based on Semantic MediaWiki. We discuss the design, in particular how free, wiki-style semantic annotations and guided input along a predefined schema can be combined to create a flexible, extensible, and structured knowledge representation. How this structured data evolved over time and its flexibility regarding changes are subsequently discussed and illustrated by statistics based on actual operational data of the portal. Further, the features exploiting the structured data and the benefits they provide are presented. Since all benefits have their costs, we conducted a performance study of Semantic MediaWiki and compared it to MediaWiki, the non-semantic base platform. Finally, we show how existing caching techniques can be applied to increase performance.
Daniel M. Herzig, Basil Ell
A Case Study of Linked Enterprise Data
Abstract
Even though their adoption in the enterprise environment lags behind the public domain, semantic (web) technologies, and more recently the linked data initiative, have started to penetrate the business domain, with more and more people recognising the benefits of such technologies. An evident advantage of leveraging semantic technologies is the integration of distributed data sets, which can bring companies a great return of value. Enterprise data, however, presents significantly different characteristics from public data on the Internet. These differences are evident from both technical and managerial perspectives. This paper reports on a pilot study, carried out in an international organisation, aiming to provide a collaborative workspace for fast and low-overhead data sharing and integration. We believe that the design considerations, study outcomes, and lessons learnt can help in deciding whether and how one should adopt semantic technologies in similar contexts.
Bo Hu, Glenn Svensson
Linkage of Heterogeneous Knowledge Resources within In-Store Dialogue Interaction
Abstract
Dialogue interaction between customers and products improves the presentation of relevant product information in in-store shopping situations. Thus, the information needs of customers can be addressed more intuitively. In this article, we describe how access to product information can be improved based on the dynamic linkage of heterogeneous knowledge representations. We therefore introduce a conceptual model of dialogue interaction based on multiple knowledge resources for in-store shopping situations and empirically test its utility with end-users.
Sabine Janzen, Tobias Kowatsch, Wolfgang Maass, Andreas Filler
ISReal: An Open Platform for Semantic-Based 3D Simulations in the 3D Internet
Abstract
We present the first open and cross-disciplinary 3D Internet research platform, called ISReal, for intelligent 3D simulation of realities. Its core innovation is the comprehensively integrated application of semantic Web technologies, semantic services, intelligent agents, verification and 3D graphics for this purpose. In this paper, we focus on the interplay between its components for semantic XML3D scene query processing and semantic 3D animation service handling, as well as the semantic-based perception and action planning with coupled semantic service composition by agent-controlled avatars in a virtual world. We demonstrate the use of the implemented platform for semantic-based 3D simulations in a small virtual world example with an intelligent user avatar and discuss results of the platform performance evaluation.
Patrick Kapahnke, Pascal Liedtke, Stefan Nesbigall, Stefan Warwas, Matthias Klusch
ORE - A Tool for Repairing and Enriching Knowledge Bases
Abstract
While the number and size of Semantic Web knowledge bases increase, their maintenance and quality assurance are still difficult. In this article, we present ORE, a tool for repairing and enriching OWL ontologies. State-of-the-art methods in ontology debugging and supervised machine learning form the basis of ORE and are adapted or extended so as to work well in practice. ORE supports the detection of a variety of ontology modelling problems and guides the user through the process of resolving them. Furthermore, the tool makes it possible to extend an ontology through (semi-)automatic supervised learning. A wizard-like process helps the user to resolve potential issues after axioms are added.
Jens Lehmann, Lorenz Bühmann
Mapping Master: A Flexible Approach for Mapping Spreadsheets to OWL
Abstract
We describe a mapping language for converting data contained in spreadsheets into the Web Ontology Language (OWL). The developed language, called M2, overcomes shortcomings of existing mapping techniques, including their restriction to well-formed spreadsheets reminiscent of a single relational database table and their verbose syntax for expressing mapping rules when transforming spreadsheet contents into OWL. The M2 language provides expressive, yet concise mechanisms to create both individual and class axioms when generating OWL ontologies. We additionally present an implementation of the mapping approach, Mapping Master, which is available as a plug-in for the Protégé ontology editor.
Martin J. O’Connor, Christian Halaschek-Wiener, Mark A. Musen
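The kind of row-to-axiom mapping that M2 expresses declaratively can be suggested by a deliberately tiny, hypothetical sketch (the prefix, columns and data are invented, and real M2 mappings are far more expressive than this):

```python
import csv
import io

# Miniature spreadsheet-to-Turtle conversion: each row becomes an
# individual typed by the class named in its "Type" column.
sheet = io.StringIO("Name,Type\nAspirin,Drug\nIbuprofen,Drug\n")

lines = ["@prefix ex: <http://example.org/> ."]
for row in csv.DictReader(sheet):
    lines.append(f"ex:{row['Name']} a ex:{row['Type']} .")

print("\n".join(lines))
```

A mapping language earns its keep precisely where this sketch breaks down: merged cells, cross-sheet references, and rules that generate class axioms rather than simple type assertions.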
dbrec — Music Recommendations Using DBpedia
Abstract
This paper describes the theoretical background and the implementation of dbrec, a music recommendation system built on top of DBpedia, offering recommendations for more than 39,000 bands and solo artists. We discuss the various challenges and lessons learnt while building it, providing relevant insights for people developing applications consuming Linked Data. Furthermore, we provide a user-centric evaluation of the system, notably by comparing it to last.fm.
Alexandre Passant
Knowledge Engineering for Historians on the Example of the Catalogus Professorum Lipsiensis
Abstract
Although the Internet, as a ubiquitous medium for communication, publication and research, has already significantly influenced the way historians work, the capabilities of the Web as a direct medium for collaboration in historical research are not much explored. We report on the application of an adaptive, semantics-based knowledge engineering approach for the development of a prosopographical knowledge base on the Web - the Catalogus Professorum Lipsiensis. In order to enable historians to collect, structure and publish prosopographical knowledge, an ontology was developed and knowledge engineering facilities based on the semantic data wiki OntoWiki were implemented. The resulting knowledge base contains information about more than 14,000 entities and is tightly interlinked with the emerging Web of Data. For access and exploration by other historians, a number of access interfaces were developed, such as a visual SPARQL query builder, a relationship finder and a Linked Data interface. The approach is transferable to other prosopographical research projects and historical research in general, thus improving collaboration in historical research communities and facilitating the reusability of historical research results.
Thomas Riechert, Ulf Morgenstern, Sören Auer, Sebastian Tramp, Michael Martin
Time-Oriented Question Answering from Clinical Narratives Using Semantic-Web Techniques
Abstract
The ability to answer time-oriented questions based on clinical narratives is essential to clinical research. The temporal dimension in medical data analysis enables clinical research in many areas, such as disease progression, individualized treatment, and decision support. The Semantic Web provides a suitable environment to represent the temporal dimension of clinical data and reason about it. In this paper, we introduce a Semantic-Web-based framework, which provides an API for querying temporal information from clinical narratives. The framework is centered on an OWL ontology called CNTRO (Clinical Narrative Temporal Relation Ontology) and contains three major components: a time normalizer, a SWRL-based reasoner, and an OWL-DL-based reasoner. We also discuss how we adopted these three components in the clinical domain, their limitations, as well as extensions that we found necessary or desirable to achieve the purpose of querying time-oriented data from real-world clinical narratives.
Cui Tao, Harold R. Solbrig, Deepak K. Sharma, Wei-Qi Wei, Guergana K. Savova, Christopher G. Chute
Will Semantic Web Technologies Work for the Development of ICD-11?
Abstract
The World Health Organization is beginning to use Semantic Web technologies in the development of the 11th revision of the International Classification of Diseases (ICD-11). Health officials use ICD in all United Nations member countries to compile basic health statistics, to monitor health-related spending, and to inform policy makers. While previous revisions of ICD encoded minimal information about a disease, and were mainly published as books and tabulation lists, the creators of ICD-11 envision that it will become a multi-purpose and coherent classification ready for electronic health records. Most important, they plan to have ICD-11 applied for a much broader variety of uses than previous revisions. The new requirements entail significant changes in the way we represent disease information, as well as in the technologies and processes that we use to acquire the new content. In this paper, we describe the previous processes and technologies used for developing ICD. We then describe the requirements for the new development process and present the Semantic Web technologies that we use for ICD-11. We outline the experiences of the domain experts using the software system that we implemented using Semantic Web technologies. We then discuss the benefits and challenges in following this approach and conclude with lessons learned from this experience.
Tania Tudorache, Sean Falconer, Csongor Nyulas, Natalya F. Noy, Mark A. Musen
Using SPARQL to Test for Lattices: Application to Quality Assurance in Biomedical Ontologies
Abstract
We present a scalable, SPARQL-based computational pipeline for testing the lattice-theoretic properties of partial orders represented as RDF triples. The use case for this work is quality assurance in biomedical ontologies, one desirable property of which is conformance to lattice structures. At the core of our pipeline is the algorithm called NuMi, for detecting the Number of Minimal upper bounds of any pair of elements in a given finite partial order. Our technical contribution is the coding of NuMi completely in SPARQL. To show its scalability, we applied NuMi to the entirety of SNOMED CT, the largest clinical ontology (over 300,000 concepts). Our experimental results have been groundbreaking: for the first time, all non-lattice pairs in SNOMED CT have been identified exhaustively from 34 million candidate pairs using over 2.5 billion queries issued to Virtuoso. The percentage of non-lattice pairs ranges from 0 to 1.66 among the 19 SNOMED CT hierarchies. These non-lattice pairs represent target areas for focused curation by domain experts. RDF, SPARQL and related tooling provide an efficient platform for implementing lattice algorithms on large data structures.
Guo-Qiang Zhang, Olivier Bodenreider
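The property being tested can be restated in plain Python (the paper's contribution is implementing this entirely in SPARQL over RDF; the toy hierarchy below is invented): a pair of concepts is a non-lattice pair when it has more than one minimal upper bound in the subsumption order.

```python
from itertools import combinations

# Toy subsumption hierarchy: concept -> set of direct parents.
# D and E share two incomparable common ancestors, B and C.
parents = {
    "A": set(),
    "B": {"A"}, "C": {"A"},
    "D": {"B", "C"}, "E": {"B", "C"},
}

def ancestors(node):
    """All ancestors of node, including node itself (reflexive closure)."""
    seen, stack = {node}, [node]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def minimal_upper_bounds(x, y):
    """Common ancestors with no other common ancestor strictly below them."""
    common = ancestors(x) & ancestors(y)
    return {u for u in common
            if not any(v != u and u in ancestors(v) for v in common)}

# A non-lattice pair has more than one minimal upper bound.
non_lattice = [(x, y) for x, y in combinations(parents, 2)
               if len(minimal_upper_bounds(x, y)) > 1]
print(non_lattice)  # [('D', 'E')]
```

Doing this naively over SNOMED CT's 34 million candidate pairs is exactly the scalability problem the SPARQL-based pipeline addresses.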

Doctoral Consortium

Exploiting Relation Extraction for Ontology Alignment
Abstract
When multiple ontologies are used within one application system, aligning the ontologies is a prerequisite for interoperability and unhampered semantic navigation and search. Various methods have been proposed to compute mappings between elements from different ontologies, the majority of which are based on various kinds of similarity measures. A major shortcoming of these methods is that the semantics of the results achieved is difficult to decode. In addition, in many cases they miss important mappings due to poorly developed ontology structures or dissimilar ontology designs. I propose a complementary approach making massive use of relation extraction techniques applied to broad-coverage text corpora. This approach is able to detect different types of semantic relations, depending on the extraction techniques used. Furthermore, by exploiting external background knowledge, it can detect relations even without clear evidence in the input ontologies themselves.
Elena Beisswanger
Towards Semantic Annotation Supported by Dependency Linguistics and ILP
Abstract
In this paper we present a method for the semantic annotation of texts, which is based on deep linguistic analysis (DLA) and Inductive Logic Programming (ILP). The combination of DLA and ILP has the following benefits: manual selection of learning features is not needed; the learning procedure has the full available linguistic information at its disposal and is capable of selecting the relevant parts itself; and the learned extraction rules can be easily visualized, understood and adapted by humans. A description, implementation and initial evaluation of the method are the main contributions of the paper.
Jan Dědek
Towards Technology Structure Mining from Scientific Literature
Abstract
This paper introduces the task of Technology-Structure Mining to support Management of Technology. We propose a linguistics-based approach for the identification of Technology Interdependence through the extraction of technology concepts and the relations between them. In addition, we introduce the Technology Structure Graph for the formalization of the task. While the major challenge in technology structure mining is the lack of a benchmark dataset for evaluation and development purposes, we describe the steps that we have taken towards providing such a benchmark. The proposed approach is initially evaluated and applied in the domain of Human Language Technology, and preliminary results are demonstrated. We further explain our plans and the research challenges for the evaluation of the proposed task.
Behrang QasemiZadeh
Auto-experimentation of KDD Workflows Based on Ontological Planning
Abstract
One of the problems of Knowledge Discovery in Databases (KDD) is the lack of user support for solving KDD problems. Current Data Mining (DM) systems enable the user to manually design workflows, but this becomes difficult when there are too many operators to choose from or the workflow’s size is too large. Therefore, we propose to use auto-experimentation based on ontological planning to provide users with automatically generated workflows as well as rankings of workflows based on several criteria (execution time, accuracy, etc.). Moreover, auto-experimentation will help to validate the generated workflows and to prune and reduce their number. Furthermore, we will use mixed-initiative planning to allow users to set parameters and criteria to limit the planning search space as well as to guide the planner towards better workflows.
Floarea Serban
Customizing the Composition of Actions, Programs, and Web Services with User Preferences
Abstract
Web service composition (WSC) – loosely, the composition of web-accessible software systems – requires a computer program to automatically select, integrate, and invoke multiple web services in order to achieve a user-defined objective. It is an example of the more general task of composing business processes or component-based software. Our doctoral research endeavours to make fundamental contributions to the knowledge representation and reasoning principles underlying the task of WSC, with a particular focus on the customization of compositions with respect to individual preferences. The setting for our work is the semantic web, where the properties and functioning of services and data are described in a computer-interpretable form. In this setting we conceive of WSC as an Artificial Intelligence planning task. This enables us to bring to bear many of the theoretical and computational advances in reasoning about action and planning to the task of WSC. However, WSC goes far beyond the reaches of classical planning, presenting a number of interesting challenges that are relevant not only to WSC but to a large body of problems related to the composition of actions, programs, business processes, and services. In what follows we identify a set of challenges facing our doctoral research and report on our progress to date in addressing these challenges.
Shirin Sohrabi
Adding Integrity Constraints to the Semantic Web for Instance Data Evaluation
Abstract
This paper presents our work on supporting the evaluation of integrity constraint issues in Semantic Web instance data. We propose an alternative semantics for the ontology language OWL, a decision procedure for constraint evaluation by query answering, and an approach for explaining and repairing integrity constraint violations by utilizing the justifications of conjunctive query answers.
Jiao Tao

Invited Talks

Abstract: The Open Graph Protocol Design Decisions
Abstract
The Open Graph protocol enables any web page to become a rich object in a social graph. It was created by Facebook but designed to be generally useful to anyone. While many different technologies and schemas exist and could be combined together, there is not a single technology which provides enough information to richly represent any web page within the social graph. The Open Graph protocol builds on these existing technologies and gives developers one thing to implement. Developer simplicity is a key goal of the Open Graph protocol which has informed many of the technical design decisions. This talk will explore the motivation of the Open Graph protocol and the design decisions which went into creating it.
Austin Haugen
Evaluating Search Engines by Clickthrough Data
Abstract
There is no doubt that search is critical to the Web, and it will be of similar importance to the Semantic Web. When searching over billions of objects, it will be impossible to always give a single right result, no matter how intelligent the search engine is. Instead, a set of possible results will be provided for the user to choose from. Moreover, if we consider the trade-off between the system costs of generating a single right result and a set of possible results, we may choose the latter. This naturally leads to the question of how to decide on and present the set to the user and how to evaluate the outcome.
In this paper, we introduce some new methodology for the evaluation of web search technologies and systems. Historically, the dominant method for evaluating search engines has been the Cranfield paradigm, which employs a test collection to quantify a system’s performance. However, modern search engines are much different from the IR systems of the era in which the Cranfield paradigm was proposed: 1) most modern search engines have many more features, such as snippets and query suggestions, and the quality of such features can affect the users’ utility; 2) the document collections used in search engines are much larger than ever, so a complete test collection that contains all query-document judgments is not available. In response to the above differences and difficulties, evaluation based on implicit feedback is a promising alternative employed in IR evaluation. With this approach, no extra human effort is required to judge query-document relevance. Instead, such judgment information can be automatically predicted from real users’ implicit feedback data. There are three key issues in this methodology: 1) how to estimate query-document relevance and the other features that are useful for quantifying search engine performance; 2) if the complete “judgments” are not available, how we can efficiently collect the most critical information from which the system performance can be derived; 3) because query-document relevance is not the only feature that can affect performance, how we can integrate the others into a good metric to predict the system performance. We will show a set of technologies dealing with these issues.
Jing He, Xiaoming Li
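As a minimal, naive illustration of estimating relevance from implicit feedback (click-through rate over an invented toy log; the talk's actual methodology is considerably more sophisticated than raw CTR):

```python
from collections import defaultdict

# Toy click log: (query, document, was_clicked) per impression.
log = [
    ("q1", "docA", True), ("q1", "docA", True), ("q1", "docA", False),
    ("q1", "docB", False), ("q1", "docB", True),
]

impressions = defaultdict(int)
clicks = defaultdict(int)
for query, doc, clicked in log:
    impressions[query, doc] += 1
    clicks[query, doc] += clicked

# Click-through rate as a crude relevance proxy for each pair.
ctr = {pair: clicks[pair] / impressions[pair] for pair in impressions}
print(ctr)
```

Raw CTR is biased by result position and snippet quality, which is precisely why the abstract treats relevance estimation from implicit feedback as an open research issue rather than a solved problem.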
Abstract: Semantic Technology at The New York Times: Lessons Learned and Future Directions
Abstract
At last year’s International Semantic Web Conference, The New York Times Company announced the release of our Linked Open Data Platform available at http://data.nytimes.com. In the subsequent year, we have continued our efforts in this space and learned many valuable lessons. In our remarks, we will review these lessons; demonstrate innovative prototypes built on our linked data; explore the future of RDF and RDFa in the News Industry and announce an exciting new milestone in our Linked Data efforts.
Evan Sandhaus
What does It Look Like, Really? Imagining how Citizens might Effectively, Usefully and Easily Find, Explore, Query and Re-present Open/Linked Data
Abstract
Are we in the semantic web/linked data community effectively attempting to make possible a new literacy - one of data rather than document analysis? By opening up data beyond the now familiar hand-crafted Web 2.0 mash-up of data about X plus geography, what are we trying to do, really? Is the goal at least in part to give net citizens, rather than only geeks, the ability to pick up, explore, blend, interrogate and re-present data sources, so that we may draw our own statistically informed conclusions about information, and thereby build new knowledge in ways not readily possible before without access to these data seas? If we want citizens - rather than just scientists or statisticians or journalists, for that matter - to be able to pore over data and ask statistically sophisticated questions of comparison and contrast between times, places and people, does that mission re-order our research priorities at all? If the goal is to empower citizens to make use of data, what do we need to make this vision real, beyond attending to Tim Berners-Lee’s call to "free your data"? The purpose of this talk, therefore, will be to look at key interaction issues around defining and delivering a useful, usable *data explorotron* for citizens. In particular, we’ll consider who a "citizen user" is and what access to and tools for linked data sense-making mean in this case. From that perspective, we’ll consider research issues around the discovery, exploration, interrogation and representation of data, not only for a single wild data source but especially for multiple wild heterogeneous data sources. I hope this talk may help frame some stepping stones towards useful and usable interaction with linked data, and I look forward to input from the community to refine such a new literacy agenda further.
mc schraefel
Backmatter
Metadata
Title
The Semantic Web – ISWC 2010
Edited by
Peter F. Patel-Schneider
Yue Pan
Pascal Hitzler
Peter Mika
Lei Zhang
Jeff Z. Pan
Ian Horrocks
Birte Glimm
Copyright year
2010
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-642-17749-1
Print ISBN
978-3-642-17748-4
DOI
https://doi.org/10.1007/978-3-642-17749-1
