
2008 | Book

Software and Data Technologies

First International Conference, ICSOFT 2006, Setúbal, Portugal, September 11-14, 2006, Revised Selected Papers

Edited by: Joaquim Filipe, Boris Shishkov, Markus Helfert

Publisher: Springer Berlin Heidelberg

Book Series: Communications in Computer and Information Science


About this book

This book contains the best papers of the First International Conference on Software and Data Technologies (ICSOFT 2006), organized by the Institute for Systems and Technologies of Information, Communication and Control (INSTICC) in cooperation with the Object Management Group (OMG). Hosted by the School of Business of the Polytechnic Institute of Setúbal, the conference was sponsored by Enterprise Ireland and the Polytechnic Institute of Setúbal. The purpose of ICSOFT 2006 was to bring together researchers and practitioners interested in information technology and software development. The conference tracks were "Software Engineering", "Information Systems and Data Management", "Programming Languages", "Distributed and Parallel Systems" and "Knowledge Engineering." Being crucial for the development of information systems, software and data technologies encompass a large number of research topics and applications: from implementation-related issues to more abstract theoretical aspects of software engineering; from databases and data warehouses to management information systems and knowledge-based systems; in addition, distributed systems, pervasive computing, data quality and other related topics are included in the scope of this conference. ICSOFT included in its program a panel on the future of software development, composed of six distinguished world-class researchers. Furthermore, the conference program was enriched by a tutorial and six keynote lectures. ICSOFT 2006 received 187 paper submissions from 39 countries across all continents.

Table of Contents

Frontmatter

Invited Papers

Frontmatter
Adaptive Integration of Enterprise and B2B Applications
Abstract
Whether application integration is internal to the enterprise or takes the form of external Business-to-Business (B2B) automation, the main integration challenge is similar: how to ensure that the integration solution has the quality of adaptiveness, i.e. that it is understandable, maintainable, and scalable? This question is hard enough for stand-alone application development, let alone integration development, in which the developers may have little control over the participating applications. This paper identifies the main strategic (architectural), tactical (engineering), and operational (managerial) imperatives for building adaptiveness into solutions resulting from integration projects.
Leszek A. Maciaszek
Ambient Intelligence: Basic Concepts and Applications
Abstract
Ambient Intelligence is a multi-disciplinary approach which aims to enhance the way environments and people interact with each other. The ultimate goal of the area is to make the places we live and work in more beneficial to us. Smart Homes is one example of such systems but the idea can be also used in relation to hospitals, public transport, factories and other environments. The achievement of Ambient Intelligence largely depends on the technology deployed (sensors and devices interconnected through networks) as well as on the intelligence of the software used for decision-making. The aims of this article are to describe the characteristics of systems with Ambient Intelligence, to provide examples of their applications and to highlight the challenges that lie ahead, especially for the Software Engineering and Knowledge Engineering communities.
Juan Carlos Augusto
How to Quantify Quality: Finding Scales of Measure
Abstract
Quantification is key to controlling system performance attributes. This paper describes how to quantify performance requirements using Planguage - a specification language developed by the author. It discusses in detail how to develop and use tailored scales of measure.
Tom Gilb
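To make the notion of a tailored scale of measure more concrete, the following is a small illustrative requirement written in the style of Planguage, using its publicly documented elements (Ambition, Scale, Meter, Past, Goal); the tag, qualifiers and figures are hypothetical and are not taken from the paper.

    Learnability:
      Ambition: New users should master routine tasks quickly.
      Scale: Average minutes needed by a defined [User Type] to complete a defined [Task] correctly.
      Meter: Timed sessions with ten representative users per release.
      Past [Release 1.0, Novice, Create Invoice]: 35 minutes.
      Goal [Release 2.0, Novice, Create Invoice]: 10 minutes.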
Metamodeling as an Integration Concept
Abstract
This paper provides an overview of existing applications of the metamodeling concept in computer science. To this end, a literature survey has been performed, which indicates that metamodeling is applied for two main purposes: design and integration. In the course of describing these two applications we also briefly describe some of the existing work we came across. Furthermore, we provide an insight into the important area of semantic integration and interoperability, and show how metamodels can be combined with ontologies in this context. The paper concludes with an outlook on relevant future work in the field of metamodeling.
Dimitris Karagiannis, Peter Höfferer
Engineering Object and Agent Methodologies
Abstract
Method engineering provides an excellent basis for constructing situation-specific software engineering methodologies for both object-oriented (OO) and agent-oriented (AO) software development. Both the OPEN Process Framework (OPF) and the Framework for Agent-oriented Method Engineering (FAME) use an existing repository coupled to an appropriate metamodel (which in the near future will be the new ISO standard metamodel ISO/IEC 24744, itself based on the concept of powertypes). This flexible, yet standardized, repository supplies method fragments that are then configured to support specific projects. In addition, all existing and new OO and AO methodologies can be recreated, thus providing an industrial-strength resource for object-oriented and agent-oriented software development.
B. Henderson-Sellers

Part I: Programming Languages

Frontmatter
From Static to Dynamic Process Types
Abstract
Process types, a kind of behavioral type, specify constraints on message acceptance in object-oriented languages for the purpose of synchronization and to determine object usage and component behavior. So far process types have been regarded as a purely static concept for Actor languages, incompatible with inherently dynamic programming techniques. We propose solutions to the related problems, making the approach usable in more conventional dynamic and concurrent languages. The proposed approach can ensure message acceptability and supports local and static checking of race-free programs.
Franz Puntigam
Aspectboxes: Controlling the Visibility of Aspects
Abstract
Aspect composition is still an open research topic: there is no consensus on how to express where and when aspects have to be composed into a base system. In this paper we present a modular construct for aspects, called aspectboxes, that enables the application of aspects to be limited to a well-defined scope. An aspectbox encapsulates class and aspect definitions. Classes can be imported into an aspectbox, defining a base system to which aspects may then be applied. Refinements and instrumentation defined by an aspect are visible only within this particular aspectbox, leaving other parts of the system unaffected.
Alexandre Bergel, Robert Hirschfeld, Siobhán Clarke, Pascal Costanza
On State Classes and Their Dynamic Semantics
Abstract
We introduce state classes, a construct for programming objects that can be safely accessed concurrently. State classes model the notion of an object's state (intended as an abstraction over the values of its fields), which plays a key role in concurrent object-oriented programming: as the state of an object changes, so does its coordination behavior. We show how state classes can be added to Java-like languages by presenting StateJ, an extension of Java with state classes. The operational semantics of the state class construct is illustrated both at an abstract level, by means of a core calculus for StateJ, and at a concrete level, by defining a translation from StateJ into Java.
Ferruccio Damiani, Elena Giachino, Paola Giannini, Emanuele Cazzola
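To make the coordination problem that state classes address more concrete, the following plain-Java sketch (deliberately not StateJ syntax, which the paper defines) shows an object whose acceptance of messages depends on its abstract state: a one-slot buffer that accepts put only while empty and get only while full.

    // Plain-Java illustration of state-dependent message acceptance.
    // A state-class construct would declare the EMPTY/FULL states and the
    // messages acceptable in each; here they are encoded by hand.
    public class OneSlotBuffer<T> {
        private T slot;                  // null means the buffer is in state EMPTY

        public synchronized void put(T value) throws InterruptedException {
            while (slot != null) {       // put is only acceptable in state EMPTY
                wait();
            }
            slot = value;                // state changes to FULL
            notifyAll();
        }

        public synchronized T get() throws InterruptedException {
            while (slot == null) {       // get is only acceptable in state FULL
                wait();
            }
            T value = slot;
            slot = null;                 // state changes back to EMPTY
            notifyAll();
            return value;
        }
    }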
Software Implementation of the IEEE 754R Decimal Floating-Point Arithmetic
Abstract
The IEEE Standard 754-1985 for Binary Floating-Point Arithmetic [1] is being revised [2], and an important addition to the current text is the definition of decimal floating-point arithmetic [3]. This is aimed mainly at providing a robust, reliable framework for financial applications, which are often subject to legal requirements concerning the rounding and precision of results in areas such as banking, telephone billing, tax calculation, currency conversion, insurance, and accounting in general. Using binary floating-point calculations to approximate decimal calculations has led in the past to numerous proprietary software packages, each with its own characteristics and capabilities. This paper presents new algorithms that were used for a generic software implementation of the IEEE 754R decimal floating-point arithmetic, but that may also be suitable for a hardware implementation. In the absence of hardware to perform IEEE 754R decimal floating-point operations, this new software package, which will be fully compliant with the standard proposal, should be an attractive option for various financial computations. The library presented in this paper uses the binary encoding method from [2] for decimal floating-point values. Preliminary performance results show an improvement of one to two orders of magnitude over a software package currently incorporated in GCC, which operates on values encoded using the decimal method from [2].
Marius Cornea, Cristina Anderson, Charles Tsen
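The motivation for decimal arithmetic in financial code can be seen with a short Java comparison (using the standard BigDecimal class, not the library described in the paper): a binary double cannot represent 0.10 exactly, so repeated additions drift, while a decimal representation stays exact.

    import java.math.BigDecimal;

    public class DecimalVsBinary {
        public static void main(String[] args) {
            // Binary floating point: 0.10 has no exact binary representation,
            // so adding ten cents ten times does not give exactly 1.00.
            double d = 0.0;
            for (int i = 0; i < 10; i++) d += 0.10;
            System.out.println(d);        // prints 0.9999999999999999

            // Decimal arithmetic keeps the result exact.
            BigDecimal b = BigDecimal.ZERO;
            for (int i = 0; i < 10; i++) b = b.add(new BigDecimal("0.10"));
            System.out.println(b);        // prints 1.00
        }
    }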

Part II: Software Engineering

Frontmatter
Bridging between Middleware Systems: Optimisations Using Downloadable Code
Abstract
There are multiple middleware systems, and no single system is likely to become predominant. There is therefore an interoperability requirement between clients and services belonging to different middleware systems. Typically this is achieved by a bridge between the invocation and discovery protocols. In this paper we introduce three design patterns based on a bridging service cache manager and dynamic proxies. This is illustrated by examples, including a new custom lookup service that allows Jini clients to discover and invoke UPnP services. There is a detailed discussion of the pros and cons of each pattern.
Jan Newmarch
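Since the patterns above rely on dynamic proxies, it may help to recall the basic Java mechanism they build on: a client is handed an interface-typed proxy whose invocation handler can forward each call to another protocol. The PrinterService interface and ServiceBridge handler below are hypothetical names used only for illustration, not types from the paper.

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Method;
    import java.lang.reflect.Proxy;

    // Hypothetical service interface that the client programs against.
    interface PrinterService {
        void print(String document);
    }

    // Handler that would translate each call into another middleware's
    // protocol (e.g., turning a Jini-style invocation into a UPnP request).
    class ServiceBridge implements InvocationHandler {
        @Override
        public Object invoke(Object proxy, Method method, Object[] args) {
            System.out.println("Forwarding " + method.getName() + " to the target protocol");
            // ... marshal args and perform the remote invocation here ...
            return null;
        }
    }

    public class BridgeDemo {
        public static void main(String[] args) {
            PrinterService service = (PrinterService) Proxy.newProxyInstance(
                    PrinterService.class.getClassLoader(),
                    new Class<?>[] { PrinterService.class },
                    new ServiceBridge());
            service.print("report.pdf");  // handled by the bridge at run time
        }
    }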
MDE for BPM: A Systematic Review
Abstract
Due to the rapid change in the business processes of organizations, Business Process Management (BPM) has come into being. BPM helps business analysts to manage all concerns related to business processes, but the gap between these analysts and the people who build the applications is still large. The organization's value chain changes very rapidly, and modifying the systems that support the business processes at the same pace is impossible. MDE (Model Driven Engineering) provides good support for transferring these business process changes to the systems that implement the processes. Thus, by using an MDE approach such as MDA, the alignment between business people and software engineering should be improved. To discover the different proposals that exist in this area, a systematic review was performed. As a result, the OMG's Business Process Definition Metamodel (BPDM) has been identified as the standard that will be key to the application of MDA to BPM.
Jose Manuel Perez, Francisco Ruiz, Mario Piattini
Exploring Feasibility of Software Defects Orthogonal Classification
Abstract
Defect categorization is the basis of many works related to software defect detection. The underlying assumption is that different subjects assign the same category to the same defect. Because this assumption has been questioned, we decided to study the phenomenon with the aim of providing empirical evidence. Because defects can be categorized using different criteria, and the experience of the professionals involved in applying such a criterion could affect the results, we further decided: (i) to focus on the IBM Orthogonal Defect Classification (ODC); (ii) to involve professionals only after having stabilized the process and materials with students. This paper is concerned with our basic experiment. We analyze a benchmark of more than two thousand data points, obtained from twenty-four code segments, each seeded with one defect, classified by one hundred twelve sophomores who were trained for six hours and then asked to classify those defects in a controlled environment for three consecutive hours. The focus is on the discrepancy among categorizers, and on the orthogonality, affinity, effectiveness, and efficiency of the categorizations. The results show that: (i) training is necessary to achieve orthogonal and effective classifications and to obtain agreement between subjects; (ii) efficiency is on average five minutes per defect classification; (iii) there is affinity between some categories.
Davide Falessi, Giovanni Cantone
Mapping Medical Device Standards Against the CMMI for Configuration Management
Abstract
This paper outlines the development of a Configuration Management model for the MEDical device software industry (CMMED). The paper details how medical device regulations associated with Configuration Management (CM) may be satisfied by adopting less than half of the practices from the CM process area of the Capability Maturity Model Integration (CMMI). It also investigates how the CMMI CM process area may be extended with additional practices that are outside the remit of the CMMI, but are required in order to satisfy medical device regulatory guidelines.
Fergal McCaffery, Rory V O’Connor, Gerry Coleman
A Systematic Review of Measurement in Software Engineering: State-of-the-Art in Measures
Abstract
This work provides a summary of the state of the art in software measures by means of a systematic review of the current literature. Nowadays, many companies need to answer the following questions: How to measure? When to measure? What to measure? Many efforts have been made to answer these questions, resulting in a large amount of data that is sometimes confusing and unclear and that needs to be properly processed and classified in order to provide a better overview of the current situation. We have used a Software Measurement Ontology to classify and organize the data in this field. We have also analyzed the results of the systematic review to show the trends in the software measurement field and the software processes on which measurement efforts have focused. This has allowed us to discover which parts of the process are not sufficiently supported by measurement, and thus to motivate future research in those areas.
Oswaldo Gómez, Hanna Oktaba, Mario Piattini, Félix García
Engineering a Component Language: CompJava
Abstract
After initial enthusiasm about the new generation of component languages, closer inspection and use identified, alongside some very strong points, several disturbing drawbacks, which seem to have been an important impediment to wider acceptance. Restricted acceptance of component languages would be harmful, since integrating architecture description with a programming language increases the quality of application architectures and applications, as our experience confirms. We therefore took an engineering approach to the construction of a new Java-based component language without these drawbacks. After deriving the component language requirements, we designed a first language version meeting them and developed a compiler. We used it in several projects, and iterated three more times through the same cycle with improved language versions. The result, called CompJava, presented in this paper, seems to be mature enough for use in an industrial environment.
Hans Albrecht Schmid, Marco Pfeifer

Part III: Distributed and Parallel Systems

Frontmatter
Towards a Quality Model for Grid Portals
Abstract
Researchers require multiple computing resources when conducting computational research, which makes the use of distributed resources necessary. In response to the need for dependable, consistent and pervasive access to distributed resources, the Grid came into existence. Grid portals subsequently appeared with the aim of facilitating the use and management of distributed resources. Nowadays, many Grid portals can be found, and users can switch from one Grid portal to another with a single mouse click. It is therefore very important that users regularly return to the same Grid portal, since otherwise the portal might disappear, and the only mechanism that makes users return is high quality. With these considerations in mind, in this paper we develop a Grid portal quality model from an existing portal quality model, namely PQM. In addition, the resulting model has been applied to two specific Grid portals.
Ma Ángeles Moraga, Coral Calero, Mario Piattini, David Walker
Algorithmic Skeletons for Branch and Bound
Abstract
Algorithmic skeletons are predefined components for parallel programming. We present a skeleton for branch & bound problems on MIMD machines with distributed memory. This skeleton is based on a distributed work pool. We discuss two variants, one with supply-driven work distribution and one with demand-driven work distribution. This approach is compared to a simple branch & bound skeleton with a centralized work pool, which was used in a previous version of our skeleton library Muesli. Based on experimental results for two example applications, namely the n-puzzle and the traveling salesman problem, we show that the distributed work pool is clearly better and enables good runtimes and, in particular, scalability. Moreover, we discuss some implementation aspects such as termination detection as well as overlapping computation and communication.
Michael Poldner, Herbert Kuchen
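For readers unfamiliar with the pattern being parallelized, the following sequential Java sketch shows the centralized work-pool form of branch & bound that the distributed skeleton improves on; the Node interface and method names are illustrative assumptions, not the Muesli API.

    import java.util.Comparator;
    import java.util.PriorityQueue;

    // Hypothetical problem interface: each node is a (sub)problem.
    interface Node {
        double lowerBound();        // optimistic bound; equals the cost for a complete solution
        boolean isSolution();
        Iterable<Node> branch();    // expand into subproblems
    }

    public class BranchAndBound {
        public static Node solve(Node root) {
            PriorityQueue<Node> pool =
                    new PriorityQueue<>(Comparator.comparingDouble(Node::lowerBound));
            pool.add(root);
            Node best = null;
            double bestCost = Double.POSITIVE_INFINITY;

            while (!pool.isEmpty()) {
                Node node = pool.poll();
                if (node.lowerBound() >= bestCost) continue;   // bound: prune
                if (node.isSolution()) {                       // new incumbent
                    best = node;
                    bestCost = node.lowerBound();
                } else {
                    for (Node child : node.branch()) {         // branch
                        if (child.lowerBound() < bestCost) pool.add(child);
                    }
                }
            }
            return best;
        }
    }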
A Hybrid Topology Architecture for P2P File Sharing Systems
Abstract
There is currently much interest in emerging Peer-to-Peer (P2P) networks over the Internet, because they provide a good substrate for creating data sharing, content distribution, and application-layer multicast applications. There are two classes of P2P overlay networks: structured and unstructured. Structured networks can locate items efficiently, but the searching process is not user friendly. Conversely, unstructured networks have efficient mechanisms to search for content, but the lookup process does not take advantage of the distributed nature of the system. In this paper, we propose a hybrid structured and unstructured topology in order to take advantage of both kinds of network. In addition, our proposal guarantees that if a content item exists anywhere in the network, it will be reachable with probability one. Simulation results show that the behaviour of the network is stable and that the network distributes content efficiently so as to avoid congestion.
J. P. Muñoz-Gea, J. Malgosa-Sanahuja, P. Manzanares-Lopez, J. C. Sanchez-Aarnoutse, A. M. Guirado-Puerta
Parallel Processing of “Group-By Join” Queries on Shared Nothing Machines
Abstract
SQL queries involving join and group-by operations are frequently used in many decision-support applications. In these applications, the size of the input relations is usually very large, so parallelizing these queries is highly desirable in order to obtain acceptable response times. The main drawbacks of existing parallel algorithms for this kind of query are that they are very sensitive to data skew and incur expensive communication and input/output costs when evaluating the join operation. In this paper, we present an algorithm that minimizes the communication cost by performing the group-by operation before redistribution, so that only tuples that will appear in the join result are redistributed. In addition, it evaluates the query without materializing the result of the join operation, thus reducing the input/output cost of join intermediate results. The performance of this algorithm is analyzed using the scalable and portable BSP (Bulk Synchronous Parallel) cost model, which predicts a near-linear speed-up even for highly skewed data.
M. Al Hajj Hassan, M. Bamha
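The central idea, aggregating locally before any tuples are shipped so that only groups that can actually join are redistributed, can be illustrated with a small in-memory Java sketch; the types and method names are illustrative and do not reproduce the paper's algorithm.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    public class EarlyGroupBy {
        // Locally pre-aggregate SUM(amount) per join key, keeping only keys
        // that also occur in the other relation, before redistribution.
        static Map<String, Double> preAggregate(Iterable<String[]> localTuples,
                                                Set<String> joinKeysOfOtherRelation) {
            Map<String, Double> partialSums = new HashMap<>();
            for (String[] tuple : localTuples) {            // tuple = {joinKey, amount}
                String key = tuple[0];
                if (!joinKeysOfOtherRelation.contains(key)) continue;  // cannot join
                partialSums.merge(key, Double.parseDouble(tuple[1]), Double::sum);
            }
            return partialSums;  // typically far fewer entries than input tuples
        }
    }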
Impact of Wrapped System Call Mechanism on Commodity Processors
Abstract
Split-phase style transactions separate issuing a request and receiving the result of an operation into different threads. We apply this style to the system call mechanism, so that a system call is split into several threads in order to eliminate mode changes from system call execution inside the kernel. This style of system call mechanism improves throughput and is also useful for enhancing locality of reference. In this paper, we call this mechanism the Wrapped System Call (WSC) mechanism and evaluate its effectiveness on commodity processors. The WSC mechanism can be effective even on commodity platforms that do not have explicit multithreading support. We evaluate the WSC mechanism based on a performance evaluation model using a simplified benchmark. We also apply the WSC mechanism to variants of the cp program to observe its effect on locality of reference. When we apply the WSC mechanism to the cp program, the combination of our split-phase style system calls and our scheduling mechanism improves throughput by reducing mode changes and exploiting locality of reference.
Satoshi Yamada, Shigeru Kusakabe, Hideo Taniguchi
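The split-phase style itself, issuing a request in one step and collecting its result later, can be sketched at the application level with a standard Java executor; this is only an analogy for the kernel-level mechanism evaluated in the paper, and the file names are placeholders.

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    public class SplitPhaseCopy {
        public static void main(String[] args) throws Exception {
            ExecutorService worker = Executors.newSingleThreadExecutor();

            // Phase 1: issue the read request and continue with other work.
            Future<byte[]> pending =
                    worker.submit(() -> Files.readAllBytes(Path.of("in.dat")));

            // ... other work can overlap with the pending request here ...

            // Phase 2: pick up the result and issue the write.
            byte[] data = pending.get();
            Files.write(Path.of("out.dat"), data);
            worker.shutdown();
        }
    }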

Part IV: Information Systems and Data Management

Frontmatter
Adding More Support for Associations to the ODMG Object Model
Abstract
The Object Model defined in the ODMG standard for object data management systems (ODMSs) provides referential integrity support for one-to-one, one-to-many, and many-to-many associations. It does not, however, provide support that enforces the multiplicities often specified for such associations in UML class diagrams, nor does it provide the same level of support for associations that is provided in relational systems via the SQL references clause. The Object Relationship Notation (ORN) is a declarative scheme that provides for the specification of enhanced association semantics. These semantics include multiplicities and are more powerful than those provided by the SQL references clause. This paper describes how ORN can be added to the ODMG Object Model and discusses algorithms that can be used to support ORN association semantics in an ODMG-compliant ODMS. The benefits of such support are improved productivity in developing object database systems and increased system reliability.
Bryon K. Ehlmann
Measuring Effectiveness of Computing Facilities in Academic Institutes: A New Solution for a Difficult Problem
Abstract
There has been a constant effort to evaluate the success of information technology in organizations. This kind of investment is extremely hard to evaluate because of the difficulty in identifying tangible benefits and the high uncertainty about achieving the expected value. Although a lot of research has been done in this direction, little has been written about evaluating IT in non-profit organizations such as educational institutions. Measures for evaluating the success of IT in such institutions are markedly different from those for business organizations. The purpose of this paper is to build on the existing body of research by proposing a new model for measuring the effectiveness of computing facilities in academic institutes. As a baseline, DeLone and McLean's model for measuring the success of information systems [2], [3] is used, as it is the pioneering model in this regard.
Smriti Sharma, Veena Bansal
Combining Information Extraction and Data Integration in the ESTEST System
Abstract
We describe an approach that builds on techniques from Data Integration and Information Extraction in order to make better use of the unstructured data found in application domains, such as the Semantic Web, which require the integration of information from structured data sources, ontologies and text. We describe the design and implementation of the ESTEST system, which integrates available structured and semi-structured data sources into a virtual global schema that is used to partially configure an information extraction process. The information extracted from the text is merged with this virtual global database and is available for query processing over the entire integrated resource. As a result of this semantic integration, new queries can be answered which would not be possible from the structured and semi-structured data alone. We give some experimental results from the ESTEST system in use.
Dean Williams, Alexandra Poulovassilis
Introducing a Change-Resistant Framework for the Development and Deployment of Evolving Applications
Abstract
Software development is an R&D-intensive activity, dominated by human creativity and diseconomies of scale. Current efforts focus on design patterns, reusable components and forward-engineering mechanisms as the right next stage in cutting the Gordian knot of software. Model-driven development improves productivity by introducing formal models that can be understood by computers. Through these models the problems of portability, interoperability, maintenance, and documentation are also successfully addressed. However, the problem of evolving requirements, which is particularly prevalent in business applications, additionally calls for efficient mechanisms that ensure consistency between models and code and enable seamless, rapid accommodation of changes without severely interrupting the operation of the deployed application. This paper introduces a framework that supports rapid development and deployment of evolving web-based applications, based on an integrated database schema. The proposed framework can be seen as an extension of the Model Driven Architecture targeting a specific family of applications.
Georgios Voulalas, Georgios Evangelidis
Smart Business Objects for Web Applications: A New Approach to Model Business Objects
Abstract
At present, there is a growing need to accelerate the development of web applications and to support their continuous evolution as business needs evolve. The object persistence and web interface generation capabilities in contemporary MVC (Model View Controller) web application development frameworks, and the model-to-code generation capability in Model-Driven Development tools, have simplified the modelling of business objects for developing web applications. However, there is still a mismatch between current technologies and the support needed for high-level, semantically rich modelling of web-ready business objects for rapid development of modern web applications. We therefore propose a novel concept called Smart Business Object (SBO) to address this problem. In essence, SBOs are web-ready business objects: they have high-level, web-oriented attributes such as email, URL, video, image, and document, which allows them to be modelled at a higher level of abstraction than in traditional modelling approaches. A lightweight, near-English modelling language called SBOML (Smart Business Object Modelling Language) is proposed to model SBOs. We have created a toolkit to streamline the creation (modelling) and consumption (execution) of SBOs. With these tools, we are able to build fully functional web applications in a very short time without any coding.
Xufeng (Danny) Liang, Athula Ginige
A Data Mining Approach to Learning Probabilistic User Behavior Models from Database Access Log
Abstract
The problem of user behavior modeling arises in many fields of computer science and software engineering. In this paper we investigate a data mining approach to learning probabilistic user behavior models from database usage logs. We propose a procedure for translating database traces into a representation suitable for applying data mining methods. However, most existing data mining methods rely on the order of actions and ignore the time intervals between actions. To avoid this problem we propose a novel method based on a combination of a decision tree classification algorithm and an empirical time-dependent feature map, motivated by potential function theory. The performance of the proposed method was experimentally evaluated on real-world data. The comparison with existing state-of-the-art data mining methods confirmed the outstanding performance of our method in predictive user behavior modeling and demonstrated competitive results in anomaly detection.
Mikhail Petrovskiy

Part V: Knowledge Engineering

Frontmatter
Approximate Reasoning to Learn Classification Rules
Abstract
In this paper, we propose an original use of approximate reasoning, not only as a mode of inference but also as a means to refine a learning process. This work is done within the framework of the supervised learning method SUCRAGE, which is based on the automatic generation of classification rules. Production rules, whose conclusions are accompanied by belief degrees, are obtained by supervised learning from a training set. These rules are then exploited by a basic inference engine: it fires only the rules that the new observation to be classified matches exactly. To introduce more flexibility, this engine was extended to an approximate inference that also fires rules not too far from the new observation. In this paper, we propose to use approximate reasoning to generate new rules with widened premises: the imprecision of the observations is thus taken into account, and problems due to the discretization of continuous attributes are eased. The objective is then to exploit the new rule base with a basic inference engine, which is easier to interpret. The proposed method was implemented and experimental tests were carried out.
Amel Borgi
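A minimal sketch of what widening a premise can mean for a discretized continuous attribute: the numeric interval of the premise is enlarged by a tolerance before exact matching, so observations falling just outside a discretization boundary still fire the rule. The Java class and the tolerance scheme below are illustrative assumptions, not the SUCRAGE implementation.

    public class WidenedRule {
        // Premise: the attribute value must lie in [low, high]; the rule
        // concludes a class label with an attached belief degree.
        final double low, high, belief;
        final String conclusion;

        WidenedRule(double low, double high, String conclusion, double belief) {
            this.low = low; this.high = high;
            this.conclusion = conclusion; this.belief = belief;
        }

        // Widening produces a new rule with an enlarged premise interval that a
        // basic, exact-matching inference engine can still use directly.
        WidenedRule widen(double epsilon) {
            return new WidenedRule(low - epsilon, high + epsilon, conclusion, belief);
        }

        boolean fires(double observedValue) {
            return observedValue >= low && observedValue <= high;  // exact matching
        }
    }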
Combining Metaheuristics for the Job Shop Scheduling Problem with Sequence Dependent Setup Times
Abstract
Job Shop Scheduling (JSS) is a hard problem that has interested researchers in various fields, such as Operations Research and Artificial Intelligence, over the last decades. Due to its high complexity, only small instances can be solved by exact methods, while instances of practical size must be solved by approximate methods guided by heuristic knowledge. In this paper we confront Job Shop Scheduling with Sequence Dependent Setup Times (SDJSS). The SDJSS problem models many real situations better than the JSS. Our approach consists of extending a genetic algorithm and a local search method that proved to be efficient in solving the JSS problem. We report results from an experimental study showing that the proposed approaches are more efficient than another genetic algorithm proposed in the literature, and that they are quite competitive with some of the state-of-the-art approaches.
Miguel A. González, María R. Sierra, Camino R. Vela, Ramiro Varela, Jorge Puente
A Description Clustering Data Mining Technique for Heterogeneous Data
Abstract
In this work we present a formal framework for mining complex objects, that is, objects characterized by a set of heterogeneous attributes and their corresponding values. First we introduce several data mining techniques available in the literature for extracting association rules, show some of their drawbacks, and explain how our proposed solution tackles them. We then demonstrate how applying a clustering algorithm as a pre-processing step on the data allows us to find groups of attributes and objects that provide a richer starting point for the data mining process. We then define the formal framework, its decision functions and its interestingness measures, as well as a newly designed data mining algorithm specifically tuned to our objectives. We also show the type of knowledge to be extracted, in the form of a set of association rules. Finally, we state our conclusions and propose future work.
Alejandro García López, Rafael Berlanga, Roxana Danger
A Pattern Selection Algorithm in Kernel PCA Applications
Abstract
Principal Component Analysis (PCA) has been extensively used in different fields, including earth science, for spatial pattern identification. However, the intrinsically linear nature of standard PCA prevents scientists from detecting nonlinear structures. Kernel-based principal component analysis (KPCA), a recently emerging technique, provides a new approach for exploring and identifying nonlinear patterns in scientific data. In this paper, we recast KPCA in the PCA notation commonly used in the earth science community and demonstrate how to apply the KPCA technique to the analysis of earth science data sets. In such applications, a large number of principal components must be retained to study the spatial patterns, while the variance cannot be quantitatively transferred from the feature space back into the input space. Therefore, we propose a KPCA pattern selection algorithm based on correlations with a given geophysical phenomenon. We demonstrate the algorithm with two data sets widely used in the geophysical community, namely the Normalized Difference Vegetation Index (NDVI) and the Southern Oscillation Index (SOI). The results indicate that the new KPCA algorithm can reveal more significant details in spatial patterns than standard PCA.
Ruixin Yang, John Tan, Menas Kafatos
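For reference, the quantities ranked by such a selection algorithm are the standard kernel principal components: with a centered kernel matrix K, K_{ij} = k(x_i, x_j), one solves the eigenproblem and projects a sample onto the m-th component, and a correlation-based criterion then ranks components against an index series such as SOI. The formulation below is the textbook KPCA notation, not equations copied from the paper.

    K \alpha^{(m)} = \lambda_m \alpha^{(m)}, \qquad
    z_m(x) = \sum_{i=1}^{N} \alpha_i^{(m)} \, k(x_i, x), \qquad
    \rho_m = \operatorname{corr}(z_m, \mathrm{SOI})

Components are then retained in order of decreasing |\rho_m| rather than decreasing eigenvalue \lambda_m.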
Backmatter
Metadata
Title
Software and Data Technologies
Edited by
Joaquim Filipe
Boris Shishkov
Markus Helfert
Copyright Year
2008
Publisher
Springer Berlin Heidelberg
Electronic ISBN
978-3-540-70621-2
Print ISBN
978-3-540-70619-9
DOI
https://doi.org/10.1007/978-3-540-70621-2