
1999 | Book

Rendering Techniques’ 99

Proceedings of the Eurographics Workshop in Granada, Spain, June 21–23, 1999

Edited by: Dr. Dani Lischinski, Dr. Greg Ward Larson

Publisher: Springer Vienna

Book series: Eurographics


About this Book

This book contains the proceedings of the 10th Eurographics Workshop on Rendering, which took place from the 21st to the 23rd of June, 1999, in Granada, Spain. Originally an outgrowth of the annual Eurographics meeting, the workshop was organized by a dedicated group of researchers who felt there was insufficient opportunity at Eurographics and Siggraph to exchange ideas specifically on rendering. Over the past 9 years, the workshop has become renowned as an international watershed for top quality work in this field, attracting between 50 and 100 attendees each year to share their latest research.

This year we received a total of 63 submissions. Each paper was carefully reviewed by two of the 25 international programme committee members, as well as two external reviewers, selected by the co-chairs from a pool of 71 individuals. (The programme committee and external reviewers are listed following the contents pages.) In this new review process, all submissions and reviews were handled electronically, with the exception of videos submitted with a few of the papers. This streamlined the review process considerably, while reducing the costs and confusion associated with courier delivery of hundreds of papers.

Table of Contents

Frontmatter
Disruptive Technologies in Computer Graphics: Past, Present, and Future
Abstract
The history and famous landmarks of computer graphics hardware are well known. Starting with Ivan Sutherland’s Sketchpad system in the early 1960’s, the first generation of computer graphics hardware consisted of calligraphic (vector) displays capable of drawing complex three-dimensional wireframe models at interactive rates. In the early 1970’s expensive color frame buffers with the capability for displaying static color images were introduced. Although more and more intelligence was added to these frame buffers, Jim Clark’s geometry engine and the first graphics workstations were not introduced until the 1980’s. During the 1970’s, only the very costly and specialized hardware used for military and aerospace simulations was capable of real-time surface color display.
Donald P. Greenberg
Perceptually-informed accelerated rendering of high quality walkthrough sequences
Abstract
In this paper, we consider accelerated rendering of walkthrough animation sequences using a combination of ray tracing and Image-Based Rendering (IBR) techniques. Our goal is to derive as many pixels as possible using inexpensive IBR techniques without affecting the animation quality. A perception-based spatio-temporal Animation Quality Metric (AQM) is used to automatically guide such a hybrid rendering. The Pixel Flow (PF) obtained as a by-product of the IBR computation is an integral part of the AQM. The final animation quality is enhanced by an efficient spatio-temporal antialiasing, which utilizes the PF to perform motion-compensated filtering.
Karol Myszkowski, Przemyslaw Rokita, Takehiro Tawara
Interactive Rendering using the Render Cache
Abstract
Interactive rendering requires rapid visual feedback. The render cache is a new method for achieving this when using high-quality pixel-oriented renderers such as ray tracing that are usually considered too slow for interactive use. The render cache provides visual feedback at a rate faster than the renderer can generate complete frames, at the cost of producing approximate images during camera and object motion. The method works both by caching previous results and reprojecting them to estimate the current image and by directing the renderer’s sampling to more rapidly improve subsequent images.
Our implementation demonstrates an interactive application working with both ray tracing and path tracing renderers in situations where they would normally be considered too expensive. Moreover we accomplish this using a software-only implementation without the use of 3D graphics hardware.
Bruce Walter, George Drettakis, Steven Parker
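The reprojection idea described in this abstract — splatting cached shaded samples into the new view with a depth test — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, the 3×4 camera matrix convention, and the nearest-sample z-buffer are all assumptions for the sketch.

```python
import numpy as np

def reproject_cache(points, colors, view, width, height):
    """Reproject cached world-space samples (point, color) into a new view.

    `view` is an assumed 3x4 camera matrix mapping homogeneous world points
    to image coordinates; a simple z-buffer resolves collisions when several
    cached samples land on the same pixel.
    """
    image = np.zeros((height, width, 3))
    depth = np.full((height, width), np.inf)
    hom = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    proj = hom @ view.T                                   # projected (N, 3)
    z = proj[:, 2]
    valid = z > 0                                         # cull points behind camera
    xs = (proj[valid, 0] / z[valid]).astype(int)
    ys = (proj[valid, 1] / z[valid]).astype(int)
    for x, y, d, c in zip(xs, ys, z[valid], colors[valid]):
        if 0 <= x < width and 0 <= y < height and d < depth[y, x]:
            depth[y, x] = d                               # nearest sample wins
            image[y, x] = c
    return image
```

Pixels left empty by the reprojection are exactly the ones the render cache would prioritize for new renderer samples.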
Interactive Ray-Traced Scene Editing Using Ray Segment Trees
Abstract
This paper presents a ray tracer that facilitates near-interactive scene editing with incremental rendering; the user can edit the scene both by manipulating objects and by changing the viewpoint. Our system uses object-space radiance interpolants to accelerate ray tracing by approximating radiance, while bounding error. We introduce a new hierarchical data structure, the ray segment tree (RST), which tracks the dependencies of radiance interpolants on regions of world space. When the scene is edited, affected interpolants are rapidly identified, typically in 0.1 seconds, by traversing these ray segment trees. The affected interpolants are updated and used to re-render the scene with a 3 to 4× speedup over the base ray tracer, even when the viewpoint is changed. Although the system does no pre-processing, performance is better than for the base ray tracer even on the first rendered frame.
Kavita Bala, Julie Dorsey, Seth Teller
Decoupling Polygon Rendering from Geometry using Rasterization Hardware
Abstract
The dramatically increasing size of polygonal models resulting from 3D scanning devices and advanced modeling techniques requires new approaches to reduce the load of geometry transfer and processing. In order to supplement methods like polygon reduction or geometry compression we suggest exploiting the processing power and functionality of the rasterization and texture subsystem of advanced graphics hardware. We demonstrate that 3D-texture maps can be used to render voxelized polygon models of arbitrary complexity at interactive rates by extracting isosurfaces from distance volumes. Therefore, we propose two fundamental algorithms to limit the rasterization load: First, the model is partitioned into a hierarchy of axis-aligned bounding boxes that are voxelized in an error-controlled multi-resolution representation. Second, rasterization is restricted to the thin boundary regions around the isosurface representing the voxelized geometry. Furthermore, we suggest and simulate an OpenGL extension enabling advanced per-pixel lighting and shading. Although the presented approach exhibits certain limitations we consider it a starting point for hybrid solutions balancing load between the geometry and the rasterization stage, and we expect some influence on future hardware design.
Rüdiger Westermann, Ove Sommer, Thomas Ertl
Hierarchical Image-Based Rendering using Texture Mapping Hardware
Abstract
Multi-layered depth images containing color and normal information for subobjects in a hierarchical scene model are precomputed with standard z-buffer hardware for six orthogonal views. These are adaptively selected according to the proximity of the viewpoint, and combined using hardware texture mapping to create “reprojected” output images for new viewpoints. (If a subobject is too close to the viewpoint, the polygons in the original model are rendered.) Specific z-ranges are selected from the textures with the hardware alpha test to give accurate 3D reprojection. The OpenGL color matrix is used to transform the precomputed normals into their orientations in the final view, for hardware shading.
Nelson Max, Oliver Deussen, Brett Keating
Towards Interactive Photorealistic Rendering of Indoor Scenes: A Hybrid Approach
Abstract
Photorealistic rendering methods produce accurate solutions to the rendering equation but are computationally expensive and typically non-interactive. Some researchers have used graphics hardware to obtain photorealistic effects but not at interactive frame rates. We describe a technique to achieve near photorealism of simple indoor scenes at interactive rates using both CPUs and graphics hardware in parallel. This allows the user the ability to interactively move objects and lights in the scene. Our goal is to introduce as many global illumination effects as possible while maintaining a high frame rate. We describe methods to generate soft shadows, approximate one-bounce indirect lighting, and specular reflection and refraction effects.
Tushar Udeshi, Charles D. Hansen
Group Accelerated Shooting Methods for Radiosity
Abstract
The introduction of the Progressive Refinement method was the starting point of interactivity in the radiosity illumination process. Overshooting methods brought an important acceleration to the convergence particularly for scenes with a high mean reflectivity.
In this paper we present a new acceleration technique for PR and overshooting methods based on group shooting methods. The acceleration is obtained by occasionally selecting groups of interacting patches and by solving the subsystem built from this group.
This technique allows us to reduce the number of iterations that are required to solve the radiosity system and only involves a small computation overhead. Comparing different algorithms for scenes with particular properties, we highlight interesting results of the Group Accelerated Shooting Methods especially when considering complex scenes with many occlusions.
François Rousselle, Christophe Renaud
Gathering for Free in Random Walk Radiosity
Abstract
We present a simple technique that improves the efficiency of random walk algorithms for radiosity. Each generated random walk is used to simultaneously sample two distinct radiosity estimators. The first estimator is the commonly used shooting estimator, in which the radiosity due to self-emitted light at the origin of the random walk is recorded at each subsequently visited patch. With the second estimator, the radiosity due to self-emitted light at subsequent destinations is recorded at each visited patch. Closed formulae for the variance of the involved estimators allow us to derive a cheap heuristic for combining the resulting radiosity estimates. Empirical results agree well with the heuristic prediction. A fair error reduction is obtained at a negligible additional cost.
Mateu Sbert, Alex Brusi, Philippe Bekaert
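The core idea — scoring one walk with both a shooting and a gathering estimator — can be illustrated schematically. Everything below is an assumption for the sketch (the function name, the single-reflectance Russian-roulette termination, the per-patch score bookkeeping); the paper's actual estimators and weights differ.

```python
def walk_estimators(emission, reflectance, transition, start, rng):
    """Sample one random walk and score two estimators at once (schematic).

    `transition(i)` samples the next patch from patch i; the walk terminates
    by Russian roulette with survival probability reflectance[i]. Returns
    per-patch scores: `shoot` deposits the ORIGIN's self-emitted radiosity
    at each downstream patch; `gather` deposits each downstream patch's
    self-emitted radiosity back at the patches visited before it.
    """
    path = [start]
    i = start
    while rng.random() < reflectance[i]:   # Russian-roulette continuation
        i = transition(i)
        path.append(i)
    shoot, gather = {}, {}
    e0 = emission[path[0]]
    for k in range(1, len(path)):
        shoot[path[k]] = shoot.get(path[k], 0.0) + e0            # shooting
        for j in range(k):                                        # gathering
            gather[path[j]] = gather.get(path[j], 0.0) + emission[path[k]]
    return shoot, gather
```

The point of the paper is that `gather` comes essentially for free: it reuses the same path, so the extra variance-reduction costs only bookkeeping.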
Information Theory Tools for Scene Discretization
Abstract
Finding an optimal discretization of a scene is an important but difficult problem in radiosity. The efficiency of hierarchical radiosity for instance, depends entirely on the subdivision criterion and strategy that is used. We study the problem of adaptive scene discretization from the point of view of information theory. In previous work, we have introduced the concept of mutual information, which represents the information transfer or correlation in a scene, as a complexity measure and presented some intuitive arguments and preliminary results concerning the relation between mutual information and scene discretization. In this paper, we present a more general treatment supporting and extending our previous findings to the level that the development of practical information theory-based tools for optimal scene discretization becomes feasible.
Miquel Feixas, Esteve del Acebo, Philippe Bekaert, Mateu Sbert
Geospecific rendering of alpine terrain
Abstract
Realistic rendering of outdoor terrain requires both that the geometry of the environment be modeled accurately and that appropriate texturing be laid down on top of that geometry. While elevation data is widely available for much of the world and many methods exist for converting this data to forms suitable for graphics systems, we have much less experience with patterning the resulting surface. This paper describes an approach for using panchromatic (grayscale) aerial imagery to produce color views of alpine scenes. The method is able to remove shading and shadowing effects in the original image so that shading and shadowing appropriate to variable times of day can be added. Seasonal snow cover can be added in a physically plausible manner. Finally, 3-D instancing of trees and brush can be added in locations consistent with the imagery, significantly improving the visual quality.
Simon Premože, William B. Thompson, Peter Shirley
Multiple Textures Stitching and Blending on 3D Objects
Abstract
In this paper we propose a new approach for mapping and blending textures on 3D geometries. The system starts from a 3D mesh which represents a real object and improves this model with pictorial detail. Texture detail is acquired via a common photographic process directly from the real object. These images are then registered and stitched on the 3D mesh, by integrating them into a single standard texture map. An optimal correspondence between regions of the 3D mesh and sections of the acquired images is built. Then, a new approach is proposed to produce a smooth join between different images that map on adjacent sections of the surface, based on texture blending. For each mesh face which is on the adjacency border between different observed images, a corresponding triangular texture patch is resampled as a weighted blend of the corresponding adjacent images sections. The accuracy of the resampling and blending process is improved by computing an accurate piecewise local registration of the original images with respect to the current face vertices. Examples of the results obtained with sample Cultural Heritage objects are presented and discussed.
C. Rocchini, P. Cignoni, C. Montani, R. Scopigno
Image-Based BRDF Measurement Including Human Skin
Abstract
We present a new image-based process for measuring the bidirectional reflectance of homogeneous surfaces rapidly, completely, and accurately. For simple sample shapes (spheres and cylinders) the method requires only a digital camera and a stable light source. Adding a 3D scanner allows a wide class of curved near-convex objects to be measured. With measurements for a variety of materials from paints to human skin, we demonstrate the new method’s ability to achieve high resolution and accuracy over a large domain of illumination and reflection directions. We verify our measurements by tests of internal consistency and by comparison against measurements made using a gonioreflectometer.
Stephen R. Marschner, Stephen H. Westin, Eric P. F. Lafortune, Kenneth E. Torrance, Donald P. Greenberg
Real-Time Rendering of Real World Environments
Abstract
One of the most important goals of interactive computer graphics is to allow a user to freely walk around a virtual recreation of a real environment that looks as real as the world around us. But hand-modeling such a virtual environment is inherently limited and acquiring the scene model using devices also presents challenges. Interactively rendering such a detailed model is beyond the limits of current graphics hardware, but image-based approaches can significantly improve the status quo.
We present an end-to-end system for acquiring highly detailed scans of large real world spaces, consisting of forty to eighty million range and color samples, using a digital camera and laser rangefinder. We explain successful techniques to represent these large data sets as image-based models and present contributions to image-based rendering that allow these models to be rendered in real time on existing graphics hardware without sacrificing the high resolution at which the data sets were acquired.
David K. McAllister, Lars Nyland, Voicu Popescu, Anselmo Lastra, Chris McCue
Computing Visibility for Triangulated Panoramas
Abstract
A visibility algorithm for triangulated panoramas is proposed. The algorithm can correctly resolve the visibility without making use of any depth information. It is especially useful when depth information is not available, such as in the case of real-world photographs. Based on the optical flow information and the image intensity, the panorama is subdivided into variable-sized triangles; image warping is then efficiently applied on these triangles using existing graphics hardware. The visibility problem is resolved by drawing the warped triangles in a specific order. This drawing order is derived from epipolar geometry. Using this partial drawing order, a graph can be built and topological sorting is applied on the graph to obtain the complete drawing order of all triangles. We will show that the time complexity of graph construction and topological sorting are both linear in the total number of triangles.
Chi-Wing Fu, Tien-Tsin Wong, Pheng-Ann Heng
Efficient Displacement Mapping by Image Warping
Abstract
While displacement maps can provide a rich set of visual detail on otherwise simple surfaces, they have always been very expensive to render. Rendering has been done using ray-tracing and by introducing a great number of micro-polygons. We present a new image-based approach by showing that rendering displacement maps is sufficiently similar to image warping for parallel displacements and displacements originating from a single point. Our new warping algorithm is particularly well suited for this class of displacement maps. It allows efficient modeling of complicated shapes with few displacement mapped polygons and renders them at interactive rates.
Gernot Schaufler, Markus Priglinger
Light Field Techniques for Reflections and Refractions
Abstract
Reflections and refractions are important visual effects that have long been considered too costly for interactive applications. Although most contemporary graphics hardware supports reflections off curved surfaces in the form of environment maps, refractions in thick, solid objects cannot be handled with this approach, and the simplifying assumptions of environment maps also produce visible artifacts for reflections.
Only recently have researchers developed techniques for the interactive rendering of true reflections and refractions in curved objects. This paper introduces a new, light field based approach to achieving this goal. The method is based on a strict decoupling of geometry and illumination. Hardware support for all stages of the technique is possible through existing extensions of the OpenGL rendering pipeline. In addition, we also discuss storage issues and introduce methods for handling vector-quantized data with graphics hardware.
Wolfgang Heidrich, Hendrik Lensch, Michael F. Cohen, Hans-Peter Seidel
Shadow Penumbras for Complex Objects by Depth-Dependent Filtering of Multi-Layer Depth Images
Abstract
This paper presents an efficient algorithm for filtering multi-layer depth images (MDIs) in order to produce approximate penumbras. The filtering is performed on a MDI that represents the view from the light source. The algorithm is based upon both ray tracing and the z-buffer shadow algorithm, and is closely related to convolution methods. The method’s effectiveness is demonstrated on especially complex objects such as trees, whose soft shadows are expensive to compute by other methods. The method specifically addresses the problem of light-leaking that occurs when tracing rays through discrete representations, and the inability of convolution methods to produce accurate self-shadowing effects.
Brett Keating, Nelson Max
Approximating the Location of Integrand Discontinuities for Penumbral Illumination with Area Light Sources
Abstract
The problem of computing soft shadows with area light sources has received considerable attention in computer graphics. In part, this is a difficult problem because the integral that defines the radiance at a point must take into account the visibility function. Most of the solutions proposed have been limited to polygonal environments, and require a full visibility determination preprocessing step. The result is typically a partitioning of the environment into regions that have a similar view of the light source. We propose a new approach that can be successfully applied to arbitrary environments. The approach is based on the observation that, in the presence of occluders, the primary difficulty in computing the integral that defines the contribution of an area light source is that of determining the visible domain of the integrand. We extend a recent shadow algorithm for linear light sources in order to calculate a polygonal approximation to this visible domain. We demonstrate for an important class of shadowing problems, and in particular for convex occluders, that the shape of the visible domain only needs to be roughly approximated by a polygonal boundary. We then use this boundary to subdivide an area light source into a small number of triangles that can be integrated efficiently using either a deterministic solution, or a low degree numerical cubature.
Marc J. Ouellette, Eugene Fiume
Reducing Memory Requirements for Interactive Radiosity Using Movement Prediction
Abstract
The line-space hierarchy is a very powerful approach for the efficient update of radiosity solutions according to geometry changes. However, it suffers from its enormous memory consumption when storing shafts for the entire scene. We propose a method for reducing the memory requirements of the line-space hierarchy by the dynamic management of shaft storage. We store shaft information only locally for those parts of the scene that are currently affected by the geometry change. When the dynamic object enters new regions, new shaft data has to be computed, but on the other hand we can get rid of outdated data ‘behind’ the dynamic object. Simple movement prediction schemes are applied, so that we can provide shaft data to the radiosity update process in time when needed. We show how storage management and pre-calculation of shafts can be efficiently performed in parallel to the radiosity update process itself.
Frank Schöffel, Andreas Pomi
Space-Time Hierarchical Radiosity
Abstract
This paper presents a new hierarchical simulation algorithm allowing the calculation of radiosity solutions for time-dependent scenes where all motion is known a priori. Such solutions could, for instance, be computed to simulate subtle lighting effects (indirect lighting) in animation systems, or to obtain high-quality synthetic image sequences to blend with live action video and film. We base our approach on a Space-Time hierarchy, adding a life span to hierarchical surface elements, and present an integrated formulation of Hierarchical Radiosity with this extended hierarchy. We discuss the expected benefits of the technique, review the challenges posed by the approach, and propose first solutions for these issues, most notably for the space-time refinement strategy. We show that a short animation sequence can be computed rapidly at the price of a sizeable memory cost. These results confirm the potential of the approach while helping to identify areas of promising future work.
Cyrille Damez, François Sillion
Interactive Rendering with Arbitrary BRDFs using Separable Approximations
Abstract
A separable decomposition of bidirectional reflectance distributions (BRDFs) is used to implement arbitrary reflectances from point sources on existing graphics hardware. Two-dimensional texture mapping and compositing operations are used to reconstruct samples of the BRDF at every pixel at interactive rates.
A change of variables, the Gram-Schmidt halfangle/difference vector parameterization, improves separability. Two decomposition algorithms are also presented. The singular value decomposition (SVD) minimizes RMS error. The normalized decomposition is fast and simple, using no more space than what is required for the final representation.
Jan Kautz, Michael D. McCool
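The SVD-based decomposition mentioned above can be sketched concretely: tabulate the BRDF as a matrix over the two parameter axes, then keep the leading singular term(s) as a product of two factors, which can be stored as textures and multiplied per pixel. A schematic sketch under assumed names and conventions, not the paper's implementation.

```python
import numpy as np

def separable_brdf(samples, rank=1):
    """Low-rank separable approximation of a sampled BRDF matrix.

    `samples[i, j]` holds the BRDF tabulated over two parameter axes
    (e.g. halfangle and difference vectors, flattened to matrix indices).
    By the Eckart-Young theorem, truncating the SVD minimizes RMS error,
    matching the abstract's claim for the SVD variant.
    """
    u, s, vt = np.linalg.svd(samples, full_matrices=False)
    # Split each singular value evenly across its pair of factors,
    # so both factors have comparable dynamic range (texture-friendly).
    f = u[:, :rank] * np.sqrt(s[:rank])
    g = (vt[:rank, :].T * np.sqrt(s[:rank])).T
    return f, g  # samples ≈ f @ g
```

At render time the hardware reconstructs `f @ g` per pixel via texture lookups into `f` and `g` followed by a compositing multiply.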
An Illumination Model for a System of Isotropic Substrate-Isotropic Thin Film with Identical Rough Boundaries
Abstract
A new physically-based illumination model describing the interaction of light with a system composed of an isotropic substrate coated by an isotropic film with geometrically identical statistical rough boundaries (ITF) is presented. This model divides the intensity reflected from the system into three components: specular, directional-diffuse and uniform diffuse intensity. The formulas for the intensity reflected coherently (specular) and incoherently (directional-diffuse) from the system are derived within the framework of the scalar diffraction theory. Assuming that the slopes on the boundaries of the film are small, a first-order expansion of the reflection coefficient is used in the evaluation of the Helmholtz-Kirchhoff integral, which allows us to calculate the previous intensities. The consistency of the model is evaluated numerically and appraised visually by comparison with classic approximations.
Isabelle Icart, Didier Arquès
Rendering of Wet Materials
Abstract
The appearance of many natural materials is largely influenced by the environment in which they are situated. Capturing the effects of such environmental factors is essential for producing realistic synthetic images. In this work, we model the changes of appearance due to one such environmental factor, the presence of water or other liquids. Wet materials can look darker, brighter, or more specular depending on the type of material and the viewing conditions. These differences in appearance are caused by a combination of the presence of liquid on the surface and inside the material. To simulate both of these conditions we have developed an approach that combines a reflection model for surface water with subsurface scattering. We demonstrate our approach with a variety of example scenes, showcasing many characteristic appearances of wet materials.
Henrik Wann Jensen, Justin Legakis, Julie Dorsey
Rendering Inhomogeneous Surfaces with Radiosity
Abstract
Natural surfaces are often complex: they nearly always exhibit small scale imperfections such as dirt, dust, cracks, etc., as well as large scale structural elements, as for wickerwork, brick walls, textiles, pebbles, etc., that are generally too complex to be modeled explicitly. In this paper, we propose a new multi-scale periodic texture model adapted to the efficient simulation of the previously mentioned features. This new model combines notions of virtual ray tracing (that we have recently introduced) with bi-directional texture functions, while it also considers self-shadowing and inter-reflections at texture scale. In a second step, the texture model is integrated into hierarchical radiosity with clustering. Therefore, an extension of radiosity techniques, currently limited to texture maps, bump maps and general (homogeneous) reflectance functions, is proposed. The final rendering consists of applying a second ray tracing pass, based on a gathering methodology adapted to the model. The method provides images at a significantly lower computation and memory consumption cost than with “explicit” models in the case of periodic features (wickerwork, grids, pavements, etc.) for a similar visual quality.
L. Mostefaoui, J. M. Dischler, D. Ghazanfarpour
Face Cluster Radiosity
Abstract
An algorithm for simulating diffuse interreflection in complex three dimensional scenes is described. It combines techniques from hierarchical radiosity and multiresolution modelling. A new face clustering technique for automatically partitioning polygonal models is used. The face clusters produced group adjacent triangles with similar normal vectors. They are used during radiosity solution to represent the light reflected by a complex object at multiple levels of detail. Also, the radiosity method is reformulated in terms of vector irradiance and power. Together, face clustering and the vector formulation of radiosity permit large savings. Excessively fine levels of detail are not accessed by the algorithm during the bulk of the solution phase, greatly reducing its memory requirements relative to previous methods. Consequently, the costliest steps in the simulation can be made sub-linear in scene complexity. Using this algorithm, radiosity simulations on scenes of one million input polygons can be computed on a standard workstation.
Andrew J. Willmott, Paul S. Heckbert, Michael Garland
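The face clustering step — grouping adjacent triangles with similar normal vectors — can be illustrated with a toy greedy flood fill. This is only a stand-in for the idea; the paper builds a proper multiresolution hierarchy of face clusters, and the function name, adjacency representation and cone threshold here are assumptions.

```python
import numpy as np

def face_clusters(normals, adjacency, cos_threshold=0.9):
    """Greedy face clustering over a triangle adjacency graph (toy version).

    Flood-fills from unlabeled seed faces, admitting a neighbour when its
    unit normal stays within a cone around the seed's normal. Returns a
    cluster label per face.
    """
    n = len(normals)
    label = [-1] * n
    cur = 0
    for seed in range(n):
        if label[seed] != -1:
            continue
        stack, label[seed] = [seed], cur
        while stack:
            f = stack.pop()
            for g in adjacency[f]:
                # cos of angle between seed and candidate normals
                if label[g] == -1 and np.dot(normals[seed], normals[g]) >= cos_threshold:
                    label[g] = cur
                    stack.append(g)
        cur += 1
    return label
```

Each resulting cluster of near-coplanar faces can then carry a single radiosity representative at coarse levels of the solution hierarchy.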
Effective Compression Techniques for Precomputed Visibility
Abstract
In rendering large models, it is important to identify the small subset of primitives that is visible from a given viewpoint. One approach is to partition the viewpoint space into viewpoint cells, and then precompute a visibility table which explicitly records for each viewpoint cell whether or not each primitive is potentially visible. We propose two algorithms for compressing such visibility tables in order to produce compact and natural descriptions of potentially-visible sets. Alternatively, the algorithms can be thought of as techniques for clustering cells and clustering primitives according to visibility criteria. The algorithms are tested on three types of scenes which have very different structures: a terrain model, a building model, and a world consisting of curved tunnels. The results show that the natural structure of each type of scene can automatically be exploited to achieve a compact representation of potentially visible sets.
Michiel van de Panne, A. James Stewart
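The cell-clustering view of the compression problem can be illustrated in its simplest form: cells whose potentially-visible sets (PVS) coincide can share one stored set. This is a deliberately simplified stand-in — the paper's algorithms also merge approximately similar rows and cluster primitives, which this sketch does not attempt.

```python
def cluster_by_visibility(table):
    """Group viewpoint cells whose potentially-visible sets coincide.

    `table[c]` is the set of primitive ids visible from cell c. Cells with
    identical rows are merged, so the compressed table stores one PVS per
    cluster instead of one per cell.
    """
    clusters = {}
    for cell, pvs in enumerate(table):
        clusters.setdefault(frozenset(pvs), []).append(cell)
    return clusters
```

Compression then follows from storing `len(clusters)` sets plus a cell-to-cluster index instead of `len(table)` full rows.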
Lighting Design: A Goal Based Approach Using Optimisation
Abstract
There is a need for reliable lighting design applications because available tools are limited and inappropriate for interactive or creative use. Architects and lighting designers need those applications to define, predict, test and validate lighting solutions for their problems. We present a new approach to the lighting design problem based on a methodology that includes the geometry of the scene, the properties of materials and the design goals. It is possible to obtain luminaire characteristics or other kinds of results that maximise the attainment of the design goals, which may include different types of constraints or objectives (lighting, geometrical or others). The main goal, in our approach, is to improve the lighting design cycle. In this work we discuss the use of optimisation in lighting design, describe the implementation of the methodology, present real-world based examples, analyse in detail some of the associated complex technical problems, and speculate on how to overcome them.
António Cardoso Costa, António Augusto Sousa, Fernando Nunes Ferreira
Interactive Virtual Relighting and Remodeling of Real Scenes
Abstract
Lighting design is often tedious due to the required physical manipulation of real light sources and objects. As an alternative, we present an interactive system to virtually modify the lighting and geometry of scenes with both real and synthetic objects, including mixed real/virtual lighting and shadows.
In our method, real scene geometry is first approximately reconstructed from photographs. Additional images are taken from a single viewpoint with a real light in different positions to estimate reflectance. A filtering process is used to compensate for inaccuracies, and per image reflectances are averaged to generate an approximate reflectance image for the given viewpoint, removing shadows in the process. This estimate is used to initialise a global illumination hierarchical radiosity system, representing real-world secondary illumination; the system is optimized for interactive updates. Direct illumination from lights is calculated separately using ray-casting and a table for efficient reuse of data where appropriate.
Our system allows interactive modification of light emission and object positions, all with mixed real/virtual illumination effects. Real objects can also be virtually removed using texture-filling algorithms for reflectance estimation.
Céline Loscos, Marie-Claude Frasson, George Drettakis, Bruce Walter, Xavier Granier, Pierre Poulin
Beyond Photorealism
Abstract
For around 30 years the computer graphics research community has pursued photorealism as though it were the ultimate form of visual expression. Yet, as an art form, photorealism is one of many abstractions that an artist might use to convey ideas, shape, structure, emotion and mood. In this paper we describe how techniques and wisdom learned from photorealistic computer graphics can be adapted and applied to a diverse range of alternative styles for visual expression.
Stuart Green
Backmatter
Metadata
Title
Rendering Techniques’ 99
Edited by
Dr. Dani Lischinski
Dr. Greg Ward Larson
Copyright year
1999
Publisher
Springer Vienna
Electronic ISBN
978-3-7091-6809-7
Print ISBN
978-3-211-83382-7
DOI
https://doi.org/10.1007/978-3-7091-6809-7