
About this book

This book contains the scientific papers presented at the 5th EUROGRAPHICS Workshop on Virtual Environments '99, which was held in Vienna on May 31st and June 1st, 1999. It was organized by the Institute of Computer Graphics of the Vienna University of Technology together with the Austrian Academy of Sciences and EUROGRAPHICS. The workshop brought together scientists from all over the world to present and discuss the latest scientific advances in the field of Virtual Environments. 31 papers were submitted for review and 18 were selected for presentation at the workshop. Most of the top research institutions working in the area submitted papers and presented their latest results. These presentations were complemented by invited lectures from Stephen Feiner and Ron Azuma, two key researchers in the area of Augmented Reality.

The book gives a good overview of the state of the art in Augmented Reality and Virtual Environment research. The special focus of the workshop was Augmented Reality, reflecting a noticeably strong trend in the field of Virtual Environments. Augmented Reality tries to enrich real environments with virtual objects rather than replacing the real world with a virtual one. The main challenges include real-time rendering, tracking, registration and occlusion of real and virtual objects, shading and lighting interaction, and interaction techniques in augmented environments. These problems are addressed by new research results documented in this book. Besides Augmented Reality, the papers collected here also address levels of detail, distributed environments, systems and applications, and interaction techniques.



Levels of Detail

Validity-Preserving Simplification of Very Complex Polyhedral Solids

In this paper we introduce Discretized Polyhedra Simplification (DPS), a framework for polyhedra simplification using space decomposition models. DPS is based on a new error measurement and provides a sound scheme for error-bounded geometry and topology simplification while preserving the validity of the model. A method following this framework, Direct DPS, is presented and discussed. Direct DPS uses an octree for topology simplification and error control, and generates valid solid representations. Our method is also able to generate approximations which do not interpenetrate the original model, being either completely contained in the input solid or bounding it. Unlike most current methods, which are restricted to triangle meshes, our algorithm can both handle and produce faces of arbitrary complexity.
Carlos Andújar, Dolors Ayala, Pere Brunet

View-Dependent Topology Simplification

We propose a technique for performing view-dependent simplifications for level-of-detail-based renderings of complex models. Our method is based on exploiting frame-to-frame coherence and is tolerant of various commonly found degeneracies in real-life polygonal models. The algorithm proceeds by preprocessing the input dataset into a binary tree of vertex collapses. This tree is used at run time to generate the triangles for display. Dependencies to avoid mesh foldovers in manifold regions of the input object are stored in the tree in an implicit fashion. This obviates the need for any extra storage for dependency pointers and suggests a potential for application to external memory prefetching algorithms. We also propose a distance metric that can be used to unify the geometry and genus simplifications with the view-dependent parameters such as viewpoint, view-frustum, and local illumination.
Jihad El-Sana, Amitabh Varshney
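The run-time use of such a collapse tree can be sketched as follows. This is a minimal illustration, assuming each node stores a representative vertex and a precomputed error bound; the names (`CollapseNode`, `active_vertices`) are illustrative and not taken from the paper.

```python
# A minimal sketch of a vertex-collapse binary tree: leaves are original
# vertices, inner nodes are collapses with an object-space error bound.

class CollapseNode:
    def __init__(self, vertex, error, left=None, right=None):
        self.vertex = vertex   # representative vertex position
        self.error = error     # error of using this vertex instead of its children
        self.left = left       # the two vertices merged by this collapse
        self.right = right

def active_vertices(node, threshold):
    """Cut through the tree: refine while the node's error exceeds the
    view-dependent threshold, otherwise use its representative vertex."""
    if node.left is None or node.error <= threshold:
        return [node.vertex]
    return (active_vertices(node.left, threshold) +
            active_vertices(node.right, threshold))

# Build a tiny tree: leaves are original vertices (error 0).
a = CollapseNode((0.0, 0.0), 0.0)
b = CollapseNode((1.0, 0.0), 0.0)
c = CollapseNode((0.5, 1.0), 0.0)
ab = CollapseNode((0.5, 0.0), 0.4, a, b)     # a and b collapse into ab
root = CollapseNode((0.5, 0.3), 0.9, ab, c)  # ab and c collapse into root

print(active_vertices(root, 1.0))  # coarse: one representative vertex
print(active_vertices(root, 0.5))  # medium: root expanded, ab still collapsed
print(active_vertices(root, 0.1))  # fine: all three original vertices
```

As the viewpoint moves, only the threshold changes, so frame-to-frame coherence keeps most of the cut stable between frames.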

Adaptive tessellation of connected primitives for interactive walkthroughs in complex industrial virtual environments

Geometrical primitives used in virtual environments are converted into a large number of triangles at rendering time. The resulting meshes usually introduce discontinuities between neighboring objects. We extend a simple adaptive tessellation method, which adapts the number of triangles to the viewing conditions, with connection information to ensure that the meshes of connected primitives remain continuous. An ergonomic study has validated this approach for applications using virtual environments.
M. Krus, P. Bourdot, A. Osorio, F. Guisnel, G. Thibault
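The idea of adapting the triangle count to the viewing conditions can be illustrated with a circular cross-section: pick the smallest segment count whose chord error, projected into the image, stays below a pixel tolerance. This is a generic sketch, not the paper's specific method; the function name and parameters are illustrative.

```python
import math

def segments_for_primitive(radius, distance, focal, tol_pixels, n_max=64):
    """Smallest segment count whose projected chord error stays below
    the tolerance. Connected primitives queried with the same viewing
    distance then tessellate their shared edges identically."""
    for n in range(3, n_max + 1):
        # Chord error of an n-gon approximating a circle of this radius.
        chord_error = radius * (1.0 - math.cos(math.pi / n))
        if chord_error * focal / distance <= tol_pixels:
            return n
    return n_max

near = segments_for_primitive(1.0, 2.0, 500.0, 1.0)
far = segments_for_primitive(1.0, 50.0, 500.0, 1.0)
print(near, far)  # closer objects receive more triangles
```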


An Optical Tracking System for VR/AR-Applications

In this paper, an optical tracking system for use within Virtual and Augmented Reality applications is introduced. The system uses retroreflective markers attached to a specially designed interaction device. The construction of the device allows us to obtain six degrees of freedom. In order to achieve high tracking precision, we introduce a calibration algorithm which results in sub-pixel accuracy and is therefore well suited to Augmented Reality scenarios. Furthermore, the algorithm for calculating the pose of a rigid body is described. Finally, the optical tracking system is evaluated with regard to its accuracy.
Klaus Dorfmüller
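One standard way retroreflective markers yield sub-pixel positions is an intensity-weighted centroid over the bright blob each marker produces in the camera image. The sketch below shows only that generic step, under the assumption of a grayscale image and a fixed brightness threshold; it is not the paper's calibration algorithm.

```python
def subpixel_centroid(image, threshold=128):
    """Intensity-weighted centroid of bright (retroreflective) pixels;
    the weighting recovers the marker position with sub-pixel accuracy."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, val in enumerate(row):
            if val >= threshold:
                total += val
                sx += x * val
                sy += y * val
    if total == 0:
        return None  # no marker visible
    return (sx / total, sy / total)

# A 2x2 bright blob whose true centre lies between pixel centres.
img = [[0,   0,   0,   0],
       [0, 200, 200,   0],
       [0, 200, 200,   0],
       [0,   0,   0,   0]]
print(subpixel_centroid(img))  # (1.5, 1.5): between integer pixel coordinates
```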

The integration of optical and magnetic tracking for multi-user augmented reality

Multi-user augmented reality requires excellent registration for all users, which cannot be achieved by magnetic trackers alone. This paper presents a new approach combining magnetic and optical tracking. The magnetic tracker is used to coarsely predict the positions of landmarks in the camera image. This restricts the search area to a size which can be managed close to real-time. This new hybrid tracking system outperforms a calibrated magnetic tracker in terms of position, orientation, and jitter.
Thomas Auer, Stefan Brantner, Axel Pinz
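The core of the hybrid scheme, restricting the optical search to a window predicted by the magnetic tracker, can be sketched as follows. A simple pinhole projection and an isotropic tracker error are assumed here for illustration; the function names are not from the paper.

```python
def project(point, focal):
    """Pinhole projection of a camera-space point (x, y, z) with z > 0."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def search_window(predicted_3d, focal, position_error):
    """Image-space rectangle centred on the magnetically predicted
    landmark, with a half-size covering the tracker's position error,
    so the optical search scans a small box instead of the full frame."""
    u, v = project(predicted_3d, focal)
    half = focal * position_error / predicted_3d[2]
    return (u - half, v - half, u + half, v + half)

# Landmark predicted 2 m in front of the camera, tracker error ~3 cm.
win = search_window((0.1, 0.0, 2.0), 500.0, 0.03)
print(win)  # a small search box around the predicted landmark
```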

An Optically Based Direct Manipulation Interface for Human-Computer Interaction in an Augmented World

Augmented reality (AR) constitutes a very powerful three-dimensional user interface for many “hands-on” application scenarios in which users cannot sit at a conventional desktop computer. To fully exploit the AR paradigm, the computer must not only augment the real world, it also has to accept feedback from it. Such feedback is typically collected via gesture languages, 3D pointers, or speech input, all tools which expect users to communicate with the computer about their work at a meta-level rather than just letting them pursue their task. When the computer is capable of deducing progress directly from changes in the real world, the need for special abstract communication interfaces can be reduced or even eliminated. In this paper, we present an optical approach for analyzing and tracking users and the objects they work with. In contrast to emerging workbench and metaDESK approaches, our system can be set up in any room after quickly placing a few known optical targets in the scene. We present three demonstration scenarios to illustrate the overall concept and potential of our approach and then discuss the research issues involved.
G. Klinker, D. Stricker, D. Reiners

Rendering of Virtual Environments

Improving The Illumination Quality Of VRML 97 Walkthrough Via Intensive Texture Usage

In this paper, we introduce a pipeline dedicated to global illumination and walkthrough of a VRML 97 scene. This pipeline relies intensively and exclusively on textures to represent light, and is entirely guided by them. We show how reversing the classic rendering pipeline allows us to privilege high-frequency information (direct illumination) and to accelerate the complete lighting process (global illumination). Finally, we present the filtering and reconstruction methods used to improve the visual quality of the rendering.
Cyril Kardassevitch, Jean Pierre Jessel, Mathias Paulin, René Caubet

Fast Walkthroughs with Image Caches and Ray Casting

We present an output-sensitive rendering algorithm for accelerating walkthroughs of large, densely occluded virtual environments using a multistage Image Based Rendering Pipeline. In the first stage, objects within a certain distance are rendered using the traditional graphics pipeline, whereas the remaining scene is rendered by a pixel-based approach using an Image Cache, horizon estimation to avoid calculating sky pixels, and finally, ray casting. The time complexity of this approach does not depend on the total number of primitives in the scene. We have measured speedups of up to one order of magnitude.
Michael Wimmer, Markus Giegl, Dieter Schmalstieg
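The first stage of such a multistage pipeline is a distance-based partition of the scene: nearby objects go through the polygon pipeline, everything beyond the limit is covered by the image cache and ray casting. A minimal sketch, assuming objects are represented by a name and a centre point; the names are illustrative, not the paper's API.

```python
def partition_scene(objects, viewpoint, near_limit):
    """Split objects by distance: near ones go to the polygon pipeline,
    the rest are handled by the image-cache / ray-casting stages."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    near, far = [], []
    for name, centre in objects:
        (near if dist2(centre, viewpoint) <= near_limit ** 2 else far).append(name)
    return near, far

scene = [("kiosk", (2.0, 0.0, 1.0)),
         ("house", (40.0, 0.0, 5.0)),
         ("tower", (120.0, 0.0, 30.0))]
near, far = partition_scene(scene, (0.0, 0.0, 0.0), 10.0)
print(near, far)  # ['kiosk'] ['house', 'tower']
```

Because the far set is rendered per pixel rather than per primitive, the frame cost is bounded by the screen resolution, which is what makes the approach output-sensitive.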

Using Virtual Environments to Enhance Visualization

Within the EU ESPRIT demonstrator project VIVRE, a commercial virtual environment system has been used to create a user-centred interaction environment for two commercial general-purpose data visualization systems. The project has exploited mechanisms provided by all three systems to incorporate new user-developed functionality. In addition to updating and navigating the visualization scene, this includes user control of the visualization application from within the virtual environment. The project is assessing the degree to which this extended interactive capability results in greater benefits for commercial users.
D. R. S. Boyd, J. R. Gallop, K. E. V. Palmen, R. T. Platon, C. D. Seelig

Distributed Environments

Semantic Behaviours in Collaborative Virtual Environments

Scripting facilities within a collaborative virtual environment (CVE) allow animation and behaviour to be added to otherwise static scenes. This paper describes the integration of the Tcl scripting language into an existing CVE and the advantages gained by such a marriage. Further, we describe how high-level semantic behaviours can be readily introduced into cooperative applications. These behaviours benefit from the scripting language, which provides an abstraction over application development and can be exploited to drastically reduce network traffic.
Emmanuel Frécon, Gareth Smith

A Distributed Device Diagnostics System Utilizing Augmented Reality and 3D Audio

Augmented Reality brings technology developed for virtual environments into the real world. This approach can be used to provide instructions for routine maintenance and error diagnostics of technical devices. The Rockwell Science Center is developing a system that utilizes Augmented Reality techniques to provide the user with a form of “X-Ray Vision” into real objects. The system can overlay 3D rendered objects, animations, and text annotations onto the video image of a known object, registered to the object during camera motion. This allows the user to localize problems of the device with the actual device in his view. The user can query the status of device components using a speech recognition system. The response is given as an animation of the relevant device module and/or as auditory cues using spatialized 3D audio. The diagnostics system also allows the user to leave spoken annotations attached to device modules for other users to retrieve. The position of the user/camera relative to the device is tracked by a computer-vision-based tracking system especially developed for this purpose. The system is implemented on a distributed network of PCs, utilizing standard commercial off-the-shelf components (COTS).
Reinhold Behringer, Steven Chen, Venkataraman Sundareswaran, Kenneth Wang, Marius Vassiliou

Texture-based Volume Visualization for Multiple Users on the World Wide Web

We present a texture-based volume visualization tool, which permits remote access to radiological data and supports multi-user environments. The application uses JAVA and the Virtual Reality Modeling Language (VRML), thus it is platform-independent and able to use fast 3D graphics acceleration hardware of client machines. The application allows the shared viewing and manipulation of three-dimensional medical volume datasets in a heterogeneous network. Volume datasets are transferred from a server to different client machines and locally visualized using a JAVA-enabled web-browser. In order to reduce network traffic, a data reduction and compression scheme is proposed. The application allows view dependent and orthogonal clipping planes, which can be moved interactively. On the client side, the users are able to join a visualization session and to get the same view onto the volume dataset by synchronizing the viewpoint and any other visualization parameter. Interesting parts of the dataset are marked for other users by placing a tag into the visualization. In order to support collaborative work users communicate with a chat applet, which we provide, or by using any existing video conferencing tool.
Klaus Engel, Thomas Ertl
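The proposed data reduction and compression can be illustrated in two generic steps: averaging the volume down to a coarser grid, then run-length encoding the voxel stream, which pays off because medical volumes contain long runs of background voxels. This is a sketch of the general idea under those assumptions, not the paper's exact scheme.

```python
def downsample(volume, factor=2):
    """Reduce a voxel grid (nested z/y/x lists) by averaging factor^3
    blocks, cutting the amount of data sent to each client."""
    zs, ys, xs = len(volume), len(volume[0]), len(volume[0][0])
    out = []
    for z in range(0, zs, factor):
        plane = []
        for y in range(0, ys, factor):
            row = []
            for x in range(0, xs, factor):
                block = [volume[z + dz][y + dy][x + dx]
                         for dz in range(factor)
                         for dy in range(factor)
                         for dx in range(factor)]
                row.append(sum(block) // len(block))
            plane.append(row)
        out.append(plane)
    return out

def rle_encode(values):
    """Run-length encode a flat voxel stream as [value, count] pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# A mostly empty 4x4x4 volume with one bright voxel.
vol = [[[0, 0, 0, 0] for _ in range(4)] for _ in range(4)]
vol[1][1][1] = 80
small = downsample(vol)  # 4x4x4 -> 2x2x2
flat = [v for plane in small for row in plane for v in row]
print(rle_encode(flat))  # [[10, 1], [0, 7]]
```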

Systems and Applications

PVR: An Architecture for Portable VR Applications

Virtual reality shows great promise as a research tool in computational science and engineering. However, since VR involves new interface styles, a great deal of implementation effort is required to develop VR applications.
In this paper we present PVR, an event-based architecture for portable VR applications. The goal of PVR is to provide a programming environment which facilitates the development of VR applications. PVR differentiates itself from other VR toolkits in two ways: First, it decouples the coordination and management of multiple data streams from actual data processing. This simplifies the programmer’s task of managing and synchronizing the data streams. Second, PVR strives for portability by shielding low-level device specific details. Application programmers can take full advantage of the underlying hardware while maintaining a single code base spanning a variety of input and output device configurations.
Robert van Liere, Jurriaan D. Mulder

Rapid Development of VRML Content via Geometric Programming

This paper aims to show that a functional design language can be used as a general-purpose VRML generator. The PLaSM language, which allows for algebraic computations with geometric shapes and maps, has recently been extended with non-geometric attributes such as colors, lights and textures. PLaSM is used in the paper both to develop some general-purpose tools, including Bézier manifolds of any dimension and degree, the n-th derivative of any parametric curve and “Bézier stripes” of small width, as well as to quickly implement a quite complex mountain landscape. Customized PLaSM applications may generate fully parameterized virtual worlds starting from small data files or streams.
A. Paoluzzi, S. Francesi, S. Portuesi, M. Vicentino

Augmented Reality, the other way around

This paper aims at showing that the notion of Augmented Reality has been developed in a biased way: mostly directed toward the operator, and at the level of his perceptions. This demonstration is achieved through a model describing a tele-operation situation, on which we represent the major cases of Augmented Reality encountered in recent applications. By taking advantage of the symmetry of the model, we are able to show how Augmented Reality can be seen “the other way around”, that is, directed toward the environment, and at the level of the operator’s actions.
Didier Verna, Alain Grumbach


Interaction Techniques on the Virtual Workbench

This paper evaluates interaction methods within the general framework of navigation, selection, and manipulation. It considers large display environments and, in particular, the virtual workbench, comparing this system to HMD and CAVE systems. The paper addresses three issues: (a) identifying the characteristics that set the workbench apart from other virtual environments; (b) determining types and examples of interaction techniques; (c) evaluating how these techniques perform on the workbench to determine which perform best. The evaluations are based on an extensive set of user observations. Also discussed are some problems that stereoscopic display coupled with interaction brings out.
Rogier van de Pol, William Ribarsky, Larry Hodges, Frits Post

A General Framework for Cooperative Manipulation in Virtual Environments

Whereas cooperation and collaboration have become two popular words in virtual reality, the problem of cooperative manipulation has mainly been left aside due to the great number of other challenges facing anyone trying to set up multi-user worlds. We define cooperative manipulation as a situation where two or more users interact with the same object in a concurrent but cooperative way. The focus of this paper is to describe an experiment whose goal was to investigate problems specific to cooperative manipulation setups. Those problems include synchronizing users’ input over the network, mapping users’ input into a meaningful 3-D movement thanks to what we call a model of activity, and giving users relevant visual information. In this paper, we present a general framework able to take these problems into account. It is compatible with physically simulated objects and has been implemented using Java, VRML and a distributed approach.
David Margery, Bruno Arnaldi, Noël Plouzeau
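The role of a model of activity, turning concurrent inputs on the same object into one meaningful motion, can be sketched in its simplest form: averaging the users' drag vectors. Averaging is an illustrative choice for this sketch, not the paper's specific model.

```python
def merge_inputs(displacements):
    """A minimal 'model of activity': concurrent per-user drag vectors
    on the same shared object are averaged into one agreed 3-D motion."""
    n = len(displacements)
    return tuple(sum(d[i] for d in displacements) / n for i in range(3))

# Two users pull the shared object in different directions.
move = merge_inputs([(1.0, 0.0, 0.0), (0.0, 2.0, 0.0)])
print(move)  # (0.5, 1.0, 0.0)
```

In a distributed setting each user's input would first be collected over the network for the same simulation step before being merged, so every replica applies the identical resulting motion.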

Occlusion in Collaborative Augmented Environments

Augmented environments superimpose computer graphics on the real world. Such environments are well suited for collaboration among multiple users. To improve the quality and consistency of the augmentation, the occlusion of real objects by computer-generated objects, and vice versa, has to be implemented. We present methods for doing this for a tracked user’s body and other real objects, and for reducing irritating artifacts due to misalignments. Our method is based on simulating the occlusion of virtual objects by a representation of the user modeled as kinematic chains of articulated solids. Registration and modeling errors of this model are reduced by smoothing the border between the virtual world and the occluding real object. An implementation in our augmented environment and the resulting improvements are presented.
Anton Fuhrmann, Gerd Hesina, François Faure, Michael Gervautz
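The underlying depth test can be sketched per pixel: a "phantom" model of the real object (here, the user's body) contributes only depth, and a virtual pixel survives only where it lies closer to the camera than the phantom. The sketch below assumes precomputed depth maps and uses made-up buffer names; real systems do this in the depth buffer, and the paper additionally smooths the border to hide registration errors.

```python
INF = float("inf")

def composite(video, virtual_color, virtual_depth, phantom_depth):
    """Per-pixel occlusion: show the virtual pixel only where it is
    closer than the phantom geometry standing in for the real object;
    elsewhere the video of the real world shows through."""
    h, w = len(video), len(video[0])
    return [[virtual_color[y][x]
             if virtual_depth[y][x] < phantom_depth[y][x]
             else video[y][x]
             for x in range(w)]
            for y in range(h)]

video  = [["real", "real"], ["real", "real"]]
vcolor = [["virt", "virt"], ["virt", "virt"]]
vdepth = [[1.0, 3.0], [1.0, INF]]  # INF: no virtual object at this pixel
pdepth = [[2.0, 2.0], [INF, 2.0]]  # INF: no phantom at this pixel
print(composite(video, vcolor, vdepth, pdepth))
```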

