
2011 | Book

Virtual Realities

Dagstuhl Seminar 2008

Editors: Guido Brunnett, Sabine Coquillart, Greg Welch

Publisher: Springer Vienna


About this book

The articles by well-known international experts provide more elaborate expositions of the research presented at the seminar, and collect and document the results of the various discussions, including ideas and open problems that were identified. Correspondingly, the book consists of two parts. Part I contains extended articles describing research presented at the seminar, including papers on tracking, motion capture, displays, cloth simulation, and applications. Part II contains articles that capture the results of breakout discussions, describe visions, or advocate particular positions, including discussions of system latency, 3D interaction, haptic interfaces, social gaming, perceptual issues, and the fictional "Holodeck".

Table of Contents

Frontmatter
Chapter 1. Proposals for Future Virtual Environment Software Platforms
Abstract
The past two decades have seen the development of a plethora of software solutions to support virtual environments. Many very capable software platforms, toolkits and libraries have been built, but the rate of development of new software continues to increase. There is significant functional replication among these software systems, and there are few possibilities to migrate anything other than simple content from one piece of software to another. In this chapter we discuss why there are so many software solutions for virtual environments. We make some suggestions to software developers that might facilitate code re-use at the platform-building stage, with the aim of moving towards platforms that support content re-use.
Anthony Steed
Chapter 2. Augmented Reality 2.0
Abstract
Augmented Reality (AR) was first demonstrated in the 1960s, but only recently have technologies emerged that can be used to easily deploy AR applications to many users. Camera-equipped cell phones with significant processing power and graphics abilities provide an inexpensive and versatile platform for AR applications, while the social networking technology of Web 2.0 provides a large-scale infrastructure for collaboratively producing and distributing geo-referenced AR content. This combination of widely used mobile hardware and Web 2.0 software allows the development of a new type of AR platform that can be used on a global scale. In this paper we describe the Augmented Reality 2.0 concept and present existing work on mobile AR and web technologies that could be used to create AR 2.0 applications.
Dieter Schmalstieg, Tobias Langlotz, Mark Billinghurst
Chapter 3. Experiential Fidelity: Leveraging the Mind to Improve the VR Experience
Abstract
Much of Virtual Reality (VR) is about creating environments that are believable. But though the visual and audio experiences we provide today are already of a rather high sensory fidelity, there is still something lacking; something hinders us from fully buying into the worlds we experience through VR technology. We introduce the notion of Experiential Fidelity, which is an attempt to create a deeper sense of presence by carefully designing the user experience. We suggest guiding the users’ frame of mind in a way that their expectations, attitude, and attention are aligned with the actual VR experience, and that the user’s own imagination is stimulated to complete the experience. This work was inspired by a collection of personal magic moments and factors that were named by leading researchers in VR. We present those magic moments and some thoughts on how we can tap into experiential fidelity. We propose to do this not through technological means, but rather through the careful use of suggestion and allusion. By priming the user’s mind prior to exposure to our virtual worlds, we can assist her in entering a mental state that is more willing to believe, even given the limited actual fidelity available today.
Steffi Beckhaus, Robert W. Lindeman
Chapter 4. Social Gaming and Learning Applications: A Driving Force for the Future of Virtual and Augmented Reality?
Abstract
Backed by a large consumer market, entertainment and education applications have spurred developments in the fields of real-time rendering and interactive computer graphics. Relying on Computer Graphics methodologies, Virtual Reality and Augmented Reality benefited indirectly from this; however, there is no large-scale demand for VR and AR in gaming and learning. What are the shortcomings of current VR/AR technology that prevent widespread use in these application areas? What advances in VR/AR will be necessary? And what might future “VR-enhanced” gaming and learning look like? What role can and will Virtual Humans play? Addressing these questions, this article analyzes the current situation and provides an outlook on future developments. The focus is on social gaming and learning.
Ralf Dörner, Benjamin Lok, Wolfgang Broll
Chapter 5. [Virtual + 1] * Reality
Blending “Virtual” and “Normal” Reality to Enrich Our Experience
Abstract
Virtual Reality aims at creating an artificial environment that can be perceived as a substitute for a real setting. Much effort in research and development goes into the creation of virtual environments, most of which are perceivable only by the eyes and hands. The multisensory nature of our perception, however, allows and, arguably, also expects more than that. As long as we are not able to simulate and deliver a fully sensory, believable virtual environment to a user, we could make use of the fully sensory, multi-modal nature of real objects to fill in for this deficiency. The idea is to purposefully integrate real artifacts into the application and interaction, instead of dismissing anything real as hindering the virtual experience. The term virtual reality – denoting the goal, not the technology – shifts from a core virtual reality to an “enriched” reality, technologically encompassing both the computer-generated and the real, physical artifacts. Together, either simultaneously or in a hybrid way, real and virtual jointly provide stimuli that are perceived by users through their senses and are later formed into an experience by the user’s mind.
Steffi Beckhaus
Chapter 6. Action Capture: A VR-Based Method for Character Animation
Abstract
This contribution describes a Virtual Reality (VR) based method for character animation that extends conventional motion capture by not only tracking an actor’s movements but also his or her interactions with the objects of a virtual environment. Rather than merely replaying the actor’s movements, the idea is that virtual characters learn to imitate the actor’s goal-directed behavior while interacting with the virtual scene. Following Arbib’s equation action = movement + goal we call this approach Action Capture. For this, the VR user’s body movements are analyzed and transformed into a multi-layered action representation. Behavioral animation techniques are then applied to synthesize animations which closely resemble the demonstrated action sequences. As an advantage, captured actions can often be naturally applied to virtual characters of different sizes and body proportions, thus avoiding retargeting problems of motion capture.
Bernhard Jung, Heni Ben Amor, Guido Heumer, Arnd Vitzthum
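The abstract's key move is recording goal-directed actions (verb + target object) instead of raw joint trajectories, so a captured sequence can be re-synthesized for characters of different proportions. A minimal sketch of that representation, with all names invented for illustration (this is not the chapter's actual data model):

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    """One goal-directed step: 'action = movement + goal' (after Arbib)."""
    verb: str               # e.g. "reach", "grasp", "move"
    target: str             # the scene object the action is directed at
    params: dict = field(default_factory=dict)  # verb-specific details, e.g. grip type

def capture(events):
    """Segment a stream of (verb, target, params) tracker events into actions.
    A real system would infer these events from the user's body movements."""
    return [Action(verb, target, params) for verb, target, params in events]

def replay(actions, character):
    """Re-synthesize each action for a given character: the goal (target
    object) is preserved, while the movement would be re-planned to fit the
    character's own size and body proportions."""
    return [f"{character}: {a.verb} {a.target}" for a in actions]
```

Because only goals are stored, retargeting reduces to re-planning the movements, which is the advantage over replaying captured joint angles directly.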
Chapter 7. Cloth Simulation Based Motion Capture of Dressed Humans
Abstract
Commonly, marker-based as well as markerless motion capture systems assume that the tracked person is wearing tightly fitting clothes. Unfortunately, this restriction cannot be satisfied in many situations, and most preexisting video data does not adhere to it either. In this work we propose a graphics-based vision approach for tracking humans markerlessly without making this assumption. Instead, a physically based simulation of the clothing the tracked person is wearing is used to guide the tracking algorithm.
Nils Hasler, Bodo Rosenhahn, Hans-Peter Seidel
Chapter 8. Remote 3D Medical Consultation
Abstract
Two-dimensional (2D) video-based telemedical consultation has been explored widely in the past 15–20 years. Two issues that seem to arise in most relevant case studies are the difficulty associated with obtaining the desired 2D camera views, and poor depth perception. To address these problems we are exploring the use of a small array of cameras to synthesize a spatially continuous range of dynamic three-dimensional (3D) views of a remote environment and events. The 3D views can be sent across wired or wireless networks to remote viewers with fixed displays or mobile devices such as a personal digital assistant (PDA). The viewpoints could be specified manually or automatically via user head or PDA tracking, giving the remote viewer virtual head- or hand-slaved (PDA-based) remote cameras for mono or stereo viewing. We call this idea remote 3D medical consultation (3DMC). In this article we motivate and explain the vision for 3D medical consultation; we describe the relevant computer vision/graphics, display, and networking research; we present a proof-of-concept prototype system; and we present some early experimental results supporting the general hypothesis that 3D remote medical consultation could offer benefits over conventional 2D televideo.
Greg Welch, Diane H. Sonnenwald, Henry Fuchs, Bruce Cairns, Ketan Mayer-Patel, Ruigang Yang, Andrei State, Herman Towles, Adrian Ilie, Srinivas Krishnan, Hanna M. Söderholm
Chapter 9. SEE MORE: Improving the Usage of Large Display Environments
Abstract
Truly seamless tiled displays and stereoscopic large high-resolution displays are among the top research challenges in the area of large displays. In this paper we approach both topics by adding an additional projector to a tiled display scenario as well as to a stereoscopic environment. In both cases, we have developed new focus+context screen approaches: a multiple foci plus context metaphor in the tiled display setup and a 2D+3D focus+context metaphor in the stereoscopic scenario.
Achim Ebert, Hans Hagen, Torsten Bierz, Matthias Deller, Peter-Scott Olech, Daniel Steffen, Sebastian Thelen
Chapter 10. Inner Sphere Trees and Their Application to Collision Detection
Abstract
We present a novel geometric data structure for approximate collision detection at haptic rates between rigid objects. Our data structure, which we call inner sphere trees, supports different kinds of queries, namely, proximity queries and the penetration volume, which is related to the water displacement of the overlapping region and, thus, corresponds to a physically motivated force. Moreover, we present a time-critical version of the penetration volume computation that is able to achieve very tight upper and lower bounds within a fixed budget of query time. The main idea is to bound the object from the inside with a bounding volume hierarchy, which can be constructed based on dense sphere packings. In order to build our new hierarchy, we propose to use an AI clustering algorithm, which we extend and adapt here. The results show performance at haptic rates both for proximity and penetration volume queries for models consisting of hundreds of thousands of polygons.
Rene Weller, Gabriel Zachmann
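The penetration volume described above sums the overlap of spheres that bound the objects from the inside. A toy sketch of that recursion in Python, assuming a simple sphere hierarchy (this is an illustration of the idea only, not the authors' inner-sphere-tree construction, which uses dense sphere packings and a clustering algorithm):

```python
import math
from dataclasses import dataclass, field

@dataclass
class SphereNode:
    center: tuple                 # (x, y, z)
    radius: float
    children: list = field(default_factory=list)  # empty list => leaf sphere

def sphere_overlap_volume(a: SphereNode, b: SphereNode) -> float:
    """Volume of the intersection of two spheres (standard lens formula)."""
    d = math.dist(a.center, b.center)
    if d >= a.radius + b.radius:
        return 0.0
    if d <= abs(a.radius - b.radius):     # one sphere fully inside the other
        r = min(a.radius, b.radius)
        return 4.0 / 3.0 * math.pi * r ** 3
    r1, r2 = a.radius, b.radius
    return (math.pi * (r1 + r2 - d) ** 2 *
            (d ** 2 + 2 * d * (r1 + r2) - 3 * (r1 - r2) ** 2)) / (12 * d)

def penetration_volume(a: SphereNode, b: SphereNode) -> float:
    """Accumulate the overlap of the inner leaf spheres, pruning disjoint pairs."""
    if sphere_overlap_volume(a, b) == 0.0:   # node spheres disjoint: prune
        return 0.0
    if not a.children and not b.children:    # two leaves: add their overlap
        return sphere_overlap_volume(a, b)
    # descend into the larger node that still has children
    if a.children and (not b.children or a.radius >= b.radius):
        return sum(penetration_volume(child, b) for child in a.children)
    return sum(penetration_volume(a, child) for child in b.children)
```

Since the accumulated volume corresponds to the water displacement of the overlapping region, it can be mapped directly to a physically motivated penalty force, and the traversal can be cut off early for a time-critical variant.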
Chapter 11. The Value of Constraints for 3D User Interfaces
Abstract
User interfaces to three-dimensional environments are becoming more and more popular. Today this trend is fuelled through the introduction of social communication via virtual worlds, console and computer games, as well as 3D televisions.
We present a synopsis of the relevant abilities and restrictions introduced by both input and output technologies, as well as an overview of related human capabilities and limitations, including perceptual and cognitive issues.
Partially based on this, we present a set of guidelines for 3D user interfaces. These guidelines are intended for developers of interactive 3D systems, such as computer and console games, 3D modeling packages, augmented reality systems, computer aided design systems, and virtual environments. The guidelines promote techniques, such as using appropriate constraints, that have been shown to work well in these types of environments.
Wolfgang Stuerzlinger, Chadwick A. Wingrave
Chapter 12. Evaluation of a Scalable In-Situ Visualization System Approach in a Parallelized Computational Fluid Dynamics Application
Abstract
Current parallel supercomputers provide sufficient performance to simulate unsteady three-dimensional fluid dynamics in high resolution. However, the visualization of the huge amounts of result data cannot be handled by traditional methods, where post-processing modules are usually coupled to the raw data source, either by files or by data flow. To avoid significant bottlenecks of the storage and communication resources, efficient techniques for data extraction and preprocessing at the source have been realized in the parallel, network-distributed chain of our Distributed Simulation and Virtual Reality Environment (DSVR). Here the 3D data extraction is implemented as a parallel library (libDVRP) and can be done in-situ during the numerical simulations, which avoids the storage of raw data for visualization altogether.
In this work we evaluate our current techniques of flow visualization via parallel generation of pathlines and volume visualization via parallel extraction of isosurfaces in a realistic scenario. The Parallelized Large-Eddy Simulation Model (PALM) serves here as a typical example application of numerical simulation of unsteady flows. We paid special attention to evaluating the influence of the additional in-situ visualization on the parallel speed-up of PALM. Finally, we show that this influence is negligibly small for parallel runs with more than 80 cores.
Sebastian Manten, Michael Vetter, Stephan Olbrich
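The in-situ idea above is that geometry extraction runs inside the simulation loop, so only compact visualization primitives (pathlines, isosurfaces) leave the solver rather than the full raw field. A schematic sketch of in-situ pathline extraction, with invented names (this does not reflect the libDVRP API):

```python
import numpy as np

def simulate_with_insitu_pathlines(velocity_field, seeds, n_steps, dt):
    """Advance pathline seeds through a flow inside the time-stepping loop,
    storing only the (tiny) pathline geometry instead of the raw field data.

    velocity_field(points, t) -> array of velocities, same shape as points.
    """
    pathlines = [[tuple(p)] for p in seeds]      # one polyline per seed
    points = np.array(seeds, dtype=float)
    for step in range(n_steps):
        v = velocity_field(points, step * dt)    # sample the current field
        points = points + dt * v                 # explicit Euler advection
        for line, p in zip(pathlines, points):
            line.append(tuple(p))
    return pathlines   # extracted in situ; the raw field is never written out
```

In a parallel setting each process would advect the seeds in its subdomain and stream the resulting line segments over the network, which is what keeps the overhead small relative to the simulation itself.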
Backmatter
Metadata

Title: Virtual Realities
Editors: Guido Brunnett, Sabine Coquillart, Greg Welch
Copyright Year: 2011
Publisher: Springer Vienna
Electronic ISBN: 978-3-211-99178-7
Print ISBN: 978-3-211-99177-0
DOI: https://doi.org/10.1007/978-3-211-99178-7