
About This Book

This book constitutes the refereed proceedings of the 11th International Symposium on Smart Graphics, SG 2011, held in Bremen, Germany, in July 2011. The 10 revised full papers presented together with 12 short papers and 4 systems demonstrations were carefully reviewed and selected from numerous submissions covering a wide range of topics including view and camera control; three-dimensional modeling; visual information encoding; video projection; information visualization; interaction techniques; visual communication; and graphics and audio.



View and Camera Control

Smart Views in Smart Environments

Smart environments integrate a multitude of different device ensembles and aim to provide proactive assistance in multi-display scenarios. However, integrating existing software, especially visualization systems, to take advantage of these novel capabilities is still a challenging task. In this paper we present a smart view management concept that combines and displays views of different systems in smart meeting rooms. To address the varying requirements arising in such environments, our view management takes into account, e.g., dynamic user positions, view directions, and even the semantics of the views to be shown.
Axel Radloff, Martin Luboschik, Heidrun Schumann

Advanced Composition in Virtual Camera Control

The rapid increase in the quality of 3D content, coupled with the evolution of hardware rendering techniques, urges the development of camera control systems that can apply aesthetic rules and conventions from visual media such as film and television. One of the most important problems in cinematography is composition, the precise placement of elements in a shot. Researchers have considered this problem before, but mainly focused on basic compositional properties such as size and framing. In this paper, we present a camera system that automatically configures the camera in order to satisfy advanced compositional rules. We selected a number of these rules and specified rating functions for them; using optimisation, we then find the best possible camera configuration. Finally, for better results, we use image processing methods to rate how well the rules are satisfied in the shot.
Rafid Abdullah, Marc Christie, Guy Schofield, Christophe Lino, Patrick Olivier
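The pipeline described in the abstract (a rating function per compositional rule, aggregated and optimised over camera configurations) can be sketched roughly as follows. The rule-of-thirds rating, the toy projection, and the random search are illustrative assumptions, not the authors' implementation:

```python
import random
import math

def rule_of_thirds_rating(x, y):
    """Rate how close a normalised screen position (0..1) lies to the
    nearest rule-of-thirds power point (hypothetical rating function)."""
    points = [(i / 3, j / 3) for i in (1, 2) for j in (1, 2)]
    d = min(math.hypot(x - px, y - py) for px, py in points)
    return max(0.0, 1.0 - 3.0 * d)  # 1.0 on a power point, falls off with distance

def project(camera, subject):
    """Toy orthographic projection: camera is a pair of pan offsets."""
    cx, cy = camera
    return subject[0] - cx, subject[1] - cy

def composition_score(camera, subject, weights=(1.0,)):
    """Aggregate weighted rating functions for one camera configuration."""
    x, y = project(camera, subject)
    return weights[0] * rule_of_thirds_rating(x, y)

def optimise_camera(subject, n_samples=2000, seed=7):
    """Random search over pan offsets for the best-rated configuration."""
    rng = random.Random(seed)
    best, best_score = None, -1.0
    for _ in range(n_samples):
        cam = (rng.uniform(-1, 1), rng.uniform(-1, 1))
        s = composition_score(cam, subject)
        if s > best_score:
            best, best_score = cam, s
    return best, best_score
```

A real system would add further rating functions (framing, size, occlusion) to the weighted sum and use a stronger optimiser than random sampling.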

Towards Adaptive Virtual Camera Control in Computer Games

Automatic camera control aims to define a framework to control virtual camera movements in dynamic and unpredictable virtual environments while ensuring a set of desired visual properties. We investigate the relationship between camera placement and playing behaviour in games and build a user model of camera behaviour that can be used to control camera movements based on player preferences. For this purpose, we collect eye gaze, camera and game-play data from subjects playing a 3D platform game, cluster gaze and camera information to identify camera behaviour profiles, and employ machine learning to build predictive models of the virtual camera behaviour. The performance of the models on unseen data reveals accuracies above 70% for all the player behaviour types identified. The characteristics of the generated models, their limits and their use for creating adaptive automatic camera control in games are discussed.
Paolo Burelli, Georgios N. Yannakakis
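The cluster-then-predict pipeline can be illustrated with a minimal sketch. The two-dimensional features (e.g. mean gaze-to-camera distance and mean camera speed per session) and the nearest-centroid predictor are simplifying assumptions; the paper uses richer gaze/camera features and machine-learned models:

```python
import random
import math

def kmeans(points, k=2, iters=20, seed=0):
    """Minimal k-means over 2D feature vectors, sketching the step that
    groups play sessions into camera behaviour profiles."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(v) / len(c) for v in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

def predict_profile(centroids, features):
    """Assign an unseen play session to the nearest behaviour profile."""
    return min(range(len(centroids)),
               key=lambda c: math.dist(features, centroids[c]))
```

In practice the predictive step would be a trained classifier over many features rather than a nearest-centroid rule.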

Three-Dimensional Modeling

An Interactive Design System for Sphericon-Based Geometric Toys Using Conical Voxels

In this paper, we focus on a unique solid, named a “sphericon”, which has geometric properties that cause it to roll down a slope while swinging from side to side. We propose an interactive system for designing 3D objects with the same geometric characteristics as a sphericon. For high system efficiency, we used a conical voxel representation for defining these objects. The system allows the user to concentrate on the design while itself ensuring that the geometrical constraints of a sphericon are satisfied. The user can also preview the rolling motion of the object. To evaluate the effectiveness of the proposed system, we fabricated the designed models using a 3D printer, and confirmed that they rolled as smoothly as a standard sphericon.
Masaki Hirose, Jun Mitani, Yoshihiro Kanamori, Yukio Fukui

A Multi-touch System for 3D Modelling and Animation

3D modelling and animation software is typically operated via single-pointer input, imposing a serialised workflow that seems cumbersome in comparison to how humans manipulate objects in the real world. Research has brought forth new interaction techniques for modelling and animation that utilise input with more degrees of freedom or employ both hands to allow more parallel control, yet these are separate efforts across diverse input technologies and have not been applied to a usable system. We developed a 3D modelling and animation system for multi-touch interactive surfaces, as this technology offers parallel input with many degrees of freedom through one or both hands. It implements techniques for one-handed 3D navigation, 3D object manipulation, and time control. This includes mappings for layered or multi-track performance animation that allow animating different features across several passes or modifying previously recorded motion. We show how these unimanual techniques can be combined for efficient bimanual control and propose techniques that specifically support the use of both hands for typical tasks in 3D editing. A study showed that even inexperienced users can successfully use our system for a more parallel and direct modelling or animation process.
Benjamin Walther-Franks, Marc Herrlich, Rainer Malaka

Visual Information Encoding

Illustrative Couinaud Segmentation for Ultrasound Liver Examinations

Couinaud segmentation is a widely used liver partitioning scheme for describing the spatial relation between diagnostically relevant anatomical and pathological features in the liver. In this paper, we propose a new methodology for effectively conveying these spatial relations during ultrasound examinations. We visualize the two-dimensional ultrasound slice in the context of a three-dimensional Couinaud partitioning of the liver. The partitioning is described by planes in 3D reflecting the vascular tree anatomy, specified in the patient by the examiner using her natural interaction tool, i.e., the ultrasound transducer with positional tracking. A pre-defined generic liver model is adapted to the specified partitioning in order to provide a representation of the patient's liver parenchyma. The specified Couinaud partitioning and parenchyma model approximation are then used to enhance the examination by providing visual aids that convey the relationships between the placement of the ultrasound plane and the partitioned liver. The 2D ultrasound slice is augmented with Couinaud partitioning intersection information and dynamic label placement. A linked 3D view shows the ultrasound slice cutting the liver, displayed using fast exploded-view rendering. The described visual augmentation has been characterized by clinical personnel as very supportive during the examination procedure, and also as a good basis for pre-operative case discussions.
Ola Kristoffer Øye, Dag Magne Ulvang, Odd Helge Gilja, Helwig Hauser, Ivan Viola

Iconizer: A Framework to Identify and Create Effective Representations for Visual Information Encoding

The majority of visual communication today occurs by way of spatial groupings, plots, graphs, data renderings, photographs and video frames. However, the degree of semantics encoded in these visual representations is still quite limited. The use of icons as a form of information encoding has been explored to a much lesser extent. In this paper we describe a framework that uses a dual-domain approach involving natural language text processing and global image databases to help users identify icons suitable for visually encoding abstract semantic concepts.
Supriya Garg, Tamara Berg, Klaus Mueller

A Zone-Based Approach for Placing Annotation Labels on Metro Maps

Hand-drawn metro map illustrations often employ both internal and external labels so that sufficient information, such as textual and image annotations, can be assigned to each landmark. Nonetheless, automatically producing an aesthetic layout of both textual and image labels together is still a challenging task, due to the complicated shape of the labeling space available around the metro network. In this paper, we present a zone-based approach for placing such annotation labels that fully addresses the aesthetic criteria of the label arrangement. Our algorithm begins by decomposing the map domain into three different zones that limit the position of each label according to its type. The optimal positions of labels of each type are evaluated with respect to the zone segmentation over the map. Finally, a new genetic-based approach is introduced to compute the optimal layout of the annotation labels, where the order in which labels are embedded into the map is improved through an evolutionary computation algorithm. We also provide semantic zoom functionality, so that the position and scale of the metro map can be changed freely.
Hsiang-Yun Wu, Shigeo Takahashi, Chun-Cheng Lin, Hsu-Chun Yen
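The evolutionary step that improves the label insertion order can be sketched as a toy permutation GA. The cost function, crossover, and mutation below are illustrative stand-ins for the paper's labeling cost and genetic operators:

```python
import random

def evolve_order(n_labels, cost, generations=200, pop_size=30, seed=1):
    """Toy genetic algorithm over label insertion orders (permutations).
    `cost` scores an order; lower is better. Uses elitist selection,
    cut-point order crossover, and swap mutation."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n_labels), n_labels) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]  # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_labels)
            # order crossover: prefix of a, remaining genes in b's order
            child = a[:cut] + [g for g in b if g not in a[:cut]]
            if rng.random() < 0.3:  # swap mutation
                i, j = rng.randrange(n_labels), rng.randrange(n_labels)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)
```

In the paper, the cost of an order would come from greedily placing labels into the zones and scoring the resulting layout.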

Video Projection

Using Mobile Projection to Support Guitar Learning

The guitar is one of the most widespread instruments amongst autodidacts, but even though a huge amount of learning material exists, it is still hard to learn, especially without a teacher. In this paper we propose an Augmented Reality concept that assists guitar students in mastering their instrument using a mobile projector. With the projector mounted onto the headstock of the guitar, instructions can be projected directly onto the strings. The user can thus easily see where the fingers have to be placed on the fretboard (fingering) to play a certain chord or tone sequence correctly.
Markus Löchtefeld, Sven Gehring, Ralf Jung, Antonio Krüger

Don’t Duck Your Head! Notes on Audience Experience in a Participatory Performance

By introducing the transdisciplinary political dance production Parcival XX-XI and describing two participatory scenarios from the play, we discuss the audience's appreciation of interactive digital media within the traditional frame of theatre. In this context, we developed a short guided interview to be conducted with members of the audience after each performance, planned as an ongoing evaluation. Based on 15 interviews, we present four reasons why the audience tends to (not) duck their heads when asked to participate in Parcival XX-XI: fear, fun, frustration and schadenfreude.
Gesa Friederichs-Büttner

Short Papers: Information Visualization

CorpusExplorer: Supporting a Deeper Understanding of Linguistic Corpora

Word trees are a common way of representing frequency information obtained by analyzing natural language data. This article explores their usage and possibilities, and addresses the development of an application to visualize the relative frequencies of 2-grams and 3-grams in Google's "English One Million" corpus using a two-sided word tree and sparklines to show usage trends through time. It also discusses how the raw data was processed and trimmed to speed up access to it.
Andrés Esteban, Roberto Therón

Glass Onion: Visual Reasoning with Recommendation Systems through 3D Mnemonic Metaphors

The Glass Onion is a project in its infancy. We aim to use the recommendation-systems model as a solution to the oversaturation of data, and to explore personal relevancy by combining information recommendation systems with information visualization techniques based on 3D-rendered metaphors. The Glass Onion project seeks to shed light on human association pathways and, through interaction with a visual recommendation system, to develop a personalized search and navigation method that may be used across multiple sets of data. We hope that by interacting with the Glass Onion 3D visualization recommendation system, guests will benefit from their own personal lens or onion, which can then be borrowed, rated, and utilized by others.
Mary-Anne (Zoe) Wallace

Visualizing Geospatial Co-authorship Data on a Multitouch Tabletop

This paper presents Muse, a visualization of institutional co-authorship of publications. The objective is to create an interactive visualization that enables users to visually analyze collaboration between institutions based on publications. The easy-to-use multi-touch interaction and the size of the interactive surface invite users to explore the visualization in semi-public spaces.
Till Nagel, Erik Duval, Frank Heidmann

Short Papers: Interaction Techniques

ElasticSteer – Navigating Large 3D Information Spaces via Touch or Mouse

The representation of 2D data in 3D information spaces is becoming increasingly popular. Many different layout and interaction metaphors are in use, but it is unclear how these perform in comparison to each other and across different input devices. In this paper we present the ElasticSteer technique for navigation in 3D information spaces using relative gestures for mouse and multi-touch input. It realises steering control with visual feedback on direction and speed via a rubber band metaphor. ElasticSteer includes unconstrained and constrained navigation specifically designed for the wall, carousel or corridor visualisation metaphors. A study shows that ElasticSteer can be used successfully by novice users and performs comparably for mouse and multi-touch input.
Hidir Aras, Benjamin Walther-Franks, Marc Herrlich, Patrick Rodacker, Rainer Malaka
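The rubber-band steering mapping can be illustrated with a minimal sketch: the drag vector from the initial touch to the current position sets the steering direction, and its stretched length (past a dead zone) sets the speed. The dead zone, gain, and speed-cap values are hypothetical parameters, not the paper's:

```python
def elastic_steer(anchor, current, dead_zone=10.0, gain=0.02, max_speed=5.0):
    """Map a drag from `anchor` to `current` (screen pixels) to a 2D
    steering velocity, as in a rubber-band metaphor."""
    dx, dy = current[0] - anchor[0], current[1] - anchor[1]
    length = (dx * dx + dy * dy) ** 0.5
    if length <= dead_zone:
        return (0.0, 0.0)  # inside the dead zone: no movement
    speed = min(max_speed, gain * (length - dead_zone))
    return (speed * dx / length, speed * dy / length)
```

The visual feedback (the drawn rubber band) would simply render the anchor-to-current segment, so direction and speed stay visible to the user.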

Proxy-Based Selection for Occluded and Dynamic Objects

We present a selection technique for 2D and 3D environments based on proxy objects, designed to improve the selection of occluded and dynamic objects. We explore the design space for proxies, implement the properties of colour similarity and motion similarity, and test them in a user study. Our technique significantly increases selection precision but is slower than the reference selection technique, suggesting a mix of both to balance speed against error rate in real-world applications.
Marc Herrlich, Benjamin Walther-Franks, Roland Schröder-Kroll, Jan Holthusen, Rainer Malaka

Integrated Rotation and Translation for 3D Manipulation on Multi-Touch Interactive Surfaces

In the domain of 2D graphical applications multi-touch input is already quite well understood and smoothly integrated translation and rotation of objects widely accepted as a standard interaction technique. However, in 3D VR, modeling, or animation applications, there are no such generally accepted interaction techniques for multi-touch displays featuring the same smooth and fluid interaction style. In this paper we present two novel techniques for integrated 6 degrees of freedom object manipulation on multi-touch displays. They are designed to transfer the smooth 2D interaction properties provided by multi-touch input to the 3D domain. One makes separation of rotation and translation easier, while the other strives for maximum integration of rotation and translation. We present a first user study showing that while both techniques can be used successfully for unimanual and bimanual integrated 3D rotation and translation, the more integrated technique is faster and easier to use.
Marc Herrlich, Benjamin Walther-Franks, Rainer Malaka

Left and Right Hand Distinction for Multi-touch Displays

In the physical world we use both hands in a very distinctive manner. Much research has been dedicated to transfer this principle to the digital realm, including multi-touch interactive surfaces. However, without the possibility to reliably distinguish between hands, interaction design is very limited. We present an approach for enhancing multi-touch systems based on diffuse illumination with left and right hand distinction. Using anatomical properties of the human hand we derive a simple empirical model and heuristics that, when fed into a decision tree classifier, enable real-time hand distinction for multi-touch applications.
Benjamin Walther-Franks, Marc Herrlich, Markus Aust, Rainer Malaka
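A drastically simplified geometric stand-in for such a heuristic (NOT the paper's diffuse-illumination features or trained decision tree) could look like this, assuming one flat hand with fingers pointing up in y-up coordinates:

```python
import math
from itertools import combinations

def classify_hand(touches):
    """Classify five fingertip touches of one hand as 'left' or 'right'.
    Toy heuristic: the thumb is the touch farthest from the centroid of
    the other four; the index is the extreme fingertip nearest the thumb;
    the thumb's side relative to the index-to-pinky line decides the hand."""
    def centroid(pts):
        return (sum(p[0] for p in pts) / len(pts),
                sum(p[1] for p in pts) / len(pts))

    thumb = max(touches, key=lambda p: math.dist(
        p, centroid([q for q in touches if q != p])))
    fingers = [p for p in touches if p != thumb]
    # index and pinky are the two fingertips farthest apart
    a, b = max(combinations(fingers, 2), key=lambda pair: math.dist(*pair))
    index, pinky = (a, b) if math.dist(a, thumb) < math.dist(b, thumb) else (b, a)
    v1 = (pinky[0] - index[0], pinky[1] - index[1])
    v2 = (thumb[0] - index[0], thumb[1] - index[1])
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    return 'right' if cross < 0 else 'left'
```

The paper instead derives features from the hand's anatomical proportions and feeds them into a decision tree classifier, which works in real time and handles more hand poses.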

Short Papers: Visual Communication

Visual Communication in Interactive Multimedia

Careful graphical design can strengthen the narrative of graphical projects by aligning visual statements with the content. While many computer science projects lack a consistent implementation of artistic principles, graphic designers tend to neglect user interaction and evaluation. In a recent project we therefore successfully combined both sides. In future work we plan to investigate further integration of visual narration into interactive storytelling.
René Bühling, Michael Wißner, Elisabeth André

Communicative Images

This paper presents a novel approach to image processing: images are integrated with a dialogue interface that enables them to communicate with the user. This paradigm is supported by exploiting graphical ontologies and using intelligent modules that enable learning from dialogues and knowledge management. The Internet is used for retrieving information about the images as well as for solving more complex tasks in this online environment. Simple examples of the dialogues with the communicative images illustrate the basic idea.
Ivan Kopecek, Radek Oslejsek

A Survey on Factors Influencing the Use of News Graphics in Iranian Online Media

A news graphic is a kind of infographic that reports the news visually; the two differ in content and in the speed with which they are produced. Although this kind of graphic is frequently used in media around the world, its use is limited in Iran. The present article studies the factors influencing the use of news graphics in Iranian media by means of descriptive methods (interviews and analysis). It identifies five deterrent factors: the high cost of producing news graphics, media managers' low familiarity with news graphics, the limited experience and competence of Iranian graphic designers in this field, technical and communicational limitations, and the difficulty of producing and supporting Persian graphics software due to a lack of professional groups creating such software.
Maryam Salimi, Amir Masoud Amir Mazaheri

Short Papers: Graphics and Audio

Palliating Visual Artifacts through Audio Rendering

In this paper, we present a pipeline for combining graphical rendering through an impostor-based level-of-detail (LOD) technique with audio rendering of an environment sound at different LODs. Two experiments were designed to investigate how the parameters used to control the impostors and an additional audio modality affect the visual detection of artifacts produced by the impostor-based LOD rendering technique. Results show that, in general, simple stereo sound hardly impacts the perception of image artifacts such as graphical discontinuities.
Hui Ding, Christian Jacquemin

A Phong-Based Concept for 3D-Audio Generation

Intelligent virtual objects are gaining more and more significance in the development of virtual worlds. Although this concept has high potential for generating all kinds of multimodal output, so far it is mostly used to enrich graphical properties. This paper proposes a framework in which objects, enriched with information about their sound properties, are processed to generate virtual sound sources. To create a convincing surround sound experience, not only single sounds but also environmental properties have to be considered. We introduce a concept that transfers features from the Phong lighting model to sound rendering.
Julia Fröhlich, Ipke Wachsmuth
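One possible reading of the Phong transfer maps the ambient/diffuse/specular terms to room reverberation, distance-attenuated direct sound, and a directional early reflection off a surface. The 2D sketch below is a hypothetical illustration of that idea, not the authors' model:

```python
import math

def phong_sound_gain(src, listener, surface_normal, reflect_point,
                     k_ambient=0.1, k_direct=0.6, k_reflect=0.3, shininess=8):
    """Hypothetical Phong-style sound gain in 2D: ambient (reverb) +
    direct (distance-attenuated) + reflection (specular analogue)."""
    def norm(v):
        l = math.hypot(*v)
        return (v[0] / l, v[1] / l)

    # direct term: inverse-square distance attenuation, clamped to 1
    d = math.dist(src, listener)
    direct = min(1.0, 1.0 / max(d * d, 1e-6))

    # reflection term: strongest when the mirror direction at the surface
    # points toward the listener
    incoming = norm((reflect_point[0] - src[0], reflect_point[1] - src[1]))
    n = norm(surface_normal)
    dot = incoming[0] * n[0] + incoming[1] * n[1]
    mirror = (incoming[0] - 2 * dot * n[0], incoming[1] - 2 * dot * n[1])
    to_listener = norm((listener[0] - reflect_point[0],
                        listener[1] - reflect_point[1]))
    align = max(0.0, mirror[0] * to_listener[0] + mirror[1] * to_listener[1])
    reflect = align ** shininess

    return k_ambient + k_direct * direct + k_reflect * reflect
```

The weights and the shininess exponent are free parameters, as in Phong lighting; a real implementation would drive actual audio sources rather than return a scalar gain.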

System Demonstrations

pitchMap: A Mobile Interaction Prototype for Exploring Combinations of Maps and Images

While maps and images complement each other when combined in a 3D environment for virtual exploration, mobile interaction concepts for navigation in 3D space are challenging. Due to the lack of input devices, most of the interaction has to be realised on small touch screens. We present a prototype combining well-known interaction techniques using a discrete and a continuous pitch gesture.
Dirk Wenig, Rainer Malaka

Lg: A Computational Framework for Research in Sketch-Based Interfaces

We present Lg, a computational framework for the development and scientific evaluation of assistive technologies for sketch-based interfaces on the Mac OS X platform. Lg provides its own Python API that allows access to raw and refined sketch data, machine learning algorithms, and scientific analysis tools. While it has been designed with special attention to time series analysis tasks and machine learning applications, Lg is easily adaptable to perform different tasks in sketch processing.
Tobias Lensing, Lutz Dickmann

Elements of Consumption: An Abstract Visualization of Household Consumption

To promote sustainability, consumers must be informed about their consumption behaviours. Ambient displays can be used as an eco-feedback technology to convey household consumption information. Elements of Consumption (EoC) demonstrates this by visualizing electricity, water, and natural gas consumption. EoC delivers three key components: (1) an abstract art piece, (2) a visual way to display data, and (3) the use of an abstract art piece as a visual display of data in order to persuade homeowners to conserve.
Stephen Makonin, Philippe Pasquier, Lyn Bartram

Hand Ivy: Hand Feature Detection for an Advanced Interactive Tabletop

This paper describes an interactive installation that expresses close communication among human beings through their hands. Communication here means the intentional and symbolic interactions implemented to share meanings among human beings. In this work, the communication and communion with the other person facing across the table is mediated by the two hands placed on it. From the fingertips on the table, ivy grows toward the wall; the ivy branches intertwine to activate the wall, expressing the transition from a wall of closed minds to a wall of communication.
Young-Mi Kim, Heesun Choi, Jong-Soo Choi

