
About this Book

This book constitutes thoroughly revised and selected papers from the 14th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, VISIGRAPP 2019, held in Prague, Czech Republic, in February 2019. The 25 thoroughly revised and extended papers presented in this volume were carefully reviewed and selected from 395 submissions. The papers contribute to the understanding of current research trends in computer graphics, human-computer interaction, information visualization, and computer vision.

Table of Contents

Synthesis and Validation of Virtual Woodcuts Generated with Reaction-Diffusion

Abstract
Although woodcuts are a traditional artistic technique, in which a woodblock is carved and then printed onto paper, few works have attempted to synthesize woodcuts in the context of Non-Photorealistic Rendering (NPR). We previously presented a woodcut synthesis mechanism based on Turing’s reaction-diffusion. In that work, an input image is preprocessed to gather information that is used to control the reaction-diffusion processing. In this article, we extend our previous work by adding noise to improve the appearance of the wood and by giving the user a higher degree of control over the final result. We also validate our results by comparing them with actual woodcuts and through a qualitative evaluation with users. This work expands the range of artistic styles that can be generated by NPR tools.
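Turing-style patterns of the kind mentioned above can be illustrated with the Gray-Scott model, a standard two-chemical reaction-diffusion system. The sketch below is illustrative only, not the authors' exact formulation; the parameter values are generic Pearson-style defaults, and the seeding is arbitrary:

```python
import numpy as np

def gray_scott_step(u, v, Du=0.16, Dv=0.08, f=0.035, k=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion model.

    u, v are 2D concentration fields; over many steps the v field
    develops stripe/spot patterns reminiscent of carved grooves.
    """
    def lap(a):
        # Discrete 5-point Laplacian with wrap-around boundaries.
        return (np.roll(a, 1, 0) + np.roll(a, -1, 0)
                + np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4 * a)

    uvv = u * v * v
    u_next = u + dt * (Du * lap(u) - uvv + f * (1 - u))
    v_next = v + dt * (Dv * lap(v) + uvv - (f + k) * v)
    return u_next, v_next

# Seed: u near 1 everywhere, a small square perturbation in the centre.
n = 64
u = np.ones((n, n))
v = np.zeros((n, n))
u[28:36, 28:36] = 0.5
v[28:36, 28:36] = 0.5
for _ in range(200):
    u, v = gray_scott_step(u, v)
```

In an NPR setting, the input image would typically steer the parameters or the initial fields spatially; here both are uniform for brevity.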

Synthesising Light Field Volume Visualisations Using Image Warping in Real-Time

Abstract
We extend our prior research on light field view synthesis for volume data presented in the conference proceedings of VISIGRAPP 2019 [13]. In that prior research, we identified the best Convolutional Neural Network, depth heuristic, and image warping technique to employ in our light field synthesis method. Our research demonstrated that applying backward image warping using a depth map estimated during volume rendering, followed by a Convolutional Neural Network, produced high-quality results. In this body of work, we further address how the Convolutional Neural Network generalises to volumes and transfer functions different from those it was trained on. We show that the Convolutional Neural Network (CNN) fails to generalise on a large dataset of head magnetic resonance images. Additionally, we speed up our implementation to enable better timing comparisons while remaining functionally equivalent to our previous method. This yields a real-time application of light field synthesis for volume data, and the results are of high quality for low-baseline light fields.
Seán K. Martin, Seán Bruton, David Ganter, Michael Manzke
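Backward image warping with a per-pixel depth map, the core operation named in the abstract, can be sketched in a deliberately simplified form: pure horizontal camera translation, a pinhole disparity model, and nearest-neighbour sampling. All names and the toy inputs below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def backward_warp_horizontal(ref, depth, baseline, focal):
    """Synthesise a horizontally shifted view by backward warping.

    For each target pixel, look up the reference pixel displaced by
    the disparity implied by the target-view depth map.
    ref:   (H, W) reference image
    depth: (H, W) per-pixel depth of the target view
    """
    h, w = ref.shape
    xs = np.arange(w)
    disparity = baseline * focal / depth  # pixel shift per target pixel
    out = np.zeros_like(ref)
    for y in range(h):
        # Clamp lookups to the image border; nearest-neighbour sample.
        src_x = np.clip(np.round(xs + disparity[y]).astype(int), 0, w - 1)
        out[y] = ref[y, src_x]
    return out

ref = np.tile(np.arange(8, dtype=float), (8, 1))  # simple gradient image
depth = np.full((8, 8), 2.0)                      # constant depth plane
warped = backward_warp_horizontal(ref, depth, baseline=1.0, focal=2.0)
```

In the paper's pipeline a CNN then refines such warped views; here the warp alone is shown.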

Motion Capture Analysis and Reconstruction Using Spatial Keyframes

Abstract
Motion capture is the preferred technique for creating realistic animations of skeleton-based models. Capture sessions, however, are costly, and the resulting motions are hard to analyze for later modification and reuse. In this paper we propose several tools to analyze and reconstruct motions using the concept of spatial keyframes proposed by Igarashi et al. [19]. Captured motions are represented by curves on the plane obtained by multidimensional projection, allowing the animator to associate regions of that plane with regions of pose space so that representative poses can be located and harvested. The problem of reconstruction from representative poses is also investigated by conducting experiments that measure the error behavior with respect to different multidimensional projection and interpolation algorithms. In particular, we introduce a novel multidimensional projection optimization that minimizes reconstruction errors. These ideas are showcased in an interactive application that can be publicly accessed online.
Bernardo F. Costa, Claudio Esperança

Involving Hearing, Haptics and Kinesthetics into Non-visual Interaction Concepts for an Augmented Remote Tower Environment

Abstract
We investigated the contribution of specific HCI concepts for providing multimodal information to Air Traffic Controllers in the context of Remote Control Towers (i.e. when an airport is controlled from a distant location). We considered interactive spatial sound, tactile stimulation and body movements to design four different interaction and feedback modalities. Each of these modalities has been designed to provide specific solutions to typical, identified Air Traffic Control use cases. Sixteen professional Air Traffic Controllers (ATCos) participated in the experiment, which was structured in four distinct scenarios. ATCos were immersed in an ecological setup, in which they were asked to control (i) one airport without augmentation modalities, (ii) two airports without augmentations, (iii) one airport with augmentations and (iv) two airports with augmentations. These experimental conditions constituted the four distinct experimental scenarios. Behavioral results showed a significant increase in overall participant performance when the augmentation modalities were activated in remote control tower operations for one airport.
Maxime Reynal, Pietro Aricò, Jean-Paul Imbert, Christophe Hurter, Gianluca Borghini, Gianluca Di Flumeri, Nicolina Sciaraffa, Antonio Di Florio, Michela Terenzi, Ana Ferreira, Simone Pozzi, Viviana Betti, Matteo Marucci, Fabio Babiloni

Virtual Reality System for Ship Handling Simulations: A Case Study on Nautical Personnel Performance, Observed Behaviour, Sense of Presence and Sickness

Abstract
In this paper we introduce the virtual reality ship simulator we designed and report the results of an experimental session in which a manoeuvring task was proposed. In particular, we considered three factors: (i) the visualisation setup, i.e. a non-immersive system based on standard monitors versus an immersive system using a head-mounted display; (ii) the path users were instructed to follow, an elliptic or an eight-shaped path; (iii) the boat type, slow or fast. We analyzed three different aspects: performance, defined as the correctness of the followed path; cybersickness, assessed by the Simulator Sickness Questionnaire and physiological measurements (heart rate and skin conductance); and sense of presence, determined through the Igroup Presence Questionnaire and the participants’ head rotation. In order to evaluate the proposed system from the point of view of experts, tests were conducted with 20 skilled volunteer users, specifically students of a naval academy. Results show that: (i) expert users are able to follow the predefined path quite accurately; (ii) neither visualization system introduces serious undesired effects or stress, and the use of immersive virtual reality by itself does not explain an increase in user malaise; (iii) immersive virtual reality systems allow users to feel more involved and present in the simulation scenario; (iv) there are no appreciable differences with respect to users’ prior knowledge of virtual reality systems, indicating that the simulator can also be used to train users without specific technological skills.
Chiara Bassano, Manuela Chessa, Luca Fengone, Luca Isgrò, Fabio Solari, Giovanni Spallarossa, Davide Tozzi, Aldo Zini

A Process Reference Model for UX

Abstract
We propose a process reference model for UX (UXPRM), which includes a description of the primary UX lifecycle processes within a UX lifecycle and a set of supporting UX methods. The primary UX lifecycle processes are refined into objectives, outcomes and base practices. The supporting UX methods are refined into related techniques, specific objectives and references to the related documentation available in the literature. The contribution of the proposed UXPRM is three-fold: conceptual, as it draws an accurate picture of the UX base practices; practical, as it is intended for both researchers and practitioners and customizable for different organizational settings; methodological, as it supports researchers and practitioners to make informed decisions while selecting UX methods and techniques. This is a first step towards the strategic planning of UX activities.
Suzanne Kieffer, Luka Rukonić, Vincent Kervyn de Meerendré, Jean Vanderdonckt

AAT Meets Virtual Reality

Abstract
Smoking is still one of the main causes of premature mortality and is associated with a variety of diseases. Nevertheless, a large part of society smokes. Many smokers do not seek treatment, and the path to smoking cessation is often abandoned. For this reason, we have developed a VR application that builds on the Approach-Avoidance Task (AAT) procedure, which has already achieved positive results in the detection and treatment of addiction disorders. We want to complement classical therapies by increasing the motivation of affected patients through immersion, embodiment and game design elements. For this purpose, we initially developed and evaluated a first demonstrator. Based on its results and findings, a completely revised VR application was programmed, which also eliminates the identified errors and problems of the first demonstrator. In addition, a mobile application will be developed to support the treatment. Our results show that transferring the AAT procedure into virtual reality, and thus into three-dimensional space, is possible and promising. We also found that three-dimensional stimuli should be preferred, since interaction with them was more intuitive and entertaining for the participants. The benefits of game design elements in combination with the representation of interactions in the form of a hand with gripping animations also proved to be of great value, as this increased immersion, embodiment, and therefore motivation.
Tanja Joan Eiler, Armin Grünewald, Michael Wahl, Rainer Brück

Orthogonal Compaction: Turn-Regularity, Complete Extensions, and Their Common Concept

Abstract
The compaction problem in orthogonal graph drawing aims to construct efficient drawings on the orthogonal grid. The objective is to minimize the total edge length or area of a planar orthogonal grid drawing. However, any collisions, i.e. crossing edges, overlapping faces, or colliding vertices, must be avoided. The problem is NP-hard. Two common compaction methods are the turn-regularity approach by Bridgeman et al. [4] and the complete-extension approach by Klau and Mutzel [23]. Esser [14] has shown that both methods are equivalent and follow a common concept to avoid collisions.
We present both approaches and their common concept in detail. We introduce an algorithm to transform the turn-regularity formulation into the complete-extension formulation and vice versa in O(n) time, where n is the number of vertices.
Alexander M. Esser

A Model for the Progressive Visualization of Multidimensional Data Structure

Abstract
This paper presents a model for the progressive visualization and exploration of the structure of large datasets: an abstraction over different components and relations which provides the means for constructing a visual representation of a dataset’s structure, with continuous system feedback and user interactions for computational steering, in spite of size. In this context, the structure of a dataset is regarded as the distance or neighborhood relationships among its data points, while size is defined in terms of the number of data points. To prove the validity of the model, a proof-of-concept was developed as a Visual Analytics library for Apache Zeppelin and Apache Spark. Moreover, nine user studies were carried out in order to assess the usability of the library. The results from the user studies show that the library is useful for visualizing and understanding emerging cluster patterns, for identifying relevant features, and for estimating the number of clusters k.
Elio Ventocilla, Maria Riveiro

Visualization of Tree-Structured Data Using Web Service Composition

Abstract
This article builds on the recently presented hierarchy visualization service HiViSer and its API [51]. It illustrates its decomposition into modular services for the processing and visualization of tree-structured data. The decomposition is aligned with the common structure of visualization pipelines [48] and, in this way, facilitates attribution of the services’ capabilities. Suitable base resource types are proposed, and their structure and relations, as well as a subtyping concept for the specifics of hierarchy visualization implementations, are detailed. Moreover, state-of-the-art quality standards and techniques for self-documentation and discovery of components are incorporated. As a result, a blueprint for Web service design, architecture, modularization, and composition is presented, targeting the fundamental visualization tasks for tree-structured data, i.e., gathering, processing, rendering, and provisioning. Finally, the applicability of the service components and the API is evaluated in the context of exemplary applications.
Willy Scheibel, Judith Hartmann, Daniel Limberger, Jürgen Döllner

Breaking the Curse of Visual Analytics: Accommodating Virtual Reality in the Visualization Pipeline

Abstract
Previous research has exposed the discrepancy between the subject of analysis (real world) and the actual data on which the analysis is performed (data world) as a critical weak spot in visual analysis pipelines. In this paper, we demonstrate how Virtual Reality (VR) can help to verify the correspondence of both worlds in the context of Information Visualization (InfoVis) and Visual Analytics (VA). Immersion allows the analyst to dive into the data world and collate it to familiar real-world scenarios. If the data world lacks crucial dimensions, then these are also missing in created virtual environments, which may draw the analyst’s attention to inconsistencies between the database and the subject of analysis. When situating VR in a generic visualization pipeline, we can confirm its basic equality compared to other mediums as well as possible benefits. To overcome the guarded stance of VR in InfoVis and VA, we present a structured analysis of arguments, exhibiting the circumstances that make VR a viable medium for visualizations. As a further contribution, we discuss how VR can aid in minimizing the gap between the data world and the real world and present a use case that demonstrates two solution approaches. Finally, we report on initial expert feedback attesting the applicability of our approach in a real-world scenario for crime scene investigation.
Matthias Kraus, Matthias Miller, Juri Buchmüller, Manuel Stein, Niklas Weiler, Daniel A. Keim, Mennatallah El-Assady

Designing a Visual Analytics System for Medication Error Screening and Detection

Abstract
Drug safety analysts at the U.S. Food & Drug Administration (FDA) analyze medication error reports submitted to the FDA Adverse Event Reporting System (FAERS) to detect and prevent detrimental errors from happening in the future. Currently, this review process is time-consuming, involving manual extraction and sense-making of the key information from each report narrative. There is a need for a visual analytics approach that leverages both computational techniques and interactive visualizations to empower analysts to quickly gain insights from reports. To assist analysts responsible for identifying medication errors in these reports, we design an interactive Medication Error Visual analytics (MEV) system. In this paper, we describe a detailed study of pharmacovigilance practice at the FDA and the iterative design process that led to the final design of MEV. MEV, a multi-layer treemap-based visualization system, guides analysts towards the most critical medication errors by displaying interactive distributions of reports over multiple data attributes such as the stages, causes and types of errors. A user study with ten drug safety analysts at the FDA confirms that screening and review tasks performed with MEV are perceived as more efficient and easier than with their existing tools. Subjective expert interviews highlight opportunities for improving MEV and for utilizing visual analytics techniques in general for analyzing critical FAERS reports at scale.
Tabassum Kakar, Xiao Qin, Cory M. Tapply, Oliver Spring, Derek Murphy, Daniel Yun, Elke A. Rundensteiner, Lane Harrison, Thang La, Sanjay K. Sahoo, Suranjan De

A Layered Approach to Lightweight Toolchaining in Visual Analytics

Abstract
The ongoing proliferation and differentiation of Visual Analytics to various application domains and usage scenarios has also resulted in a fragmentation of the software landscape for data analysis. Highly specialized tools are available that focus on one particular analysis task in one particular application domain. The interoperability of these tools, which are often research prototypes without support or proper documentation, is hardly ever considered outside of the toolset they were originally intended to work with. To nevertheless use and reuse them in other settings and together with other tools, so as to realize novel analysis procedures by using them in concert, we propose an approach for loosely coupling individual visual analytics tools together into toolchains. Our approach differs from existing such mechanisms by being lightweight in realizing a pairwise coupling between tools without a central broker, and by being layered into different aspects of such a coupling: the usage flow, the data flow, and the control flow. We present a model of this approach and showcase its usefulness with three different usage examples, each focusing on one of the layers.
Hans-Jörg Schulz, Martin Röhlig, Lars Nonnemann, Marius Hogräfer, Mario Aehnelt, Bodo Urban, Heidrun Schumann

Fast Approximate Light Field Volume Rendering: Using Volume Data to Improve Light Field Synthesis via Convolutional Neural Networks

Abstract
Volume visualization pipelines have the potential to be improved by the use of light field display technology, allowing enhanced perceptual qualities. However, these displays will require a significant increase in pixels to be rendered at interactive rates. Volume rendering makes use of ray-tracing techniques, which makes this resolution increase challenging for modest hardware. We demonstrate in this work an approach to synthesize the majority of the viewpoints in the light field using a small set of rendered viewpoints via a convolutional neural network. We show that synthesis performance can be further improved by allowing the network access to the volume data itself. To perform this efficiently, we propose a range of approaches and evaluate them against two datasets collected for this task. These approaches all improve synthesis performance and avoid the use of expensive 3D convolutional operations. With this approach, we improve light field volume rendering times by a factor of 8 for our test case.
Seán Bruton, David Ganter, Michael Manzke

A Reproducibility Study for Visual MRSI Data Analytics

Abstract
Magnetic Resonance Spectroscopy Imaging (MRSI) is a spectral imaging method that measures per-voxel spectral information of chemical resonance, from which metabolite concentrations can be computed. In recent work, we proposed a system that uses coordinated views between image-space visualizations and visual representations of the spectral (or feature) space. Coordinated interaction allowed us to analyze all metabolite concentrations together instead of focusing on single metabolites one at a time [8]. In this paper, we relate our findings to results reported in the literature. MRSI is particularly useful for classifying tumors and measuring their infiltration of healthy tissue. We compare the metabolite compositions obtained in the various tissues of our data against the compositions reported by other brain tumor studies using a visual analytics approach, which visualizes the similarities in a plot obtained using dimensionality reduction methods. We test our data against various sources to assess the reproducibility of the findings.

A Self-regulating Spatio-Temporal Filter for Volumetric Video Point Clouds

Abstract
This work presents a self-regulating filter that is capable of accurately upsampling dynamic point cloud sequences captured using wide-baseline multi-view camera setups. This is achieved by two-way temporal projection of edge-aware upsampled point clouds while imposing coherence and noise filtering via a windowed, self-regulating noise filter. We use a state-of-the-art Spatio-Temporal Edge-Aware scene flow estimation to accurately model the motion of points across a sequence and then, leveraging the spatio-temporal inconsistency of unstructured noise, we perform weighted Hausdorff distance-based noise filtering over a given window. Our results demonstrate that this approach produces temporally coherent, upsampled point clouds while mitigating both additive and unstructured noise. In addition to filtering noise, the algorithm is able to greatly reduce intermittent loss of pertinent geometry. The system performs well in dynamic real-world scenarios with both stationary and non-stationary cameras, as well as in synthetically rendered environments for a baseline study.
Matthew Moynihan, Rafael Pagés, Aljosa Smolic
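The windowed, Hausdorff-style temporal test described above can be sketched as follows. This is a simplified illustration under stated assumptions: the clouds are already aligned by scene flow, the test is a one-sided nearest-neighbour check rather than the paper's weighted formulation, and all names are hypothetical:

```python
import numpy as np

def temporal_noise_filter(window, threshold):
    """Filter the middle cloud of a temporal window of point clouds.

    A point is kept only if, in every other frame of the window, some
    point lies within `threshold` of it (a one-sided Hausdorff-style
    test). Unstructured noise is temporally incoherent, so noise points
    tend to fail this test and are removed.
    window: list of (N_i, 3) arrays, aligned by scene flow beforehand.
    """
    mid = len(window) // 2
    cloud = window[mid]
    keep = np.ones(len(cloud), dtype=bool)
    for j, other in enumerate(window):
        if j == mid:
            continue
        # Distance from each point of `cloud` to its nearest point in `other`.
        d = np.linalg.norm(cloud[:, None, :] - other[None, :, :], axis=2)
        keep &= d.min(axis=1) <= threshold
    return cloud[keep]

stable = np.array([[0.0, 0.0, 0.0]])
noisy = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0]])  # spurious point
filtered = temporal_noise_filter([stable, noisy, stable], threshold=0.1)
```

The brute-force pairwise distance is O(N²) and serves only to keep the sketch self-contained; a k-d tree would be used in practice.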

Modeling Trajectories for 3D Motion Analysis

Abstract
3D motion analysis by projecting trajectories onto manifolds in a given video can be useful in different applications. In this work, we use two manifolds, the Grassmann manifold and the Special Orthogonal group SO(3), to accurately analyse complex motions by projecting only skeleton data while dealing with rotation invariance. First, we project the skeleton sequence onto the Grassmann manifold to model the human motion as a trajectory. Then, we introduce the second manifold, SO(3), in order to consider the rotation that is ignored by the Grassmann manifold for the matched pairs on this manifold. Our objective is to find the best weighted linear combination of the distances on the Grassmann and SO(3) manifolds according to the nature of the input motion. To validate the proposed 3D motion analysis method, we applied it in the framework of action recognition, re-identification and sport performance evaluation. Experiments on three public datasets for 3D human action recognition (G3D-Gaming, UTD-MHAD multimodal action and Florence3D-Action), on two public datasets for re-identification (IAS-Lab RGBD-ID and BIWI-Lab RGBD-ID) and on one recent dataset for the throwing motion of handball players (H3DD) proved the effectiveness of the proposed method.
Amani Elaoud, Walid Barhoumi, Hassen Drira, Ezzeddine Zagrouba
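The weighted combination of manifold distances can be sketched with the standard geodesic distances on each manifold: principal angles on the Grassmann manifold and the relative rotation angle on SO(3). The weighting scheme below is the plain linear blend the abstract describes; how the weight is chosen per motion is the paper's contribution and is not reproduced here:

```python
import numpy as np

def grassmann_distance(A, B):
    """Geodesic distance between the subspaces spanned by the
    orthonormal matrices A and B, via principal angles."""
    s = np.clip(np.linalg.svd(A.T @ B, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(s))

def so3_distance(R1, R2):
    """Geodesic distance on SO(3): the angle of the relative rotation."""
    c = (np.trace(R1.T @ R2) - 1.0) / 2.0
    return np.arccos(np.clip(c, -1.0, 1.0))

def combined_distance(A, B, R1, R2, w):
    """Weighted linear combination of the two manifold distances,
    with weight w in [0, 1]."""
    return w * grassmann_distance(A, B) + (1.0 - w) * so3_distance(R1, R2)
```

For example, identical subspaces and a 90-degree relative rotation give a combined distance of w·0 + (1−w)·π/2.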

Quantitative Comparison of Affine Feature Detectors Based on Quadcopter Images

Abstract
Affine correspondences are the focus of work in many research groups nowadays. Components of projective geometry, e.g. the homography or fundamental matrix, can be recovered more accurately when not only point but also affine correspondences are exploited. This paper quantitatively compares state-of-the-art affine covariant feature detectors based on outdoor images taken by a quadcopter-mounted camera. Accurate Ground Truth (GT) data can be calculated from the restricted flight path of the quadcopter. The GT data consist not only of affine transformations but of feature locations as well. A quantitative comparison and in-depth analysis of the affine covariant feature detectors are also presented.
Zoltán Pusztai, Gergő Gál, Levente Hajder

An MRF Optimisation Framework for Full 3D Reconstruction of Scenes with Complex Reflectance

Abstract
The ability to digitise real objects is fundamental in applications such as film post-production, cultural heritage preservation and video game development. While many existing modelling techniques achieve impressive results, they are often reliant on assumptions such as prior knowledge of the scene’s surface reflectance. This considerably restricts the range of scenes that can be reconstructed, as these assumptions are often violated in practice. One technique that allows surface reconstruction regardless of the scene’s reflectance model is Helmholtz Stereopsis (HS). However, to date, research on HS has mostly been limited to 2.5D scene reconstruction. In this paper, a framework is introduced to perform full 3D HS using Markov Random Field (MRF) optimisation for the first time. The paper introduces two complementary techniques. The first approach computes multiple 2.5D reconstructions from a small number of viewpoints and fuses these together to obtain a complete model, while the second approach directly reasons in the 3D domain by performing a volumetric MRF optimisation. Both approaches are based on optimising an energy function combining an HS confidence measure and normal consistency across the reconstructed surface. The two approaches are evaluated on both synthetic and real scenes, measuring the accuracy and completeness obtained. Further, the effect of noise on modelling accuracy is experimentally evaluated on the synthetic dataset. Both techniques achieve sub-millimetre accuracy and exhibit robustness to noise. In particular, the method based on full 3D optimisation is shown to significantly outperform the other approach.
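The energy being optimised, combining a Helmholtz Stereopsis confidence term with normal consistency across the surface, can be sketched as a generic pairwise MRF energy. Everything below (function names, the discrete label set, the sign conventions) is a hypothetical illustration of that energy structure, not the paper's actual formulation:

```python
import numpy as np

def mrf_energy(labels, confidence, normals, neighbours, lam=1.0):
    """Energy of a discrete surface labelling.

    labels:     chosen label (e.g. depth/occupancy candidate) per site
    confidence: (sites, labels) HS-style confidence, higher is better
    normals:    (sites, labels, 3) unit surface normals per candidate
    neighbours: list of (i, j) pairs of adjacent surface elements
    """
    # Unary term: reward high-confidence candidates.
    unary = sum(-confidence[i, l] for i, l in enumerate(labels))
    # Pairwise term: penalise disagreement between neighbouring normals.
    pair = 0.0
    for i, j in neighbours:
        ni = normals[i, labels[i]]
        nj = normals[j, labels[j]]
        pair += 1.0 - float(np.dot(ni, nj))
    return unary + lam * pair
```

A solver (graph cuts, belief propagation, etc.) would then search for the labelling minimising this energy; the sketch only evaluates it.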

Robustifying Direct VO to Large Baseline Motions

Abstract
While Direct Visual Odometry (VO) methods have been shown to outperform feature-based ones in terms of accuracy and processing time, their optimization is sensitive to the initialization pose typically seeded from heuristic motion models. In real-life applications, the motion of a hand-held or head-mounted camera is predominantly erratic, thereby violating the motion models used, causing large baselines between the initializing pose and the actual pose, which in turn negatively impacts the VO performance.
As the camera transitions from a leisure device to a viable sensor, robustifying Direct VO to real-life scenarios becomes of utmost importance. In that pursuit, we propose FDMO, a hybrid VO that makes use of Indirect residuals to seed the Direct pose estimation process. Two variations of FDMO are presented: one that only intervenes when failure in the Direct optimization is detected, and another that performs both Indirect and Direct optimizations on every frame. Various efficiencies are introduced to both the feature detector and the Indirect mapping process, resulting in a computationally efficient approach. Finally, an experimental procedure designed to test the resilience of VO to large baseline motions is used to validate the success of the proposed approach.
Georges Younes, Daniel Asmar, John Zelek

Localization and Grading of Building Roof Damages in High-Resolution Aerial Images

Abstract
According to the United States National Centers for Environmental Information (NCEI), 2017 was one of the most expensive years in terms of losses due to numerous weather and climate disaster events. To reduce the expenditure of handling insurance claims and the interactive adjustment of losses, automatic methods analyzing post-disaster images of large areas are increasingly being employed. In our work, roof damage analysis was carried out on high-resolution aerial images captured after a devastating hurricane. We compared the performance of a conventional (Random Forest) classifier, which operates on superpixels and relies on sophisticated, hand-crafted features, with two Convolutional Neural Networks (CNNs) for semantic image segmentation, namely SegNet and DeepLabV3+. The results vary greatly depending on the complexity of the roof shapes. In the case of homogeneous shapes, the results of all three methods are comparable and promising. For complex roof structures, the results show that the CNN-based approaches perform slightly better than the conventional classifier; the performance of the latter is, however, more predictable with respect to the amount of training data and most successful when this amount is low. At the building level, all three classifiers perform comparably well. However, an important prerequisite for accurate damage grading of each roof is its correct delineation. To achieve this, a procedure for multi-modal registration has been developed and is summarized in this work. It allows aligning freely available GIS data with actual image data, and it showed robust performance even in the case of severely destroyed buildings.
Melanie Böge, Dimitri Bulatov, Lukas Lucks

Semantic Image Completion Through an Adversarial Strategy

Abstract
Image completion, or image inpainting, is the task of filling in missing regions of an image. When those regions are large and the missing information is unique, such that the information and redundancy available in the image cannot guide the completion, the task becomes even more challenging. This paper proposes an automatic semantic inpainting method able to reconstruct corrupted information of an image by semantically interpreting the image itself. It is based on an adversarial strategy followed by an energy-based completion algorithm. First, the latent space of the data is learned by training a modified Wasserstein generative adversarial network. Second, the learned semantic information is combined with a novel optimization loss able to recover missing regions conditioned on the available information. Moreover, we present an application in the context of face inpainting, where our method is used to generate a new face by integrating desired facial attributes or expressions from a reference face. This is achieved by slightly modifying the objective energy. Quantitative and qualitative top-tier results show the power and realism of the presented method.
Patricia Vitoria, Joan Sintes, Coloma Ballester
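Energy-based completion over a learned latent space is commonly driven by two terms: a context loss on the known pixels and an adversarial prior pushing the generated image towards the data manifold. The sketch below shows that generic structure only; the paper's actual loss differs, and the function, weighting, and inputs here are illustrative assumptions:

```python
import numpy as np

def completion_loss(G_z, corrupted, mask, D_score, lam=0.1):
    """Generic latent-space inpainting energy.

    G_z:       generated image G(z) for latent code z
    corrupted: observed image with missing regions
    mask:      1 on known pixels, 0 on missing pixels
    D_score:   discriminator's realism score for G(z), in (0, 1]
    The context term ties G(z) to the known pixels; the prior term
    (-log D) penalises implausible completions.
    """
    context = np.abs(mask * (G_z - corrupted)).sum()
    prior = -np.log(D_score + 1e-8)
    return context + lam * prior
```

Minimising this over the latent code z (by gradient descent through the generator) and pasting G(z) into the masked region completes the image.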

An Enhanced Louvain Based Image Segmentation Approach Using Color Properties and Histogram of Oriented Gradients

Abstract
Segmentation techniques based on community detection algorithms generally suffer from an over-segmentation problem. This paper therefore proposes a new algorithm to agglomerate nearly homogeneous regions based on texture and color features. More specifically, our strategy relies on a community detection algorithm on graphs (used as a clustering approach), where the over-segmentation problem is managed by merging similar regions, with similarity computed from Histograms of Oriented Gradients (HOG) and the mean and standard deviation of color properties as features. In order to assess the performance of our proposed algorithm, we used three public datasets (the Berkeley Segmentation Dataset (BSDS300 and BSDS500) and the Microsoft Research Cambridge Object Recognition Image Database (MSRC)). Our experiments show that the proposed method produces sizeable segmentations and outperforms almost all the other methods from the literature in terms of accuracy and comparative metric scores.
Thanh-Khoa Nguyen, Jean-Loup Guillaume, Mickael Coustaty
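The merging criterion described above can be sketched as a per-region feature vector (an orientation histogram weighted by gradient magnitude, plus per-channel colour mean and standard deviation) compared under a distance threshold. Bin counts, normalisation, and the threshold test are illustrative choices, not the paper's exact parameters:

```python
import numpy as np

def region_features(pixels, grad_mag, grad_ori, n_bins=9):
    """Feature vector for one region: HOG-style orientation histogram
    plus per-channel colour mean and standard deviation.

    pixels:   (N, 3) colour values of the region's pixels
    grad_mag: (N,) gradient magnitudes
    grad_ori: (N,) gradient orientations in [0, pi)
    """
    hist, _ = np.histogram(grad_ori, bins=n_bins, range=(0, np.pi),
                           weights=grad_mag)
    hist = hist / (hist.sum() + 1e-8)  # normalise the histogram
    mean = pixels.mean(axis=0)
    std = pixels.std(axis=0)
    return np.concatenate([hist, mean, std])

def should_merge(f1, f2, threshold):
    """Merge two adjacent regions when their features are close enough."""
    return np.linalg.norm(f1 - f2) < threshold
```

Iterating `should_merge` over adjacent region pairs after community detection would progressively agglomerate the over-segmented output.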

Vehicle Activity Recognition Using DCNN

Abstract
This paper presents a novel Deep Convolutional Neural Network (DCNN) method for vehicle activity classification. We extend our previous approach to classify a larger number of vehicle trajectories in a single network. We also highlight the flexibility of our approach in integrating further scenarios into our classifier. Firstly, a spatio-temporal calculus method is used to encode the relative movement between vehicles as a trajectory of Qualitative Trajectory Calculus (QTC) states. We then map the encoded trajectory to a 2D matrix using one-hot vector mapping, which preserves the important positional data and the order of the QTC states. To do this, we associate the QTC sequences with pixels to form a 2D image texture. Afterwards, we adapt a trained CNN architecture to our vehicle activity recognition task. Two separate driving datasets are used to evaluate our method. We demonstrate that the proposed method outperforms existing techniques. Alongside the proposed approach, we created a new dataset of vehicle interactions. Although the focus of this paper is the automated analysis of vehicle interactions, the proposed technique is general and can be applied to pairwise analysis of moving objects.
Alaa AlZoubi, David Nam
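The one-hot mapping from a QTC state sequence to a 2D matrix can be sketched directly. For brevity the vocabulary below uses simplified two-character states rather than full QTC tuples; the function name and vocabulary are illustrative, not the authors' encoding:

```python
import numpy as np

def qtc_to_image(states, vocabulary):
    """Map a QTC state sequence to a 2D one-hot matrix ("image texture").

    Row i is the one-hot encoding of the i-th state, so both the state
    identity (column) and its position in the sequence (row) are
    preserved, as required for the downstream CNN.
    """
    index = {s: k for k, s in enumerate(vocabulary)}
    img = np.zeros((len(states), len(vocabulary)), dtype=np.uint8)
    for i, s in enumerate(states):
        img[i, index[s]] = 1
    return img

# Toy vocabulary of simplified relative-motion states.
vocab = ['--', '-0', '-+', '0-', '00', '0+', '+-', '+0', '++']
img = qtc_to_image(['00', '-+', '++'], vocab)
```

The resulting matrix can then be resized or padded to the fixed input resolution expected by the CNN.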

Quantifying Deformation in Aegean Sealing Practices

Abstract
In Bronze Age Aegean society, seals played an important role in authenticating, securing and marking. The study of the seals and their engraved motifs provides valuable insight into the social and political organization and administration of Aegean societies. A key research question is the determination of authorship and origin. Given several sets of similar impressions with a wide geographical distribution on Crete, and even beyond the island, the question arises as to whether all of them originated from the same seal and thus the same seal user. Current archaeological practice focuses on manually and qualitatively distinguishing visual features. In this work, we quantitatively evaluate and highlight visual differences between sets of seal impressions, enabling archaeological research to focus on measurable differences. Our data are plasticine and latex casts of original seal impressions acquired with a structured-light 3D scanner. The surface curvature of the 3D meshes is computed with Multi-Scale Integral Invariants (MSII) and rendered into 2D images. Then, visual feature descriptors are extracted and used in a two-stage registration process: a rough rigid fit is followed by non-rigid fine-tuning on the basis of thin-plate splines (TPS). We compute and visualize all pairwise differences in a set of seal impressions, making outliers, i.e. significantly different impressions, easily visible. To validate our approach, we construct a-priori synthetic deformations between impressions that our method reverses; our method and its parameters are evaluated on the resulting difference. To test real-world applicability, we manufactured two sets of physical seal impressions with a-priori known manufactured differences, against which our method is tested.
Bartosz Bogacz, Sarah Finlayson, Diamantis Panagiotopoulos, Hubert Mara

Backmatter
