DOI: 10.1145/1168149
BELIV '06: Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization
ACM 2006 Proceeding
Publisher:
  • Association for Computing Machinery, New York, NY, United States
Conference:
AVI '06: The International Conference on Advanced Visual Interfaces, Venice, Italy, 23 May 2006
ISBN:
978-1-59593-562-5
Published:
23 May 2006

Abstract

Information visualization systems can be very complex and require evaluation efforts targeted at the component level, the system level, and the work-environment level. Some components can be evaluated with metrics that can be observed or computed (e.g. speed, accuracy, scalability), while others require empirical user evaluation to determine the benefits they provide to human users.

Controlled experiments remain the workhorse of evaluation, but there is a growing sense in the community that information visualization systems need new methods of evaluation, from longitudinal field studies and insight-based evaluation to metrics adapted to the perceptual aspects of visualization and the exploratory nature of discovery. While the overall growth of information visualization is accelerating, the growth of techniques for evaluating systems has been relatively slow. That is true for both usability studies and intrinsic quality metrics. Usability studies still tend to be conducted in an ad hoc manner, focusing on particular systems, addressing only time and error issues, and failing to produce reusable and robust results. Intrinsic quality metrics are even more rare and immature, although defining and assessing them is vital.

The aim of the workshop is to collect and discuss innovative ideas on InfoVis evaluation methods. These include new ways of conducting user studies; the definition and assessment of InfoVis effectiveness through the formal characterization of perceptual and cognitive tasks and insights; and the definition of quality criteria and metrics. Case-study and survey papers are also part of the workshop, since they present useful general guidelines, practical advice, and lessons learned.

SESSION: Challenges with controlled studies
Article
An explorative analysis of user evaluation studies in information visualisation

This paper presents an analysis of user studies from a review of papers describing new visualisation applications and uses these to highlight various issues related to the evaluation of visualisations. We first consider some of the reasons why the ...

Article
Evaluating information visualisations

As more experience is being gained with the evaluation of information visualisation interfaces, weaknesses in current evaluation practice are coming to the fore. This position paper presents an overview of currently used evaluation methods, followed by a ...

SESSION: Lessons learned from case studies
Article
Evaluating visual table data understanding

In this paper, we focus on evaluating how information visualization supports exploration of visual table data. We present a controlled experiment designed to evaluate how the layout of table data affects the user's understanding and exploration ...

Article
Methods for the evaluation of an interactive InfoVis tool supporting exploratory reasoning processes

Developing Information Visualization (InfoVis) techniques for complex knowledge domains makes it necessary to apply alternative methods of evaluation. In the evaluation of Gravi++ we used several methods and studied different user groups. We developed a ...

Article
Evaluating information visualization applications with focus groups: the CourseVis experience

This paper reports our experience of evaluating an application that uses visualization approaches to support instructors in Web based distance education. The evaluation took place in three stages: a focus group, an experimental study, and a semi-...

SESSION: Methodologies: novel approaches and metrics
Article
Strategies for evaluating information visualization tools: multi-dimensional in-depth long-term case studies

After an historical review of evaluation methods, we describe an emerging research method called Multi-dimensional In-depth Long-term Case studies (MILCs) which seems well adapted to study the creative activities that users of information visualization ...

Article
Metrics for analyzing rich session histories

To be most useful, evaluation metrics should be based on detailed observation and effective analysis of a full spectrum of system use. Because observation is costly, ideally we want a system to provide in-depth data collection with allied analyses of ...

Article
Visual quality metrics

The definition and usage of quality metrics for Information Visualization techniques is still an immature field. Several proposals are available but a common view and understanding of this issue is still missing. This paper attempts a first step toward ...

SESSION: Methodologies: heuristics for information visualization
Article
Systematic inspection of information visualization systems

Recently, several information visualization (IV) tools have been produced and there is a growing number of commercial products. To contribute to a widespread adoption of IV tools, it is indispensable that these tools are effective, efficient and ...

Article
Heuristics for information visualization evaluation

Heuristic evaluation is a well known discount evaluation technique in human-computer interaction (HCI) but has not been utilized in information visualization (InfoVis) to the same extent. While several sets of heuristics have been used or proposed for ...

SESSION: Developing benchmarks datasets and tasks
Article
Shakespeare's complete works as a benchmark for evaluating multiscale document navigation techniques

In this paper, we describe an experimental platform dedicated to the comparative evaluation of multiscale electronic-document navigation techniques. One noteworthy characteristic of our platform is that it allows the user not only to translate the ...

Article
Threat stream data generator: creating the known unknowns for test and evaluation of visual analytics tools

We present the Threat Stream Data Generator, an approach and tool for creating synthetic data sets for the test and evaluation of visual analytics tools and environments. We have focused on working with information analysts to understand the ...

Article
A taxonomy of tasks for guiding the evaluation of multidimensional visualizations

The design of multidimensional visualization techniques is based on the assumption that a graphical representation of a large dataset can give a user more insight by providing more intuitive support in the process of exploiting data. When ...

Article
Task taxonomy for graph visualization

Our goal is to define a list of tasks for graph visualization that has enough detail and specificity to be useful to: 1) designers who want to improve their system and 2) evaluators who want to compare graph visualization systems. In this paper, we ...

Article
Just how dense are dense graphs in the real world?: a methodological note

This methodological note focuses on the edge density of real world examples of networks. The edge density is a parameter of interest typically when putting up user studies in an effort to prove the robustness or superiority of a novel graph ...

Contributors
  • Northeastern University
  • University of Maryland, College Park
  • Sapienza University of Rome


    Acceptance Rates

Overall acceptance rate: 45 of 64 submissions, 70%

    Year       Submitted  Accepted  Rate
    BELIV '14  30         23        77%
    BELIV '10  18         12        67%
    BELIV '08  16         10        63%
    Overall    64         45        70%