Information visualization systems can be very complex and require evaluation efforts targeted at the component level, the system level, and the work environment level. Some components can be evaluated with metrics that can be observed or computed (e.g. speed, accuracy, scalability), while others require empirical user evaluation to determine their benefits when used by humans. Controlled experiments remain the workhorse of evaluation, but there is a growing sense in the community that information visualization systems need new methods of evaluation, from longitudinal field studies and insight-based evaluation to other metrics adapted to the perceptual aspects of visualization as well as the exploratory nature of discovery.

While the overall growth of information visualization is accelerating, the growth of techniques for the evaluation of systems has been relatively slow. That is true for both usability studies and intrinsic quality metrics. Usability studies still tend to be addressed in an ad hoc manner, focusing on particular systems, addressing only time and error issues, and failing to produce reusable and robust results. Intrinsic quality metrics are even rarer and more immature, yet it is vital to define and assess them.

The aim of the workshop is to collect and discuss innovative ideas on InfoVis evaluation methods. These include new ways of conducting user studies; the definition and assessment of InfoVis effectiveness through the formal characterization of perceptual and cognitive tasks and insights; and the definition of quality criteria and metrics. Case study and survey papers are also part of the workshop, since they present useful general guidelines, practical advice, and lessons learned.
Proceeding Downloads
An explorative analysis of user evaluation studies in information visualisation
This paper presents an analysis of user studies from a review of papers describing new visualisation applications and uses these to highlight various issues related to the evaluation of visualisations. We first consider some of the reasons why the ...
Evaluating information visualisations
As more experience is being gained with the evaluation of information visualisation interfaces, weaknesses in current evaluation practice are coming to the fore. This position paper presents an overview of currently used evaluation methods, followed by a ...
Evaluating visual table data understanding
In this paper, we focus on evaluating how information visualization supports exploration for visual table data. We present a controlled experiment designed to evaluate how the layout of table data affects the user's understanding and exploration ...
Methods for the evaluation of an interactive InfoVis tool supporting exploratory reasoning processes
Developing Information Visualization (InfoVis) techniques for complex knowledge domains makes it necessary to apply alternative methods of evaluation. In the evaluation of Gravi++ we used several methods and studied different user groups. We developed a ...
Evaluating information visualization applications with focus groups: the CourseVis experience
This paper reports our experience of evaluating an application that uses visualization approaches to support instructors in Web based distance education. The evaluation took place in three stages: a focus group, an experimental study, and a semi-...
Strategies for evaluating information visualization tools: multi-dimensional in-depth long-term case studies
After an historical review of evaluation methods, we describe an emerging research method called Multi-dimensional In-depth Long-term Case studies (MILCs) which seems well adapted to study the creative activities that users of information visualization ...
Metrics for analyzing rich session histories
To be most useful, evaluation metrics should be based on detailed observation and effective analysis of a full spectrum of system use. Because observation is costly, ideally we want a system to provide in-depth data collection with allied analyses of ...
Visual quality metrics
The definition and usage of quality metrics for Information Visualization techniques is still an immature field. Several proposals are available but a common view and understanding of this issue is still missing. This paper attempts a first step toward ...
Systematic inspection of information visualization systems
Recently, several information visualization (IV) tools have been produced and there is a growing number of commercial products. To contribute to a widespread adoption of IV tools, it is indispensable that these tools are effective, efficient and ...
Heuristics for information visualization evaluation
Heuristic evaluation is a well known discount evaluation technique in human-computer interaction (HCI) but has not been utilized in information visualization (InfoVis) to the same extent. While several sets of heuristics have been used or proposed for ...
Shakespeare's complete works as a benchmark for evaluating multiscale document navigation techniques
In this paper, we describe an experimental platform dedicated to the comparative evaluation of multiscale electronic-document navigation techniques. One noteworthy characteristic of our platform is that it allows the user not only to translate the ...
Threat stream data generator: creating the known unknowns for test and evaluation of visual analytics tools
We present the Threat Stream Data Generator, an approach and tool for creating synthetic data sets for the test and evaluation of visual analytics tools and environments. We have focused on working with information analysts to understand the ...
A taxonomy of tasks for guiding the evaluation of multidimensional visualizations
The design of multidimensional visualization techniques is based on the assumption that a graphical representation of a large dataset can give more insight to a user, by providing him/her with more intuitive support in the process of exploiting data. When ...
Task taxonomy for graph visualization
Our goal is to define a list of tasks for graph visualization that has enough detail and specificity to be useful to: 1) designers who want to improve their system and 2) to evaluators who want to compare graph visualization systems. In this paper, we ...
Just how dense are dense graphs in the real world?: a methodological note
This methodological note focuses on the edge density of real world examples of networks. The edge density is a parameter of interest typically when putting up user studies in an effort to prove the robustness or superiority of a novel graph ...
Cited By
- Chen Q, Chen N, Shuai W, Wu G, Xu Z, Tong H and Cao N (2024). Calliope-Net: Automatic Generation of Graph Data Facts via Annotated Node-Link Diagrams, IEEE Transactions on Visualization and Computer Graphics, 30:1, (562-572), Online publication date: 1-Jan-2024.
- Wong P, Kao D, Hao M, Chen C, Lee E, Gupta A, Darvill D, Dill J, Shaw C and Woodbury R (2013). The CZSaw notes case study IS&T/SPIE Electronic Imaging, 10.1117/12.2041318, (901706), Online publication date: 23-Dec-2013.
- Wong P, Kao D, Hao M, Chen C, Sousa Santos B and Dias P (2013). Evaluation in visualization: some issues and best practices IS&T/SPIE Electronic Imaging, 10.1117/12.2038259, (90170O), Online publication date: 23-Dec-2013.
- Jänicke H, Weidner T, Chung D, Laramee R, Townsend P and Chen M (2011). Visual Reconstructability as a Quality Metric for Flow Visualization, Computer Graphics Forum, 10.1111/j.1467-8659.2011.01927.x, 30:3, (781-790), Online publication date: 1-Jun-2011.
- Proceedings of the 2006 AVI workshop on BEyond time and errors: novel evaluation methods for information visualization