DOI: 10.1145/2858036.2858063
research-article
Best Paper

The Effect of Visual Appearance on the Performance of Continuous Sliders and Visual Analogue Scales

Published: 07 May 2016

ABSTRACT

Sliders and Visual Analogue Scales (VASs) are input mechanisms that allow users to specify a value within a predefined range. At a minimum, sliders and VASs typically consist of a line with the extreme values labeled. Additional decorations such as labels and tick marks can be added to give information about the gradations along the scale and to allow more precise and repeatable selections. There is a rich history of research on the effect of labeling in discrete scales (i.e., Likert scales); however, the effect of decorations on continuous scales has not been rigorously explored. In this paper we perform a 2,000-user, 250,000-trial online experiment to study the effects of slider appearance, and find that decorations along the slider considerably bias the distribution of responses received. Using two separate experimental tasks, we explore the trade-offs between bias, accuracy, and speed of use, and propose design recommendations for optimal slider implementations.
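The abstract's core definition of a slider or VAS — an input that maps a selection point along a line to a value in a predefined range — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name and parameters are hypothetical. Note that decorations such as tick marks change only the appearance of the widget, not this underlying mapping, which is what makes their effect on responses a purely visual bias.

```python
def slider_value(click_x, track_left, track_width, vmin=0.0, vmax=100.0):
    """Map a horizontal click position on a slider track to a value
    in the predefined range [vmin, vmax], linearly and clamped."""
    frac = (click_x - track_left) / track_width
    frac = max(0.0, min(1.0, frac))  # clamp clicks outside the track
    return vmin + frac * (vmax - vmin)

# A click one quarter of the way along a 200-px track starting at x=100:
print(slider_value(150, 100, 200))  # 25.0
```

Clicks left of the track clamp to the minimum and clicks right of it to the maximum, matching typical slider behavior.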


Supplemental Material

pn0240-file3.mp4 (mp4, 32.6 MB)


Published in:

CHI '16: Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems
May 2016, 6108 pages
ISBN: 9781450333627
DOI: 10.1145/2858036

      Copyright © 2016 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States



Acceptance Rates

CHI '16 Paper Acceptance Rate: 565 of 2,435 submissions (23%)
Overall Acceptance Rate: 6,199 of 26,314 submissions (24%)
