
Displays

Volume 32, Issue 5, December 2011, Pages 261-267

Direct-touch vs. mouse input for navigation modes of the web map

https://doi.org/10.1016/j.displa.2011.05.004

Abstract

Nowadays the web map (E-map) is becoming a widely used wayfinding tool, but its performance depends on the input device with which it is operated. To investigate its performance in various navigation modes, two input devices were employed, i.e., the mouse and the touch screen. Map websites on the Internet were searched and examined, and three dominant navigation modes in current use were identified: (1) continuous control and continuous display (CCCD), (2) discrete control and continuous display (DCCD), and (3) discrete control and discrete display (DCDD). Experimental interfaces were then designed, and simulated tests were conducted separately with the mouse and the touch screen to evaluate performance. Thirty-six volunteers participated in the experiment; their task completion times and user interface actions (total number of clicks on arrow keys) were analyzed through a two-way analysis of variance (ANOVA) to compare the six combinations of device and navigation mode. In all of the navigation modes, the mouse performed markedly better than the touch screen in terms of task completion time (F(2, 70) = 3.28, p < .001). Moreover, the participants did much better in the CCCD mode than in the other modes whether they used the mouse or the touch screen. These findings will serve our research team as the stepping stone to the development of a navigation mode compatible with both the mouse and the touch screen, and as a reference for further study and practical design of the web map.

Highlights

► Three dominant navigation modes, CCCD, DCCD, and DCDD, were identified from the map websites.
► In all of the navigation modes, the mouse performed markedly better than the touch screen.
► The CCCD mode performed better than the other two modes in terms of both task completion times and user interface actions.

Introduction

With new operating systems such as Windows 7 and iPhone OS introduced to the market, user interfaces that previously relied on the mouse as the input device are expected to shift gradually toward the touch screen. As some studies confirm, the touch screen is characterized by intuitive input, which makes the device much easier for the user to learn and operate [1]. To date, the touch screen has been widely used in kiosks, ticket machines, automatic teller machines (ATMs), and so forth. Thanks to these features of the touch screen, kiosks are used by the public more and more frequently [2]. Kiosks are mainly intended to provide convenient and instant services, such as web maps, cash withdrawals, museum sitemaps, and self-service gas stations [3], [4]. In an unfamiliar environment, the general public will usually turn to a kiosk and access the web map to get familiar with the vicinity. The usability and functionality of the web map have therefore become an important issue.

As the web map is browsed, its navigation, that is, how it is presented, is a key factor influencing the user’s viewing and operation [5]. A well-designed navigation technique can successfully lead the user through the information space of the webpage; furthermore, the user can explore its content by activating various functions [6]. As some studies indicate, a user who is unfamiliar with the conceptual model is inclined to commit operational errors, and as a result easily becomes frustrated and takes less interest in the web map [7]. In view of the above, the designer of a web map faces a momentous task: ensuring that the navigation effectively provides the user with correct cognitive guidance and feedback.

For most people, the mouse is the most common input device. Consequently, nearly all user interfaces, including web maps, base their navigation on the mouse and are designed and operated accordingly. If the mouse is replaced by another input device, such as the touch screen, the user may have difficulty operating the interface and suffer lower efficiency. In the past, several studies compared the functionality of different input devices [8], [9], [10], [11], [12], [13]. Nevertheless, few studies have applied those input devices to in-field, simulated tests of the navigation modes of web maps.

In the first phase of this research, the aim was to understand which navigation modes web maps currently use. For that purpose, map websites were searched, the available web maps were operated and examined, and, based on previous studies, the navigation modes adopted by most of the web maps were identified. In the second phase, the collected navigation modes were analyzed, experimental interfaces were designed, simulated tests were conducted with two input devices (i.e., the mouse and the touch screen), and the operational performances were compared. Operational performance consisted of task completion times and user interface actions recorded while the web maps were tested. This research had two main aims. (1) In each navigation mode, the mouse and the touch screen were used separately as the input device, and their performance results were compared. (2) With each of the two input devices, the navigation modes were tested, and the performance results of the different navigation modes were compared. As the touch screen is likely to become the mainstream input device in the near future, this research offers insight into how input devices and navigation modes influence the operational performance of the web map, as discussed in the following sections.

Input devices, including touch screens, mice, styli, touchpads, pointing sticks, and joysticks, serve as the communication media between users and machines. At present, the mouse remains the most common among them. Many previous studies have compared and analyzed different input devices. One study employed four input devices (the mouse, joystick, step keys, and text keys) to compare their text-selection performance; the findings indicated that the mouse performed better than the other three devices in terms of positioning time, error rate, and movement speed [8]. In addition, to enhance the usability of the input device, some researchers proposed the newly designed Fluid DTMouse, which improves switching between fixed modes, keeps the cursor stable, and enables the user to input accurately [9].

The touch screen has become a dominant trend among input devices, offering the following advantages. (1) Because its control interface overlays the monitor, there is no need for an extra device such as the mouse, which requires a space-occupying carrier and operating environment. (2) Compared with other mobile input devices, the touch screen is much more robust and durable [2]. Despite these advantages, the touch screen is not completely superior to the mouse in terms of operational performance. One study compared the mouse with the touch screen in single-touch mode, using square targets of 1, 4, 16, and 32 pixels per side. When the target was larger than 4 pixels, the selection time needed with the mouse was the same as that needed with the touch screen; when the target was smaller than 4 pixels, the mouse required a shorter selection time than the touch screen [10]. While the touch screen did worse than the mouse in single-touch mode, it did better than the mouse in double-touch or multi-touch mode [11], [12]. To build on the advantages of the touch screen, a new user interface is being developed that operates in multi-touch mode and enables the user to select small targets easily through a menu [13].

Navigation can be described as the task of determining one’s position within the information space and finding a course to the desired information and other related information. Navigation is made up of two elements: wayfinding, a cognitive decision-making process, and travel, the act of moving from one place to another. While navigating the real or virtual world, people constantly collect information, make plans, and move from place to place; wayfinding and travel are therefore inseparable in the process of navigation [14]. Wayfinding means that, in a large-scale virtual environment, the user can move from the present position to another with the aid of familiar landmarks [15].

Generally speaking, people’s spatial knowledge is founded on their daily living environment, including cities and buildings. Such knowledge provides both wayfinding guidance and directional guidance so that suitable spatial behaviors can be performed [16]. Wayfinding is therefore regarded by some researchers as intelligent navigation; in this perspective, wayfinding is the cognitive element of navigation, while the strategic and tactical elements are responsible for the navigational behavior. Regardless of the different definitions, most researchers hold that wayfinding is problematic for users of large-scale virtual environments [15], [17], [18], [19]. Under such circumstances, navigation is intended to help the user explore information spaces that are too large to be conveniently displayed in a single window [6]. The information space is browsed through the user interface (UI), which provides the user with various functions, such as moving the visible range over the information space to view a selected part. The UI components of the web map include interactive elements such as icons, buttons, and menus [20]. For the spatial navigation of a two-dimensional (2D) map, the operation functions mainly comprise panning, zooming, scrolling, and moving [6], [20]. Analyses of how overviews and details are presented show that panning, zooming, and scrolling not only enable the user to view the overview and details in the information space but also offer interface operations on different levels [6], [21].
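To make these operation functions concrete, the following minimal Python sketch models a 2D map viewport. The class and member names (MapViewport, center_x, zoom) are illustrative assumptions for this article, not part of the interfaces examined in the experiment.

    from dataclasses import dataclass

    @dataclass
    class MapViewport:
        # Visible window onto a 2D map (illustrative only).
        center_x: float   # centre of the visible window, in map coordinates
        center_y: float
        zoom: int         # level of detail shown around the same centre

        def pan(self, dx: float, dy: float) -> None:
            # Translate the visible window across the information space.
            self.center_x += dx
            self.center_y += dy

        def zoom_to(self, level: int) -> None:
            # Switch to another level of detail at the same centre point.
            self.zoom = level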

Depending on their control functions, panning and zooming fall into either continuous or discrete control. With continuous control, the user passes through every intermediate position before reaching the destination; with discrete control, the user can jump immediately to the newly selected zoom level or position within the information space [6], [21], [22]. Generally speaking, if the user is familiar with the structure and content of the particular information space, discrete control works faster than continuous control; if the user is unfamiliar with the information space, continuous control does better [22]. As a rule, when the user turns to the information space for navigational assistance, the destination is not known in advance, so he or she must start from the starting point, pass through the transitional points, and reach the destination to complete the search task. Most previous studies of navigational control functions centered on the operations themselves, i.e., zooming, panning, and moving, and did not explore the effects of continuous or discrete control on those operations [14], [15], [20], [21]. In view of this, the main concern of this research is the navigational control functions; its purpose is to determine the effect of different control functions on navigational performance. Because different input devices may also affect the performance of the navigational control functions, the interaction between the input device and the navigational control function is studied further as well.
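As a minimal illustration of this distinction, continuing the hypothetical MapViewport sketch above (this is not the authors’ experimental code), continuous control steps the view through every intermediate position, whereas discrete control jumps straight to the target:

    def pan_continuous(view: MapViewport, tx: float, ty: float, steps: int = 20) -> None:
        # Continuous control: split the translation into small increments so the
        # user sees every intermediate position on the way to the destination.
        dx = (tx - view.center_x) / steps
        dy = (ty - view.center_y) / steps
        for _ in range(steps):
            view.pan(dx, dy)

    def pan_discrete(view: MapViewport, tx: float, ty: float) -> None:
        # Discrete control: move straight to the new position in a single update.
        view.center_x, view.center_y = tx, ty

The same contrast applies to zooming: a continuous zoom passes through the intermediate levels of detail, whereas a discrete zoom switches to the selected level at once.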

Section snippets

Investigation of map websites

In July 2008, the authors used the major search engines Google and Yahoo with the keyword “web map” to search for both Chinese and English map websites. Based on the search results, the top 80 websites were arranged in descending order of relevance. The web maps that adopted either continuous control or discrete control were then singled out, while websites that were out of service or had an unstable connection speed were rejected.

Methodology

This research was aimed at evaluating the operational performances of the three navigation modes combined with the two input devices, i.e., the mouse and the touch screen. As for the strategies and tasks of wayfinding, previous studies found that wayfinding performance varies with task difficulty and wayfinding strategy, and that it decreases as an environment’s complexity increases [23], [24]. The use of different types of technology

Result

In this research, the participants used two input devices, the mouse and the touch screen, to conduct simulated tests in three navigation modes, i.e., CCCD, DCCD, and DCDD. The operational performances, namely task completion times and user interface actions, were collected and compared.

As is shown in Table 4, task completion times and user interface actions were compared through the two-way ANOVA. Regarding task completion times, the interaction between the input device
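The two-way ANOVA referred to above covers the 2 × 3 design of input device (mouse, touch screen) crossed with navigation mode (CCCD, DCCD, DCDD). The sketch below shows how such an analysis could be run in Python with statsmodels; the file name trials.csv and the column names are hypothetical placeholders, not the authors’ data or analysis script.

    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    # Hypothetical long-format data: one row per trial, with the factors
    # device (mouse, touch screen) and mode (CCCD, DCCD, DCDD) and the two
    # dependent measures reported in the paper.
    trials = pd.read_csv("trials.csv")

    for measure in ("completion_time", "ui_actions"):
        model = ols(f"{measure} ~ C(device) * C(mode)", data=trials).fit()
        print(measure)
        print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction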

Discussion

This research operated the mouse and the touch screen separately in the three navigation modes of web maps and explored the difference in task completion times and user interface actions. Regarding task completion times, the touch screen was found to take more time than the mouse. That result agrees with the conclusion reached by other researchers, who conducted a single-touch operation test and reported that the touch screen did worse than the

Conclusion

With the experimental interface of the web map simulated, two input devices were used and the simulated tests in three navigation modes were conducted to investigate the difference in their functional performances. The main findings are presented as follows:

  • (1)

    As is shown by the experiment on the navigation modes of the web maps, the CCCD mode proves to be the most desirable, for the user is kept well aware of the moving direction. Contrarily, the DCDD mode displays the map image in an

References (36)

  • W. Cartwright et al.

    Geospatial information visualization user interface issues

    Cartogr. Geogr. Inform. Sci.

    (2001)
  • C. Gutwin et al.

    Interacting with big interfaces on small screens: a comparison of fisheye, zoom, and panning techniques

    (2004)
  • A. Neumann

    Navigation in space, time and topic

    (2005)
  • D. Norman

    The Psychology of Everyday Things

    (1988)
  • S. Card et al.

    Evaluation of mouse, rate-controlled isometric joystick, step keys, and text keys for text selection on a CRT

    Ergonomics

    (1978)
  • A. Esenther et al.

    Fluid DTMouse: better mouse support for touch-based interactions

    (2006)
  • C. Forlines et al.

    Direct-touch vs. mouse input for tabletop displays

    (2007)
  • K. Kin et al.

    Determining the benefits of direct-touch, bimanual, and multifinger input on a multitouch workstation

    Proceedings of Graphics Interface 2009

    (2009)