
Expanded design procedures for learnable, usable interfaces (panel session)

Published: 01 April 1985

Abstract

Designers of interactive computer systems have begun to incorporate a number of good techniques in the design process to ensure that the system will be easy to learn and easy to use. Though not all design projects use all the steps recommended, the steps are well known:

  • Define the tasks the user has to perform,

  • Know the capabilities of the user,

  • Gather relevant hardware/software constraints,

  • From guidelines, design a first prototype,

  • Test the prototype with users,

  • Iterate changes in the design and repeat the tests until the deadline is reached.

In our experience designing a new interface, steps 1 and 4 were the most difficult, and step 5 was the one that took extra time to plan well. We had difficulty defining what would go into a new task, and from broad guidelines we had to develop one specific implementation for our tasks. Furthermore, we knew that we wanted each test to teach us something of value for future designs, so we chose to test pairs of prototypes that differed in only one feature. Choosing which single feature to alter in each pair required careful planning. In what follows, I describe each of these difficulties more fully and show how we approached each in our environment.

Normally, a task is defined as a computer-based analog of an existing task, such as word processing being the computer-based analog of typing. Since we had to build an interface for an entirely new task, we had to invent how the user would think about the task. We had to invent the objects on which the user would operate and then the actions that would be performed on those objects. We had to specify the mental representation in the absence of previous similar tasks.

In our case, we were designing the interface for a communications manager to designate the path to be taken for routing 800-calls to their final destination as a function of time of day, day of week, holidays, percentage distribution, etc. From the large set of known formal representations of data, e.g., lists, pictures, tables, hierarchies, and networks, we found three that seemed to capture sufficient information for our task: a hierarchy (tree structure); a restricted programming language in which there were only IF-THEN-ELSEs and definitions; and a long form to be filled out with all possible ordered combinations of the desired features. We then asked potential users in casual interviews which format they found easiest to understand. It was immediately clear, even from a relatively small number of subjects, that the tree representation was preferred.
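To make the preferred representation concrete, the following is a minimal sketch, not the original system, of how a routing plan might be modeled as a tree: interior nodes branch on an attribute of the call (day type, time of day), and leaves name a destination. All node labels, branch values, and destination names here are hypothetical.

```python
class Node:
    """One node of a routing-plan tree."""

    def __init__(self, label, children=None, destination=None):
        self.label = label                # attribute to branch on, e.g. "day-type"
        self.children = children or {}    # branch value -> child Node
        self.destination = destination    # set only on leaf nodes

    def route(self, call_attrs):
        """Walk the tree using the call's attributes until a leaf is reached."""
        if self.destination is not None:
            return self.destination
        branch = call_attrs[self.label]
        return self.children[branch].route(call_attrs)


# A two-level plan: weekends go to one office, weekdays split by hour.
plan = Node("day-type", {
    "weekend": Node("leaf", destination="Denver office"),
    "weekday": Node("time-of-day", {
        "business-hours": Node("leaf", destination="Chicago office"),
        "after-hours": Node("leaf", destination="answering service"),
    }),
})

print(plan.route({"day-type": "weekday", "time-of-day": "after-hours"}))
```

The same plan could equally be written in the abstract's second candidate representation, a restricted IF-THEN-ELSE language ("IF day-type = weekend THEN Denver office ELSE ..."), which is one reason the two were directly comparable in interviews.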

The second aspect of defining the task involved specifying what actions the user would take on this representation. Since in all interfaces, users have to move about, select an item to work on, enter information, delete information, and change modes (from data entry to command, typically), we looked for these kinds of actions in our task. The actions immediately fell into place, with commands being generated for moving about a tree, entering nodes and branches, etc.
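As an illustration of how those action categories map onto the tree representation, here is a hedged sketch of an editor supporting the kinds of commands the paragraph lists: moving about, entering and deleting nodes, and switching between data-entry and command modes. The command names and internal structure are invented for this example, not taken from the original interface.

```python
class TreeEditor:
    """Sketch of the action set: move, enter, delete, and mode change."""

    def __init__(self):
        self.tree = {"root": []}   # node label -> list of child labels
        self.cursor = "root"       # the node the user is currently on
        self.mode = "command"      # "command" or "data-entry"

    def enter_node(self, label, parent=None):
        """Enter a new node as a child of `parent` (default: the cursor)."""
        parent = parent or self.cursor
        self.tree[parent].append(label)
        self.tree[label] = []

    def delete_node(self, label):
        """Delete a leaf node and detach it from its parent."""
        assert not self.tree[label], "only leaves may be deleted"
        del self.tree[label]
        for children in self.tree.values():
            if label in children:
                children.remove(label)

    def move_down(self, label):
        """Move the cursor to a child of the current node."""
        assert label in self.tree[self.cursor]
        self.cursor = label

    def toggle_mode(self):
        """Switch between command mode and data-entry mode."""
        self.mode = "data-entry" if self.mode == "command" else "command"


ed = TreeEditor()
ed.enter_node("weekday")
ed.enter_node("weekend")
ed.move_down("weekday")
ed.toggle_mode()
```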

After gathering information on who the end users were and what hardware constraints we had, we designed our first prototype. This was our next most involved chore. Our broad guidelines said that we should:

  • Present information on the computer in a representation as close as possible to the user's mental representation.

  • Minimize the long-term and short-term memory loads (e.g. make retrieval of commands and codes easy, give the user clues about where he or she is in a complicated procedure or data structure).

  • Construct a dialog that holds to natural conversational conventions (e.g., make pauses predictable, acknowledge long delays, use English imperative structure in the command syntax).

Our initial design on paper was fairly easy to construct. We followed that, however, with an important analysis step before we built our first prototype. For each part of the design, we constructed an alternative design that seemed to fit within the same constraints and within the guidelines. That is, we identified the essential components of our interface: the representation of the data, the organization of the command sector, the reminders, and the specific command implementations such as how to move around the data representation. For example, in the command sector there are alternative ways to arrange the commands for display: they could be grouped by similar function so that all “move” commands were clustered and all “entry” commands were clustered, etc., or they could be grouped into common sequences, such as those that people naturally follow in initially entering the nodes and branches of the tree structures. Once each component had an alternative, we debated the merits of each. Our first prototype, then, was the result of this first paper design plus the alterations that were generated by this analysis procedure.

The next step entailed testing our design with real users. Since we wanted to test our prototypes so that we learned something useful for our next assignment, we chose to test two prototypes at a time. If we were to learn something from the test, then only one component could differ between the two prototypes. The difficulty arose in deciding which component was to be tested in each pair. For this task, we went back to our initial component-by-component debate about the prototype. For each of the components and its alternative, we scored the choice on three dimensions:

  • whether the test would teach us something (that is, whether the better choice was not already predictable),

  • whether the decision was needed early in the development process, and

  • whether the component would appear again in future design projects.

That is, first, for some alternatives, the better choice was predictable. For example, displaying command names was known to be more helpful than not displaying them. Testing this alternative would not teach us very much. Second, we needed to choose some alternatives early, so that the developers could begin immediately with some preliminary work. For example, our developers needed to know early whether the data would be displayed as a form or a tree so they could set up appropriate data structures. And third, some alternatives would appear again in future design projects. For example, all projects require some way of moving about the data, but few deal directly with trees. Knowledge gained now about the movement function would pay off in the future, whereas how to display trees may not.
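The prioritization just described can be sketched as a simple scoring pass over the candidate component pairs. This is an illustration only: the abstract describes a judgment call, not a formula, and the components and scores below are invented for the example.

```python
def priority(teaches, needed_early, recurs):
    """Score a component/alternative pair on the three dimensions.

    Equal weights are an assumption of this sketch; the original
    process weighed the dimensions by debate, not arithmetic.
    """
    return int(teaches) + int(needed_early) + int(recurs)


alternatives = [
    # (component, teaches us something, needed early, recurs in future designs)
    ("display command names vs. not", False, False, True),
    ("form vs. tree data display",    True,  True,  False),
    ("movement commands",             True,  False, True),
]

# Highest-scoring pair is tested in the first prototype iteration.
ranked = sorted(alternatives,
                key=lambda a: priority(a[1], a[2], a[3]),
                reverse=True)
print(ranked[0][0])
```

After each test, the same pass would be rerun over the updated component list, which matches the iterate-and-reprioritize loop described in the next paragraph.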

Once we prioritized our alternatives on these dimensions, we were able to choose the alternative for the first prototype test. After the test, we found other ideas to incorporate in the next iteration, but went through the same analysis procedure, listing the components, debating alternatives, and prioritizing those to be tested in the next iteration.

In summary, the procedure we followed in designing and testing our prototypes was standard in overall form, flowing from defining the task, user, and constraints; building prototypes; and testing them with users. We differed, however, in three of our steps. We spent important initial time considering the best task representation to display to the user. We analyzed the individual components of our first prototype, generating a design for actual implementation that was more defensibly good than our first paper design.

And, in our iterative testing procedure, we selected pairs of prototypes for test, the pairs differing on only one component of the design. The component for testing was selected according to whether the test would teach us something, whether it was important to decide early in the development process, and whether the component would appear again in designs we encountered in the future. These expanded steps in the design process not only added to our confidence that our early design was reasonably good, but also gave us the data and theory with which to convince others, notably developers and project managers, of the merit of our design. And, the process taught us something of use for our next design project.


Published in:

ACM SIGCHI Bulletin, Volume 16, Issue 4 (April 1985), 201 pages. ISSN: 0736-6906. DOI: 10.1145/1165385.

CHI '85: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (April 1985), 231 pages. ISBN: 0897911490. DOI: 10.1145/317456.

      Copyright © 1985 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States
