DOI: 10.1145/3020165.3020188
research-article
Best Paper

User Behaviour and Task Characteristics: A Field Study of Daily Information Behaviour

Published: 07 March 2017

ABSTRACT

Previous studies investigating task-based search often take the form of lab studies or large-scale log analysis. In lab studies, users typically perform a designed task in a controlled environment, which may not reflect their natural behaviour. While log analysis allows the observation of users' natural search behaviour, strong assumptions often need to be made to associate the unobserved underlying user tasks with log signals.

We describe a field study in which we logged participants' daily search and browsing activities for five days and asked them to self-annotate their search logs with the tasks they conducted, as well as to describe the task characteristics according to a conceptual task classification scheme. This provides a more realistic and comprehensive view of how user tasks are associated with logged interactions than previous log- or lab-based studies, and allows us to explore the complex interplay between task characteristics and their presence in naturalistic tasks, which has not been studied previously.

We find a higher number of queries, a longer timespan, and more task switches than reported in previous log-based studies; 41% of our tasks are zero-query tasks, implying that large amounts of user task activity remain unobserved when the focus is only on query logs. Further, tasks sharing similar descriptions can vary greatly in their characteristics, suggesting that when supporting users with their tasks, it is important to know not only the task they are engaged with but also the user's context within the task.
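To make the zero-query finding concrete: a task counts as zero-query when none of its self-annotated log events is a query. Below is a minimal sketch of that check, assuming a hypothetical event schema with "task_id" and "type" fields; this is an illustration, not the paper's actual pipeline.

```python
from collections import defaultdict

def zero_query_task_fraction(events):
    """Fraction of user-annotated tasks whose logged events contain
    no query at all (only browsing, page views, etc.).

    `events`: iterable of dicts with hypothetical keys "task_id" and
    "type" (e.g. "query", "pageview"); not the paper's actual schema.
    """
    task_event_types = defaultdict(set)
    for event in events:
        task_event_types[event["task_id"]].add(event["type"])

    zero_query = [t for t, types in task_event_types.items()
                  if "query" not in types]
    return len(zero_query) / len(task_event_types) if task_event_types else 0.0
```

A result of 0.41 from such a computation would correspond to the 41% of zero-query tasks reported in the abstract.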



      Reviews

      Emerson Amirhosein Azarbakht

Daily search and browsing activity logs of users, when self-annotated (in contrast to third-party annotations), can give us insights into users' information-seeking behavior. The central idea of the paper is that important information (that is, ground truth) gets lost when log annotation is done by third-party/external annotators, for example, which interactions belong to which task. The idea was to have the users annotate their search logs themselves.

The goals of this exploratory, experiment-based research are stated as follows: "RQ1: From a user-annotated search log with records of rich types of interactions, do we observe similar statistics of task sessions as those observed from an expert-annotated 'query-only' log?" and "RQ2: How do task characteristics relate to each other? And how do these characteristics co-occur within actual web user tasks?"

A browser extension was used to capture 23 users' browser-based searching and browsing activities for five days. The users annotated their own logs every day and were free to delete or exclude content as they wished. Additionally, the users picked their top five to ten tasks and described those tasks along 12 predefined characteristics (table 1), for example, frequency, complexity, length, collaboration level, and satisfaction. The users' task-based information-seeking behavior logs were broken down into three levels of granularity: complex task; subtask (logical session, that is, consecutive queries belonging to the same task); and physical session (all user queries or activities within a time window). Descriptive statistics for each task, for example, mean/median number of queries, timespan, and task boundaries, were then calculated and compared to the related literature. Furthermore, the 12 user-described task characteristics were clustered, and the authors applied correlation analysis, presented in table 4. The paper adds arbitrary interpretations of the found correlations.

The idea of studying user-annotated, task-based information-seeking behavior in its natural habitat is arguably better than externally annotated studies. The related work section is presented concisely and with great attention to nuance in the three major related works that are discussed and referenced throughout the paper. The methodology section is described precisely, in excruciating detail, and refers back to the three related works, which could be discussed in the discussion section rather than in the methodology section. The major weaknesses of the paper are (1) over-dependency on previous work (because of the choice of RQ1) and constant comparisons; and (2) the unsubstantiated accuracy, rather than precision, of the findings and their interpretation. The correlations are interpreted in a causal sense, rather than as mere associations between two variables, which could be spurious. Additionally, each such causal interpretation can be read in the reverse direction using the exact same correlation coefficients. It seems that post hoc explanations are appended and the interpretations are arbitrarily chosen based on intuition.

— Online Computing Reviews Service
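The three-level breakdown the review describes (complex task; subtask as a logical session; physical session bounded by an inactivity window) can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the authors' implementation: the event fields "time" and "task_id" and the 30-minute inactivity threshold are hypothetical.

```python
from datetime import timedelta

# Assumed inactivity threshold; the paper's exact time window is not
# stated in this review, so 30 minutes is an illustrative choice.
SESSION_GAP = timedelta(minutes=30)

def split_physical_sessions(events, gap=SESSION_GAP):
    """Split a time-ordered event log into physical sessions: runs of
    activity separated by gaps shorter than `gap`, regardless of which
    task each event belongs to."""
    sessions, current = [], []
    for event in events:  # each event: {"time": datetime, "task_id": str, ...}
        if current and event["time"] - current[-1]["time"] > gap:
            sessions.append(current)
            current = []
        current.append(event)
    if current:
        sessions.append(current)
    return sessions

def split_subtasks(session):
    """Within one physical session, split into logical sessions
    (subtasks): maximal runs of consecutive events that carry the same
    user-assigned task label."""
    subtasks, current = [], []
    for event in session:
        if current and event["task_id"] != current[-1]["task_id"]:
            subtasks.append(current)
            current = []
        current.append(event)
    if current:
        subtasks.append(current)
    return subtasks
```

Under this breakdown, a task switch is simply a boundary between consecutive subtasks inside one physical session, and the complex-task level groups all events sharing a task label across sessions.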


Published in

CHIIR '17: Proceedings of the 2017 Conference on Human Information Interaction and Retrieval
March 2017
454 pages
ISBN: 9781450346771
DOI: 10.1145/3020165

Conference Chairs: Ragnar Nordlie, Nils Pharo
Program Chairs: Luanne Freund, Birger Larsen, Dan Russell

        Copyright © 2017 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

        Publisher

        Association for Computing Machinery

        New York, NY, United States

        Publication History

        • Published: 7 March 2017

        Permissions

Request permissions for this article from [email protected].


        Qualifiers

        • research-article

        Acceptance Rates

CHIIR '17 paper acceptance rate: 10 of 48 submissions (21%). Overall acceptance rate: 55 of 163 submissions (34%).
