2.1 Challenges in the field
Action teams (Sundstrom
1999) or extreme work teams (Jones and Hinds
2002) in the security domain are highly interdependent and collaborative by nature. Still, effective collaboration in this field seems to be difficult to realize. Berlin and Carlström (2011) study why collaboration is often minimised at an accident scene. Based on observations and semi-structured interviews, they discover that collaboration is often considered an ideal rather than something that is actually carried out. As major reasons for only limited forms of collaboration, they identify information asymmetry, uncertainty and lack of incentives. Smith et al. (2008) argue that it is difficult to consider crime scene examination from a team perspective, as usually several different teams from different organisations need to work together. The work is then centred around the collection of information and evidence in consultation with different people. The effectiveness of the work relies heavily on the efficiency of each individual team, the communication of results and the coordination among the teams.
In the security domain, operational units rely on quick and adequate access and exchange of accurate context-related information (Lin et al.
2004). Quality information can help members of the operational units to resolve problems (Brown
2001) and to facilitate or maintain situational awareness (Straus et al.
2010). There is a mismatch between the information needs of operational units and the ability of ICT to provide the information (Manning
1996; Sawyer and Tapia
2005). Such a mismatch can impact the performance of teams and can ultimately save or cost lives (Jones and Hinds
2002). Bharosa et al. (
2010) discuss challenges and obstacles in sharing and coordinating information during multi-agency disaster response. They consider challenges from an inter- and intra-organisational perspective, as well as from the perspective of individuals. The major challenges identified include conflicting role structures, mismatches between goals and independent projects, a focus on vertical information sharing, information overload, the inability to determine what should be shared and the prioritization of one's own problems. Bharosa et al. (
2010) further identify factors that influence information sharing and coordination, such as improving interaction with and familiarity of other roles, knowledge of other agencies' operations, and information and system quality. Reuter et al. (
2014) examine mobile collaboration practices in crisis management at an inter-organizational level. Their study shows that new informal communication practices with current technology, i.e. mobile phones, need to be derived. Mobile phone calls help to include remote actors in the situation assessment, but verbal communication alone is not enough to facilitate situational awareness. Furthermore, challenges with regard to the information flow during crisis management occur (Militello et al.
2007). Based on case studies, Militello et al. (
2007) identify asymmetric knowledge and experience, barriers to maintaining mutual awareness, uneven workload distribution and disrupted communication as major challenges. For each of these challenges, different recommendations are presented. To overcome asymmetric knowledge, they suggest providing communication tools and training in their usage. To improve mutual awareness, they propose the use of shared displays. To address uneven workload, they suggest assigning roles more clearly and making their responsibilities known across organisations. The latter is also stressed by Drabek and McEntire (2002).
There are some further issues analysed in police teamwork, which are related to our study. Streefkerk et al. (
2008) noticed that police officers often have no overview of availability and location of other team members. As a result, police officers often do not know which of their colleagues are available to handle an incident and incidents may go unattended. Motivated by this observation, they consider team awareness as the major challenge for police team tasks.
The above discussion shows that, though collaboration of different organisational units is desired, several challenges need to be addressed. Among the major challenges are information asymmetry among the different organisational units, the efficiency as well as the limits of verbal communication, the knowledge of the responsibilities of the different organisations and, finally, the situational awareness of the different team members.
2.2 The role of (situational) awareness and information in team collaboration
Human factors research into individual situational awareness originated from the study of military aviation, where pilots interact with highly dynamic, information-rich environments. A widely adopted definition of individual situational awareness (SA) is “the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” (Endsley
1995). SA thus includes the understanding and comprehension of a given environment and situation as context for one's own actions. In this view, SA is seen as a cognitive product of information processing (Salmon et al.
2009). The concept of SA has been used in several other domains such as energy distribution, nuclear power plant operational maintenance, process control, maritime, or tele-operations (Salmon et al.
2008). Still, several researchers argue that a universally accepted definition of SA is yet to emerge (Salmon et al.
2008).
In CSCW research, awareness is similarly an ambiguous term. In general, awareness refers to actors’ taking heed of the context of their joint effort (Schmidt
2002). Awareness in this understanding can be distinguished from notions of attention or focus by its secondary nature. Awareness cannot simply be provided, as the alignment and integration of actions occur seemingly without effort. To achieve this seamless way of collaborating, actors seem to both actively display and monitor each other's actions (Schmidt
2002). In this understanding, awareness is understood as an on-going interpretation of representations (Chalmers
2002). Even though it seems to be more a question of observing and showing certain modalities of action, information sharing is crucial to develop awareness, as it allows teams to manage the process of collaborative working, and to coordinate group or team activities (Dourish and Bellotti
1992). Awareness information therefore plays a mediating role for collaboration and creating shared understanding (Gerosa et al.
2004). However, several different types of awareness can be distinguished (Schmidt
2002): general awareness (Gaver
1991), collaboration awareness (Lauwers et al.
1990), peripheral awareness (Benford et al.
2001; Gaver
1992), background awareness (Bly et al.
1993), passive awareness (Dourish and Bellotti
1992), reciprocal awareness (Fish et al.
1990), mutual awareness (Benford et al.
1994), workspace awareness (Gutwin and Greenberg
2002).
Workspace awareness is defined “as the up-to-the-moment understanding of another person’s interaction with the shared workspace” (Gutwin and Greenberg
2002). Workspace awareness can be considered as a specialized kind of SA that involves a shared workspace and the task of collaboration (Gutwin and Greenberg
2002). Though workspace awareness cannot be compared with the high information load and highly dynamic situations for which the concept of SA is researched, both concepts share important characteristics. For both workspace awareness and SA, people need to gather information from the environment, understand what the gathered information means and predict what it implies for the future. Shared visual spaces provide SA and facilitate conversational grounding (Fussell et al.
2000,
2003). In collaborative environments, visual information about team members and objects of shared interest can support successful collaboration and enable greater SA (Gergle et al.
2013). SA is thus crucial for fluid, natural and successful collaboration, enabling actors to adjust, align and integrate personal activities with the activities of other, distributed, actors (Gutwin and Greenberg
2002).
Many studies show that the quality of communication or information sharing has a relation with team performance (Artman
2000; Pascual et al.
1999; Stammers and Hallam
1985). Artman (
2000) showed that for the development of SA in a team, it is preferable that information is provided sequentially in order to allow time for every team member to develop their own SA. Pascual et al. (
1999) highlight the importance of team members regularly updating each other in order to develop a shared understanding of a situation. As a solution, they propose that coordinating these updates should be an important task of the team leader. Furthermore, Stammers and Hallam (
1985) indicate the need to align the organization of a team, especially with regard to information input and output, to the complexity of the task.
Team effectiveness is often reflected by the degree to which team members engage in processes for sharing information (Bowers et al.
1998), engaging in both verbal and non-verbal communication. Poor SA is often associated with accidents and incidents, and with reduced effectiveness of a mission (Taylor and Selcon
1994). In face-to-face interactions, it seems to be relatively easy to develop SA of other actors' actions. For distributed actors, this becomes more difficult. The technology used might diminish the information one actor perceives compared to a face-to-face situation, as it is more difficult to perceive other actors' body language. When technology is used, the artefacts provided are a source of SA, too. In particular, the change of an existing artefact gives off information (Gutwin and Greenberg
2002). Therefore, when using AR technology, it is necessary to investigate how it may support the development of SA for distributed actors in the security domain and what kind of artefacts it should provide.
Most of the work in the security domain is conducted within teams. People in teams need to act reciprocally; they are interdependent with other team members and share one working environment. To better understand SA within teams, Endsley (
1995) introduces the concept of team SA which is defined as “the degree to which every team member possesses the situation awareness required for his or her responsibilities” (Endsley
1995). According to Endsley and Robertson (
2000), successful team performance requires that individual team members have good SA on their specific tasks, and good team SA depends on team members understanding the meaning of the information exchanged in the team. Endsley and Robertson (
2000) further suggest that team performance is linked to shared goals, the interdependence of team members' actions and the division of labour between team members. Human factors research has further identified the concept of shared SA, defined as "the degree to which team members have the same SA on shared SA requirements" (Endsley and Jones
2001) and distributed SA which is defined as “SA in teams in which members are separated by distance, time and/or obstacles” (Endsley
2015). Endsley (
2015) further points out that despite being distributed “the SA needs of the team members are the same as when they are collocated, but are made much more difficult to achieve”. This distributed SA concept needs to be contrasted with a more systemic understanding of distributed SA, which views “team SA not as a shared understanding of the situation, but rather as an entity that is separate from team members and is in fact a characteristic of the system itself” (Salmon et al.
2008). The latter understanding of distributed SA assigns SA not only to human actors but also technological artefacts (Stanton et al.
2006). With that it contradicts Endsley’s assumption that SA is a uniquely cognitive construct by taking a world view on SA (Salmon et al.
2008).
In summary, supporting SA can improve collaboration as it enables actors to adjust, align and integrate their own activities with those of other distributed actors. In this regard, shared visual spaces and visual information further support successful collaboration and SA. It is an open question whether Augmented Reality is able to provide visual information in such a way that it also supports successful collaboration and SA. To determine this, it is necessary to gain more understanding of SA for teams in the security domain. In the following, we distinguish between individual SA and team SA. However, we do not follow Endsley and Jones (
2001) in their understanding of shared SA as requiring "shared mental models", as this results in a tautology that defines cooperative work by a shared goal and assigns this goal to actors by assessing whether they all act in concert (Schmidt
2011).
2.3 AR systems addressing related challenges
AR systems support distributed collaboration processes in various application domains. To explore the effect of AR systems on collaboration, studies compared classical communication systems with the new support provided by AR. Wang and Dunston (
2011) present an AR-based system for remote collaboration and face-to-face co-located collaboration in the scenario of detecting design errors. Both approaches are studied and compared to a traditional paper-based drawing review method, pointing to the advantage of mixed reality for remote collaboration tasks.
Schnier et al. (
2011) focus on studying the issues around establishing the joint attention toward the same object or referent in a physically co-located collaborative AR system. The experiments involve pairs of users seated face-to-face at a table in a shared physical environment. Each user is equipped with an HMD. Users can grasp physical objects, each having attached an AR visual marker, and pass them from one user to the other during a collaborative design task. The study reveals the difficulties in coordinating participants’ foci of attention. The authors advocate that establishing coordination and joint attention could benefit from adequate support for a participant to access the co-participant’s visual orientation in space.
Gu et al. (
2011) conduct a study on the impact of 3D virtual representations and the use of tangible user interfaces using AR technology. The results indicate that the change from a physically co-located working environment to a virtually co-located scenario encourages the AR users to smoothly move between working on the same tasks and working on different tasks or different aspects of the design process. The findings emphasize the capability of 3D virtual worlds to support awareness during remote collaboration, with no major compromises in communication and representation.
Dong et al. (
2013) present ARVita, an advanced collaborative AR tool with problem-solving capabilities to be applied in classrooms and in professional practice. In ARVita, multiple users with HMDs sit around a table, where they interact with and visualize dynamic simulations of engineering processes, which are overlaid on the surface of the table. The table-based medium allows for natural collaboration among people to quickly exchange ideas using the AR-based support, providing better means for collaborative learning and discussion.
The effect of AR systems on collaboration is in some cases studied using a game-oriented approach. Wichert (
2002) describes a mobile collaborative AR system that uses web technologies. In the collaborative environment, several users wearing HMDs can play a 3D Tetris-like game. The players can be located in the same room but also in different locations. The game setup provides support for studying two types of AR-based collaboration: the co-located collaborative interaction among skilled workers, each having a different view of the AR world, and the indirect interaction with a remote expert who has the same view as the skilled worker. This early paper identifies shared visualization for the remote expert, common and private information exchange, representation of interaction results, and the use of colour, arrows and numbers as key components of an AR system that simulates the collaboration of skilled workers with a remote teacher.
Datcu et al. (
2013) present an AR-based collaborative game relying on free-hand interaction. Here, the game is used to study the effect of AR when supporting complex problem solving between physically co-located and virtually co-located participants. Within the game, the goal of jointly building a tower of coloured blocks represents an approximation of a shared task. Individual expertise is modelled as the possibility to move blocks of a distinct colour and shared expertise is modelled by the possibility of all players to move blocks of the same colour.
Procyk et al. (
2014) propose a shared geocaching system that allows players to see remote locations while holding conversations. The study points to the value of mobile video chat support as an enhancement of shared geocaching experiences. Furthermore, the authors highlight the role of asymmetrical experiences and information exchange as important factors in improving the parallel experiences of users engaged in remote common activities.
The way information is presented within AR has a strong influence on the shared understanding of a problem and the current situation as well as any solution to follow. Ferrise et al. (
2013) use AR to teach maintenance operations by combining instruction manuals with simulation. Here, a skilled remote operator guides a trainee who is equipped with AR technology. The operator can visualize instructions in AR on how the operations should be correctly performed by superimposing visual representations on the real-world product. Shvil, an AR system for collaborative land navigation, overlays visual information related to the explorer onto a scaled physical 3D printout of the terrain at the physical location of the overseer (Li et al.
2014). The collaboration process between the overseer and the local explorer provides live updates on the field explorer's current location and the path to follow.
Nilsson et al. (
2009) present an AR collaboration system that supports placing and modifying event- and organization-specific symbols on a shared digital map associated with a crisis management scenario. Even though the task of creating a shared situational picture also scored well with the standard paper map, the AR-based collaboration allows users to better focus on the task in a less cluttered joint work environment. Team cognition is supported by providing information for joint work, gesturing and joint manipulation of symbols.
Gurevich et al. (
2012) propose TeleAdvisor, a hands-free remote assistance system for assembly that enables a remote helper to give directions to a local user by voice and by projecting information directly into the physical environment of the local worker. A tele-operated robotic arm with an attached pico-projector and video camera directs the remote user towards the point of need and graphically emphasizes, with rectangles, the remote user's view to the local worker. The results highlight the remote helper's ability to control the robotic arm to fully understand the work environment. The findings show that a remote helper prefers to generate graphical representations in the form of free-sketch annotations and pointers. They further indicate that text and icon-based annotations are not used at all during the collaborative work sessions.
Alem et al. (
2011) propose ReMoTe, a remote guiding system for the mining industry that integrates non-mediated hand gesture communication. In ReMoTe, an expert remotely assists a worker, using their hands to point to certain locations and to demonstrate specific manual procedures. The expert's hands are shown to the local worker in the form of virtual hand projections indicating the correct hand actions. The system implements a panoramic view over the local user's workspace to enhance the remote user's ability to maintain an overall awareness of the local worker's activity and workspace.
Streefkerk et al. (
2013) find remote annotations usable and intuitive, concluding that such virtual tags can speed up the trace collection process and can reduce documentation time during collaborative work sessions in forensic investigations. Virtual tags are appreciated for increasing user awareness of the crime scene and are found to decrease the initial orientation effort at the scene. Furthermore, the study of Domova et al. (
2014) shows that instantly synchronized snapshots and annotations in the form of pointers and overlaid drawings led to a general acceptance of the system and provided more efficient means of conveying spatial information. This resulted in lower frustration and better communication between the field worker and the remote expert. The described AR system improves situational awareness by offering a wide field of view, a shared visual space, tracking of the other participant's attention focus, and support for gesturing within the shared visual space. A more expressive and arguably more intuitive interaction with the scene is proposed by a tablet-based system that incorporates a touchscreen interface through which a remote user can navigate a physical environment and create world-aligned annotations (Gauglitz et al.
2014a,
b).
The above discussion provides several examples for the use of AR to support collaboration among users in various domains. The examples provided vary in several aspects. Users are either physically or virtually co-located. They use free-hand or tangible interaction with physical objects. In some cases, users are static; in others, users are mobile. Finally, some AR systems make use of HMDs while others rely on different visualization devices. Common to all examples is the underlying idea to provide information in AR and thereby improve awareness and collaboration.
Based on the considerations above, an AR system in the security domain needs to support virtual annotations for local and remote users to create shared situational awareness in physically distributed security units (Nilsson et al.
2009). Due to the nature and the intensity of activities in the security domain, an AR system further needs to rely on egocentric vision provided by cameras in the HMD rather than on vision from external sensors and on-site projection. Following Gurevich et al. (
2012), an AR system needs to offer annotation tools for remote and local users in combination with marker-less tracking for natural interaction experiences. In contrast to the presented approaches that rely on tablet computing devices, an AR system for the security domain needs to use HMDs, as information can thereby be provided in the direct sight of the users and users can keep their hands free (Wille et al.
2013). Finally, going beyond Domova et al. (
2014), an AR system needs to support asymmetry in media (Voida et al.
2008) and asymmetry in experiences (Procyk et al.
2014) to allow remote users to temporarily decouple from a local user's video stream and focus on details in the provided view.