
Open Access 01-12-2015

Providing Information on the Spot: Using Augmented Reality for Situational Awareness in the Security Domain

Authors: Stephan Lukosch, Heide Lukosch, Dragoş Datcu, Marina Cidota

Published in: Computer Supported Cooperative Work (CSCW) | Issue 6/2015


Abstract

For operational units in the security domain that work together in teams, it is important to quickly and adequately exchange context-related information to ensure well-working collaboration. Currently, most information exchange is based on oral communication. This paper reports on different scenarios from the security domain in which augmented reality (AR) techniques are used to support such information exchange. The scenarios have been designed with a User Centred Design approach in order to make them as realistic as possible. To support these scenarios, an AR system has been developed and evaluated in two rounds. In the first round, the usability and feasibility of the AR support have been evaluated with experts from different operational units in the security domain. The second evaluation round then focussed on the effect of AR on collaboration and situational awareness within the expert teams. With regard to the usability and feasibility of AR, the evaluation shows that the scenarios are well defined and the AR system can successfully support information exchange in teams operating in the security domain. The second evaluation round showed that AR can especially improve the situational awareness of remote colleagues not physically present at a scene.

1 Introduction

Operational units in the security domain can be considered as action or performing teams (Sundstrom 1999). Sundstrom (1999) describes such teams of highly trained professionals as often facing complex and time-limited assignments with audiences, adversaries or challenging environments, all while being regularly confronted with unpredictable behaviour that requires a quick and effective response. Action teams can further be considered as extreme work teams that are highly interdependent, whose performance can save or cost lives (Jones and Hinds 2002). Action teams are dependent on external support from inside and outside their organization (Sundstrom 1999). For operational teams in the security domain, the external support needs to provide relevant and up-to-date information to facilitate and maintain situational awareness (Straus et al. 2010). A lack of situational awareness is identified as one of the major challenges for supporting mobile collaboration in emergencies (Reuter et al. 2014). However, there is a disparity between the information needs of operational units and the ability of current ICT to provide the information (Manning 1996; Sawyer and Tapia 2005).
In the security domain, operational units rely on quick and adequate access and exchange of accurate context-related information (Lin et al. 2004). Quality information can help members of the operational units to resolve problems (Brown 2001). This is important for such units, as information processing and distribution needs to happen under time pressure. Decisions or choices taken based on provided information generally have a high impact on the further course of the operations and normally cannot be undone. Usually, operational units that work together in teams exchange information orally. The communication is often standardized in order to avoid critical mistakes in comprehension (Leonard et al. 2004). Nevertheless, oral communication, especially under time pressure, can be understood and interpreted differently by the different team members (Van Knippenberg et al. 2004). Furthermore, there might be unequal information distribution amongst team members, as is seen in other crisis scenarios (Militello et al. 2007). As a result, incorrect decisions or choices may be taken, putting the security of the operational units at risk as well as the lives of potentially affected civilians.
Successful communication relies on a foundation of mutual knowledge or common ground (Gergle et al. 2013). Shared visual spaces facilitate and support conversational grounding (Fussell et al. 2000, 2003; Kraut et al. 2003) and thus the development of a common ground (Gergle et al. 2013). Additionally, visual information in the shared visual spaces further facilitates the creation of situational awareness, which, in combination with the conversational grounding, improves collaborative task performance (Gergle et al. 2013). Situational awareness (SA) develops when individuals involved in a certain situation look around, gather information about the situation, make inferences, test their inferences, and draw further inferences from the results (Endsley 1995). In this sense, collaboration and situational awareness do not stand apart from each other. Workspace awareness, i.e. the understanding of another person’s interaction with a shared workspace, is considered a specialized kind of situational awareness (Gutwin and Greenberg 2002). For workspace awareness and SA, people need to gather information from the environment, understand what the gathered information is about and predict what this means for the future. Provided awareness information plays a mediating role for collaboration and creating shared understanding for stakeholders (Gerosa et al. 2004).
Brown (2001) considers information technology in general as a critical support structure for operational units in the security domain, as it supports storing, forwarding, retrieving and distributing organizational information. Information technology, such as shared displays, has the potential to aid in information sharing and a more even distribution of workload (Militello et al. 2007). A study on mobile collaboration support for emergencies revealed that remote team members would not only like to see the situation on site, but also be able to provide information to the local team members to establish SA (Reuter et al. 2014). In our study, we explore whether the visual information in AR impacts the collaboration quality and individual situational awareness of team members in the security domain.
AR systems allow users to see the real world, with virtual objects superimposed upon, or composited with, the real world (Azuma 1997; Azuma et al. 2001), where virtual objects are computer graphic objects that exist in essence or effect, but not formally or actually (Milgram and Kishino 1994). AR systems are not limited to the use of Head-Mounted Devices (HMDs); they mainly have to combine real and virtual objects as previously described, be interactive in real time and register objects in 3D (Azuma 1997). AR systems can be used to establish a common ground during cross-organisational collaboration in dynamic tasks (Nilsson et al. 2009). They can further be used to establish the experience of being practically co-located by means of simulated presence. For example, AR systems have been used to allow experts to spatially collaborate with others at any location in the world without traveling, thereby creating the experience of being virtually co-located, e.g. in the field of crime scene investigation (Poelman et al. 2012). AR systems have also been used to increase social presence in video-based communication (Almeida et al. 2012) or to help in complex assembly tasks (Huang et al. 2013). Such new approaches create new collaborative experiences and allow distributed users to collaborate on spatial tasks, create a shared understanding and establish a common ground.
This paper reports on the evaluation of an AR system that is being developed to promote information exchange as well as situational awareness for teams within the security domain. In the security domain, it is important that team members can focus on the situation at hand and at the same time have their hands available to work on their current task. For that reason, the presented AR system relies on the use of HMDs rather than handheld devices. Although HMDs can cause additional strain for the user, information can be provided in the direct sight of the users and users can keep their hands free (Wille et al. 2013). By adopting an end-user centred approach (Harteveld 2011), different scenarios for using AR to exchange information have been identified together with experts from different operational units in the security domain, i.e. the Dutch police, the Netherlands Forensic Institute (NFI) and the fire brigade of the port of Rotterdam. An AR system supporting these scenarios has been developed. The evaluation was carried out in two rounds. Experts from the operational units in the security domain participated in each evaluation. The first evaluation round focused on the feasibility and usability of the AR system for the different operational units (Datcu et al. 2014). Based on the lessons learned, the AR technology was developed further. A second evaluation round then focused on the effect of AR on collaboration and situational awareness. Both evaluation rounds included scenarios that had been developed closely together with the target group, and a combination of different evaluation methods, such as questionnaires, observations, and de-briefing sessions. This combined approach led to deep insights into the usability and effect of AR technology on the collaboration and situational awareness of teams working in the security domain.
The remainder of this paper is organized as follows: the second section presents related work on challenges for collaboration in the security domain, (situational) awareness and AR systems supporting collaboration. In section three, a usability study is presented, including scenario identification and design. Section four presents the study on collaboration and situational awareness. In section five, we draw our conclusions and outline future work.

2 Problem description and contribution of the study

2.1 Challenges in the field

Action teams (Sundstrom 1999) or extreme work teams (Jones and Hinds 2002) in the security domain are highly interdependent and collaborative by nature. Still, effective collaboration in this field seems to be difficult to realize. Berlin and Carlström (2011) study why collaboration is often minimised at an accident scene. Based on observations and semi-structured interviews, they discover that collaboration is often considered an ideal rather than something that is really carried out. As major reasons for only limited forms of collaboration, they identify information asymmetry, uncertainty and a lack of incentives. Smith et al. (2008) are of the opinion that it is difficult to consider crime scene examination from a team perspective, as usually several different teams from different organisations need to work together. The work is then centred around the collection of information and evidence in consultation with different people. The work effectiveness relies very much on the efficiency of each individual team, the communication of results and the coordination among the teams.
In the security domain, operational units rely on quick and adequate access and exchange of accurate context-related information (Lin et al. 2004). Quality information can help members of the operational units to resolve problems (Brown 2001) and to facilitate or maintain situational awareness (Straus et al. 2010). There is a mismatch between the information needs of operational units and the ability of ICT to provide the information (Manning 1996; Sawyer and Tapia 2005). Such a mismatch can impact the performance of teams and can ultimately save or cost lives (Jones and Hinds 2002). Bharosa et al. (2010) discuss challenges and obstacles in sharing and coordinating information during multi-agency disaster response. They consider challenges from an inter- and intra-organisational perspective, as well as the perspective of individuals. Major challenges are identified as conflicting role structures, a mismatch between goals and independent projects, a focus on vertical information sharing, information overload, the inability to determine what should be shared, and the prioritization of one's own problems. Bharosa et al. (2010) further identify factors that influence information sharing and coordination, such as improving interaction and familiarity with other roles, knowledge of other agencies' operations, or information and system quality. Reuter et al. (2014) examine mobile collaboration practices in crisis management at an inter-organizational level. Their study shows that new informal communication practices with current technology, i.e. mobile phones, need to be developed. Mobile phone calls help to include remote actors in the situation assessment, but verbal communication alone is not enough to facilitate situational awareness. Furthermore, challenges with regard to information flow during crisis management occur (Militello et al. 2007). Based on case studies, Militello et al. (2007) identify asymmetric knowledge and experience, barriers to maintaining mutual awareness, uneven workload distribution, and disrupted communication as major challenges. For each of these challenges, different recommendations are presented. To overcome asymmetric knowledge, they suggest providing communication tools and training in their usage. To improve mutual awareness, they propose the use of shared displays. To address uneven workload, they suggest assigning roles more clearly and making their responsibilities known across organisations. The latter is also stressed by Drabek and McEntire (2002).
There are some further issues analysed in police teamwork which are related to our study. Streefkerk et al. (2008) noticed that police officers often have no overview of the availability and location of other team members. As a result, police officers often do not know which of their colleagues are available to handle an incident, and incidents may go unattended. Motivated by this observation, they consider team awareness as the major challenge for police team tasks.
The above discussion shows that, though collaboration of different organisational units is desired, several challenges need to be addressed. Among the major challenges are information asymmetry among the different organisational units, the efficiency as well as the limits of verbal communication, the knowledge of the responsibilities of the different organisations and, finally, the situational awareness of the different team members.

2.2 The role of (situational) awareness and information in team collaboration

Human factors research into individual situational awareness originated from the study of military aviation, where pilots interact with highly dynamic, information-rich environments. A widely adopted definition of individual situational awareness (SA) is “the perception of the elements in the environment within a volume of time and space, the comprehension of their meaning, and the projection of their status in the near future” (Endsley 1995). SA thus includes the understanding and comprehension of a given environment and situation, as context for one’s own actions. In this view, SA is seen as a cognitive product of information processing (Salmon et al. 2009). The concept of SA has been used in several other domains such as energy distribution, nuclear power plant operational maintenance, process control, maritime operations, or tele-operations (Salmon et al. 2008). Still, several researchers argue that a universally accepted definition of SA is yet to emerge (Salmon et al. 2008).
In CSCW research, awareness is similarly an ambiguous term. In general, awareness refers to actors’ taking heed of the context of their joint effort (Schmidt 2002). Awareness in this understanding can be distinguished from notions of attention or focus by its secondary nature. Awareness cannot be provided, as the alignment and integration of actions occur seemingly without effort. For achieving this seamless way of collaboration, actors seem to both actively display and monitor each other’s actions (Schmidt 2002). In this understanding, awareness is understood as an on-going interpretation of representations (Chalmers 2002). Even though it seems to be more a question of observing and showing certain modalities of action, information sharing is crucial to develop awareness, as it allows teams to manage the process of collaborative working, and to coordinate group or team activities (Dourish and Bellotti 1992). Awareness information therefore plays a mediating role for collaboration and creating shared understanding (Gerosa et al. 2004). However, several different types of awareness can be distinguished (Schmidt 2002): general awareness (Gaver 1991), collaboration awareness (Lauwers et al. 1990), peripheral awareness (Benford et al. 2001; Gaver 1992), background awareness (Bly et al. 1993), passive awareness (Dourish and Bellotti 1992), reciprocal awareness (Fish et al. 1990), mutual awareness (Benford et al. 1994), and workspace awareness (Gutwin and Greenberg 2002).
Workspace awareness is defined “as the up-to-the-moment understanding of another person’s interaction with the shared workspace” (Gutwin and Greenberg 2002). Workspace awareness can be considered as a specialized kind of SA that involves a shared workspace and the task of collaboration (Gutwin and Greenberg 2002). Though workspace awareness cannot be compared with the high information load or highly dynamic situations for which the concept of SA is researched, both concepts share important characteristics. For workspace awareness and SA, people need to gather information from the environment, understand what the gathered information is about and predict what this means for the future. Shared visual spaces provide SA and facilitate conversational grounding (Fussell et al. 2000, 2003). In collaborative environments, visual information about team members and objects of shared interest can support successful collaboration and enables greater SA (Gergle et al. 2013). SA is thus crucial for fluid, natural and successful collaboration, allowing actors to adjust, align and integrate personal activities with the activities of other, distributed, actors (Gutwin and Greenberg 2002).
Many studies show that the quality of communication or information sharing has a relation with team performance (Artman 2000; Pascual et al. 1999; Stammers and Hallam 1985). Artman (2000) showed that for the development of SA in a team, it is preferable that information is provided sequentially in order to allow time for every team member to develop their own SA. Pascual et al. (1999) highlight the importance of regularly updating each other in a team, to develop a shared understanding of a situation. As a solution, they propose the coordination of the updates as being an important task of a team leader. Furthermore, Stammers and Hallam (1985) indicate the need to align the organization of a team, especially with regard to information input and output, to the complexity of the task.
Team effectiveness is often reflected by the degree to which team members engage in processes for sharing information (Bowers et al. 1998) through both verbal and non-verbal communication. Poor SA is often associated with accidents and incidents, and with reduced effectiveness of a mission (Taylor and Selcon 1994). In face-to-face interactions, it seems to be relatively easy to develop SA of other actors’ actions. For distributed actors, this becomes more difficult. The technology used might diminish the information one actor perceives, compared to a face-to-face situation, as it is more difficult to perceive other actors’ body language. When technology is used, the artefacts provided are a source of SA, too. Especially the change of an existing artefact gives off information (Gutwin and Greenberg 2002). Therefore, when using AR technology, it is necessary to investigate how it can support the development of SA for distributed actors in the security domain and what kind of artefacts to provide.
Most of the work in the security domain is conducted within teams. People in teams need to act reciprocally; they are interdependent with other team members and share one working environment. To better understand SA within teams, Endsley (1995) introduces the concept of team SA, which is defined as “the degree to which every team member possesses the situation awareness required for his or her responsibilities” (Endsley 1995). According to Endsley and Robertson (2000), successful team performance requires that individual team members have good SA on their specific task and that good team SA depends on team members understanding the meaning of the information exchanged in the team. Endsley and Robertson (2000) further suggest that team performance is linked to shared goals, the interdependence of team member actions and the division of labour between team members. Human factors research further identified the concepts of shared SA as “the degree to which team members have the same SA on shared SA requirements” (Endsley and Jones 2001) and distributed SA, which is defined as “SA in teams in which members are separated by distance, time and/or obstacles” (Endsley 2015). Endsley (2015) further points out that despite being distributed “the SA needs of the team members are the same as when they are collocated, but are made much more difficult to achieve”. This distributed SA concept needs to be contrasted with a more systemic understanding of distributed SA, which views “team SA not as a shared understanding of the situation, but rather as an entity that is separate from team members and is in fact a characteristic of the system itself” (Salmon et al. 2008). The latter understanding of distributed SA assigns SA not only to human actors but also to technological artefacts (Stanton et al. 2006). With that, it contradicts Endsley’s assumption that SA is a uniquely cognitive construct by taking a world view on SA (Salmon et al. 2008).
In summary, supporting SA can improve collaboration as it enables actors to adjust, align and integrate their own activities with those of other distributed actors. In this relation, shared visual spaces and visual information further help to support successful collaboration and SA. It is an open question whether Augmented Reality is able to provide visual information in such a way that it also supports successful collaboration and SA. To determine this, it is necessary to gain more understanding of SA for teams in the security domain. In the following, we distinguish between individual SA and team SA. However, we do not follow Endsley and Jones (2001) in their understanding of shared SA that requires “shared mental models”, as this ends up in a tautology that defines cooperative work by a shared goal and assigns this to actors by assessing whether they all act in concert (Schmidt 2011).

2.3 AR systems addressing related challenges

AR systems support distributed collaboration processes in various application domains. To explore the effect of AR systems on collaboration, studies compared classical communication systems with the new support provided by AR. Wang and Dunston (2011) present an AR-based system for remote collaboration and face-to-face co-located collaboration in the scenario of detecting design errors. Both approaches are studied and compared to a traditional paper-based drawing review method, pointing to the advantage of mixed-reality for remote collaboration tasks.
Schnier et al. (2011) focus on studying the issues around establishing the joint attention toward the same object or referent in a physically co-located collaborative AR system. The experiments involve pairs of users seated face-to-face at a table in a shared physical environment. Each user is equipped with an HMD. Users can grasp physical objects, each having attached an AR visual marker, and pass them from one user to the other during a collaborative design task. The study reveals the difficulties in coordinating participants’ foci of attention. The authors advocate that establishing coordination and joint attention could benefit from adequate support for a participant to access the co-participant’s visual orientation in space.
Gu et al. (2011) conduct a study on the impact of 3D virtual representations and the use of tangible user interfaces using AR technology. The results indicate that the change from a physically co-located working environment to a virtual co-located scenario encourages the AR users to smoothly move between working on the same tasks and working on different tasks or different aspects of the design process. The findings emphasize the capability of 3D virtual worlds to support awareness during remote collaboration, with no major compromises for the communication and representation.
Dong et al. (2013) present ARVita, an advanced collaborative AR tool with problem-solving capabilities to be applied in the classroom and in professional practice. In ARVita, multiple users with HMDs sit around a table, where they interact with and visualize dynamic simulations of engineering processes, which are overlaid on the surface of the table. The table-based medium allows for natural collaboration among people to quickly exchange ideas using the AR-based support, which provides better means for collaborative learning and discussion.
The effect of AR systems on collaboration is in some cases studied using a game-oriented approach. Wichert (2002) describes a mobile collaborative AR system that uses web technologies. In the collaborative environment, several users wearing HMDs can play a 3D Tetris-like game. The players can be located in the same room but also in different locations. The game setup provides support for studying two types of AR-based collaboration: the co-located collaborative interaction with skilled workers, each having a different view of the AR world, and the indirect interaction with a remote expert who has the same view as the skilled worker. This early paper identifies shared visualization for the remote expert, common and private information exchange, representation of interaction results, and the use of colour, arrows and numbers as key components of an AR system that simulates the collaboration of skilled workers with a remote teacher.
Datcu et al. (2013) present an AR-based collaborative game relying on free-hand interaction. Here, the game is used to study the effect of AR when supporting complex problem solving between physically co-located and virtually co-located participants. Within the game, the goal of jointly building a tower of coloured blocks represents an approximation of a shared task. Individual expertise is modelled as the possibility to move blocks of a distinct colour and shared expertise is modelled by the possibility of all players to move blocks of the same colour.
Procyk et al. (2014) propose a shared geocaching system that allows players to see remote locations while holding conversations. The study points to the value of mobile video chat support as an enhancement of shared geocaching experiences. Furthermore, the authors highlight the role of the asymmetrical experiences and information exchange as important factors to improve parallel experiences of users who are engaged in remote common activities.
The way information is presented within AR has a strong influence on the shared understanding of a problem and the current situation as well as any solution to follow. Ferrise et al. (2013) use AR to teach maintenance operations by combining instruction manuals with simulation. Here, a skilled remote operator guides a trainee who is equipped with AR technology. The operator can visualize instructions in AR on how the operations should be correctly performed by superimposing visual representations on the real-world product. Shvil, an AR system for collaborative land navigation, overlays visual information related to the explorer onto a scaled physical 3D printout of the terrain, at the physical location of the overseer (Li et al. 2014). The collaboration process between the overseer and the local explorer provides live updates on the current location and the path the field explorer has to follow.
Nilsson et al. (2009) present an AR collaboration system that supports placing and modifying event and organization-specific symbols on a shared digital map associated to a crisis management scenario. Even though the task of creating a shared situational picture scored well with the paper map standard, the AR-based collaboration allows users to better focus on the task in a less-cluttered joint work environment. Team cognition is supported by providing information for joint work, gesturing and joint manipulation of symbols.
Gurevich et al. (2012) propose TeleAdvisor, a hands-free remote assistance system for assembly tasks that enables a remote helper to give directions to a local user by voice and by projecting information directly into the physical environment of the local worker. A tele-operated robotic arm with an attached pico-projector and video camera is directed by the remote helper towards the point of need and graphically emphasizes, with rectangles, the remote helper's view to the local worker. The results highlight the value of the remote helper's ability to control the robotic arm to fully understand the work environment. The findings show that remote helpers prefer to generate graphical representations in the form of free sketch annotations and pointers. They further indicate that text and icon-based annotations were not used at all during the collaborative work sessions.
Alem et al. (2011) propose ReMoTe, a remote guiding system that integrates non-mediated hand gesture communication in the mining industry. In ReMoTe, an expert remotely assists a worker, using the hands to point to certain locations and to show specific manual procedures. The expert's hands are shown to the local worker in the form of virtual hand projections indicating the correct hand actions. The system implements a panoramic view of the local user's workspace to enhance the remote user's ability to maintain an overall awareness of the local worker's activity and workspace.
Streefkerk et al. (2013) find remote annotations usable and intuitive, concluding that such virtual tags can speed up the trace collection process and can reduce the time for documentation during collaborative work sessions in forensic investigations. Virtual tags were appreciated for increasing the user's awareness of the crime scene and were found to decrease the initial orientation requirements at the scene. Furthermore, the study of Domova et al. (2014) shows that instantly synchronized snapshots and annotations in the form of pointers and overlaid drawings led to a general acceptance of the system and provided more efficient means of conveying spatial information. This resulted in lower frustration and better communication between the field worker and the remote expert. The described AR system improves situational awareness by offering a wide field of view, a shared visual space, tracking of the other participant's attention focus, and support for gesturing within the shared visual space. A more expressive and arguably more intuitive interaction with the scene is offered by a tablet-based system that incorporates a touchscreen interface through which a remote user can navigate a physical environment and create world-aligned annotations (Gauglitz et al. 2014a, b).
The above discussion provides several examples for the use of AR to support collaboration among users in various domains. The examples provided vary in several aspects. Users are either physically or virtually co-located. They use free-hand or tangible interaction with physical objects. In some cases, users are static. In others, users are mobile. Finally, some AR systems make use of HMDs while others rely on different visualization devices. Common to all examples is the underlying idea to provide information in AR and thereby improve awareness and collaboration.
Based on the considerations above, an AR system in the security domain needs to support virtual annotations for local and remote users to create shared situational awareness in physically distributed security units (Nilsson et al. 2009). Due to the nature and the intensity of activities in the security domain, an AR system further needs to rely on egocentric vision provided by cameras in the HMD rather than on vision from external sensors and on-site projection. Following Gurevich et al. (2012), an AR system needs to offer annotation tools for remote and local users in combination with marker-less tracking for natural interaction experiences. In contrast to the presented approaches that rely on tablet computing devices, an AR system for the security domain needs to use HMDs, as information can thereby be provided in the direct sight of the users while they keep their hands free (Wille et al. 2013). Finally, compared to Domova et al. (2014), an AR system needs to support asymmetry in media (Voida et al. 2008) and asymmetry in experiences (Procyk et al. 2014) to allow remote users to temporarily decouple from a local user's video stream and focus on details in the provided view.

3 Usability study

This section and the next describe two different studies. The first study, presented in this section, focuses on the usability and feasibility of an AR system in the security domain (Datcu et al. 2014). The second study builds upon the findings of the first and reports on the effect of an AR system on team SA and collaboration. With this step-by-step approach, we first explore how AR can be used in distributed teams in general, and secondly show how this set-up is applicable to foster team SA. The studies were conducted with future users from the security domain in highly realistic scenarios.

3.1 Scenario design

In order to test the AR technology and to gather insights into its usability for real fieldwork in emergency teams, it is important to develop highly realistic scenarios. Scenarios provide hands-on experiences with real-life problem solving tasks (Niehaus and Riedl 2009) in safe experimental environments. With such scenarios, realistic situations can be simulated in order to gather deep insights (Schön 1983).
Scenarios show aspects of games: they involve play, take place within a defined location, are limited in time, and follow specific rules (Brandt 2006). Earlier design experiences with operational units in the security domain (Lukosch et al. 2014) show that by using the Triadic Game Design (TGD) philosophy (Harteveld 2011), playful, meaningful and realistic scenarios can be identified. TGD (Harteveld 2011) is an end-user oriented design approach, distinguishing three equally important components: Play, Meaning, and Reality. TGD emphasizes that all three aspects have to be balanced within a design in order to develop a valid, meaningful, and engaging game experience.
During a half-day workshop, in which 12 members of 4 different operational units participated, 3 different scenarios were identified. The TGD philosophy was used as a guideline for the workshops. The three elements Play (P), Meaning (M), and Reality (R) were addressed while defining the scenarios. Together with the experts from the security domain, we held a structured brainstorm session, in which we first defined the necessary elements of reality (R) needed for the test scenarios. It was soon clear that highly realistic scenarios with a realistic amount of stress and a realistic story line would be needed in order to explore the feasibility of the AR technology. Thus, the reality aspect addresses all circumstances that are derived from real-life situations of emergency teams, such as realistic communication means, physical attributes at the scene and clothing worn during the test. Secondly, the meaning (M) aspect of the scenarios was addressed by defining clear measures of the usability of the AR technology as the aim of this study. Thirdly, within the play (P) aspect, we formulated which kinds of actions and decisions are possible and required within the scenario, but also which procedures and protocols would define the ‘rules’ of the scenario.
The scenarios focus on tasks for individual operational units. Their main purpose is to introduce the AR system as well as evaluate its feasibility and usability. In all 3 scenarios, the AR technology is used to establish virtual co-location. Virtual co-location entails that people are virtually present at any place of the world and interact with others that are physically present in another location by using AR techniques. Figure 1 illustrates virtual co-location of two policemen. A local policeman wearing an HMD (see Figure 1 (left)) is connected to a remote colleague (see Figure 1 (right)). By streaming the video captured from the camera in the HMD, the remote colleague can see what the local policeman is seeing and provide additional information on the situation in the display of the HMD to the local colleague. In the scenarios, interaction is thus limited to oral communication and the remote colleague providing additional information on the situation. The following sections describe the three scenarios identified and indicate the different elements Play (P), Meaning (M), and Reality (R) of the TGD philosophy.

3.1.1 VIP protection

A policeman equipped with a head-mounted device (HMD) investigates a ‘safe house’ in which a witness needs to be safely accommodated (R). This policeman shares the local view as recorded from a camera in the HMD with a remote colleague (R). While the local policeman investigates the safe house, the remote colleague has the task to highlight suspect objects in the house and point out possible emergency exits by augmenting the view of the local policeman. The environment can be augmented by placing geometric shapes, text or arrows in 3D (P). The local policeman has to support the remote colleague in investigating the house (M). For the scenario, the training location needs to be prepared with suspicious objects, e.g. a suitcase, that can be identified. Additionally, audio communication among the policemen needs to be established (R).

3.1.2 Forensic investigation

A forensic investigator arrives at a severe crime scene. Wearing an HMD, the investigator shares the local view with a remote colleague (R). The remote colleague has the task to point the local colleague to possible evidence, take pictures of evidence, support the preparation of 3D laser scans, and mark areas at the scene that are to be avoided. For that purpose, the remote colleague can augment the view of the local investigator with virtual laser scanning stickers, resizable geometric shapes, arrows as well as text (P). During the scenario, the local investigator has the task to replace the virtual laser scanning stickers with real ones, stay clear of marked areas and support the remote colleague in investigating the scene (M). For the scenario, the training location needs to be prepared with mockup blood patterns, mockup evidence, e.g. a gun or knife, as well as evidence that is to be avoided, e.g. a mockup dead body. Furthermore, it is necessary to establish audio communication among the investigators (R).

3.1.3 Domestic violence

A team of 2 policemen arrives at a scene of domestic violence (R). One of the policemen wears an HMD and shares the local view with a remote colleague. The remote colleague can provide instructions, provide information on the case and the persons present, take pictures and highlight possible evidence. For that purpose, the remote colleague can augment the view of the local policeman with virtual index cards showing the necessary information, resizable geometric shapes, arrows as well as text. For the index cards, the remote policeman can indicate different urgency levels by surrounding the index cards with either a green, yellow or red frame (P). The local policeman wearing the HMD needs to talk to people present at the scene, follow the instructions of and support the remote colleague in investigating the scene, as well as orally share received information with the second local colleague (M). For the scenario, the training location needs to be prepared with possible evidence, such as a broken vase, a knife or a gun. Additionally, two actors need to play the case of domestic violence, and audio communication among the policemen needs to be established (R).

3.2 Participants

Eleven policemen and inspectors from 3 national Dutch security institutions participated in the usability study, playing roles in the 3 scenarios: VIP protection (see Section 3.1.1), forensic investigation (see Section 3.1.2), and domestic violence (see Section 3.1.3). 4 of the participants were involved in the design of the scenarios. The rest of the participants were chosen based on their availability on the day of the experiment. None of the participants had used our AR system before. All experiments took place indoors at a real training environment belonging to the Dutch police in Leusden, The Netherlands. For each experiment, 2 participants are required: the local person who wears the HMD and the remote person in front of a laptop. The local and remote persons are situated in different physical locations (but in the same house) and are connected via a local network.

3.3 Materials

In order to investigate the usability of AR support for security teams, each participant filled in a questionnaire (see Table 1) after the experiment. The questionnaire consists of 16 closed and 8 open questions on the usability of the system, as well as its ability to support information exchange. The questionnaire is based on the TGD approach and has already been used in studies conducted in the game design field (Bekebrede 2010; Harteveld 2011). TGD is also used as background for the survey to investigate whether the three aspects of reality, meaning and play had been addressed with this set-up of a test scenario, and whether the AR technology was able to support a well-balanced scenario. Endsley's conceptualization of situational awareness (Endsley 1988) was used to explore the aspects of situational awareness as a starting point for the second study. An interview round concluded the evaluation.
Table 1
Questionnaire on the usability of the AR.

3.4 Procedure

All participants of the experiment were given an oral briefing on the goal of the experiment. Each participant knew the designed scenarios due to earlier written communication or participation in the design session of the scenarios. The VIP protection scenario was played a total of 4 times with the participants, alternating the roles of the local and remote colleague. The forensic investigation scenario was played 2 times. Again, the participants changed their role from one round to the other. Finally, the domestic violence scenario was played twice with the participants alternating their roles.
Following the description of the scenarios, only the remote participant was able to manipulate the virtual content through a classical 2D user interface, while the local participant could only view it. For each of the 3 scenarios, the user interface offered different functionality for the remote user.

3.5 Distributed Collaborative Augmented Reality Environment (DECLARE)

We have developed a framework named DECLARE (DistributEd CoLlaborative Augmented Reality Environment). DECLARE is based on a centralized architecture for data communication to support the virtual co-location of users. DECLARE consists of four major components (see Figure 2):
1. Local user AR support: A local user wears an optical see-through HMD. The video captured by the HMD camera is sent to the other components of DECLARE. Augmented content is displayed via the 3D user interface in the 3D display of the HMD.
2. Remote user AR support: The user interface for remote users runs on a desktop computer or laptop. A remote user interacts with DECLARE by using a keyboard and a standard mouse device.
3. Localization and mapping: The localization and mapping component is based on an implementation of RDSLAM (Robust Dynamic Simultaneous Localization And Mapping) (Tan et al. 2013) provided by the developers of RDSLAM.
4. Shared memory space: All DECLARE components communicate through a shared memory space. For the video stream from a local user, a synchronization mechanism is implemented in the shared memory, ensuring that the same video frame is shown to the local user, the remote user and the localization and mapping component simultaneously. If one component disconnects temporarily, the video synchronization is automatically redone for the next work session. Updates of the HMD camera position and orientation and the manual annotations made by the users are aligned in time and space with the video stream and its content.
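The paper describes this synchronization only at the architectural level. As a purely illustrative sketch, the following Python code shows how such a frame-synchronized shared memory space could be organized; all class and method names are hypothetical and not part of DECLARE's published design.

```python
# Minimal sketch of a shared memory space that keeps the local user, the
# remote user and the localization component on the same video frame.
# Hypothetical names; DECLARE's actual implementation is not published here.
import threading

class SharedMemorySpace:
    def __init__(self):
        self._cond = threading.Condition()
        self._frame = None        # latest video frame from the HMD camera
        self._frame_id = -1
        self._annotations = []    # virtual objects, aligned to a frame id

    def publish_frame(self, frame):
        """Called by the local user's component for every captured frame."""
        with self._cond:
            self._frame_id += 1
            self._frame = frame
            self._cond.notify_all()   # wake all components waiting for a frame

    def wait_for_frame(self, last_seen_id):
        """Called by the remote UI and the SLAM component: blocks until a
        newer frame exists, so all components process the same frame."""
        with self._cond:
            while self._frame_id <= last_seen_id:
                self._cond.wait()
            return self._frame_id, self._frame

    def add_annotation(self, annotation):
        """Annotations are stored with the current frame id, keeping them
        aligned in time with the video stream."""
        with self._cond:
            self._annotations.append((self._frame_id, annotation))
```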
 

3.5.1 Localization and mapping

A key component of DECLARE is based on RDSLAM, a real-time monocular SLAM (Simultaneous Localization And Mapping) system that can robustly work in dynamic environments (Tan et al. 2013). In DECLARE, the RDSLAM component can run on a dedicated separate computer or on the computer of one of the users, either the remote or the local one.
The RDSLAM component receives the video frames from the local user's HMD camera. In order to perform mapping and tracking of the physical environment of the local user, an initialization phase is required by RDSLAM. The beginning and the end of the initialization are set by the remote user by pressing the spacebar twice. The local user has to move the camera of the HMD horizontally, from left to right; during this process a 3D coordinate system is set, relative to which all the coordinates of the tracked points will be computed.
Based on the video frames, the RDSLAM algorithm computes for each frame the parameters of the camera's position and orientation together with a sparse cloud of 3D tracked points. In each frame, there may also be invalid points (a point may become invalid due to occlusion, illumination changes or position variation). If their number increases too much, RDSLAM does not recognize the scene and the message CAMERA LOST appears on the screen. In such situations, the local user can move back to a previous position until the current frame is recognized again. The tracked points are essential for DECLARE, as they connect the augmented world to the physical world and make it possible to superimpose virtual objects on the real world.
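To make this behaviour concrete, the following sketch shows the kind of per-frame check that could trigger the CAMERA LOST message. The threshold and all names are hypothetical; the actual criterion is internal to RDSLAM (Tan et al. 2013).

```python
# Illustrative sketch of the per-frame tracking result and the "CAMERA LOST"
# condition described above. Field names and threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class FrameTracking:
    position: tuple        # camera position (x, y, z) for this frame
    orientation: tuple     # camera orientation, e.g. a quaternion
    points_3d: list        # sparse cloud of valid tracked 3D points
    invalid_points: int    # points lost to occlusion, illumination or motion

LOST_RATIO = 0.6  # hypothetical threshold for declaring the scene lost

def camera_lost(result: FrameTracking) -> bool:
    total = len(result.points_3d) + result.invalid_points
    return total == 0 or result.invalid_points / total > LOST_RATIO

# When camera_lost(...) is True, "CAMERA LOST" is displayed and the local
# user moves back towards a previously recognized position.
```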

3.5.2 Remote user AR support

The remote user receives the video captured from the camera in the local user's HMD and can view the video on a desktop computer or laptop. Using a classical 2D graphical user interface with a menu of buttons positioned in the left part of the screen (see Figures 3, 4 and 5), the remote user can perform different actions in the shared virtual space:
1. Taking pictures with the HMD camera
2. Placing virtual objects that are fixed in one position in the user interface
3. Placing virtual objects that are superimposed on the real world using tracking points provided by the RDSLAM component
Fraser et al. (1999) showed that indicating the field of view for distributed users in a virtual reality environment supports localization and coordination of tasks. For that purpose, the transparent rectangle in the middle of the image (see Figures 3, 4 and 5) represents the field of view of the currently used HMD. Virtual objects in this transparent area are visible to the local user wearing the HMD. Thereby, the transparent area makes the remote user aware of which virtual objects can currently be seen by the user wearing the HMD. It further supports the communication between the local and remote user about the virtual content.
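As a minimal illustration of this mechanism, the sketch below tests whether a virtual object's screen position falls inside the centred rectangle representing the HMD field of view; the view and rectangle dimensions are hypothetical.

```python
# Sketch of the visibility feedback: the remote view is larger than the HMD
# display, so an object counts as visible to the local user only when its
# projection lies inside the centred field-of-view rectangle.
def hmd_fov_rect(view_w, view_h, fov_w, fov_h):
    """Centred rectangle in the remote view representing the HMD field of view."""
    left = (view_w - fov_w) / 2
    top = (view_h - fov_h) / 2
    return left, top, left + fov_w, top + fov_h

def visible_to_local(obj_xy, rect):
    """True if the virtual object's screen position lies inside the rectangle."""
    x, y = obj_xy
    left, top, right, bottom = rect
    return left <= x <= right and top <= y <= bottom

# Example with hypothetical dimensions: a 1280x720 remote view and a
# 640x360 HMD field of view.
rect = hmd_fov_rect(1280, 720, 640, 360)
print(visible_to_local((700, 400), rect))  # True: the local user sees this object
print(visible_to_local((100, 100), rect))  # False: outside the HMD view
```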
Taking pictures with the HMD camera
Taking pictures is available in all three scenarios. By pressing the camera symbol in the user interface, a remote user can take a picture with the camera in the HMD worn by the local user. The picture is taken immediately once the button is pressed and is stored within the shared memory space of DECLARE for later review.
Placing virtual objects fixed in one position in the user interface
This functionality allows remote users to present a local user with information on the current situation. The information is displayed in a fixed position in the user interface of a local user. In the domestic violence scenario, remote users can display information on the location, the procedures to follow or the persons living in the apartment (see Figure 3). In the VIP protection scenario, remote users can display a time counter that counts down the seconds left until a certain task should be accomplished. The time counter is shown as a text message that is updated every second, in a fixed position in the upper part of the transparent rectangle. In all scenarios, remote users can place textual messages in the user interface. This is to alert the local user or ask for specific actions (see Figure 5).
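Since such overlays are anchored to the user interface rather than to tracked points, their implementation is comparatively simple. The following is a minimal sketch of the VIP-scenario time counter, assuming a hypothetical rendering callback and anchor name.

```python
# Sketch of a fixed-position overlay: the countdown text is re-rendered once
# per second at a fixed spot in the local user's display, independent of
# camera tracking. The render callback and anchor name are hypothetical.
import time

def run_countdown(seconds_left, render_text):
    """render_text(text, anchor) draws a message at a fixed UI position."""
    while seconds_left >= 0:
        render_text(f"Time left: {seconds_left} s", anchor="top-center")
        time.sleep(1)
        seconds_left -= 1

# Example with a stub renderer that prints instead of drawing in the HMD:
run_countdown(3, lambda text, anchor: print(anchor, text))
```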
Placing virtual objects superimposed on the real world
Placing virtual objects superimposed on the real world is a feature available in the remote user's interface for all three scenarios. Virtual objects can be placed by selecting them from the menu in the left part of the screen and placing them with a mouse click. Selected objects can be resized by pressing the ↑ or ↓ keys or deleted by pressing the DEL key.
The coordinates of the mouse click are sent to the RDSLAM component, and the closest tracking point recognized by the RDSLAM algorithm is used to spatially place the virtual object. The yellow points that can be seen in the remote user's view (see Figures 3, 4 and 5) represent the tracking points of the current frame, which allow the remote expert to place different virtual objects in the shared space. A virtual object is placed at the position of the tracking point whose projection on the screen is closest to the position where the remote user clicked with the mouse. These points are only visible in the view of the remote user, as support when placing a virtual object.
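The following sketch illustrates this placement rule, assuming a generic projection function from 3D points to screen coordinates; the function names are hypothetical, while the tracked points correspond to those delivered by the RDSLAM component.

```python
# Sketch of the placement logic: the mouse click is matched to the tracked
# 3D point whose screen projection is closest, and the virtual object is
# anchored at that point's 3D position so it stays registered to the world.
import math

def closest_tracking_point(click_xy, tracked_points, project):
    """tracked_points: 3D points from the SLAM component;
    project: maps a 3D point to screen coordinates (x, y)."""
    def screen_distance(p3d):
        px, py = project(p3d)
        return math.hypot(px - click_xy[0], py - click_xy[1])
    return min(tracked_points, key=screen_distance)

# Example with a dummy projection that simply drops the z coordinate:
points = [(0.2, 0.1, 1.0), (0.5, 0.4, 2.0)]
anchor = closest_tracking_point((0.45, 0.42), points, lambda p: (p[0], p[1]))
print(anchor)  # (0.5, 0.4, 2.0): the nearest tracked point anchors the object
```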
By pressing the F key, a remote user can freeze the image and decouple the view from the live video stream. By pressing U, a remote user can unfreeze the image and again view the video stream as provided by the HMD camera of the local user. Freezing the video stream further allows remote users to place virtual content without requiring the local user to keep focusing on a specific part of the real-world scene.
In each of the three scenarios, remote users can place different virtual objects. Table 2 gives an overview of the available virtual objects per scenario.
Table 2
Available virtual objects per scenario for placement in the real world.
 
                        3D sphere   3D block   3D arrow   Laser scanning marker   Text notes
VIP protection              X           X          X                                   X
Forensic investigation      X           X          X                X                  X
Domestic violence           X           X          X                                   X
In all three scenarios, 3D spheres and blocks can thus be used to mark areas that, e.g., need to be avoided (see Figure 4). 3D arrows (see Figure 4) are available to point to specific points of interest. Text notes can be added to ask for certain actions in relation to an object (see Figure 4) or to give more general advice (see Figure 5).
In the forensic investigation scenario, a remote user can further add symbols indicating areas for laser scanning. Figure 5, e.g., shows such laser scanning stickers as circles with red and white triangles.

3.5.3 Local user AR support

The local user wears an optical see-through HMD. The video captured by the HMD camera is sent to the other components of DECLARE. When the local user views a part of the local environment that has been augmented by the remote user, or when the remote user decides to provide additional information, the 3D user interface renders the corresponding content and displays it in the 3D display of the HMD. The graphical rendering is adapted for the optical see-through HMD from META (see Figure 6).

3.6 Results

This section reports on the results of the interviews and the data from the evaluation questionnaire (see Table 1) filled in by the participants after the experiment. The AR system introduced above was used as supporting means for the distributed security teams. Table 3 presents the medians (Mdn) and interquartile ranges (IQR) per scenario for each Likert item. In the following, we discuss the feedback of the participants per scenario.
Table 3
Results (medians and interquartile ranges) on Likert items, for each scenario.
 
                       [4.1]  [4.2]  [4.3]  [4.4]  [4.5]  [4.6]  [4.7]  [4.8]  [4.9]  [4.10] [4.11] [4.12] [4.13] [4.14] [4.15] [4.16]
VIP protection    Mdn   1.00   1.00   4.00   3.00   2.00   2.50   3.00   2.00   2.00   3.50   2.50   2.50   2.00   1.50   1.00   1.50
                  IQR   1.00   1.00   2.50   1.50   1.00   1.00   1.00   1.50   1.75   2.00   3.00   2.00   2.00   1.00   1.00   1.00
Forensic          Mdn   1.50   1.50   3.50   1.50   2.00   2.00   2.50   1.00   2.00   4.00   3.00   2.50   3.00   1.50   2.50   2.50
investigation     IQR   1.00   1.00   1.00   1.00   2.00   2.00   3.00   0.00   2.00   2.00   2.00   1.00   2.00   1.00   1.00   1.00
Domestic          Mdn   2.00   2.00   2.00   3.00   2.00   2.00   2.00   3.00   2.00   1.00   1.00   3.00   3.00   2.00   3.00   2.00
violence          IQR   3.00   0.75   0.75   1.50   0.75   0.75   0.00   0.75   0.75   0.00   0.00   0.00   2.00   1.50   1.50   0.75

3.6.1 VIP protection

The participants of this scenario indicated that the provided AR system can improve the communication in the team (Q4.15, Mdn = 1.00, IQR = 1.00), that the scenario prepares well for future assignments (Q4.16, Mdn = 1.50, IQR = 1.00) and that they would like to use more AR scenarios for training purposes (Q4.14, Mdn = 1.50, IQR = 1.00). They further asked for even more possibilities to interact with virtual content in the scenarios (Q4.13, Mdn = 2.00, IQR = 2.00).
Three participants judged the scenario as useful for the development of team situational awareness. The AR equipment in the backpack and the cables had a negative impact on mobility during the experiment. The occasional information overload and the quality of the AR overlay (too dark) were factors with a slight negative impact during the experiment.

3.6.2 Forensic investigation

In the case of the forensic investigation, the participants mentioned that the scenario was exciting and attractively built (Q4.4, Mdn = 1.50, IQR = 1.00), had a clear objective (Q4.1, Mdn = 1.50, IQR = 1.00) and provided clear instructions and explanations (Q4.2, Mdn = 1.50, IQR = 1.00). The participants further stated that they would like to use more AR scenarios for training purposes (Q4.14, Mdn = 1.50, IQR = 1.00) and that the AR system was easy to use (Q4.8, Mdn = 1.00, IQR = 0.00).
The participants mentioned that the scenario facilitated the exchange of information within the team and that even more objects and scenarios could be considered for investigation using AR technology. They also mentioned that the scenario helped them to build up a common ground regarding the situation. They further considered the AR system as suitable for enabling collaboration among distributed users. Considering the AR equipment, one major problem was caused by the mask being worn over the mouth, which led to fogging of the HMD.

3.6.3 Domestic violence

The participants in this scenario indicated that the flow of actions and the orders given during the experiment relate to important tasks of their daily work (Q4.3, Mdn = 2.00, IQR = 0.75). During the experiment sessions, there were no significant technical errors (Q4.10, Mdn = 1.00, IQR = 0.00). If any errors occurred, they were resolved quickly (Q4.11, Mdn = 1.00, IQR = 0.00). The participants further stated that the scenario was realistic for the objectives (Q4.5, Mdn = 2.00, IQR = 0.75), the virtual information was well recognizable (Q4.6, Mdn = 2.00, IQR = 0.75) and the information was displayed at the right time (Q4.7, Mdn = 2.00, IQR = 0.00).
The information delivery protocol with AR technology and the contextual information, such as on-the-spot person profiles, information about objects, and the visibility and timing of AR indications, were perceived as very good aspects of the scenario. The participants stated that these possibilities have a positive impact on the development of team situational awareness. The restricted mobility of the local policeman was considered a critical issue for the feasibility of AR in real operations. Occasionally, the AR content was too overwhelming and hindered the focus on the current activity.

3.7 Discussion

In summary, the participants of the first test appreciated the shared visualization, the communication, the directions of the external supervisor and the person profile pictures being delivered on the spot. The evaluation of the answers indicates that the scenarios were clear and attractively built, with clear instructions and explanations given beforehand. The location and the setup, which included weapons, real handcuffs, and visual representations of blood patterns and injuries (on a mannequin in the forensic scenario), contributed to the realism of the scenarios. Table 4 summarizes the overall findings of the usability study.
Table 4
Overall results of the usability study.

Positive aspects:
• shared visualization
• communication
• directions of the external supervisor
• person profile and data delivered on the spot
• situational awareness to improve the common operation picture
• virtual information is easily recognizable and displayed at the right time

Negative aspects:
• some actions being slower than in real operations (scenario)
• lower mobility of the local (technology)
• temporary loss of visual tracking, caused by a very high pace of the tasks (technology)
• occasional wrong calibration (technology)
• mask being worn over the mouth leads to fogging of the HMD (technology)
In most cases, the virtual information was easily recognizable and displayed at the right time. The ability of the AR system to add information to a real situation and to support collaboration among distributed users showed positive effects on communication and team SA. With the TGD approach, we were able to create realistic circumstances to test the feasibility and usability of AR technology in the security domain. The limitations of the technology, mainly the heavy backpack, showed how important a close relationship to the real work environment is for the participants. Additionally, while most participants appreciated AR technology for easily sharing information within and amongst teams, they also reported on information overload introduced by the technology.
Thus, the first test showed limitations of the AR technology, mainly because of the immobility of the system and its user. The test results led to an improvement of the AR system towards a wireless connection. Furthermore, a free-hand user interface was introduced. With these improvements, we set up a second study, moving a step further in the direction of exploring the use of AR for the development of team SA.

4 Study on collaboration and team situational awareness

4.1 Scenario design

As in the first study, TGD influenced the design of the scenarios for the study on AR technology to foster collaboration and SA within and between emergency units. The scenarios were developed in a workshop similar to the one described above (see 3.1). During this half-day workshop, in which 6 members of the Dutch Police, the Netherlands Forensic Institute (NFI) and the fire brigade of the port of Rotterdam participated, 2 different scenarios were identified.
The following sections describe the 2 identified scenarios. Compared to the 3 scenarios described earlier, the following scenarios were designed to evaluate the effect of the AR system on collaboration and situational awareness in the different teams (police, fire department and forensics). For that purpose, the scenarios are designed in such a way that they can be played in two conditions: (1) with AR support for virtual co-location and (2) with standard equipment following standard procedures.

4.1.1 Discovery of an ecstasy lab

A team of 2 policemen is informed about a situation via phone and arrives at an apartment. They discover a strange chemical smell and small chemical containers in front of the apartment (R). Before the policemen on site enter the building, they receive information about the location as well as the current inhabitant from their remote colleague. After ringing the bell, the policemen on site enter the building with the approval of the inhabitant, who appears in regular clothes in front of the police team. The policemen recognize a strange chemical smell emanating from within the house. At the site, they are able to mark suspected objects, take images of the location and send them to a remote expert (P). Again with the approval of the inhabitant, the police team starts searching the site. They follow the strange scent, which is even stronger inside the building (R). When they discover an ecstasy lab in the kitchen, full of chemical bottles, they arrest the inhabitant. The remote policeman calls the fire department for further support (M).
On arrival, the local firemen receive an oral briefing on the situation as discovered by the policemen on location (R). A team of 2 firemen enters the apartment. In the apartment, the firemen investigate the different rooms in order to secure the apartment for further investigation (P). They perform measurements on the found chemicals and the air quality. On clearance of the location, the remote fireman contacts the forensic institute for further investigation (M).
The forensic investigator receives an oral briefing of the location by the local firemen (R). After entering the apartment, the forensic investigator first analyses the site and sets up a research plan. This plan includes the marking of fingerprints on objects, collection of DNA evidence or the taking of pictures on the site (P). In discussion with a remote colleague, the local investigator refines the plan or asks for additional information from the fire department and police (M). Following the plan, the local investigator starts collecting evidence.
This scenario can be played in 2 conditions (with AR support and with standard equipment). When using standard equipment, the participants are only allowed to use their standard equipment for audio communication as well as a camera to take pictures for briefing and documentation purposes.
With AR support, one of the local participants wears an HMD for displaying augmented reality content and enabling virtual co-location with a remote colleague. Via a 3D user interface, the local participant can take pictures of the scene, annotate the scene with virtual objects, e.g. arrows, spheres, hazard symbols or evidence identification numbers, and share them with a remote colleague (see Section 4.5.3). In addition, the remote expert can provide information to the local participant, e.g. on the inhabitant of the apartment or the found chemicals, or annotate the scene using the same instruments as the local colleague (see Section 4.5.2).
In both conditions, the location needs to be prepared with suspect objects and fingerprints beforehand. Additionally, one actor needs to play the inhabitant on the spot. Audio communication among the local and remote team members needs to be established using the standard equipment of the different organisational units.

4.1.2 Home visit by a VIP

A VIP plans a home visit (R). Just before the visit, a reconnaissance team has to check the apartment for safety. For their safety check, the reconnaissance team receives information on the address as well as the contact person living in the apartment. One member of the reconnaissance team goes to the apartment to check for safety. Each room of the apartment is investigated. During the investigation, possibly suspect and dangerous objects are discussed and checked with the local contact person (M). Dangerous objects are to be removed. Pictures are taken to make it possible to identify changes when visiting the apartment with the VIP (P). When the apartment can be declared safe, the reconnaissance team informs the personal protection unit.
The reconnaissance team orally briefs the personal protection unit using the pictures that have been taken during the investigation (R). At a later time, one member of the personal protection unit arrives with the VIP at the apartment. Together they enter the apartment. During the visit, the member of the personal protection unit discovers a recent suspect change in the apartment (R) and decides to abort the visit (M). While the remote colleague provides information on possible evacuation routes, the VIP and the local member of the personal protection unit leave the apartment (P).
This scenario can also be played with AR support and with standard equipment. When using standard equipment, the reconnaissance team and the personal protection unit use their standard equipment for audio communication as well as a camera to take pictures for briefing and documentation purposes. With AR support, the local team member wears an HMD for displaying augmented reality content and enabling virtual co-location with a remote colleague. Via a 3D user interface, the local team member can take pictures of the scene and annotate the scene with virtual objects, to indicate that a suspect object has been checked and declared safe (see Section 4.5.3). The remote colleague can, for example, provide additional information on the planned visit or the address, or give information about the local contact person (see Section 4.5.2). In both conditions, the location needs to be prepared with suspect objects and changed after the visit of the reconnaissance team to simulate a possibly dangerous situation for a VIP. Additionally, one actor needs to play the local contact person, and audio communication among the team members needs to be established.

4.2 Participants

In total, 13 participants took part in the experiment. Participants were chosen based on their availability on the day of the experiment. All participants were male, aged 25-54 years (M = 37.8, SD = 10.0). All had a minimum of 2 years of experience in their current professional occupation. The most experienced had 12 years of experience in his field (M = 6.3). 3 participants were forensic researchers from the Netherlands Forensic Institute (NFI), 3 were firemen from the fire brigade at the port of Rotterdam, and 3 were policemen from the Dutch Police in North-Holland. 2 were from a close protection team of the Dutch police and 2 were from a reconnaissance team of the Royal Netherlands Marechaussee (RNLM), which is a gendarmerie corps, i.e. a police corps with military status. In addition to the above participants, 3 more members of the above organizations participated to play the roles of the inhabitant of the apartment in the ecstasy lab scenario, the contact person and the VIP. These 3 members were also involved in the design of the scenarios.

4.3 Materials

In this second study, our aim was to investigate how distributed security teams collaborate with AR technology, and what effect the AR technology has on the situational awareness of these teams. We used a pre-questionnaire as the first measurement method (see Table 5). With the pre-questionnaire, data was collected about the participants' background, their experience in the domain and with AR technology, and their expectations towards the experiment.
Table 5
Questionnaire on the participants’ background, experience and expectations.
https://static-content.springer.com/image/art%3A10.1007%2Fs10606-015-9235-4/MediaObjects/10606_2015_9235_Tab5_HTML.gif
For the first run through the scenario, participants were given the technology currently available in the field, such as their standard issue communication equipment and a camera. For the second run, one local participant used the AR support system described in Section 3 to establish virtual co-location with a remote colleague. When using AR support, participants also used their standard communication equipment. After both rounds, a questionnaire was provided to the participants, which consisted of two sets of questions. Table 6 shows the questionnaire for the participants using AR support. The questionnaire for the participants without AR support only differs with regard to question 2.2. The first two sections of the questionnaire relate to the experiment itself. The third section assesses the quality of collaboration by asking questions along the 7 dimensions of collaboration quality introduced by (Burkhardt et al. 2009).
Table 6
Questionnaire on collaboration quality and situational awareness with AR support.
https://static-content.springer.com/image/art%3A10.1007%2Fs10606-015-9235-4/MediaObjects/10606_2015_9235_Tab6_HTML.gif
As we discussed in section 2.2, situational awareness includes the perception, comprehension and prediction of each other's actions within a given situation in order to align and integrate the team members' actions. The fourth section of the post-questionnaire consists of a self-rating of individual situational awareness. Several measurement methods exist for assessing the level of situational awareness, including freeze probe techniques, real-time probe techniques, self-rating techniques, observer rating techniques, and performance measures (Salmon et al. 2009). Very few measurement approaches exist for distributed or team situational awareness. For the questionnaire, we use the validated post-test self-rating technique (Taylor 1990), as this avoids freezing the action during the test, as required when applying the SAGAT method (Endsley et al. 1998). Even though freeze-probe methods provide more significant data, they have the important drawback of interrupting an action and thus may negatively affect performance. Self-rating techniques such as the SART questionnaire are administered post-trial and thus have a non-intrusive character. Furthermore, in their study, (Salmon et al. 2009) come to the conclusion that a post-test self-rating technique is applicable whenever "SA content is not pre-defined and the task is dynamic, collaborative, and changeable and the outcome is not known (e.g. real world tasks)" (Salmon et al. 2009). By assessing individual SA, team SA can be judged as well, as it is defined as "the degree to which every team member possesses the situation awareness required for his or her responsibilities" (Endsley 1995).
Finally, after each experiment, a structured de-briefing was used to further investigate the experiences of the participants with the technology, their self-rated collaboration quality and SA. Two video cameras were used to record the experiment in order to conduct a qualitative analysis, again along the seven dimensions described by (Burkhardt et al. 2009). One video camera was placed to record the actions and communications on the spot (local person), while the other recorded the actions and communication of the remote person. The latter camera was also used to record the de-briefings.
For analysis, we treat the answers to the 5-point and 7-point Likert items as ordinal data. To interpret and report results, we use the median values and the interquartile range indicators derived from the answers to the questions. In addition, we use the p-values of two-sided Wilcoxon rank sum tests to determine whether the questionnaire data for the same Likert items can validly be compared.
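For readers who want to reproduce this kind of comparison, the following minimal sketch shows how medians, interquartile ranges and a two-sided Wilcoxon rank sum test could be computed. It is not the project's analysis code; the category labels and answer vectors are purely illustrative.

```python
# Sketch of the per-item analysis described above: medians, interquartile
# ranges, and a two-sided Wilcoxon rank sum test for one Likert item.
# The category labels and example answers are illustrative, not study data.
import numpy as np
from scipy import stats

answers = {
    "C01": np.array([5, 4, 5, 6, 5, 3, 5, 4, 6, 5, 5, 4]),               # AR condition
    "C04": np.array([6, 6, 5, 6, 7, 6, 6, 5, 6, 6, 7, 6, 6, 5, 6, 6]),   # no AR
}

for category, data in answers.items():
    median = np.median(data)
    iqr = stats.iqr(data)  # interquartile range
    print(f"{category}: Mdn = {median}, IQR = {iqr}")

# Two-sided Wilcoxon rank sum test: are the two samples drawn from
# distributions with equal medians?
statistic, p_value = stats.ranksums(answers["C01"], answers["C04"])
print(f"p = {p_value:.4f}")  # medians differ significantly if p < 0.05
```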
Table 7 illustrates the categories taken into account for the statistical analysis. The six categories C01-C06 are linked to the ecstasy lab scenario. Of these six categories, three represent experiments using AR support (C01-C03) and three represent experiments without AR support (C04-C06). The VIP scenario is studied using the four categories C07-C10. Of these, the three categories C07-C09 represent experiments using AR support and C10 represents the experiment without AR support. In addition, the six categories C11-C16 are not dependent on the scenario. Of these, the two categories C15 and C16 are also not dependent on the role played by the participants during the experiment sessions.
Table 7
Categories per scenario, condition and role.

Scenario         Condition  Role              Category
Ecstasy lab      AR         All AR            C01
                            Remote            C02
                            Local HMD         C03
                 Non-AR     All (no AR)       C04
                            Local             C05
                            Remote            C06
VIP              AR         All AR            C07
                            Local             C08
                            Remote            C09
                 Non-AR     Local             C10
Both scenarios   AR         Local (with HMD)  C11
                 Non-AR     Local             C12
                 AR         Remote            C13
                 Non-AR     Remote            C14
                 AR         Local & Remote    C15
                 Non-AR     Local & Remote    C16
To derive relevant observations from the data, the medians are used as the primary comparison criterion. The comparisons take into account valid pairs of categories, which in turn relate to the experiments from the same scenario and role. The categories C11-C14 are exceptions in the sense that they refer to experiments on both scenarios. Still, C11-C14 consider the role played during the experiment, while C15 and C16 just distinguish whether AR support was used or not. Table 8 displays the pairs of categories for investigation. Please note that the categories C07 and C09 are not used for comparison, as the non-AR VIP scenario was played without a remote colleague, which resembles current work practices.
Table 8
Pairs of categories for comparison.

AR category:      C01  C02  C03  C08  C11  C13  C15
Non-AR category:  C04  C06  C05  C10  C12  C14  C16

4.4 Procedure

All experiments took place indoors in a real training environment at the Netherlands Forensic Institute (NFI). The testing altogether lasted one day. Figure 7 shows the plan of the CSI lab at the NFI. The upper highlighted box shows the plan of the apartment that was used as ecstasy lab and as the location for the house visit. The apartment consists of four rooms, i.e. a bedroom, a bathroom, a kitchen and living room combination and an entrance hall. The orange highlighted box in the middle of the plan resembles a typical Dutch street. During the experiment, this area was used by the different emergency teams to orally brief each other about the situation. The lower highlighted box shows the location for the remote colleague and further activities around the experiment, like briefing and de-briefing. The location is physically separated from the apartment by walls and doors, so that remote and local persons could only interact via the available technology.
All participants of the experiments were given a slide presentation to introduce the goal of the experiment. In addition to this general presentation, the participants of the ecstasy lab scenario experiment, i.e. 3 policemen, 3 firemen and 3 forensic investigators, were given a presentation on the general outline of their scenario with and without AR support. The same applies to the participants of the VIP scenario, i.e. 2 members of the close protection unit and 2 members of the reconnaissance team.
Each of the scenarios was played twice: first without and then with AR support. Between the rounds, the setup of the apartment was changed to avoid sequence effects. These changes included moving evidence from one location to another in the ecstasy lab scenario, or hiding different suspect objects in the VIP scenario. In addition, the roles of the participants were rotated to allow all participants to experience the local and remote role, e.g. a fireman who in the first round had the role of the remote colleague became the local fireman with AR support in the second round.
After the introductory presentation, all participants were asked to fill in the pre-questionnaire (see Table 5) simultaneously. After each round, all participants were asked to fill in the post-test questionnaire (see Table 6) and participate in a structured de-briefing session.
Compared to the previous experiment, the remote and the local user were both able to interact with and manipulate the virtual content, using a classic 2D graphical user interface (for the remote user) and a 3D user interface with hand gestural input (for the local user). For each scenario and for each role the participants had, the user interfaces were customized according to their specific requirements. To become acquainted with the AR system, each participant group was trained on the remote user interface as well as on the 3D user interface for the local user.

4.5 Distributed Collaborative Augmented Reality Environment (DECLARE)

In order to support the new scenarios, we extended our DECLARE framework (see Figure 8). Apart from a few minor changes in all components, major changes were made to the local user AR support component. These changes were necessary to enable local users to interact with the virtual content. For that purpose, the RGB-D camera of the HMD was used to enable hand tracking and implement a 3D user interface, allowing users to interact with the system with their bare hands. The following sections describe in detail the changes compared to the first evaluation round and explain the functionality available for local and remote users.

4.5.1 Localization and mapping

Compared to the implementation described in section 3.5.1, the updated version of RDSLAM (Tan et al. 2013) offers an improved initialization phase and, more importantly, supports placing virtual objects.
The remote user can initiate the initialization step by pressing a button on the user interface. Again, the local user has to horizontally move the camera of the HMD from left to right, and during this process the best frames are selected automatically in order to set the 3D coordinates of the system. Re-initialization can be done at any moment by the remote user, but since this means a new coordinate system will be set, all virtual objects that are not in a fixed position on the screen will be deleted, as their locations will not fit the new coordinate system.
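The deletion rule described above can be illustrated with a small sketch. This is not DECLARE's actual implementation; the class and attribute names are hypothetical and only show the bookkeeping implied by the re-initialization step.

```python
# Hypothetical sketch of the re-initialization behaviour described above:
# world-anchored objects are dropped because their coordinates refer to the
# old SLAM coordinate system; screen-fixed objects survive.
from dataclasses import dataclass, field

@dataclass
class VirtualObject:
    kind: str            # e.g. "sphere", "hazard_symbol"
    screen_fixed: bool   # True = fixed position on the display
    position: tuple      # screen or world coordinates

@dataclass
class ARScene:
    objects: list = field(default_factory=list)

    def reinitialize_tracking(self):
        """Called when the remote user restarts the SLAM initialization."""
        # Keep only objects whose position does not depend on the
        # (now invalid) world coordinate system.
        self.objects = [o for o in self.objects if o.screen_fixed]
```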
Secondly, the updated RDSLAM algorithm offers access to the entire cloud of points recognized up to the current moment, providing higher precision for placing virtual objects. For example, in Figure 9 the yellow points represent the currently tracked points, the blue points represent the whole cloud of points recognized so far, and the red points represent invalid ones.

4.5.2 Remote user AR support

Besides the actions described in Section 3.5.3, the remote user is now able to perform additional actions and place additional virtual objects by selecting the corresponding menu item in the left part of the 2D graphical user interface. Apart from the possibility to initialize and re-initialize the tracking via RDSLAM, several other actions were added to the 2D user interface of the remote user. The following subsections describe these additions and relate them to the scenarios.
Placing virtual objects superimposed on the real world
In addition to the 3D spheres, 3D blocks, 3D arrows, laser scanning markers and text notes already used in the previous experiment (see Table 2), remote users in the Ecstasy lab scenario can now place additional virtual objects (e.g. hazard symbols, DNA and fingerprint labels, barcode labels) to annotate the real scene (see Figure 10). The hazard symbols are used to indicate different dangerous substances, classified in 13 categories depending on the kind of danger they represent (e.g. explosive, radioactive, chemical contamination etc.). The DNA labels are attached to real objects from which samples need to be taken for DNA analysis. Similarly, the fingerprint labels indicate areas to be checked for fingerprint traces. The barcode labels, also called SIN in Dutch, are attached to evidence for later identification. All virtual objects are meant to trigger interaction and collaboration among the team members and the different involved organisations. As an example, consider a policeman marking suspicious chemical substances with a 3D sphere; a firefighter then checks the substance and places the corresponding hazard symbol, and the forensic investigator decides, based on the mark-up, whether and how to collect evidence. The latter is then indicated by text notes and possibly a barcode for the evidence number.
Figure 11 shows some of the above symbols placed within the environment. On the wall in the back, for example, there is a DNA symbol, on the carpet in the front there is a small hazard symbol, and on the book on the table there is a fingerprint symbol.
Loading pictures taken with the HMD camera
The names of the pictures saved on the server appear in a list, from which the remote user can choose one to display, either in a fixed or in a relative position. A picture in a fixed position is mainly meant to provide additional information to the local user. When a picture is displayed in a relative position, this position corresponds to the position at which the picture was taken. This is to support detecting suspicious changes.
Changing the colour of the virtual 3D objects
In the Home visit by a VIP scenario, the remote user can change the colour of a selected sphere, cube or arrow by pressing the R, G, or B key to colour the object correspondingly in red, green or blue (see Figure 12). The different colours can be used to indicate different levels of importance for the annotations. Initially, an object in the apartment might, for example, be marked with a red sphere because it is found to be suspicious. After consultation with the local inhabitant, considering additional information, or discussing the object with the local colleague, the colour of the sphere might be changed to green, as the object is no longer considered suspect.

4.5.3 Local user AR support

The local user wears an optical see-through HMD and the 3D user interface is adapted for the HMD from META (see Figure 6). The 3D user interface supports free hand interaction with the environment. The local user is now able to interact with the virtual environment, not just visualize it.
If the right hand of the local user is in the view of the HMD depth camera, the point cloud of the hand appears, as can be seen in Figure 13. The hand is recognised when a small circle is displayed on top of one finger (the uppermost finger on the vertical axis).
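The selection rule implied here, taking the uppermost finger point as the pointer, can be sketched as follows. The code is a hypothetical illustration, assuming image coordinates with the origin in the top-left corner, so that "uppermost" means the smallest vertical coordinate.

```python
# Hypothetical sketch: pick the pointing fingertip as the uppermost point
# of the hand point cloud delivered by the HMD depth camera. Image
# coordinates are assumed to have their origin in the top-left corner.
import numpy as np

def find_fingertip(hand_points: np.ndarray):
    """hand_points: (N, 2) array of (x, y) pixel coordinates of the hand."""
    if hand_points.size == 0:
        return None  # no hand in view of the depth camera
    return hand_points[np.argmin(hand_points[:, 1])]  # smallest y = uppermost
```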
We designed a 3D user interface that allows local users to take specific actions depending on their role in the different scenarios as specified above. All actions fit into the following categories:
1. Taking pictures with the HMD camera
2. Placing virtual objects that are superimposed on the real world using tracking points provided by the RDSLAM component
All actions can be triggered if the pointing circle on the recognised finger stays for 1.4 s over a menu button. The threshold of 1.4 s was empirically set in a user study with 10 different users with different backgrounds in the use of AR systems. In this study, we noticed that 1 s was too short to clearly identify the local user's intention, while 2 s was too long and in some cases led to fatigue of the local user.
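A dwell-based trigger of this kind is typically implemented as a timer that resets whenever the pointing circle leaves the button. The sketch below is a hypothetical illustration of that logic, not the DECLARE implementation.

```python
# Hypothetical sketch of the dwell-time activation described above: a menu
# button fires once the pointing circle has hovered over it for 1.4 s.
import time

DWELL_TIME_S = 1.4  # empirically chosen threshold from the user study

class DwellButton:
    def __init__(self, action):
        self.action = action
        self.hover_started = None  # time at which hovering began

    def update(self, is_hovered: bool):
        """Call once per frame with the current hover state."""
        if not is_hovered:
            self.hover_started = None  # pointer left the button: reset timer
            return
        if self.hover_started is None:
            self.hover_started = time.monotonic()
        elif time.monotonic() - self.hover_started >= DWELL_TIME_S:
            self.hover_started = None  # avoid re-triggering immediately
            self.action()
```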
Taking pictures with the HMD camera
The local user is able to take pictures with the HMD camera and store them in the shared memory space of DECLARE. The picture is taken 3 s after the action was triggered, so that the local person has time to move the hand out of the view of the camera. The local user further has the possibility to save the picture or to delete it (see Figure 13). When saved on the server, the picture is automatically assigned a filename. This is done to save time for the local user. The filename is unique and allows photos to be ordered according to the time they were taken.
When the local user takes a picture, the current position of the HMD camera as computed by RDSLAM (Tan et al. 2013) is used to place a virtual object containing the picture. When a user selects such an object, the picture is displayed in a fixed position over the whole display in the HMD.
This is to support comparing the current real world situation with a picture taken earlier. This functionality is especially important for the VIP scenario. In this scenario, the reconnaissance team might take pictures of the local environment while it is considered safe. The personal protection unit might check the pictures to identify changes to the environment. In case of suspicious changes, the VIP visit might be aborted.
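The capture behaviour described above, a 3-second delay followed by storage under a unique, chronologically sortable filename, could be sketched as follows; the camera and storage interfaces are hypothetical placeholders, not DECLARE's actual API.

```python
# Hypothetical sketch of the picture-taking behaviour: capture 3 s after
# the trigger so the hand can leave the view, and assign a unique filename
# that sorts chronologically.
import threading
from datetime import datetime

CAPTURE_DELAY_S = 3.0

def make_filename() -> str:
    # Timestamp with microseconds: unique and time-ordered when sorted.
    return datetime.now().strftime("img_%Y%m%d_%H%M%S_%f.png")

def trigger_picture(camera, storage):
    def capture():
        frame = camera.grab_frame()           # hypothetical HMD camera API
        storage.save(make_filename(), frame)  # shared memory space of DECLARE
    threading.Timer(CAPTURE_DELAY_S, capture).start()
```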
Placing virtual objects that are superimposed on the real world using tracking points provided by the RDSLAM component
If a virtual object is created (e.g. via the first 3 buttons in Figure 14), it follows the movement of the recognised finger. To place the object in space, the finger has to be kept still for the same dwell time of 1.4 s. The coordinates of the object are computed by the RDSLAM component of DECLARE, which returns the closest tracked point from the cloud of points detected by the tracking algorithm until that moment.
A virtual object is selected or deselected when the centre of the pointing circle of the recognised finger hovers over that virtual object. A selected object can be resized, repositioned or deleted. To return to the main menu, the selected object has to be deleted or deselected, or the button MAIN MENU has to be triggered (see Figure 15).
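Snapping an object to the closest tracked point is essentially a nearest-neighbour query over the SLAM point cloud. The following sketch illustrates that step under the assumption that the cloud is available as an array of 3D points; it is not the actual DECLARE code.

```python
# Hypothetical sketch of anchoring a virtual object: snap it to the closest
# 3D point in the cloud tracked by the SLAM component so far.
import numpy as np

def anchor_position(finger_point: np.ndarray, cloud: np.ndarray):
    """
    finger_point: (3,) 3D point indicated by the recognised finger.
    cloud: (N, 3) array of all points recognised by the tracker so far.
    Returns the closest tracked point, used as the object's coordinates.
    """
    if cloud.shape[0] == 0:
        return None  # tracking not initialized yet
    distances = np.linalg.norm(cloud - finger_point, axis=1)
    return cloud[np.argmin(distances)]
```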
In each of the two scenarios, local users can place different virtual objects. Table 9 gives an overview of the virtual objects per scenario. The 3D spheres, blocks and arrows are used in both scenarios to mark or point to certain objects that require special attention. The hazard symbols, DNA and fingerprint labels and the barcode labels can be used by the local user to annotate the scene. Annotating the scene with virtual objects supports information exchange between the local and remote users as well as among the different organisations involved in the different scenarios. As described for the remote user, a suspicious object in the Ecstasy lab scenario might be marked by the police, checked by the fire department and secured for evidence by the forensic institute. In the VIP scenario, suspicious objects in the real scene might initially be marked with, e.g., spheres coloured in red; after discussion with the remote colleague, the remote colleague might clear the object and mark it in green. This would indicate to the personal protection unit that a suspicious-looking object was checked for safety.
Table 9
Available virtual objects per scenario for placement in the real world.

                             3D sphere  3D block  3D arrow  DNA symbol  Fingerprint symbol  Barcode labels  Hazard symbols
Discovery of an Ecstasy Lab      X         X         X          X               X                 X               X
Home visit by a VIP              X         X         X
In Figure 16 (left), the menu for placing hazard symbols can be seen in the view of the local user. The right side of the same figure shows the menu for placing fingerprint and DNA labels. The SIN button allows the selection of a barcode label that identifies evidence.

4.6 Results

This section reports on the results of the study on collaboration and situational awareness. In the following, we first discuss in detail the quantitative results from the questionnaires and then the qualitative results from the de-briefings.

4.6.1 Results from the post-test questionnaire

Table 10 presents the size of each set of data points for each of the 16 categories defined for the study. There were seven instances of missing data: one in category C04, item [4.7]; one in category C06, item [4.7]; one in category C10, item [3.4]; one in category C12, item [3.4]; one in category C14, item [4.7]; and two in category C16, items [3.4] and [4.7].
Table 10
Size of the questionnaire data set.

Category:  C01  C02  C03  C04  C05  C06  C07  C08  C09  C10  C11  C12  C13  C14  C15  C16
#samples:   12    6    6   16   11    4    5    3    2    4    9   15    8    4   17   20
Given the Likert items from the questionnaire, an exploratory factor analysis identified two scales: collaboration quality (five items; Cronbach's α = 0.98) and situational awareness (seven items; Cronbach's α = 0.97). In order to compare the medians of the data sets C01 to C16 as specified in Table 8, statistical significance tests were run. First, the Anderson-Darling test is used to test whether the data is from a population with a normal distribution. For most items and categories (234 out of 256 test cases), the data sets are not from a population with a normal distribution. In C08, the sets of data points per category and item are too small to test for a normal distribution (the AD test requires at least 4 samples per set). Secondly, a two-sided Wilcoxon rank sum test is used to test whether the data in two sets are samples from distributions with equal medians.
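As an illustration of these checks, the sketch below shows how Cronbach's alpha and the Anderson-Darling normality test could be computed for one scale; the sample values are invented for demonstration and are not study data.

```python
# Sketch of the reliability and normality checks mentioned above, assuming
# each scale's answers form a (participants x items) matrix.
import numpy as np
from scipy import stats

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: (n_participants, n_items) matrix of Likert answers."""
    n_items = item_scores.shape[1]
    sum_item_variances = item_scores.var(axis=0, ddof=1).sum()
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return n_items / (n_items - 1) * (1 - sum_item_variances / total_variance)

# Anderson-Darling test for normality of one data set (needs >= 4 samples):
sample = np.array([5, 6, 4, 6, 5, 7, 6])
result = stats.anderson(sample, dist="norm")
# Normality is rejected if the statistic exceeds the critical value
# at the chosen significance level.
print(result.statistic, result.critical_values)
```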
Table 11 shows partial results of the medians, interquartile ranges, and p-values for each test run. Only those pairs of categories for which the statistical tests led to the rejection of the null hypothesis provide solid statistical support for comparing the medians. The cases providing statistically valid comparisons are highlighted in green. The complete set of test results is presented in Appendix I.
Table 11
Medians, interquartile range, and results of two-sided Wilcoxon rank sum tests per category (p-value).
https://static-content.springer.com/image/art%3A10.1007%2Fs10606-015-9235-4/MediaObjects/10606_2015_9235_Tab11_HTML.gif
The results for the ecstasy lab scenario indicate that the level of arousal [4.4] is lower (Mdn = 5, IQR = 2) when using the AR system (C01) than in the standard approach with no AR (C04), for both local and remote users (Mdn = 6, IQR = 1.5) (p = 0.0046 < 0.05). For the same scenario, the arousal [4.4] is lower (Mdn = 4, IQR = 4) for the local user wearing the AR HMD (C03) than in the standard procedure with no AR (C05) (Mdn = 6, IQR = 0) (p = 0.0273 < 0.05). In the same scenario, both local and remote users using the AR system (C01) focused on a lower number of aspects [4.5] (Mdn = 5, IQR = 1.5) than in the standard procedure without AR (C04) (Mdn = 6, IQR = 0.5) (p = 0.0025 < 0.05). Additionally, the level of attention for the user wearing an HMD in this scenario was lower (Mdn = 3.5, IQR = 3) than in the standard approach without AR (Mdn = 6, IQR = 0.8) (p = 0.0016 < 0.05). The division of attention [4.6] was lower (Mdn = 4, IQR = 3) for the local user wearing the HMD during the ecstasy lab discovery scenario (C03) than when using no AR support at all (C05) (Mdn = 6, IQR = 2.8) (p = 0.0315 < 0.05).
The same can be observed when considering both scenarios together. The level of arousal [4.4] (Mdn = 5, IQR = 3.3) for the local user wearing a HMD (C11) is lower than the level of arousal of the local when no AR support is used (C12) (Mdn = 6, IQR = 0.8), (p = 0.0271 < 0.05). Similarly, the concentration level [4.5] of the local user is lower when using an AR HMD (C11) (Mdn = 4, IQR = 2.3), compared to using no AR system (C12) (Mdn = 6, IQR = 1.5), (p = 0.0003 < 0.05). Further to this, the attention level [4.6] of the local user is lower when using AR HMD (C11) (Mdn = 5, IQR = 2.3), compared to using no AR support (C12) (Mdn = 6, IQR = 3), (p = 0.0351 < 0.05). The mental capacity [4.7] of the local user is lower when wearing an AR HMD (C11) (Mdn = 4, IQR = 2) as compared to not using AR support at all (C12) (Mdn = 6, IQR = 1), (p = 0.0149 < 0.05).
The level of arousal [4.4] is lower for the AR users (C15) (Mdn = 5, IQR = 2.3) than for the non-AR users (C16) (Mdn = 6, IQR = 2) (p = 0.0115 < 0.05). A similar effect on attention [4.5] holds for the AR users (C15) (Mdn = 5, IQR = 1.3) as compared to the non-AR users (C16) (Mdn = 6, IQR = 1) (p = 0.0009 < 0.05).
Table 12 illustrates the results for demand, supply, understanding and overall SART scores per category. An overall SART score is derived based on the formula SA = U − (D − S) (Taylor 1990), where U is the summed understanding, D is the summed demand and S is the summed supply. The understanding indicator is computed using the Likert items [4.8] and [4.9]. The demand indicator uses the set of Likert items [4.1], [4.2] and [4.3]. The supply indicator uses the set of Likert items [4.4], [4.5], [4.6] and [4.7]. The highest average overall SART score was 19.50 for the remote users using AR support in the ecstasy lab scenario (C02). This category also had the highest single overall SART score (33), together with three other categories, (C01), (C13) and (C15). The lowest average overall SART score was 10.17 for the local user using the AR HMD in the ecstasy lab scenario (C03). The highest average overall understanding (23.50) holds for two categories, i.e. for the remote users without AR support in the ecstasy lab scenario (C06) and for the remote users without AR support in both scenarios (C14). The lowest single value for overall understanding (8) was registered for the categories (C01), (C03), (C11) and (C15). Of these four categories, the first two, (C01) and (C03), concern the ecstasy lab scenario. The lowest average overall understanding (15.17) was for the local user with the AR HMD in the ecstasy lab scenario (C03).
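The SART computation can be made concrete with a short sketch that applies the item groupings given above; the example ratings are invented purely for illustration.

```python
# Sketch of the SART computation described above (Taylor 1990):
# SA = U - (D - S), with the item groupings given in the text.
def sart_score(answers: dict) -> int:
    """answers: Likert ratings keyed by item number, e.g. {'4.1': 5, ...}."""
    demand = sum(answers[i] for i in ("4.1", "4.2", "4.3"))
    supply = sum(answers[i] for i in ("4.4", "4.5", "4.6", "4.7"))
    understanding = sum(answers[i] for i in ("4.8", "4.9"))
    return understanding - (demand - supply)

# Example with illustrative ratings only:
print(sart_score({"4.1": 4, "4.2": 3, "4.3": 4, "4.4": 5,
                  "4.5": 5, "4.6": 4, "4.7": 5, "4.8": 6, "4.9": 6}))
# -> 12 - (11 - 19) = 20
```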
Table 12
Results for demand, supply, understanding and overall SART scores per category.

                 C01    C02    C03    C04    C05    C06    C07    C08    C09    C10    C11    C12    C13    C14    C15    C16
Demand
  Mean:        10.58  11.67   9.50   9.75  10.18   9.25  10.60  10.33  11.00   7.00   9.78   9.33  11.50   9.25  10.59   9.20
  Std:          1.88   1.63   1.52   2.11   1.78   2.87   1.34   1.53   1.41   0.82   1.48   2.13   1.51   2.87   1.70   2.21
  Max:            14     14     12     13     13     13     12     12     12      8     12     13     14     13     14     13
  Min:             8      9      8      7      7      7      9      9     10      6      8      6      9      7      8      6
Supply
  Mean:        14.25  14.00  14.50  15.25  15.27  16.50  14.40  14.67  14.00  13.50  14.56  14.80  14.00  16.50  14.29  14.90
  Std:          3.70   3.85   3.89   3.84   4.13   2.38   1.34   1.53   1.41   2.38   3.17   3.75   3.30   2.38   3.14   3.61
  Max:            19     18     19     20     20     19     16     16     15     16     19     20     18     19     19     20
  Min:             9      9      9      8      8     14     13     13     13     11      9      8      9     14      9      8
Understanding
  Mean:        18.50  21.83  15.17  22.81  23.00  23.50  20.20  18.67  22.50  22.00  16.33  22.73  22.00  23.50  19.00  22.65
  Std:          5.90   3.31   6.24   3.25   2.86   4.12   3.27   3.21   2.12   3.56   5.48   2.96   2.93   4.12   5.22   3.23
  Max:            28     28     22     27     27     27     24     21     24     25     22     27     28     27     28     27
  Min:             8     19      8     18     19     19     15     15     21     17      8     17     19     19      8     17
SA
  Mean:        14.83  19.50  10.17  17.31  17.19  16.25  16.40  14.33  19.50  15.50  11.56  17.27  19.50  16.25  15.29  16.95
  Std:          8.39   6.80   7.49   3.81   3.67   4.79   5.68   6.03   4.95   5.57   6.97   4.18   6.05   4.79   7.55   4.11
  Max:            33     33     20     23     23     22     23     20     23     21     20     23     33     22     33     23
  Min:            -1     16     -1     11     13     11      8      8     16      8     -1      8     16     11     -1      8

4.6.2 Results from the de-briefing

The de-briefing of the scenarios without the use of AR technology shows that the participants value their current technology as sufficient in the first instance. Nevertheless, they also experience clear limitations of the current technology. Both the police team and the firemen in the ecstasy lab scenario used their cell phones to collect some visual material of the scene. The teams then used the collected material for the briefing of the next team. They noted that pictures taken with their cell phones lack sufficient detail for a proper briefing. One participant stated that sometimes he only recognizes that he needs further information when he is at the scene himself, after the other team has already left.
Two main issues were raised in the de-briefing of the scenarios with the use of AR. The first was that the majority of the participants considered the role of the remote person, with the possibility to share the local view of the scene, to add information immediately and to take pictures of the scene that can be used later on, an important added value of the new technology. With these abilities, the remote person can give advice and provide directions in stressful situations. It was reported as very useful that the remote user can easily take pictures of the scene, while this is much harder with the hand tracking method available to the local user. The remote users in particular valued the AR technology as having great potential. One limitation of the role of the remote user was also reported. The officers working in the close protection field stated that the AR technology would not be that useful in dynamic, threatening situations, as a local has to respond immediately to any danger occurring, leaving no time and room for waiting for and relying on another person's opinion. The advantage of the remote user in the AR scenario was thus summarized as an advisory one, without an important role in the on-the-spot decision-making and action-taking process.
The second issue concerns the situational awareness of the whole process. When one participant stated that by participating in the experiment "you are getting more aware of the other parties involved in the whole process and that your actions do have consequences for their work", the other participants agreed that the experiment increased their awareness of the process as a whole, and of their own role in it. The experiment showed clearly that each on-the-spot action has consequences for the work of other emergency services in the process, and that proper information transfer is crucial. AR technology can support the provision of information, but is seen as a means to increase situational awareness in the first place.
The majority of the participants agreed with the observation of one participant that the AR technology introduces a higher workload, which could distract from crucial tasks in such a situation. One solution to this challenge discussed by the participants was to introduce a new role, such as an AR expert, who accompanies the regular security team and handles the HMD-driven data collection on the spot.
Finally, participants can imagine the use of the AR technology for big events and for training. Participants especially considered it helpful if several local users could wear HMDs to share their views with several remote users, who would then collect and analyse the data and provide analysis results to the local users. In addition, a combination with GPS is considered a potential added value when used for the recognition of places and objects.

4.7 Discussion

Table 13 summarizes the overall findings of the study on collaboration and situational awareness.
Table 13
Overall results of the study on collaboration and situational awareness.

Positive aspects:
• Remote user is considered a useful advisor
• Remote user obtained the highest score for individual SA
• Collaboration with the remote user led to higher situational awareness
• AR lets users focus on details
• AR supports oral briefing on details
• AR increases awareness for the process as a whole

Negative aspects:
• Strong focus on details sometimes hindered the ability to gather the bigger picture of the scene
• AR technology introduces a higher workload
• Participants showed lower alertness with AR
• Some activities are slower than in real operations
The experiment further showed that participants, both local and remote, experienced lower arousal with AR technology, compared to the same scenario without AR technology support. Additionally, the reported focus and attention levels were lower with AR technology. Participants also reported that they had less mental capacity while using AR technology than while not using it. This issue could be related to the fact that the AR technology was new to all participants and that they had to adapt to the system, which demands additional mental capacity compared to the situation without AR technology. This result also matches the experience of a high workload reported by the participants in the de-briefing.
Operational units rely on quick and adequate access to and exchange of accurate context-related information (Lin et al. 2004). The exchange of and access to information is further a prerequisite for SA (Endsley 1995), and up-to-date information facilitates and maintains situational awareness of operational units (Straus et al. 2010). The experiment showed that AR technology can be used for context-related information access and exchange in the safety domain. While current technology (mostly mobile phones) is very limited in its ability to record and share a detailed picture of a crime scene, AR technology enables users to focus on details and supports oral communication on details of the crime scene. On the other hand, the strong focus on details sometimes hindered the ability to gather the bigger picture of the scene. Still, the possibility to share information among the different organisations using AR clearly showed the participants that their actions have consequences for the work of other emergency services in the process and that proper information transfer is crucial. Thereby, AR indirectly increased the awareness of the participants for inter-organisational collaboration and their own role in it. This is in line with (Reuter et al. 2014), who identified that shared information increases awareness along the organizational chain.
The experiment also illustrates shortcomings of the current technology. Some policemen experienced difficulties due to the temporary loss of visual tracking, which was caused by a very high pace of the tasks and by improper calibration of the marker-less tracking. As the RDSLAM system (Tan et al. 2013) relies on a computer vision-based algorithm, the quality of the calibration and online tracking strongly depends on both the richness of visible patterns (for the calibration step) and good illumination conditions in the physical environment. Occasional technical issues with interacting within the AR system were noticed during the experiment under such conditions. The participants pointed out that some actions were slower than in real operations.
The de-briefings clearly show that the participants see the most value of the AR technology in the introduction of a remote user with whom audio and video are shared in real time. This new role, including the ability to easily interact with the scene through the AR system by placing virtual objects, setting marks or taking pictures, is evaluated as an added value to the work at a crime scene. The remote user is considered a useful advisor in stressful situations and can provide the external support that action teams depend on (Sundstrom 1999). Using AR for such a virtual co-location of remote users might thus address the mismatch between the information needs of operational units and the ability of ICT to provide the information (Manning 1996; Sawyer and Tapia 2005). It was very beneficial that the interaction with the AR system was very easy for the remote person. The value of the remote user is also supported by the results of the post-test self-rating of SA. The remote user in the ecstasy lab scenario received the highest score for individual SA, and scored highest on understanding the situation. The de-briefing showed that the collaboration with the remote user also led to a higher team SA, as participants playing the local role greatly appreciated the advice and actions of the remote user.
The ability to simultaneously share the view of the crime scene is also seen critically with regard to privacy issues. As contact persons might not know who is connected to the AR system, the technology might not be accepted in all places, e.g. in work with VIPs. On the other hand, all participants mentioned the usefulness of the AR technology for big events and for training purposes.

5 Conclusions and future work

Operational teams in the security domain need to be provided with relevant and up-to-date information to facilitate and maintain situational awareness (Straus et al. 2010). A lack of situational awareness is identified as one of the major challenges for supporting mobile collaboration in emergencies (Reuter et al. 2014). Situational awareness (SA) develops when individuals involved in a certain situation look around, gather information about the situation, make inferences, test their inferences, and draw further inferences from the results (Endsley 1995).
This paper reported on the evaluation of an AR system that is being developed to promote information exchange as well as situational awareness for teams within the security domain. The evaluation was carried out in two rounds. Experts from different operational units in the security domain, i.e. the Dutch police, the Netherlands Forensic Institute (NFI) and the fire brigade of the port of Rotterdam, participated in each evaluation round. While the first evaluation round focused on the feasibility and usability of the AR system for the different operational units, the second evaluation round focused on the effect on collaboration and situational awareness.
The usability study showed that the scenarios are well defined and the AR system used was suitable for the tasks. The second test especially showed that the biggest advantage of the AR technology in the security domain lies in the introduction of a remote user, who is virtually co-located with the users at the crime scene. Such virtual co-location not only allows the remote user to see what the local users see, but also to provide additional information on the spot by augmenting the real environment with virtual objects. Both local and remote users can interact with the virtual content. The augmentation of the real scene triggered the collaboration of the involved organisations, e.g. the police marked possible evidence, the fire department checked and indicated hazardousness and the forensic institute planned for and collected evidence. The appreciation of the remote user is in line with a recent study on mobile collaboration support for emergencies, which revealed that remote team members would not only like to see the situation on site, but also be able to provide information to the local team members (Reuter et al. 2014).
The data on the SA rating and the de-briefing from the second experiment showed that the remote user provides the highest value to team SA. We can conclude that the AR system introduced in the experiments was able to support the perception of the crime scene, especially for the remote user, which had a positive impact on the comprehension and prediction of each other's actions in the collaborating team. Thus, it can be concluded that the introduced AR system led to a higher team SA. In future studies, we aim to further study the impact of AR on team SA. We will enhance our AR system to support mixed collaboration scenarios with multiple local and remote users. Additionally, we will extend our AR system with tools, e.g. a map showing the position and view direction of all local users, to further foster team SA.
Despite the limited maturity of the current state of the art, AR technology provides an added value to the security domain. It offers strong possibilities for further development as a tool for advice and support in stressful situations. So far, the study provided strong evidence for using AR technology for information exchange in teams operating in the security domain. The most notable and critical problem encountered is the current hardware limitation with regard to the mobility of the HMD device. In preparing the next version of our AR system, we take the findings of the current studies into account and will explore to what extent handheld devices, such as mobile phones, can be used to support local users. Still, AR and especially HMD technology is constantly evolving. We expect that AR technology is close to being adopted for real operations, like big events as proposed by the participants of the tests, or for training purposes. For the latter, we aim to further develop the existing scenarios into distributed multi-player AR games, to facilitate a positive effect on collaboration and SA of teams in the security domain.

Acknowledgments

The results presented in this article have been achieved as part of a joint project with the Dutch Police and the Netherlands Forensic Institute (NFI) sponsored by the National Coordinator for Security and Counterterrorism (NCTV) of the Netherlands. The authors would further like to thank Rory Clifford for the detailed review and feedback as well as Kjeld Schmidt for the discussions on the ambiguity of the term 'awareness'.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix

Appendix I

Table 14
Medians, interquartile range, and results of two-sided Wilcoxon rank sum tests per category (p-value).
https://static-content.springer.com/image/art%3A10.1007%2Fs10606-015-9235-4/MediaObjects/10606_2015_9235_Tab14_HTML.gif
Literature
go back to reference Alem, Leila, Franco Tecchia, and Weidong Huang (2011). Remote Tele-assistance System for Maintenance Operators in Mines. In: Proceedings of the 11th Underground Coaloperators Conference, University of Wollongon & the Australian Institute of Mining and Metallurgy, pp. 171-177. Alem, Leila, Franco Tecchia, and Weidong Huang (2011). Remote Tele-assistance System for Maintenance Operators in Mines. In: Proceedings of the 11th Underground Coaloperators Conference, University of Wollongon & the Australian Institute of Mining and Metallurgy, pp. 171-177.
go back to reference Almeida, Igor de Souza, Marina Atsumi Oikawa, Jordi Polo Carres, Jun Miyazaki, Hirokazu Kato, and Mark Billinghurst (2012). AR-based Video-Mediated Communication: A Social Presence Enhancing Experience. In: 14th IEEE Symposium on Virtual and Augmented Reality, (SVR), Washington, DC: IEEE Computer Society, pp. 125-130. Almeida, Igor de Souza, Marina Atsumi Oikawa, Jordi Polo Carres, Jun Miyazaki, Hirokazu Kato, and Mark Billinghurst (2012). AR-based Video-Mediated Communication: A Social Presence Enhancing Experience. In: 14th IEEE Symposium on Virtual and Augmented Reality, (SVR), Washington, DC: IEEE Computer Society, pp. 125-130.
go back to reference Artman, Henrik (2000). Team situation awareness and information distribution. Ergonomics, vol. 43, no. 8, pp. 1111-1128.CrossRef Artman, Henrik (2000). Team situation awareness and information distribution. Ergonomics, vol. 43, no. 8, pp. 1111-1128.CrossRef
Azuma, Ronald T. (1997). A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments, vol. 6, no. 4, pp. 355–385.
Azuma, Ronald T., Yohan Baillot, Reinhold Behringer, Steven Feiner, Simon Julier, and Blair MacIntyre (2001). Recent advances in augmented reality. Computer Graphics and Applications, vol. 21, no. 6, pp. 34–47.
Bekebrede, Geertje (2010). Experiencing complexity: a gaming approach for understanding infrastructure systems. NGI Infra PhD thesis series on infrastructures. Delft University of Technology, Delft, The Netherlands.
Benford, Steve, John Bowers, Lennart E. Fahlén, and Chris Greenhalgh (1994). Managing Mutual Awareness in Collaborative Virtual Environments. In: Proceedings of the Conference on Virtual Reality Software and Technology, Singapore: World Scientific Publishing Co., Inc., pp. 223–236.
Benford, Steve, Chris Greenhalgh, Tom Rodden, and James Pycock (2001). Collaborative Virtual Environments. Communications of the ACM, vol. 44, no. 7, pp. 79–85.
Berlin, Johan M., and Eric D. Carlström (2011). Why is collaboration minimised at the accident scene? Disaster Prevention and Management: An International Journal, vol. 20, no. 2, pp. 159–171.
Bharosa, Nitesh, Jinkyu Lee, and Marijn Janssen (2010). Challenges and obstacles in sharing and coordinating information during multi-agency disaster response: Propositions from field exercises. Information Systems Frontiers, vol. 12, no. 1, pp. 49–65.
Bly, Sara A., Steve R. Harrison, and Susan Irwin (1993). Media Spaces: Bringing People Together in a Video, Audio, and Computing Environment. Communications of the ACM, vol. 36, no. 1, pp. 28–46.
Bowers, Clint A., Florian Jentsch, Eduardo Salas, and Curt C. Braun (1998). Analyzing Communication Sequences for Team Training Needs Assessment. Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 40, no. 4, pp. 672–679.
Brandt, Eva (2006). Designing Exploratory Design Games: A Framework for Participation in Participatory Design? In: Proceedings of the Ninth Conference on Participatory Design: Expanding Boundaries in Design – Volume 1, New York, NY, USA: ACM, pp. 57–66.
Brown, Mary M. (2001). The Benefits and Costs of Information Technology Innovations: An Empirical Assessment of a Local Government Agency. Public Performance & Management Review, vol. 24, no. 4, pp. 351–366.
Burkhardt, Jean-Marie, Françoise Détienne, Anne-Marie Hébert, Laurence Perron, Stéphane Safin, and Pierre Leclercq (2009). An approach to assess the quality of collaboration in technology-mediated design situations. In: Proceedings of the European Conference on Cognitive Ergonomics: Designing beyond the Product – Understanding Activity and User Experience in Ubiquitous Environments, Helsinki, Finland: VTT Technical Research Centre of Finland, pp. 30:1–30:9.
Chalmers, Matthew (2002). Awareness, Representation and Interpretation. Computer Supported Cooperative Work (CSCW), vol. 11, nos. 3–4, pp. 389–409.
Datcu, Dragos, Stephan G. Lukosch, and Heide K. Lukosch (2013). Comparing Presence, Workload and Situational Awareness in a Collaborative Real World and Augmented Reality Scenario. In: Proceedings of the IEEE ISMAR Workshop on Collaboration in Merging Realities (CiMeR), 6 pages.
Datcu, Dragos, Marina Cidota, Heide K. Lukosch, and Stephan G. Lukosch (2014). [Poster] Using Augmented Reality to Support Information Exchange of Teams in the Security Domain. In: Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR '14), Washington, DC: IEEE Computer Society, pp. 263–264.
Domova, Veronica, Elina Vartiainen, and Marcus Englund (2014). Designing a Remote Video Collaboration System for Industrial Settings. In: Proceedings of the Ninth ACM International Conference on Interactive Tabletops and Surfaces, New York, NY, USA: ACM, pp. 229–238.
Dong, Suyang, Amir H. Behzadan, Feng Chen, and Vineet R. Kamat (2013). Collaborative Visualization of Engineering Processes Using Tabletop Augmented Reality. Advances in Engineering Software, vol. 55, pp. 45–55.
Dourish, Paul, and Victoria Bellotti (1992). Awareness and coordination in shared workspaces. In: Proceedings of the ACM 1992 Conference on Computer Supported Cooperative Work, New York, NY, USA: ACM, pp. 107–114.
Drabek, Thomas E., and David A. McEntire (2002). Emergent Phenomena and Multiorganizational Coordination in Disasters: Lessons from the Research Literature. International Journal of Mass Emergencies and Disasters (IJMED), vol. 20, no. 2, pp. 197–224.
Endsley, Mica R. (1988). Design and Evaluation for Situation Awareness Enhancement. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Thousand Oaks, CA, USA: Sage Publications, vol. 32, no. 2, pp. 97–101.
Endsley, Mica R. (1995). Toward a Theory of Situation Awareness in Dynamic Systems. Human Factors: The Journal of the Human Factors and Ergonomics Society, vol. 37, no. 1, pp. 32–64.
Endsley, Mica R. (2015). Situation Awareness Misconceptions and Misunderstandings. Journal of Cognitive Engineering and Decision Making, vol. 9, no. 1, pp. 4–32.
Endsley, Mica R., and William B. Jones (2001). A model of inter- and intrateam situation awareness: Implications for design, training and measurement. In: M. McNeese, E. Salas, and M. R. Endsley (eds.): New trends in cooperative activities: Understanding system dynamics in complex environments, vol. 7, Santa Monica, CA, USA: Human Factors and Ergonomics Society, pp. 46–47.
Endsley, Mica R., and Michelle M. Robertson (2000). Situation awareness in aircraft maintenance teams. International Journal of Industrial Ergonomics, vol. 26, no. 2, pp. 301–325.
Endsley, Mica R., Stephen J. Selcon, Thomas D. Hardiman, and Darryl G. Croft (1998). A Comparative Analysis of SAGAT and SART for Evaluations of Situation Awareness. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Thousand Oaks, CA, USA: Sage Publications, vol. 42, no. 1, pp. 82–86.
Ferrise, Francesco, Giandomenico Caruso, and Monica Bordegoni (2013). Multimodal training and tele-assistance systems for the maintenance of industrial products. Virtual and Physical Prototyping, vol. 8, no. 2, pp. 113–126.
Fish, Robert S., Robert E. Kraut, and Barbara L. Chalfonte (1990). The VideoWindow System in Informal Communication. In: Proceedings of the 1990 ACM Conference on Computer-Supported Cooperative Work, New York, NY, USA: ACM, pp. 1–11.
Fraser, Mike, Steve Benford, Jon Hindmarsh, and Christian Heath (1999). Supporting Awareness and Interaction Through Collaborative Virtual Interfaces. In: Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA: ACM, pp. 27–36.
Fussell, Susan R., Robert E. Kraut, and Jane Siegel (2000). Coordination of Communication: Effects of Shared Visual Context on Collaborative Work. In: Proceedings of the 2000 ACM Conference on Computer Supported Cooperative Work, New York, NY, USA: ACM, pp. 21–30.
Fussell, Susan R., Leslie D. Setlock, and Robert E. Kraut (2003). Effects of Head-mounted and Scene-oriented Video Systems on Remote Collaboration on Physical Tasks. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA: ACM, pp. 513–520.
Gauglitz, Steffen, Benjamin Nuernberger, Matthew Turk, and Tobias Höllerer (2014a). In Touch with the Remote World: Remote Collaboration with Augmented Reality Drawings and Virtual Navigation. In: Proceedings of the 20th ACM Symposium on Virtual Reality Software and Technology, New York, NY, USA: ACM, pp. 197–205.
Gauglitz, Steffen, Benjamin Nuernberger, Matthew Turk, and Tobias Höllerer (2014b). World-stabilized Annotations and Virtual Scene Navigation for Remote Collaboration. In: Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, New York, NY, USA: ACM, pp. 449–459.
Gaver, William W. (1991). Sound Support for Collaboration. In: Proceedings of the Second European Conference on Computer-Supported Cooperative Work, Norwell, MA, USA: Kluwer Academic Publishers, pp. 293–308.
Gaver, William W. (1992). The Affordances of Media Spaces for Collaboration. In: Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work, New York, NY, USA: ACM, pp. 17–24.
Gergle, Darren, Robert E. Kraut, and Susan R. Fussell (2013). Using Visual Information for Grounding and Awareness in Collaborative Tasks. Human-Computer Interaction, vol. 28, no. 1, pp. 1–39.
Gerosa, Marco A., Hugo Fuks, Alberto B. Raposo, and Carlos J. P. de Lucena (2004). Awareness Support in the AulaNet Learning Environment. In: Proceedings of the IASTED International Conference on Web-Based Education – WBE 2004, Innsbruck, Austria: ACTA Press, pp. 490–495.
Gu, Ning, Mi J. Kim, and Mary L. Maher (2011). Technological advancements in synchronous collaboration: The effect of 3D virtual worlds and tangible user interfaces on architectural design. Automation in Construction, vol. 20, no. 3, pp. 270–278.
Gurevich, Pavel, Joel Lanir, Benjamin Cohen, and Ran Stone (2012). TeleAdvisor: A Versatile Augmented Reality Tool for Remote Assistance. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA: ACM, pp. 619–622.
Gutwin, Carl, and Saul Greenberg (2002). A Descriptive Framework of Workspace Awareness for Real-Time Groupware. Computer Supported Cooperative Work (CSCW), vol. 11, nos. 3–4, pp. 411–446.
Harteveld, Casper (2011). Triadic Game Design: Balancing Reality, Meaning and Play. Berlin, Germany: Springer.
Huang, Weidong, Leila Alem, and Franco Tecchia (2013). HandsIn3D: Supporting Remote Guidance with Immersive Virtual Environments. In: P. Kotzé, G. Marsden, G. Lindgaard, J. Wesson, and M. Winckler (eds.): Human-Computer Interaction – INTERACT 2013, Heidelberg New York Dordrecht London: Springer, pp. 70–77.
Jones, Hank, and Pamela Hinds (2002). Extreme Work Teams: Using SWAT Teams As a Model for Coordinating Distributed Robots. In: Proceedings of the 2002 ACM Conference on Computer Supported Cooperative Work, New York, NY, USA: ACM, pp. 372–381.
Kraut, Robert E., Susan R. Fussell, and Jane Siegel (2003). Visual Information As a Conversational Resource in Collaborative Physical Tasks. Human-Computer Interaction, vol. 18, no. 1, pp. 13–49.
Lauwers, J. Chris, and Keith A. Lantz (1990). Collaboration awareness in support of collaboration transparency: requirements for the next generation of shared window systems. In: CHI '90 Conference on Human Factors in Computing Systems, Special Issue of the SIGCHI Bulletin, New York, NY, USA: ACM, pp. 303–311.
Leonard, Michael, S. Graham, and D. Bonacum (2004). The human factor: the critical importance of effective teamwork and communication in providing safe care. Quality and Safety in Health Care, vol. 13, suppl. 1, pp. i85–i90.
Li, Nico, Aditya Shekhar Nittala, Ehud Sharlin, and Mario Costa Sousa (2014). Shvil: Collaborative Augmented Reality Land Navigation. In: CHI '14 Extended Abstracts on Human Factors in Computing Systems, New York, NY, USA: ACM, pp. 1291–1296.
Lin, Chienting, Paul J.-H. Hu, and Hsinchun Chen (2004). Technology Implementation Management in Law Enforcement: COPLINK System Usability and User Acceptance Evaluations. Social Science Computer Review, vol. 22, no. 1, pp. 24–36.
Lukosch, Heide, Bas van Nuland, Theo van Ruijven, Linda van Veen, and Alexander Verbraeck (2014). Building a Virtual World for Team Work Improvement. In: S. A. Meijer and R. Smeds (eds.): Frontiers in Gaming Simulation, vol. 8264, pp. 60–68.
Manning, Peter K. (1996). Information Technology in the Police Context: The “Sailor” Phone. Information Systems Research, vol. 7, no. 1, pp. 52–62.
Milgram, Paul, and Fumio Kishino (1994). A taxonomy of mixed reality visual displays. IEICE Transactions on Information and Systems, vol. E77-D, no. 12, pp. 1321–1329.
Militello, Laura G., Emily S. Patterson, Lynn Bowman, and Robert Wears (2007). Information flow during crisis management: challenges to coordination in the emergency operations center. Cognition, Technology & Work, vol. 9, no. 1, pp. 25–31.
Niehaus, James, and Mark Riedl (2009). Scenario adaptation: An approach to customizing computer-based training games and simulations. In: AIED 2009 Workshops Proceedings, Volume 3: Intelligent Educational Games, pp. 89–98.
Nilsson, Susanna, Björn Johansson, and Arne Jönsson (2009). Using AR to support cross-organisational collaboration in dynamic tasks. In: Proceedings of the 8th IEEE International Symposium on Mixed and Augmented Reality (ISMAR 2009), Washington, DC: IEEE Computer Society, pp. 3–12.
Pascual, R. G., M. C. Mills, and Carol Blendell (1999). Supporting distributed and ad-hoc team interaction. In: International Conference on Human Interfaces in Control Rooms, Cockpits and Command Centres, Stevenage, UK: IET, pp. 64–71.
Poelman, Ronald, Oytun Akman, Stephan Lukosch, and Pieter Jonker (2012). As if Being There: Mediated Reality for Crime Scene Investigation. In: CSCW '12: Proceedings of the 2012 ACM Conference on Computer Supported Cooperative Work, New York, NY, USA: ACM, pp. 1267–1276.
Procyk, Jason, Carman Neustaedter, Carolyn Pang, Anthony Tang, and Tejinder K. Judge (2014). Exploring Video Streaming in Public Settings: Shared Geocaching over Distance Using Mobile Video Chat. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, New York, NY, USA: ACM, pp. 2163–2172.
Reuter, Christian, Thomas Ludwig, and Volkmar Pipek (2014). Ad Hoc Participation in Situation Assessment: Supporting Mobile Collaboration in Emergencies. ACM Transactions on Computer-Human Interaction (TOCHI), vol. 21, no. 5, pp. 26:1–26:26.
Salmon, Paul M., Neville A. Stanton, Guy H. Walker, Chris Baber, Daniel P. Jenkins, Richard McMaster, and Mark S. Young (2008). What really is going on? Review of situation awareness models for individuals and teams. Theoretical Issues in Ergonomics Science, vol. 9, no. 4, pp. 297–323.
Salmon, Paul M., Neville A. Stanton, Guy H. Walker, Daniel Jenkins, Darshna Ladva, Laura Rafferty, and Mark Young (2009). Measuring Situation Awareness in complex systems: Comparison of measures study. International Journal of Industrial Ergonomics, vol. 39, no. 3, pp. 490–500.
Sawyer, Steve, and Andrea Tapia (2005). The Sociotechnical Nature of Mobile Computing Work: Evidence from a Study of Policing in the United States. International Journal of Technology and Human Interaction, vol. 1, no. 3, pp. 1–14.
Schmidt, Kjeld (2002). The Problem with “Awareness”: Introductory Remarks on “Awareness in CSCW”. Computer Supported Cooperative Work (CSCW), vol. 11, nos. 3–4, pp. 285–298.
Schmidt, Kjeld (2011). Cooperative Work and Coordinative Practices – Contributions to the Conceptual Foundations of Computer-Supported Cooperative Work (CSCW). London Dordrecht Heidelberg New York: Springer.
Schnier, Christian, Karola Pitsch, Angelika Dierker, and Thomas Hermann (2011). Collaboration in Augmented Reality: How to establish coordination and joint attention? In: S. Bødker, N. O. Bouvin, V. Wulf, L. Ciolfi, and W. Lutters (eds.): ECSCW 2011: Proceedings of the 12th European Conference on Computer Supported Cooperative Work, London: Springer, pp. 405–416.
Schön, Donald A. (1983). The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books.
Smith, Paul A., Chris Baber, John Hunter, and Marc Butler (2008). Measuring team skills in crime scene investigation: exploring ad hoc teams. Ergonomics, vol. 51, no. 10, pp. 1463–1488.
Stammers, Rob B., and J. Hallam (1985). Task allocation and the balancing of task demands in the multi-man-machine systems: some case studies. Applied Ergonomics, vol. 16, pp. 251–257.
Stanton, Neville A., R. Stewart, Don Harris, R. J. Houghton, Chris Baber, Richard McMaster, Paul M. Salmon, G. Hoyle, Guy H. Walker, Mark S. Young, M. Linsell, R. Dymott, and D. Green (2006). Distributed situation awareness in dynamic systems: theoretical development and application of an ergonomics methodology. Ergonomics, vol. 49, nos. 12–13, pp. 1288–1311.
Straus, Susan G., Tora K. Bikson, Edward Balkovich, and John F. Pane (2010). Mobile Technology and Action Teams: Assessing BlackBerry Use in Law Enforcement Units. Computer Supported Cooperative Work (CSCW), vol. 19, no. 1, pp. 45–71.
Streefkerk, Jan W., Caro Wiering, Myra van Esch-Bussemakers, and Mark Neerincx (2008). Effects of Presentation Modality on Team Awareness and Choice Accuracy in a Simulated Police Team Task. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, Thousand Oaks, CA, USA: Sage Publications, vol. 52, no. 4, pp. 378–382.
Streefkerk, Jan W., Mark Houben, Pjotr van Amerongen, Frank ter Haar, and Judith Dijk (2013). The ART of CSI: An Augmented Reality Tool (ART) to Annotate Crime Scenes in Forensic Investigation. In: R. Shumaker (ed.): Virtual, Augmented and Mixed Reality: Systems and Applications, Berlin Heidelberg: Springer, vol. 8022, pp. 330–339.
Sundstrom, Eric D. (1999). The Challenges of Supporting Work Team Effectiveness. In: E. D. Sundstrom (ed.): Supporting work team effectiveness: best management practices for fostering high performance, San Francisco, CA, USA: Jossey-Bass Inc., Publishers, pp. 3–23.
Tan, Wei, Haomin Liu, Zilong Dong, Guofeng Zhang, and Hujun Bao (2013). Robust monocular SLAM in dynamic environments. In: Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR), Washington, DC: IEEE Computer Society, pp. 209–218.
Taylor, R. M. (1990). Situational awareness rating technique (SART): The development of a tool for aircrew systems design. In: Proceedings of the Symposium on Situational Awareness in Aerospace Operations (AGARD-CP-478), pp. 3/1–3/17.
Taylor, R. M., and S. J. Selcon (1994). Situation in mind: Theory, application and measurement of situational awareness. In: R. D. Gilson, D. J. Garland, and J. M. Koonce (eds.): Situational awareness in complex settings, Daytona Beach, FL, USA: Embry-Riddle Aeronautical University Press, pp. 69–78.
Van Knippenberg, Daan, Carsten K. W. De Dreu, and Astrid C. Homan (2004). Work Group Diversity and Group Performance: An Integrative Model and Research Agenda. Journal of Applied Psychology, vol. 89, no. 6, pp. 1008–1022.
Voida, Amy, Stephen Voida, Saul Greenberg, and Helen A. He (2008). Asymmetry in Media Spaces. In: Proceedings of the 2008 ACM Conference on Computer Supported Cooperative Work, New York, NY, USA: ACM, pp. 313–322.
Wang, Xiangyu, and Phillip S. Dunston (2011). Comparative Effectiveness of Mixed Reality-Based Virtual Environments in Collaborative Design. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 41, no. 3, pp. 284–296.
Wichert, Reiner (2002). Collaborative Gaming in a Mobile Augmented Reality Environment. In: Proceedings of the Ibero-American Symposium in Computer Graphics 2002, pp. 31–37.
Wille, Matthias, Britta Grauel, and Lars Adolph (2013). Strain caused by head mounted displays. In: D. de Waard, K. Brookhuis, R. Wiczorek, F. Di Nocera, P. Barham, C. Weikert, A. Kluge, W. Gerbino, and A. Toffetti (eds.): Proceedings of the Human Factors and Ergonomics Society Europe Chapter 2013 Annual Conference, pp. 267–277.
Metadata
Title: Providing Information on the Spot: Using Augmented Reality for Situational Awareness in the Security Domain
Authors: Stephan Lukosch, Heide Lukosch, Dragoş Datcu, Marina Cidota
Publication date: 01-12-2015
Publisher: Springer Netherlands
Published in: Computer Supported Cooperative Work (CSCW), Issue 6/2015
Print ISSN: 0925-9724
Electronic ISSN: 1573-7551
DOI: https://doi.org/10.1007/s10606-015-9235-4
