Published in: Intelligent Service Robotics 4/2019

Open Access 20-07-2019 | Original Research Paper

Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—a user study

Authors: Kevin Huang, Digesh Chitrakar, Fredrik Rydén, Howard Jay Chizeck

Abstract

This work presents a user-study evaluation of various visual and haptic feedback modes on a real telemanipulation platform. Of particular interest is the potential for haptic guidance virtual fixtures and 3D-mapping techniques to enhance efficiency and awareness in a simple teleoperated valve turn task. An RGB-Depth camera is used to gather real-time color and geometric data of the remote scene, and the operator is presented with either a monocular color video stream, a 3D-mapping voxel representation of the remote scene, or the ability to place a haptic guidance virtual fixture to help complete the telemanipulation task. The efficacy of the feedback modes is then explored experimentally through a user study, and the different modes are compared on the basis of objective and subjective metrics. Despite the simplistic task and numerous evaluation metrics, results show that the haptic virtual fixture resulted in significantly better collision avoidance compared to 3D visualization alone. Anticipated performance enhancements were also observed moving from 2D to 3D visualization. Remaining comparisons lead to exploratory inferences that inform future direction for focused and statistically significant studies.
Notes

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

1 Introduction

In teleoperation involving high-value or delicate structures, it is imperative that the operator perceive as much information about the remote environment as possible without being distracted. Feedback modalities beyond simple monocular vision can assist the operator, for example by overlaying sensory information on data captured from the remote scene using virtual fixtures [42]. This is particularly useful when the teleoperated task is known in advance. With an effective operator interface, the user can make the best decisions to efficiently complete the task with minimal physical and mental effort.
Operator performance can be improved with the implementation of haptic virtual fixtures and new vision modalities. However, haptic virtual fixtures for teleoperation are predominantly employed to increase awareness and understanding, not to directly assist in the teleoperated task [34]. For example, a novel augmented reality interface coupled with virtual fixtures reduced overall positioning error in a maintenance task; however, the fixtures primarily assisted as operator notifications [29]. Implementations of task-specific and directly assistive fixtures need to be evaluated in order to be widely deployed, e.g., in telemanipulation. Therefore, user studies are required to gauge relative impact of haptic and visual augmentations for telemanipulation.
Ni et al. investigated operator performance and taskload when servoing a robot manipulator to reach a point target using haptic fixtures within a virtual reality environment [36]. While promising, this research involved unconstrained tasks: providing arbitrary motion commands and navigating to a point location. Many robotic tasks require trajectory following or finer, more constrained commands, especially in the presence of sensitive structures. Virtual fixtures have also been shown to aid the learning of new interfaces: a user study in [14] validated performance gains in an sEMG-controlled 2D game. Real-world tasks, however, require manipulation in three spatial dimensions. In another study, haptic virtual fixtures were implemented to execute multiple grasps of various objects [26]. The only metric evaluated was the number of objects grasped, and the task was positioning of the grasper. More general performance metrics provide broader insight into the benefits and drawbacks of 3D visualization and task-specific haptic virtual fixtures.
This study explores the effects of user-placed haptic guidance virtual fixtures and 3D-mapping methods for a telemanipulation interface. In particular, the following teleoperation user interface feedback types were examined:
1. Visual feedback
   • 2D monocular RGB
   • 3D voxel representation
2. Task-specific haptic guidance
Contributions
This work investigates the aforementioned feedback modes in a bilateral teleoperation task through extensive user-study experiments. These experiments are used to:
1. Quantify the effects of feedback mode in several performance metrics.
2. Provide baseline insight for designing feedback for user interfaces in telemanipulation.
3. Provide concrete directions and comparisons for future experiments.

2 Background

2.1 Visual feedback in teleoperation

Binocular vision has been shown to improve teleoperator speed and precision [11], but requires bulky and expensive 3D capable displays. In contrast, inexpensive commodity RGB-D cameras have provided non-contact means of collecting geometrical information and have grown in popularity in teleoperation applications [10, 33, 37, 51]. Stereo vision is another widely used alternative, and in robot-assisted minimally invasive surgery, stereo endoscopes provide real-time 3D geometries often represented as point clouds [24, 46–48]. It has been shown that providing the teleoperator with depth information in the form of a real-time point cloud for certain navigation tasks can improve performance when compared to monocular RGB streams [32]. Another efficient method of displaying 3D surface geometries is through voxel occupancy grids [15, 53], which can preserve previously observed occluded geometries.
However, with depth information, several factors can deteriorate the quality of feedback and increase confusion for the viewer. For example, sensor resolution may be an issue when resolving smaller manipulation targets—the visual data may only provide a general localization of the object. Furthermore, depending on material properties, depth information may be noisy or arbitrary (e.g., transparent materials, glancing angles or light-absorbing materials) [30, 35]. Other variables such as measurement distance have also been shown to affect measurement noise and density [22]. Because of this variability, it is not certain that 3D-mapping techniques can improve operator performance in telemanipulation tasks.
Mast et al. [32] determined that the usefulness of voxel-based 3D-mapping in navigation varied between environments. In this work, the utility of such 3D representations when applied to a manipulation task is explored. In such a task, 3D information may not be conveyed in a useful way and could in fact be detrimental and confusing when dealing with movement and small objects such as a valve handle. Additionally, occlusions could result in lack of information and heightened interpretation effort, while a monocular RGB stream is intuitive and familiar to most users.

2.2 Haptic feedback in teleoperation

Forbidden region haptic virtual fixtures help to prevent the operator from entering an undesired configuration. In [17, 18], forbidden region virtual fixtures in combination with pretouch sensing were used to help prevent unwanted contacts during exploration of an unknown, potentially delicate object. Similar types of forbidden region fixtures have found their use in surgical contexts [27, 39]. In contrast to forbidden region schemes, haptic guidance virtual fixtures push, prod or otherwise guide the operator’s hand in a desired direction or trajectory [1, 3, 31]. This is useful for maintaining a predefined trajectory [2, 38, 43, 44] as well as adaptive constraints [40]. In cases where depth perception is difficult, avoiding contacts and maintaining a safe, desired path can be assisted with such virtual fixtures. Vision and haptic virtual fixtures have been used in tandem for novel clinical applications as well [7, 8].
Moreover, since the virtual environment and force feedback are calculated in software, the actual robot end effector can be locked out of deviating from the desired path, while a guiding force is applied to the user. In [19], a flexible guidance fixture was demonstrated where computer vision was used to identify obstacles obstructing a predefined 2D virtual guidance trajectory. A modified trajectory was calculated that avoided obstacles. The above types of guidance fixtures deal with fixed, predefined paths. In the case of telemanipulation in an unknown environment, while the task may be predefined, its ideal configuration in the remote location is difficult to determine. However, when enough information about the physical task space is obtained, it is feasible and desirable for the teleoperator to place the desired trajectory [52].

2.3 Comparative studies

Goodrich et al. [12] outlined several different schemes and levels for providing autonomous assistance in teleoperation. Peon et al. [41] explored the effect of different haptic modalities in combination with audio feedback on subject response time to violating a spatiotemporal constraint. Wang et al. [52] showed improvements in user execution time and accuracy from the combined visual and haptic display of a guidance virtual fixture in a computer simulation, i.e., the user could see the desired trajectory as well as feel guidance forces. The guidance virtual fixture was generated and placed by the user using a computer mouse on a virtual surface [52]. While this method limited the user-defined virtual guidance fixture to the face of an object, it is extendable to trajectories in three-dimensional space. In a similar study, Kuiper et al. [25] examined haptic and visual feedback modes and their effects for nonholonomic steering, whereby a user controlled a nonholonomic vehicle in a simulated steering task with different levels of constraints. The virtual fixtures were used to guide users along predicted or suggested vehicle paths. It was found that visual feedback is needed for improvements when providing only predicted trajectories. In [5], various haptic feedback modes for visuo-manual tracking of predefined trajectories, namely writing different Arabic and Japanese characters, were evaluated. It was suggested that haptic feedback can assist in learning to write 2D characters. In a similar study, it was shown that haptic information from handwriting can be compared and classified based on the users’ kinematic variations [54]. Different haptic assistance levels were assessed in completing a 2DOF maze navigation task in [40].
It has been established that haptic virtual fixtures can be useful for object avoidance and following a predefined path based on a priori information [3, 5], and several works explore 2D effects [5, 40]. In this work, the user is given the ability to manually set the virtual fixture for a useful path with predefined geometry to complete a 3D telemanipulation task—however, this path is not required to complete the task. For properly and efficiently placing this fixture, it is imperative that the user be provided with the real-time 3D-mapping representation—the negative effects of inaccuracies in shared haptic guidance feedback are described in [6]. In this study, depending on several spatial, temporal and sensor-limited factors (e.g., the user may have a difficult time placing the virtual fixture), it is not clear whether such a feedback option is beneficial.

3 Experimental setup

3.1 System description

The setup for this project is a bilateral teleoperation arrangement. At the master console station, the teleoperator manipulates a haptic device, the Sensable PHANToM Omni. This device sends 3DOF position commands to control the end effector location of the youBot, and it also receives and displays 3DOF haptic force feedback commands from the virtual fixture software. In addition, the user is presented with visual feedback on an LCD monitor. The teleoperator’s goal is to manipulate the robot arm to turn a gas valve. The master console setup is illustrated in Fig. 1.
The remote robot proxy is a KUKA youBot robot with a PrimeSense Carmine RGB-D camera. The youBot has an omnidirectional base and a 5 degree-of-freedom (DOF) manipulator. In this implementation, these joints are controlled by a National Instruments CompactRIO real-time controller, and commands to the master console are transmitted via an Asus AC router. These features are shown in Fig. 2.
To evaluate the effectiveness of the described feedback modes and user-placed guidance fixtures, teleoperator performances during the valve turn task are compared under the following different user feedback conditions:
1. Visual only, monocular RGB stream (R)
2. Visual only, 3D-mapping voxel method (V)
3. Visual and haptic guidance virtual fixture (VF)
Scenario 1 (R) represents a particularly simple baseline case, RGB streaming video, which is still employed in teleoperated tasks.
Scenario 2 (V) provides a baseline for 3D visual representation. The user is also able to rotate and translate his or her view within the 3D representation. This representation includes a 3D voxel map of a volume enclosing the youBot’s task space, which is updated with RGB-D sensor information based on a Bayesian statistical model to determine binary occupancy state. Similar methods were explored in [15, 35]; however, in this study, the voxel allocation and update are hardware accelerated to ensure real-time acquisition and fast response to motion.
Scenario 3 (VF) provides the operator with a guidance virtual fixture of proper shape for the task, together with the visualization from Scenario 2. This fixture prevents the operator from deviating from a path known to successfully complete the valve turn, helps avoid undesired contacts, and mitigates confusion caused by occlusion of the valve by the manipulator. The trajectory is a series of finely sampled, ordered points. Because the environment is rendered as a 3D voxel grid, it is simple and quick for the operator to place the visualized desired trajectory properly. The three feedback modes are described in detail in Sect. 4.5.
It is of interest to determine whether or not, in this telemanipulation task, 3D-mapping techniques will improve operator performance, even if displayed to the user with 2D visual display. Furthermore, the efficacy of a user-placed guidance trajectory is explored. A comparison is sought between 2D monocular RGB stream (R), 3D voxel mapping techniques (V), and visual + haptic feedback (VF). Three questions are being explored:
1. Does the addition of 3D-mapping techniques improve user performance, decrease workload or increase awareness?
2. Do manually placed haptic virtual fixtures provide additional improvements?
3. Which comparisons warrant immediate further study?
Because of the variability of the above factors, the nature of this experiment is exploratory and investigates a broad range of metrics with a simple, generalizable task. The results of this work will provide insight into the suitability of 3D-mapping methods and manually placed virtual fixtures for feedback in telemanipulation tasks, and inform metrics for future studies.

4 Methods

4.1 Experimental task

The operator is asked to complete a valve turn task. Such a task is motivated from a disaster recovery perspective. In the case of a gas leak during a natural disaster, teleoperation is attractive because it reduces risk to human responders. Moreover, a teleoperated device may be better designed to reach constrained physical scenarios than a human being. In this study, the valve to be turned consists of a ball valve structure with \(90^\circ \) dynamic range. The task can be broken down into two subtasks:

Task A: turning the ball valve from the 12 o’clock position to the 3 o’clock position.
Task B: turning the ball valve from the 3 o’clock position to the 12 o’clock position.
The task is depicted in Fig. 3, and it is with this setup that the user study for the project was conducted.
The slave robotic device, as described in Sect. 3.1, consists of a modified KUKA youBot platform and is assumed to have already reached the task location. The user is not required to navigate the robot base.

4.2 Subject recruitment

In this study, recruitment was performed on campus, and subjects consisted solely of undergraduate and graduate students. As described previously, a total of three test conditions exist. A between-subjects design was employed, in which 21 male subjects participated (seven in each test group). Their ages ranged from 18 to 35 years (mean age: R 25.143; V 23.000; VF 26.143). Participants were chosen to be male to avoid any effects due to possible differences between males and females in spatial problem solving as described by [20].
Each of the participants used computers at least 10 h per week. In each group (seven participants total), six of the participants played less than 2 h per week of video games, while exactly one participant played more than 10 h per week (mean videogame usage per week R: 2.214; V: 1.857; VF: 2.000). None of the participants had prior experience using the Sensable PHANToM Omni.

4.3 Metrics

In this project, both objective and subjective metrics were employed for comparison. In particular, objective performance metrics included:
  • Time to complete the valve turn task (s)
  • Path length of the end effector (mm)
  • Number of undesired collisions
  • Jerk of the end effector \(\left( \frac{{\hbox {m}}}{{\hbox {s}}^3}\right) \)
Each trial was video recorded, and undesired collisions were manually counted post-experiment. After completion of the task, subjective measures were assessed via post-task questionnaires evaluating:
  • Perceived workload
  • Situational awareness
Perceived workload was measured using the unweighted NASA Task Load Index (TLX) [13], and situational awareness using the three-dimensional Situation Awareness Rating Technique (SART) [50], scaled to (0, 120). Situational awareness is critical for producing effectual robot behavior [12].

4.4 Procedure

The experiments were conducted in an office and the hallway corridor outside. The participant teleoperated from the master console within the office, while the simulated remote environment was in the hallway out of view from the subject. In the hallway, the youBot and the valve structure were placed in the same location for each experiment. Prior to the experiment, the users were allowed to see the valve and robot position, and were further allowed to turn the valve manually to obtain a sense of the range of motion as well as the torque needed to turn the valve.
After viewing the valve structure and youBot, the subjects underwent a training period lasting until 20 min had elapsed or the user was satisfied, whichever came first. (In all cases in this study, the user was satisfied before the 20 min elapsed.) The training session occurred with the youBot in the office space within view of the operator. During training, the user teleoperated only with his or her assigned feedback mode. For the monocular RGB mode (R), the user was presented with a \(640\times 480\) video stream of the manipulator in well-lit conditions; for (V), a voxelized representation; and for (VF), the voxel map visual feedback plus the ability to place a haptic guidance fixture. Data was acquired at 50 Hz, and visual feedback updated at 30 Hz, the data acquisition rate of the PrimeSense Carmine camera. The haptic update rate was set at 1200 Hz to maintain realistic force feedback. (A minimum rate of 1 kHz is needed for realistic interaction [28].)
For each subject, once the experiment began, noise-isolating ear protection was placed on the participant’s ears, and the trial was videotaped for post-processing of unwanted collisions. Each subject performed ten trials in order: Task A five times, followed by Task B five times. Between trials, the robot was homed to a fixed starting configuration. The user was timed from first movement from this home position until the valve was turned completely. In mode VF, the user placed the guidance virtual fixture during each trial, i.e., ten times per subject.

4.5 Visual and haptic feedback design

4.5.1 Monocular RGB, (R)

Monocular RGB feedback (R) is a simple and widely available baseline case. The user was presented only with streaming RGB video displayed on the LCD monitor. Figure 4 shows a typical screenshot of the visual feedback from this mode, which was rendered in OpenGL using components found in RViz.

4.5.2 Voxel-based 3D-mapping, (V)

In the 3D-mapping mode (V), a voxelized cube with side length of one meter and resolution of 5 mm was graphically rendered in front of the youBot and is depicted in Fig. 5.
A simple Bayesian update method using heuristically tuned update parameters was used to determine voxel occupancy. The voxel generation scheme can be summarized as pseudocode below:
In each RGB-D frame, for each voxel, project the voxel onto the depth image. The voxel is represented in the depth image by a pixel location p and a depth value v. At p, the measured depth image also has a camera-measured depth representing real-world data; call it s. The two depths, v and s, represent the voxel depth and the sensed surface depth, respectively. If \(v\le s\) (i.e., the voxel is closer than the surface sensed by the camera), determine the voxel occupancy state, O, via a Bayesian update rule.
In this way, the occupancy was updated every RGB-D frame while preserving occupancy states of now occluded voxels. \(i,j,k,\tau \) were all heuristically tuned. The algorithm is highly parallelizable and was hardware accelerated to ensure real-time acquisition and fast response to motion. In mode V, the user could view the occupancy grid from various angles using a computer mouse. The RGB-D data was captured at an acquisition rate of 30 Hz, and again the visual feedback was rendered in OpenGL using components found in RViz. The same methods can be used to generate voxel occupancy grids from stereo captured point clouds.
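The per-voxel update above can be sketched as follows. This is a minimal illustration assuming a log-odds formulation; the increment, decrement, and threshold constants below are hypothetical stand-ins for the heuristically tuned parameters \(i,j,k,\tau \), which are not fully specified in the text.

```python
# Hypothetical constants standing in for the paper's heuristically
# tuned parameters (the exact values are not given in the text).
LOGODDS_HIT = 0.85      # evidence increment: voxel coincides with sensed surface
LOGODDS_MISS = -0.4     # evidence decrement: voxel lies in observed free space
TAU = 0.0               # occupancy decision threshold on the log-odds

def update_voxel(logodds, v, s, eps=0.005):
    """One Bayesian update for a single voxel.

    logodds -- current log-odds of occupancy for this voxel
    v       -- depth of the voxel projected into the depth image
    s       -- sensed surface depth at the same pixel
    eps     -- depth tolerance (~ the 5 mm voxel resolution)
    """
    if v <= s:  # voxel is not occluded by the sensed surface
        if abs(v - s) <= eps:
            logodds += LOGODDS_HIT    # voxel sits on the surface: more occupied
        else:
            logodds += LOGODDS_MISS   # voxel is in free space: less occupied
    # if v > s the voxel is occluded: previous state is preserved
    return logodds

def occupied(logodds):
    """Binary occupancy state O from the accumulated log-odds."""
    return logodds > TAU
```

Because occluded voxels (v > s) are skipped rather than decremented, previously observed geometry behind the manipulator keeps its occupancy state, as described above.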

4.5.3 Guidance virtual fixture, (VF)

In the haptic virtual guidance fixture feedback mode (VF), the user was provided with the same 3D visual feedback described for mode V. In addition to this, the user was able to place a visualized path (a green colored arc) on the ball valve structure, as shown in Fig. 6.
This path provided haptic feedback once placed and ideally passed through occupied voxels representing objects of interaction. For the valve turn in particular, it was desired that the path pass through the voxels representing the handle of the ball valve. The guidance path consisted of a \(90^{\circ }\) circular arc whose radius is a set fraction of the handle length. This path lies in a plane normal to the ball valve’s axis of rotation and ensures sufficient torque applied to the ball valve from the end effector while allowing acceptable deviation from the ideal circular arc.
In order to render the haptic feedback, the path is first sampled as a set of spatially ordered points. As the operator approaches the guidance path to within X of any sampled point, an attractive haptic well is generated around that point. The force profile of this haptic well is defined by a simple piecewise cubic polynomial, as described by Eq. 1.
$$\begin{aligned} f(x) = {\left\{ \begin{array}{ll} a_2x^2+a_3x^3 &{}0\le x<\frac{X}{2}\\ a_2(X-x)^2+a_3(X-x)^3&{}\frac{X}{2}\le x\le X\\ 0&{}\text {else} \end{array}\right. } \end{aligned}$$
(1)
This cubic polynomial results in a haptic well force profile whose shape is shown in Fig. 7.
The effect of this force profile is twofold. Firstly, the user can remove themselves from the guidance fixture by moving beyond X of the guidance point. Secondly, the user is strongly encouraged to stay within \(\frac{X}{2}\) of the guidance point while receiving force feedback. In this work, \(X =\) 1 cm, and the peak guidance force is scaled to 2.5 N.
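Equation 1 can be made concrete with the values above (\(X = 1\) cm, peak force 2.5 N). The coefficients \(a_2, a_3\) are not given in the text; the choice below, \(a_2 = 12F_{\max }/X^2\) and \(a_3 = -16F_{\max }/X^3\), is one assumption that makes the profile vanish at \(x=0\) and \(x=X\) with a smooth peak of \(F_{\max }\) at \(x=X/2\).

```python
def well_force(x, X=0.01, F_max=2.5):
    """Magnitude (N) of the attractive haptic-well force at distance
    x (m) from a sampled guidance point, following Eq. 1.

    The coefficients a2, a3 are an assumed choice (not stated in the
    paper): zero force and slope at x = 0, peak F_max at x = X/2,
    zero force at x = X.
    """
    a2 = 12.0 * F_max / X**2
    a3 = -16.0 * F_max / X**3
    if 0.0 <= x < X / 2:
        return a2 * x**2 + a3 * x**3
    elif X / 2 <= x <= X:
        return a2 * (X - x)**2 + a3 * (X - x)**3
    return 0.0  # beyond X: no guidance force
```

With this choice the force ramps up smoothly, peaks halfway into the well, and releases the user cleanly at the boundary, matching the twofold effect described above.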
To move along the path, the current closest point and directly adjacent ordered points are considered. When the user moves and an adjacent point is now closer, the haptic well around the current point is attenuated, while a new haptic well is enforced at the new closest point. This process is repeated on the subsequent points, guiding the user along the ordered points. If the user leaves the guidance fixture, the entire procedure is repeated.
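The closest-point progression described above can be sketched as follows; this is a simplified illustration with our own function and variable names, considering only the active point and its immediate neighbors each cycle.

```python
import numpy as np

def step_active_point(path_pts, idx, tool_pos):
    """Advance the active guidance point along an ordered path.

    path_pts -- (N, 3) array of ordered trajectory samples
    idx      -- index of the currently active (closest) point
    tool_pos -- current 3-D position of the haptic tool tip

    Only the active point and its directly adjacent ordered points
    are examined; when a neighbor becomes closer, the haptic well is
    attenuated at the old point and enforced at the new one.
    """
    candidates = [i for i in (idx - 1, idx, idx + 1)
                  if 0 <= i < len(path_pts)]
    dists = [np.linalg.norm(path_pts[i] - tool_pos) for i in candidates]
    return candidates[int(np.argmin(dists))]
```

Repeating this each haptic cycle walks the well along the ordered samples in either direction; if the tool leaves the well entirely, the nearest-point search restarts as described above.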

5 Results

5.1 Quantitative metrics

Time to completion was measured from initial movement from the pre-calibrated home position to when the valve turn was completed. Unwanted collisions were manually labeled post-experiment; physical contact of the robot manipulator with any object other than the valve handle was considered unwanted. Distinct contacts required lift-off between them, i.e., a dragged contact was counted only once. Path length was calculated by applying forward kinematics to the recorded joint angles to obtain the end effector trajectory. Finally, differentials of the sampled position were used to approximate jerk, and a low-pass filter removed the high-frequency components introduced through discrete differentiation. These metrics measure operator performance in the valve-turning task.
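The jerk estimate can be sketched as below. The 50 Hz sampling rate is stated in Sect. 4.4, but the paper does not describe its low-pass filter design, so a simple moving average stands in here.

```python
import numpy as np

def estimate_jerk(positions, fs=50.0, win=5):
    """Approximate end-effector jerk magnitude from sampled positions.

    positions -- (N, 3) array of end-effector positions (m) at fs Hz
    win       -- moving-average window, a stand-in for the paper's
                 (unspecified) low-pass filter
    """
    dt = 1.0 / fs
    # Third finite difference approximates the third time derivative.
    jerk = np.diff(positions, n=3, axis=0) / dt**3
    mag = np.linalg.norm(jerk, axis=1)          # jerk magnitude per sample
    kernel = np.ones(win) / win
    return np.convolve(mag, kernel, mode="valid")  # smoothed |jerk|
```

A constant-acceleration trajectory, for instance, yields zero jerk up to numerical noise, which the smoothing suppresses further.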
Figure 8 shows graphically the results across the various feedback modes (R: monocular video stream, V: 3D-voxel mapping, VF: 3D-voxel mapping and guidance virtual fixture) along the four different quantitative metrics. The boxplots show mean values with standard deviation as error bars.
The four quantitative metrics, completion time, number of unwanted collisions, path length and jerk, were measured for each trial using repeatable and consistent methods. The results were grouped by feedback mode, and then, the mean for each category was calculated. The results are shown in Table 1.

5.2 Qualitative metrics

Two post-experiment questionnaires were administered, the NASA TLX and SART, to evaluate task load and situational awareness respectively. Figure 9 shows these results. The user responses were compiled, and the scores were grouped by feedback mode. The mean results are shown in Table 2.

5.3 Analysis

Several statistical approaches were employed to analyze the experiments, including a multivariate comparison as well as two different post hoc comparisons. Multivariate analysis of variance (MANOVA) was used as an omnibus measure to determine at least how many dependent variables (i.e., time to completion, number of collisions, path length, jerk, TLX and SART) may significantly differentiate the three groups (i.e., operating modes: R, V, and VF), while simultaneously controlling for multiple comparisons.
Table 1
Mean raw values of quantitative metrics

  Metric                        R         V         VF
  Mean completion time (s)      29.573    23.468    22.547
  Mean number of collisions     2.913     1.186     0.300
  Path length (mm)              964.68    771.81    742.06
  Mean jerk \((\hbox {m/s}^3)\) 236.375   96.5122   94.8093
The MANOVA resulted in at least two degrees of freedom between groups. Further statistical analyses are thus warranted. The entire data set is separated along the two identified degrees of freedom as shown in Fig. 10.
Table 2
Mean raw values of qualitative metrics

  Metric                  R         V         VF
  NASA TLX (out of 120)   51.071    32.214    33.429
  SART (out of 120)       88.071    87.000    89.500
Figure 10 indicates that the second-ranked feature deviates greatly from the top-ranked feature. Post hoc approaches investigate pairwise comparisons to identify precisely which measures differentiate the three feedback modes and which comparisons merit future study. First consider pairwise two-sample t tests between the three feedback groups within the quantitative metrics, as shown in Table 3. Consider next the pairwise two-sample t tests between the three feedback groups within the qualitative metrics, as shown in Table 4. The statistical significance markers are explained in detail in the following text.
Table 3
Statistical p values of quantitative metrics

  Metric                      R–V           R–VF           V–VF
  Mean completion time        0.0757        0.0373†        0.6129
  Mean number of collisions   1.12e−5*†     8.46e−12*†     9.11e−7*†
  Path length                 0.0499†       0.0212†        0.5927
  Mean jerk                   0.0023*†      0.0020*†       0.7853

  *Significance in FWER
  †Significance in FDR
Table 4
Statistical p values of qualitative metrics

  Metric      R–V        R–VF       V–VF
  NASA TLX    0.0350†    0.0515†    0.8387
  SART        0.9195     0.8943     0.7407

  †Significance in FDR

5.4 Statistical corrections

Recall that three separate experimental groups and six different metrics were examined. The result is a total of 18 different hypotheses considered from the data set. Thus, a multiplicity problem arises, and statistical analysis must account for this in order to avoid Type I errors, i.e., falsely rejecting a null hypothesis. Two different post hoc measures were employed:
1. family-wise error rate (FWER)
2. false discovery rate (FDR)

The former bounds the probability of making at least one false discovery, whereas the latter controls the expected proportion of false discoveries among the rejected hypotheses.

5.4.1 FWER

The multiple comparisons problem is addressed controlling for FWER with a conservative measure, Holm–Bonferroni correction. To begin the Holm–Bonferroni correction, first consider the \(i = 18\) different p values from the two-sample t tests analyzing the 18 null hypotheses (three experimental groups, six metrics). Then sort these p values in ascending order in a list with corresponding null hypotheses:
$$\begin{aligned} p_1, p_2, \ldots , p_{i}~~~~~~n_1, n_2, \ldots , n_{i} \end{aligned}$$
Now take the typical analysis significance level, \(\alpha = 0.05\). Via the Holm–Bonferroni method, let \(j \in [1,i]\) be the least index such that the inequality below is satisfied:
$$\begin{aligned} p_j > \frac{\alpha }{i-j+1} = \frac{0.05}{19-j} \end{aligned}$$
Then, the null hypotheses \(\{n_1, n_2, \ldots , n_{j-1}\}\) are rejected, while the remaining are not. Using this analysis method yields the statistical results in Table 5:
Table 5
Holm–Bonferroni correction

  j     Null hypothesis        p value      \(\alpha /(i-j+1) = 0.05/(19-j)\)
  1*    Collisions (R–VF)      8.459e−12    0.0028
  2*    Collisions (V–VF)      9.110e−7     0.0029
  3*    Collisions (R–V)       1.118e−5     0.0031
  4*    Jerk (R–VF)            0.0020       0.0033
  5*    Jerk (R–V)             0.0023       0.0036
  6     Path length (R–VF)     0.0212       0.0038
  7     TLX (R–V)              0.0350       0.0042
  8     Time (R–VF)            0.0373       0.0045
  9     Path length (R–V)      0.0499       0.0050
  10    TLX (R–VF)             0.0515       0.0056
  11    Time (R–V)             0.0757       0.0063
  12    Path length (V–VF)     0.5927       0.0071
  13    Time (V–VF)            0.6129       0.0083
  14    SART (V–VF)            0.7407       0.0100
  15    Jerk (V–VF)            0.7853       0.0125
  16    TLX (V–VF)             0.8387       0.0167
  17    SART (R–VF)            0.8943       0.0250
  18    SART (R–V)             0.9195       0.0500

  *Significance
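The Holm–Bonferroni step-down procedure above can be verified in a few lines; the sorted p values below are taken from Tables 3 and 4.

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm step-down procedure: returns a reject flag per hypothesis
    (in input order). Rejection stops at the first sorted p value that
    exceeds alpha / (m - rank)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    reject = [False] * m
    for rank, i in enumerate(order):          # rank 0 for the smallest p
        if pvals[i] > alpha / (m - rank):     # first failure: stop
            break
        reject[i] = True
    return reject

# The 18 p values from the pairwise two-sample t tests, sorted ascending
pvals = [8.459e-12, 9.110e-7, 1.118e-5, 0.0020, 0.0023, 0.0212,
         0.0350, 0.0373, 0.0499, 0.0515, 0.0757, 0.5927, 0.6129,
         0.7407, 0.7853, 0.8387, 0.8943, 0.9195]
```

Running `holm_bonferroni(pvals)` rejects exactly the first five hypotheses (the three collision comparisons and the two jerk comparisons), matching the starred rows of Table 5.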

5.4.2 FDR

The multiple comparisons problem is addressed controlling for FDR via the Benjamini–Hochberg correction. As with the Holm–Bonferroni correction, first label the \(i = 18\) different p values from the two-sample t tests, sorted in ascending order. Considering a false discovery rate appropriate for exploratory experiments, begin with \(Q = 0.1\). Then proceed via Benjamini–Hochberg, and let \(k \in [1,i]\) be the greatest index such that the inequality below is satisfied:
$$\begin{aligned} p_k \le \frac{Qk}{i} = \frac{0.1k}{18} \end{aligned}$$
Then, the null hypotheses \(\{n_1, n_2, \ldots , n_k\}\) are rejected, while the remaining are not. Using this analysis method yields the statistical results in Table 6.
Table 6
Benjamini–Hochberg correction

| k | Null hypothesis | p value | \(\frac{Qk}{i} = \frac{0.1k}{18}\) |
|---|---|---|---|
| 1\(^\dagger \) | Collisions (R–VF) | 8.459e-12 | 0.0056 |
| 2\(^\dagger \) | Collisions (V–VF) | 9.110e-7 | 0.0111 |
| 3\(^\dagger \) | Collisions (R–V) | 1.118e-5 | 0.0167 |
| 4\(^\dagger \) | Jerk (R–VF) | 0.0020 | 0.0222 |
| 5\(^\dagger \) | Jerk (R–V) | 0.0023 | 0.0278 |
| 6\(^\dagger \) | Path length (R–VF) | 0.0212 | 0.0333 |
| 7\(^\dagger \) | TLX (R–V) | 0.0350 | 0.0389 |
| 8\(^\dagger \) | Time (R–VF) | 0.0373 | 0.0444 |
| 9\(^\dagger \) | Path length (R–V) | 0.0499 | 0.0500 |
| 10\(^\dagger \) | TLX (R–VF) | 0.0515 | 0.0556 |
| 11 | Time (R–V) | 0.0757 | 0.0611 |
| 12 | Path length (V–VF) | 0.5927 | 0.0667 |
| 13 | Time (V–VF) | 0.6129 | 0.0722 |
| 14 | SART (V–VF) | 0.7407 | 0.0778 |
| 15 | Jerk (V–VF) | 0.7853 | 0.0833 |
| 16 | TLX (V–VF) | 0.8387 | 0.0889 |
| 17 | SART (R–VF) | 0.8943 | 0.0944 |
| 18 | SART (R–V) | 0.9195 | 0.1000 |

\(^\dagger \)Significance
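The Benjamini–Hochberg rule, finding the largest rank k with \(p_k \le Qk/i\) and rejecting every hypothesis at or below it, can likewise be sketched in a few lines of illustrative Python (same p values as Table 6):

```python
# Sorted p values from the 18 two-sample t tests (Table 6).
pvals = [8.459e-12, 9.110e-7, 1.118e-5, 0.0020, 0.0023, 0.0212,
         0.0350, 0.0373, 0.0499, 0.0515, 0.0757, 0.5927, 0.6129,
         0.7407, 0.7853, 0.8387, 0.8943, 0.9195]

def benjamini_hochberg(pvals, q=0.1):
    """Return rejection decisions controlling the FDR at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda idx: pvals[idx])  # ascending ranks
    k_max = 0
    for k, idx in enumerate(order, start=1):
        if pvals[idx] <= q * k / m:   # largest k satisfying p_k <= Qk/i
            k_max = k
    reject = [False] * m
    for k, idx in enumerate(order, start=1):
        reject[idx] = k <= k_max      # reject hypotheses 1..k_max
    return reject

print(sum(benjamini_hochberg(pvals)))  # 10, matching the daggered rows of Table 6
```

Note that rank 10 (\(p = 0.0515 \le 0.0556\)) is still rejected even though rank 9 sits exactly at its threshold; rank 11 (\(p = 0.0757 > 0.0611\)) and beyond are not.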

5.5 Post hoc summary

With Holm–Bonferroni corrections, five pairwise comparisons in total are shown to be significant despite the exploratory nature of this work. The significance of these comparisons under FWER control is shown in Table 7.
Table 7
Holm–Bonferroni summary

|             | R–V  | R–VF | V–VF |
|-------------|------|------|------|
| Time        | ×    | (✓)  | ×    |
| Collisions  | ✓    | ✓    | ✓    |
| Path length | (✓)  | (✓)  | ×    |
| Jerk        | ✓    | ✓    | ×    |
| TLX         | (✓)  | ×    | ×    |
| SART        | ×    | ×    | ×    |

✓ Significance      (✓) \(p<0.05\) uncorrected      × Lack of significance
Holm–Bonferroni is a conservative measure of significance. The exploratory nature and motivation of this work lends itself to a more forgiving correction that focuses instead on FDR, which controls the proportion of discoveries that are false, and can indicate potential comparisons for further in-depth study. Thus, Benjamini–Hochberg corrections are more consistent with the stated contributions and goals of this work, and result in ten total pairwise comparisons of interest. These are shown in Table 8.
Table 8
Benjamini–Hochberg summary

|             | R–V | R–VF | V–VF |
|-------------|-----|------|------|
| Time        | ×   | ✓    | ×    |
| Collisions  | ✓   | ✓    | ✓    |
| Path length | ✓   | ✓    | ×    |
| Jerk        | ✓   | ✓    | ×    |
| TLX         | ✓   | ✓    | ×    |
| SART        | ×   | ×    | ×    |

✓ Significance      × Lack of significance

6 Discussion

In all quantitative metrics, the directions of the comparisons are encouraging and match expectations: VF yielded better performance than V, which in turn performed better than R. Most of the qualitative metrics lacked statistical power, with the exception of TLX. The two post hoc correction methods provide insight from separate vantages.
The metrics compared under FWER control were corrected conservatively and seek to validate strong concluding statistical claims. These comparisons, five of which are statistically significant, are shown in Table 7. Of particular interest is that adding a user-placed haptic guidance fixture resulted in better collision avoidance compared to 3D voxel visualization alone. To better suit the exploratory aspect and numerous metrics of interest, statistical analysis focusing on FDR was also conducted, the results of which are depicted in Table 8. Significance in this light conveys exploratory indicators for future in-depth work. The particular p values from these comparisons are shown in Tables 3 and 4, with * indicating significance under FWER control and \(^\dagger \) indicating significance under FDR control.

6.1 Design and real-world implications

The data suggest that it is beneficial to use RGB-D data over an RGB video stream alone; teleoperator performance was shown only to improve with this modification. Improvements in collision avoidance, overall path length, jerk and task load can expand accessibility and safe operation of specialized teleoperated tasks that currently require extensive training; the novice users in this study realized appreciable gains in these metrics from 3D visualization. One highly skilled teleoperated task is underwater telemanipulation, in which users operate tools, manipulate valves or mate cables underwater. This is accomplished with macro-scale imaging [16, 23, 49] and is challenging and expensive due to the high skill level required [45]. Designing systems with small-scale depth imaging and voxel representation, as demonstrated in this work, can enhance the telemanipulation performance of less trained individuals, thus reducing the skill required, and the subsequent operating costs, to execute such operations.
Furthermore, when delicate or critical structures are involved, a collision could be disastrous. For such scenarios, this study shows that the addition of haptic virtual fixtures can further enhance telemanipulation collision avoidance. Search and rescue (SAR) robotics is one such application area, in which safe control is essential in hazardous, unstructured environments. Robots in this application, also known as response robots, can be used for incident prevention and support, and can be a useful tool for saving human lives and accelerating the search and rescue process. From data collected from end users by the search and rescue initiative ICARUS, the general consensus was that in practical SAR applications, "the robots will always need to be teleoperated [sic] for safety and legal reasons" [9]. These types of robots have been used in trials by fire, including responses to the Chernobyl meltdown, the Fukushima Daiichi meltdown, the Sago mine disaster, the terrorist attacks of September 11th, Hurricane Katrina and the La Conchita mudslide [4, 21].
Current response robots for SAR applications, however, operate with minimal autonomy and few assistance modes. This may be due to the delicate nature of incorporating novel technology into high-risk operations, where safety and legal issues may arise. The complexity and unstructured context of many crisis and SAR missions also make such missions extremely technology-unfriendly [9]. In terms of current utilization, remote robot responders are used merely to obtain geographical information and to assist human responders, who remotely navigate the robots and access their sensor data [9]. In this way, most current practical applications of rescue robots have been limited to passively assisting human responders in situation assessment.
This user study helps to expand the scope and utility of telerobots in rescue situations. The data provide encouraging results toward integrating 3D visualization and haptic assistance into telemanipulation. Of particular note is the significant improvement in collision avoidance gained by adding a guidance virtual fixture. Further gains in jerk indicate that these telemanipulation interface improvements offer safer, more direct and smoother operation, all of which is encouraging for the design of SAR telerobots. This contribution reports the results, limitations and future implications of the described user interface features. Such information is needed to intelligently investigate performance effects in focused studies, with the ultimate goal of adopting new technologies that eliminate the need for, and risk to, in-the-field human responders, replacing them with intelligently controlled telerobots capable of safely and efficiently executing sensitive tasks.

7 Conclusion

In this work, feedback modalities were examined in a basic telemanipulation task. In addition to monocular visual feedback, depth information (a voxel occupancy grid) and manually placed haptic guidance fixtures were tested. We explored the effects of these modifications on task performance using quantitative and qualitative metrics. While 3D voxelization techniques have been shown to improve performance in navigation, their effect on telemanipulation had not yet been quantified. Likewise, while the use of predefined fixtures has been shown to improve performance, we evaluated the effect of manually placed guidance fixtures in a real-time telemanipulation task. User studies evaluated these methods.
The results of the user study show that even in simple telemanipulation tasks:
1.
Guidance virtual fixtures significantly improve collision avoidance.
 
2.
3D visualization significantly reduces the number of collisions compared to 2D.
 
3.
3D visualization significantly improves path smoothness compared to 2D.
 
Furthermore, this study showed that varying the user feedback mode did not affect situational awareness. There were no detrimental effects of using 3D-mapping methods over RGB streams in any of the metrics.

7.1 Future work

This user study provides a baseline assessment of the effects of 3D voxel representations and user-placed haptic guidance virtual fixtures on operator performance in telemanipulation. The results provide a comparative baseline for evaluating the effects of additional augmentations, while the exploratory nature of this work involved numerous performance evaluation metrics. Encouragingly, the quantitative metrics resulted in comparison directions consistent with the feedback augmentations, i.e., VF performed better than V, which was superior to R. While testing many metrics reduced statistical power, this experiment points the way toward focused studies of feedback modes for telemanipulation. In particular, in light of the false discovery rate corrections (compare Table 7 with Table 8), it would be of interest to investigate the effects of guidance haptic fixtures and visual feedback on:
  • completion time
  • path length
  • task load
Furthermore, comparing solely VF and V with more complicated telemanipulation tasks (e.g., increased clutter, nonlinear subtasks) and fewer metrics may increase statistical power. This is consistent with the quantitative comparison directions between VF and V that this study revealed, despite a relatively simplistic task and multiple performance metrics.

Acknowledgements

We would like to acknowledge members of the BioRobotics lab and Dr. Maya Cakmak for their insight and assistance. Further, we acknowledge National Instruments for their generosity and support in contributing to this project. This material is based upon work supported by the National Science Foundation under Grant No. CNS-1329751 and the National Science Foundation Graduate Research Fellowship under Grant No. DGE-1256082. Any opinion, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Literature
1. Abbott JJ, Marayong P, Okamura AM (2007) Haptic virtual fixtures for robot-assisted manipulation. In: Thrun S, Brooks R, Durrant-Whyte H (eds) Robotics research, Springer tracts in advanced robotics, vol 28. Springer, Berlin, pp 49–64
2. Basdogan C, Kiraz A, Bukusoglu I, Varol A, Doğanay S (2007) Haptic guidance for improved task performance in steering microparticles with optical tweezers. Opt Express 15(18):11616–11621
4. Birk A, Schwertfeger S, Pathak K (2009) A networking framework for teleoperation in safety, security, and rescue robotics. IEEE Wirel Commun 16(1):6–13
5. Bluteau J, Coquillart S, Payan Y, Gentaz E (2008) Haptic guidance improves the visuo-manual tracking of trajectories. PLoS One 3(3):e1775
6. Boessenkool H, Abbink DA, Heemskerk CJ, van der Helm FC (2011) Haptic shared control improves teleoperated task performance towards performance in direct control. In: World Haptics conference (WHC), 2011. IEEE, pp 433–438
7. Chowdhury A, Meena YK, Raza H, Bhushan B, Uttam AK, Pandey N, Hashmi AA, Bajpai A, Dutta A, Prasad G (2018) Active physical practice followed by mental practice using BCI-driven hand exoskeleton: a pilot trial for clinical effectiveness and usability. IEEE J Biomed Health Inform 22(6):1786–1795
9. Doroftei D, Matos A, de Cubber G (2014) Designing search and rescue robots towards realistic user requirements. In: Applied mechanics and materials, vol 658. Trans Tech Publications, Zürich, pp 612–617
10. Du J, Mouser C, Sheng W (2016) Design and evaluation of a teleoperated robotic 3-D mapping system using an RGB-D sensor. IEEE Trans Syst Man Cybern Syst 46(5):718–724
11. Falk V, Mintz D, Grunenfelder J, Fann J, Burdon T (2001) Influence of three-dimensional vision on surgical telemanipulator performance. Surg Endosc 15(11):1282–1288
12. Goodrich MA, Crandall JW, Barakova E (2013) Teleoperation and beyond for assistive humanoid robots. Rev Hum Factors Ergon 9(1):175–226
13. Hart SG, Staveland LE (1988) Development of NASA-TLX (task load index): results of empirical and theoretical research. Adv Psychol 52:139–183
14. Heo T, Huang K, Chizeck HJ (2018) Performance evaluation of haptically enabled sEMG. In: 2018 international symposium on medical robotics (ISMR). IEEE, pp 1–6
16. Hover FS, Eustice RM, Kim A, Englot B, Johannsson H, Kaess M, Leonard JJ (2012) Advanced perception, navigation and planning for autonomous in-water ship hull inspection. Int J Rob Res 31(12):1445–1464
17. Huang K, Jiang LT, Smith JR, Chizeck HJ (2015) Sensor-aided teleoperated grasping of transparent objects. In: IEEE international conference on robotics and automation (ICRA), 2015. IEEE, pp 4953–4959
18. Huang K, Lancaster P, Smith JR, Chizeck HJ (2018) Visionless tele-exploration of 3D moving objects. In: IEEE international conference on robotics and biomimetics (ROBIO), 2018. IEEE, pp 2238–2244
20. Jones CM, Healy SD (2006) Differences in cue use and spatial memory in men and women. Proc Biol Sci 273(1598):2241–2247
21. Katyal KD, Brown CY, Hechtman SA, Para MP, McGee TG, Wolfe KC, Murphy RJ, Kutzer MD, Tunstel EW, McLoughlin MP et al (2014) Approaches to robotic teleoperation in a disaster scenario: from supervised autonomy to direct control. In: IEEE/RSJ international conference on intelligent robots and systems, 2014 (IROS 2014). IEEE, pp 1874–1881
23. Kim A, Eustice RM (2013) Real-time visual SLAM for autonomous underwater hull inspection using visual saliency. IEEE Trans Robot 29(3):719–733
24. Kim M, Lee C, Hong N, Kim YJ, Kim S (2017) Development of stereo endoscope system with its innovative master interface for continuous surgical operation. Biomed Eng Online 16(1):81
25. Kuiper RJ, Heck DJ, Kuling IA, Abbink DA (2016) Evaluation of haptic and visual cues for repulsive or attractive guidance in nonholonomic steering tasks. IEEE Trans Hum Mach Syst 46(5):672–683
26. Leeper A, Hsiao K, Ciocarlie M, Sucan I, Salisbury K (2013) Methods for collision-free arm teleoperation in clutter using constraints from 3D sensor data. In: IEEE international conference on humanoid robots, Atlanta, GA
27. Li M, Taylor RH (2003) Optimum robot control for 3D virtual fixture in constrained ENT surgery. In: Medical image computing and computer-assisted intervention, MICCAI 2003. Springer, pp 165–172
28. Lin MC, Otaduy M (2008) Haptic rendering: foundations, algorithms, and applications. CRC Press, Boca Raton
29. Maddahi Y, Zareinia K, Sepehri N (2015) An augmented virtual fixture to improve task performance in robot-assisted live-line maintenance. Comput Electr Eng 43:292–305
31. Marayong P, Li M, Okamura A, Hager G (2003) Spatial motion constraints: theory and demonstrations for robot guidance using virtual fixtures. In: IEEE international conference on robotics and automation (ICRA '03), vol 2, pp 1954–1959. https://doi.org/10.1109/ROBOT.2003.1241880
32. Mast M, Španěl M, Arbeiter G, Štancl V, Materna Z, Weisshardt F, Burmester M, Smrž P, Graf B (2013) Teleoperation of domestic service robots: effects of global 3D environment maps in the user interface on operators' cognitive and performance metrics. In: Herrmann G, Pearson MJ, Lenz A, Bremner P, Spiers A, Leonards U (eds) Social robotics. Lecture notes in computer science, vol 8239. Springer, pp 392–401. https://doi.org/10.1007/978-3-319-02675-6-39
33. Michieletto S, Tosello E, Pagello E, Menegatti E (2016) Teaching humanoid robotics by means of human teleoperation through RGB-D sensors. Rob Auton Syst 75:671–678
34. Nam CS, Richard P, Yamaguchi T, Bahn S (2014) Does touch matter? The effects of haptic visualization on human performance, behavior and perception. Int J Hum Comput Interact 30(11):839–841
35. Newcombe RA, Davison AJ, Izadi S, Kohli P, Hilliges O, Shotton J, Molyneaux D, Hodges S, Kim D, Fitzgibbon A (2011) KinectFusion: real-time dense surface mapping and tracking. In: 10th IEEE international symposium on mixed and augmented reality (ISMAR), 2011, pp 127–136. https://doi.org/10.1109/ISMAR.2011.6092378
36. Ni D, Nee A, Ong S, Li H, Zhu C, Song A (2018) Point cloud augmented virtual reality environment with haptic constraints for teleoperation. Trans Inst Meas Control 40(15):4091–4104
37. Odelga M, Stegagno P, Bülthoff HH (2016) Obstacle detection, tracking and avoidance for a teleoperated UAV. In: 2016 IEEE international conference on robotics and automation (ICRA). IEEE, pp 2984–2990
38. van Oosterhout J, Wildenbeest JG, Boessenkool H, Heemskerk CJ, de Baar MR, van der Helm FC, Abbink DA (2015) Haptic shared control in tele-manipulation: effects of inaccuracies in guidance on task execution. IEEE Trans Haptics 8(2):164–175
39. Park JW, Choi J, Park Y, Sun K (2011) Haptic virtual fixture for robotic cardiac catheter navigation. Artif Organs 35(11):1127–1131
40. Passenberg C, Glaser A, Peer A (2013) Exploring the design space of haptic assistants: the assistance policy module. IEEE Trans Haptics 6(4):440–452
43.
44. Rydén F, Chizeck HJ, Kosari SN, King H, Hannaford B (2011) Using Kinect and a haptic interface for implementation of real-time virtual fixtures. In: Proceedings of the 2nd workshop on RGB-D: advanced reasoning with depth cameras (in conjunction with RSS 2011)
45. Rydén F, Stewart A, Chizeck HJ (2013) Advanced telerobotic underwater manipulation using virtual fixtures and haptic rendering. In: 2013 OCEANS-San Diego. IEEE, pp 1–8
46. Stoyanov D, Scarzanella MV, Pratt P, Yang GZ (2010) Real-time stereo reconstruction in robotically assisted minimally invasive surgery. In: International conference on medical image computing and computer-assisted intervention. Springer, pp 275–282
47. Su YH, Huang I, Huang K, Hannaford B (2018) Comparison of 3D surgical tool segmentation procedures with robot kinematics prior. In: 2018 IEEE/RSJ international conference on intelligent robots and systems (IROS). IEEE, pp 4411–4418
48. Su YH, Huang K, Hannaford B (2018) Real-time vision-based surgical tool segmentation with robot kinematics prior. In: 2018 international symposium on medical robotics (ISMR). IEEE, pp 1–6
49. Susperregi L, Martínez-Otzeta JM, Ansuategui A, Ibarguren A, Sierra B (2013) RGB-D, laser and thermal sensor fusion for people following in a mobile robot. Int J Adv Robot Syst 10:271
50. Taylor R (1990) Situational awareness rating technique (SART): the development of a tool for aircrew systems design. In: AGARD, situational awareness in aerospace operations
53. Wurm KM, Hornung A, Bennewitz M, Stachniss C, Burgard W (2010) OctoMap: a probabilistic, flexible, and compact 3D map representation for robotic systems. In: Proceedings of the ICRA 2010 workshop
54. Yan J, Huang K, Bonaci T, Chizeck HJ (2015) Haptic passwords. In: IEEE/RSJ international conference on intelligent robots and systems (IROS), 2015. IEEE, pp 1194–1199
Metadata
Title
Evaluation of haptic guidance virtual fixtures and 3D visualization methods in telemanipulation—a user study
Authors
Kevin Huang
Digesh Chitrakar
Fredrik Rydén
Howard Jay Chizeck
Publication date
20-07-2019
Publisher
Springer Berlin Heidelberg
Published in
Intelligent Service Robotics / Issue 4/2019
Print ISSN: 1861-2776
Electronic ISSN: 1861-2784
DOI
https://doi.org/10.1007/s11370-019-00283-w
