It is important to recognize the carrying position of an unconstrained smartphone accurately and robustly, since the carrying position directly affects the parameter settings for step detection and step length estimation. Many previous works [24, 25] have pointed out that the acceleration patterns of different carrying positions show distinct features, and they use statistics of the raw three-dimensional acceleration samples as input features for a carrying position classifier. These classifiers, however, are all designed under the assumption of relatively stable device orientations. When device orientations are unconstrained, the raw measured acceleration samples can vary greatly with the changing orientation even under the same carrying position. As a result, these classifiers may suffer degraded recognition accuracy with unconstrained use of smartphones.
To adapt to arbitrary smartphone orientations, we develop a robust carrying position classifier based on three orientation invariant features: the total magnitude of acceleration MAcc, the magnitude of acceleration in the horizontal plane HAcc, and the acceleration along the gravity direction GAcc,
$$ \mathrm{MAcc}={\left|\mathbf{Acc}\right|}_2=\sqrt{{\mathrm{Acc}}_x^2+{\mathrm{Acc}}_y^2+{\mathrm{Acc}}_z^2} $$
(1)
$$ \mathrm{HAcc}={\left|\mathbf{Acc}-\left(\mathbf{Acc}\cdot {\mathbf{g}}_{\mathrm{normal}}\right){\mathbf{g}}_{\mathrm{normal}}\right|}_2 $$
(2)
$$ \mathrm{GAcc}={\left|\left(\mathbf{Acc}\cdot {\mathbf{g}}_{\mathrm{normal}}\right){\mathbf{g}}_{\mathrm{normal}}\right|}_2 $$
(3)
where $\mathbf{Acc}={\left[{\mathrm{Acc}}_x\;{\mathrm{Acc}}_y\;{\mathrm{Acc}}_z\right]}^{\mathrm{T}}$ is the raw measured acceleration vector in the device coordinate system (DCS), ${\left|\cdot\right|}_2$ is the two-norm of a vector, and ${\mathbf{g}}_{\mathrm{normal}}$ is the normalized gravity vector in the DCS, which can be calculated from the tracked device attitude as seen in (7). These features exploit the same principle that different carrying positions produce distinct acceleration patterns, yet they remain stable for the same carrying position regardless of smartphone orientation changes.

The design of the carrying position classifier consists of three phases: data pre-processing, feature extraction, and classifier training. First, we collect acceleration samples continuously and divide them into small segments with a sliding window [25] of 2 s with 50% overlap between adjacent windows. After data pre-processing, we extract the three orientation invariant feature signals within each sliding window. From these feature signals, we use their statistics, namely the variance, mean, median, maximum, and minimum, as the final input features of the classifier. Each of these statistics helps discriminate carrying positions. For example, owing to the different intensities of user body locomotion, the median value of HAcc for the in-pocket and swinging-hand positions is consistently much larger than that for the phone-call and hand-held positions.

After feature extraction, as suggested by previous works [25, 26], we adopt a random forest [26] as the carrying position classifier. We gather a total of 4000 samples across the four carrying positions to train the classifier. For each carrying position, all possible device orientations are covered as far as possible and sampled uniformly. Ten-fold cross-validation is used to evaluate the random forest classifier: the data samples are randomly partitioned into ten parts, nine of which are used for training and the remaining one for testing.

Table 1 shows the random forest position recognition results with the proposed orientation invariant features and with the normal features, i.e., the same statistics computed on the raw measured acceleration samples. The results show that, compared with the classifier using the normal time domain features [25], the proposed robust position classifier improves the average classification accuracy from 90.9% to 98.5%. This is because the orientation invariant features better accommodate arbitrary device orientation changes and thus enhance the generalization ability of the classifier. As seen in Table 1, apart from a very low probability of confusion between the hand-held and phone-call positions, and between the swinging-hand and in-pocket positions, the device carrying positions are recognized correctly with sufficiently high probability. Therefore, in the rest of the paper, we assume that the device carrying position is correctly recognized in the proposed WiFi/PDR integrated positioning approach.
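The orientation invariant features in (1)-(3) and the windowed statistics described above can be sketched as follows. This is a minimal illustration, not the authors' code; the 50 Hz sampling rate is an assumption, since the paper does not state one:

```python
import numpy as np

def orientation_invariant_features(acc, g_normal):
    """Compute MAcc, HAcc, GAcc per eqs. (1)-(3).

    acc      : (N, 3) raw acceleration samples in the device frame (DCS)
    g_normal : (N, 3) normalized gravity vectors in the same frame
    """
    m_acc = np.linalg.norm(acc, axis=1)                    # eq. (1): total magnitude
    proj = np.sum(acc * g_normal, axis=1, keepdims=True)   # Acc . g_normal
    g_vec = proj * g_normal                                # gravity-direction component
    h_acc = np.linalg.norm(acc - g_vec, axis=1)            # eq. (2): horizontal magnitude
    g_acc = np.linalg.norm(g_vec, axis=1)                  # eq. (3): gravity magnitude
    return m_acc, h_acc, g_acc

def window_statistics(signal, fs=50, win_s=2.0, overlap=0.5):
    """Slide a 2 s window with 50% overlap over one feature signal and
    return the five statistics (variance, mean, median, max, min) per window.
    fs=50 Hz is an assumed sampling rate."""
    size = int(win_s * fs)
    step = int(size * (1 - overlap))
    feats = []
    for start in range(0, len(signal) - size + 1, step):
        seg = signal[start:start + size]
        feats.append([np.var(seg), np.mean(seg), np.median(seg),
                      np.max(seg), np.min(seg)])
    return np.array(feats)
```

Concatenating the five statistics of MAcc, HAcc, and GAcc per window yields a 15-dimensional feature vector per segment, which stays unchanged under any rotation of the device frame because each feature depends only on magnitudes along and orthogonal to gravity.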
Table 1
Confusion table of random forest classifier with orientation invariant and normal features
Actual \ Predicted | Hand-held | Phone-call | Swinging-hand | In-pocket |
Hand-held | 0.982/0.904 | 0.018/0.063 | 0/0.033 | 0/0 |
Phone-call | 0.01/0.064 | 0.990/0.915 | 0/0.004 | 0/0.017 |
Swinging-hand | 0/0.018 | 0/0.001 | 0.986/0.896 | 0.014/0.085 |
In-pocket | 0/0 | 0/0 | 0.028/0.081 | 0.972/0.919 |
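The ten-fold evaluation protocol described above (4000 samples randomly partitioned into ten parts, nine for training and one for testing) can be sketched as follows. The fold generator is a hypothetical helper, and the random seed is an assumption; each split would be fed to a random forest, e.g., scikit-learn's RandomForestClassifier, with the per-fold accuracies averaged:

```python
import numpy as np

def ten_fold_splits(n_samples, seed=0):
    """Randomly partition the sample indices into ten parts and yield
    (train_idx, test_idx) pairs: nine parts for training, one for testing.
    The seed is an assumption for reproducibility of the sketch."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n_samples), 10)
    for k in range(10):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(10) if j != k])
        yield train_idx, test_idx

# Usage: for each (train_idx, test_idx), fit the classifier on the
# training folds and score it on the held-out fold, then average the
# ten accuracies to obtain the reported classification accuracy.
```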