
Open Access 29.06.2022 | Original Article

SPCS: a spatial pyramid convolutional shuffle module for YOLO to detect occluded object

Authors: Xiang Li, Miao He, Yan Liu, Haibo Luo, Moran Ju

Published in: Complex & Intelligent Systems | Issue 1/2023


Abstract

In crowded scenes, one of the most important issues is that heavily overlapped objects are hard to distinguish from each other, since most of their pixels are shared and the visible pixels of the occluded objects, which are used to represent their features, are limited. In this paper, a spatial pyramid convolutional shuffle (SPCS) module is proposed to extract refined information from the limited visible pixels of occluded objects and generate distinguishable representations for heavily overlapped objects. We adopt four convolutional kernels with different sizes and dilation rates at each location in the pyramid features and spatially recombine their fused outputs using a pixel shuffle module. In this way, four distinguishable instance predictions corresponding to different convolutional kernels can be produced for each location in the pyramid feature. In addition, multiple convolutional operations with different kernel sizes and dilation rates at the same location generate refined information for the corresponding regions, which helps to extract features for occluded objects from their limited visible pixels. Extensive experimental results demonstrate that the SPCS module can effectively boost performance in crowded human detection. The YOLO detector with the SPCS module achieves 94.11% AP, 41.75% MR and 97.75% Recall on CrowdHuman, and 93.04% AP and 98.45% Recall on WiderPerson, which are the best results compared with previous state-of-the-art models.

Introduction

Object detection is a basic and practical task in computer vision. In recent years, driven by the development of convolutional neural networks (CNNs), researchers have seen broad prospects for applying detection techniques in various domains, such as pedestrian and vehicle detection in autonomous driving, remote object recognition [1, 2] and intelligent surveillance systems [3]. Many CNN-based detectors have been proposed, such as the YOLO series [4-7], SSD [8], DSSD [9], Faster-RCNN [10], CenterNet [11] and FCOS [12], all of which have achieved state-of-the-art (SOTA) performance on general object detection benchmarks such as COCO [13] and Pascal VOC [14]. However, for all the mentioned models there is still room for improvement when objects occur in crowds and overlap each other heavily. There are two main challenges in this situation: (1) heavily overlapped objects are hard to distinguish in the semantic feature space, because most of their pixels are shared and the visible pixels of the occluded object, which are used to represent its particularity, are limited; (2) the traditional greedy non-maximum suppression (NMS) process will mistakenly suppress heavily overlapped prediction boxes when their overlap degree is greater than a specific threshold. These two challenges prevent current models from reaching their full potential.
To date, some works have been proposed to improve detection performance in crowded scenes [15-20], while other works pay attention to the NMS process [21-23]. To the best of our knowledge, most works that specifically address the occlusion issue are based on two-stage detectors. Compared with two-stage models, one-stage detectors have many obvious advantages. The YOLO series, as representative SOTA detectors, strike a good balance between precision and inference speed, so they have been widely adopted in industry. Many works, such as Scaled-YOLOv4 [24] and YOLOX [25], improve the YOLO detector in terms of structure, image augmentation, training methods and so on, and they all achieve impressive progress on the COCO dataset compared with the original YOLO, which demonstrates that YOLO detectors still have the potential to perform better. However, YOLO-based detectors have another shortcoming in crowded object detection. The original YOLO separates each pyramid feature into several grids (e.g., \(13\times 13\), \(26\times 26\) and \(52\times 52\) grids with an input size of \(416\times 416\)), and each grid is assigned only one ground truth box whose center point is located in it. When two or more objects overlap each other heavily and their center points are located in the same grid, only one of these objects is kept for the training process and the others are ignored, as shown in Fig. 1a.
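To make this collision concrete, the following minimal sketch (with hypothetical box centers, not values from the paper) shows how two heavily overlapped objects can fall into the same cell of a \(13\times 13\) grid at stride 32:

```python
# Illustrative sketch: two heavily overlapped boxes whose centers fall into the same
# grid cell of a 13x13 YOLO feature map (input size 416, stride 32).
def grid_index(cx, cy, stride=32):
    """Map a box center (in input-image pixels) to its YOLO grid cell."""
    return int(cx // stride), int(cy // stride)

# hypothetical centers of a front person and a person occluded behind them
front_center = (210.0, 180.0)
occluded_center = (222.0, 190.0)

print(grid_index(*front_center))     # (6, 5)
print(grid_index(*occluded_center))  # (6, 5) -> same cell, so only one box stays positive
```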
In this paper, we propose a spatial pyramid convolutional shuffle module named SPCS for YOLO detectors to handle crowded scenes. The YOLO-based detector with the SPCS module is named YOLOC. The SPCS module enlarges the pyramid features by fusing the outputs of four convolutional layers with different kernel sizes via a pixel shuffle module [26]. There are two steps in the SPCS module. First, for each grid in the YOLO pyramid feature, the spatial pyramid convolutional (SPC) module generates four distinguishable sub-features extracted by four convolutional kernels with different sizes and dilation rates. In this way, distinguishable representations can be generated for multiple overlapped objects that occupy almost the same region. Then, the four output features, which have the same number of channels, are concatenated channel-wise, and a pixel-shuffle module is adopted to increase the resolution of the feature pyramid, i.e., to place the sub-features extracted from the same location spatially adjacent to each other. The SPCS module can not only increase the resolution of the feature pyramid, which compensates for YOLO's shortcoming in the positive target determination mechanism when facing crowded scenes, as shown in Fig. 1b, but also provide distinguishable features for heavily overlapped targets. To verify the ability of the SPCS module in occluded object detection and exclude the influence of the NMS post-process, we adopt three NMS methods, i.e., greedy NMS, Adaptive NMS [22] and Soft NMS [27], to comprehensively show the performance of the SPCS module. It is worth noting that the predicted density information of objects is required as the Intersection over Union (IoU) threshold in the Adaptive NMS algorithm. Compared with extracting information from a single object, density prediction needs to extract information from multiple overlapped objects, so a strong information extraction ability and larger receptive fields are necessary [22]. Therefore, the quality of the predicted density in Adaptive NMS can be used as a metric to measure the ability to extract information from occluded objects. In this paper, we design a density prediction experiment to demonstrate that the proposed SPCS module can improve information extraction ability. Different from Adaptive NMS [22], we adopt a tiny density prediction head to predict the density of objects, which makes the influence of SPCS prominent and prevents an extra complex network from covering up any shortage in terms of density prediction.
Extensive experiments are implemented to verify the effectiveness of the SPCS module. First, ablation studies are implemented on the CrowdHuman [28] dataset to verify the overall effectiveness of the SPCS module in occluded object detection. Second, a density prediction experiment is conducted to demonstrate that the SPCS module can improve information extraction ability. Third, comparative experiments are conducted on CrowdHuman and WiderPerson [29] to compare the comprehensive performance of our model with some SOTA models. The results show that YOLOC achieves the best performance in AP and Recall and the second best performance in MR on CrowdHuman, i.e., 94.11%, 97.75% and 41.75%, respectively. On WiderPerson, YOLOC achieves 93.04% AP, 50.71% MR and 98.45% Recall. Moreover, benefitting from its one-stage structure, YOLOC achieves the fastest inference speed among all SOTA models.
For clarity, the main contributions of this paper can be summarized as follows:
1.
A spatial pyramid convolutional shuffle module is proposed to boost the ability to extract information from the limited visible pixels of occluded objects and to generate distinguishable representations for them.
 
2.
A tiny density prediction head and a density loss function are proposed for the density prediction experiment, which is designed to prove that the SPCS module can improve information extraction ability.
 
3.
Extensive comparative experiments are conducted on CrowdHuman and WiderPerson to show that models with the SPCS module achieve the best performance in heavily occluded object detection.
 
Related works

Generic object detection Object detection, as a fundamental computer vision task, has achieved great progress with the rapid development of convolutional neural networks. Mainstream detection models are usually categorized into two-stage models [10, 30-34] and one-stage models [4-9, 35]. RCNN [30] first adopted CNNs for object detection and proposed a two-stage framework: first generate proposal boxes using the selective search algorithm [36], then conduct box regression and classification based on the proposal boxes obtained in the first stage to get refined predictions. To solve the problem that the computation between different object predictions cannot be shared, Fast RCNN [31] proposes RoI (region of interest) pooling to make the output from each proposal the same size, which increases the inference speed significantly. Faster-RCNN [10] lays the foundation of the two-stage detectors: it proposes an RPN (region proposal network) to replace the selective search algorithm, filter out background regions and effectively generate precise proposal regions. Some other works, such as RoI Align [32], RoI warping pooling [37], PrRoIPooling [38] and PSRoI pooling [34], pay attention to the pooling process over the regions of interest. Although the two-stage methods achieve impressive precision, their inference speed is often unsatisfactory. Different from the two-stage detectors, the one-stage methods replace the predicted proposal boxes with fixed anchor boxes that are densely paved on the prediction features, and conduct regression and classification based on the anchor boxes in a fully convolutional way. SSD [8] proposes a one-stage framework and utilizes multi-scale features to detect objects of different scales. DSSD [9] fuses deep and shallow features to enrich the semantic information of the high-resolution features, which is helpful for small target detection. RetinaNet [35] proposes Focal Loss to address the extreme imbalance between positive and negative samples from which one-stage detectors suffer. YOLOv3 [6] proposes a fully convolutional network, DarkNet, as the backbone and achieves a good balance between speed and precision. Later, YOLOv4 [7], Scaled-YOLOv4 [24] and YOLOX [25] were proposed to optimize the YOLO model in terms of network structure, image augmentation and training methods. To deal with the scale-changing problem, the aforementioned detectors are all based on anchor boxes of various scales and shapes that are densely paved on the feature maps. To eliminate the influence of the hyper-parameters brought by the anchor boxes and the speed reduction caused by the non-maximum suppression post-processing, anchor-free methods have been proposed. CornerNet [39] proposes a keypoint-based method that predicts the top-left and bottom-right points of the object box. CenterNet [11] is also a keypoint-based method; it predicts the center point of the bounding box as the key point. FCOS [12] proposes a fully convolutional anchor-free model and utilizes multi-scale features to resolve the ambiguity when objects overlap with each other. Since the computation of one-stage detectors is shared among all the targets in an image, one-stage detectors have a great advantage over two-stage methods in terms of inference speed.
Works for crowded scenes Although generic detectors have achieved great performance, crowded scenes are still challenging for them, and many works have been proposed to dig out their potential in crowded target detection. [15] proposes the novel idea that each proposal box predicts multiple targets rather than one, to solve the problem of feature confusion between heavily overlapped objects. In addition, a set-NMS is proposed in which prediction boxes generated from the same proposal box are preserved and the others are suppressed. [16] follows an iterative scheme that detects a subset of objects at each iteration, with no interaction between the detection results of different iterations. This method needs to run the inference framework more than once per detection, which is obviously inefficient. [17] proposes a multi-scale attention feature aggregation module that can extract deeper information, and an attention block is added to enhance the features of objects.
In addition, many works focus on improving the NMS process. [20] develops a double-anchor RPN to capture the body and head parts in pairs, which are used to guide the NMS process. This method uses both head and body information; however, not all instances in various datasets are labeled with head-body pairs. Different from [20], which predicts head-body box pairs, [21, 23], which work in very similar ways, predict visible-full box pairs. It is obvious that the visible parts of objects in crowded scenes rarely overlap, and there is a correspondence between each visible box and its full box. Thus, the visible boxes that are preserved after the NMS process can be used to guide the selection of the full prediction boxes. However, only datasets labeled in that particular way can be trained with this method. [22] finds that a fixed IoU threshold is not reasonable for crowded scenes and argues that the IoU threshold should change according to the density of the counterpart object, i.e., increase when objects are dense and decrease when they are sparse. To solve the occlusion problem in pedestrian detection, [40] adopts a channel-wise attention mechanism in Faster-RCNN to handle different occlusion patterns. It finds that some specific channels show strong activations at the human head, upper body and feet, respectively. Guided by the differences between the activations of different regions, the attention mechanism reweights each channel and makes the occluded parts have a lower impact on the final score. [18] proposes AggLoss, which is also adopted by [22], to make the proposal boxes corresponding to the same object more compact. In addition, a new part occlusion-aware region of interest (PORoI) pooling is utilized to integrate the prior structural information of the human body with visibility prediction into the network.
The aforementioned algorithms have achieved great performance in crowded pedestrian detection. However, to the best of our knowledge, most works that specifically address the occlusion issue are based on two-stage detectors such as Faster-RCNN, which do not have as good a balance between precision and inference speed as one-stage models. In this paper, we propose a one-stage detector that achieves SOTA performance in terms of precision and is significantly faster than current two-stage models.

Methods

In crowded scenes, many objects are heavily occluded by other objects, and the pixels that represent their specificity are limited, which makes them hard to distinguish from the objects that cover them. In this section, we introduce the SPCS module, which generates refined, distinguishable features for heavily occluded objects.

SPCS module

YOLO’s mechanism for determining positive anchors is not friendly to crowded targets. In the YOLO algorithm, the feature pyramids are divided spatially into several grids (e.g., \(13 \times 13\), \(26 \times 26\) and \(52 \times 52\) with an input size of \(416\times 416\)), and only the grid that contains the center point of an object is treated as positive. However, when two or more targets heavily overlap each other and their center points are located in the same grid, only one target is preserved in the training process and the others are ignored. This shortcoming makes a target that heavily overlaps another one hard for YOLO to predict. Increasing the resolution of the feature pyramid is a good way to mitigate this problem: a finer meshing assigns the targets to different grids as much as possible, as shown in Fig. 1b. However, there is another problem: for an object that is heavily covered by a front object, the front object occupies most of the region of its bounding box, and the visible pixels used to express its particularity are limited, as shown in Fig. 1c. Therefore, for the occluded object, it is difficult to extract a distinguishable representation that is far from the front object in the feature space.
In this paper, we propose a spatial pyramid convolutional shuffle module that increases the resolution of the pyramid feature and, at the same time, generates refined, distinguishable representations for heavily overlapped targets. As shown in Fig. 2, the SPCS module takes pyramid features as input. Inspired by spatial pyramid pooling (SPP) [42], we apply four convolutional kernels with different sizes and dilation rates to each pyramid feature and concatenate the outputs of these four convolutional layers channel-wise. Specifically, we adopt four kinds of convolutional kernels: kernel size \(1 \times 1\), kernel size \(3\times 3\) with dilation rate 1, kernel size \(4\times 4\) with dilation rate 2 and kernel size \(5\times 5\) with dilation rate 2. Different kernels cover different spatial scopes. Compared with a single \(3\times 3\) convolutional layer, we look at each location four times through four different kernels. This hierarchical structure extracts information from different scopes to form refined features that carry not only detailed but also relatively global information about the current region. Then, a pixel-shuffle module is utilized to recombine the features and increase the resolution. In this way, four distinguishable sub-features corresponding to the four different convolutional kernels are generated from the same grid of the original feature pyramid.
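As a concrete illustration, the following PyTorch sketch shows one possible implementation of the four parallel branches; the per-branch channel width and the padding values (chosen so that every branch preserves the spatial size) are our assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn

class SPCBranches(nn.Module):
    """Sketch of the spatial pyramid convolution (SPC) step: four parallel convolutions
    with different kernel sizes and dilation rates, concatenated channel-wise.
    Padding values are chosen here so every branch keeps the input resolution."""

    def __init__(self, in_ch, branch_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, kernel_size=1),                          # 1x1
            nn.Conv2d(in_ch, branch_ch, kernel_size=3, dilation=1, padding=1),   # 3x3, d=1
            nn.Conv2d(in_ch, branch_ch, kernel_size=4, dilation=2, padding=3),   # 4x4, d=2 (7x7 extent)
            nn.Conv2d(in_ch, branch_ch, kernel_size=5, dilation=2, padding=4),   # 5x5, d=2 (9x9 extent)
        ])
        self.act = nn.Mish()  # the paper uses Mish activations

    def forward(self, x):                       # x: (B, in_ch, H, W)
        outs = [self.act(b(x)) for b in self.branches]
        return torch.cat(outs, dim=1)           # (B, 4*branch_ch, H, W)
```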
As shown in Fig. 3, we call each \(2\times 2\) grid in the enlarged feature a cell, so a feature with resolution \(2W\times 2H\) contains \(W\times H\) cells. Given an input feature map of size \(4C\times W\times H\), where W is the width, H is the height and 4C is the number of channels, the pixel-shuffle module evenly divides the input feature into four parts along the channel dimension, each of size \(C\times W\times H\). The grids in the first part (the green feature in Fig. 3) are placed in the top-left grid of each cell of the output feature, the grids in the second part (the blue feature map in Fig. 3) in the top-right grid of all cells, the third part (the red feature map in Fig. 3) in the bottom-left and the fourth part (the yellow feature map in Fig. 3) in the bottom-right grid of all cells. This mechanism gives the grids in every cell a fixed relationship to the preceding convolutional kernel sizes, or in other words, receptive fields. However, this fixed relationship may not be the best choice, i.e., the top-left grids may need larger receptive fields than the other grids in the same cell, or the bottom-right grid may need to focus on a small region. Therefore, we add a \(1\times 1\) convolutional layer before the pixel-shuffle module, which does not change the number of channels, to fuse the results of the four convolutional layers channel-wise. Following the baseline, we use Mish [41] as the activation function in all network structures.
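For illustration, the sketch below (with a placeholder channel width of 64 per branch) reproduces the recombination described in Fig. 3 with explicit indexing; PyTorch's nn.PixelShuffle(2) performs the same kind of rearrangement, though it groups channels in an interleaved order rather than in four contiguous blocks, a detail the preceding \(1\times 1\) fusion convolution makes less critical.

```python
import torch
import torch.nn as nn

def blocked_pixel_shuffle(x):
    """Rearrange a (B, 4C, H, W) tensor into (B, C, 2H, 2W), placing the four C-channel
    groups in the top-left / top-right / bottom-left / bottom-right grid of each 2x2 cell,
    as described for Fig. 3."""
    p0, p1, p2, p3 = x.chunk(4, dim=1)           # each (B, C, H, W)
    b, c, h, w = p0.shape
    out = x.new_zeros(b, c, 2 * h, 2 * w)
    out[..., 0::2, 0::2] = p0                    # top-left
    out[..., 0::2, 1::2] = p1                    # top-right
    out[..., 1::2, 0::2] = p2                    # bottom-left
    out[..., 1::2, 1::2] = p3                    # bottom-right
    return out

# A 1x1 convolution (channel count unchanged) fuses the four branch outputs before the
# shuffle, so each output grid is not rigidly tied to a single kernel size.
fuse = nn.Conv2d(4 * 64, 4 * 64, kernel_size=1)   # 64 channels per branch, a placeholder
feat = torch.randn(1, 4 * 64, 26, 26)
up = blocked_pixel_shuffle(fuse(feat))            # (1, 64, 52, 52)
```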
In the enlarged feature map, as shown in Fig. 3, the sub-features in the four grids of the same cell are obtained from the same grid of the previous feature map, which is similar to the idea of [15] that predicts multiple boxes from one single proposal box. The SPCS module predicts four instances based on the same location in the original pyramid feature, but they differ from each other because of the differences between their corresponding convolutional kernel sizes. Multiple convolutional kernels with different receptive fields applied to the same location can provide multilevel information.
In addition, to further enhance the dissimilarity between adjacent sub-features, we add a skip branch that directly transmits detail information to the enlarged features from the low-level features with the same resolution in the backbone, as shown in Fig. 2. The differences between overlapped objects lie in the details, which means the low-level features, which contain more detail information, can further enhance the difference between adjacent sub-features. Following the YOLO style, the low-level features are concatenated with the output features of the SPCS module to form the new feature pyramid.
Benefitting from this structure, the SPCS module provides four distinguishable representations for each location of the original pyramid, and the increased resolution compensates for the shortcoming of YOLO's positive anchor determination mechanism in crowded scenes.

NMS process

There are two main challenges in heavily overlapped object detection, i.e., the distinguishable feature extraction issue and the NMS process. Even though the heavily overlapped objects are predicted correctly, the post-processing NMS may suppress some of them by mistake.
The original NMS, which is adopted by most SOTA algorithms and achieves great performance on general object datasets such as COCO, cannot easily cope with scenes where targets occur in crowds. The traditional greedy NMS adopts a fixed IoU threshold and directly deletes the boxes whose IoU with the proposal box is greater than that threshold, which is clearly unreasonable for crowded objects. Soft NMS [27] improves this strategy to prune only the truly redundant prediction boxes: it penalizes the confidence scores of the candidate boxes according to their IoU with the proposal box, i.e., the larger the IoU with the proposal box, the smaller the confidence score becomes, and it then suppresses all candidate boxes whose confidence scores fall below a threshold. Adaptive NMS [22] provides the reasonable idea that the IoU threshold should adapt to the density of the targets, i.e., the IoU threshold should increase when targets are dense (heavily overlapping) and decrease when targets are sparse (not touching or mildly overlapping).
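As an illustration, the snippet below sketches a single Soft-NMS-style step using a simplified linear score decay; the helper names and the exact decay rule here are our assumptions for illustration rather than the precise formulation of [27].

```python
import numpy as np

def iou(box, boxes):
    """IoU between one box and an array of boxes, all in (x1, y1, x2, y2) format."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def soft_nms_step(proposal, candidates, scores, score_thr=0.001):
    """One Soft-NMS step (simplified linear decay): instead of deleting candidates whose
    IoU with the proposal exceeds a fixed threshold, decay their confidence scores in
    proportion to the overlap, then drop those that fall below score_thr."""
    ov = iou(proposal, candidates)
    decayed = scores * (1.0 - ov)          # higher overlap -> lower confidence
    keep = decayed > score_thr
    return candidates[keep], decayed[keep]
```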
As mentioned above, there are two main challenges in heavily overlapped object detection, i.e., the extraction of distinguishable features and the NMS process. These two challenges simultaneously influence the performance of detectors. Since our SPCS module addresses the first challenge, in this paper we adopt each of these three NMS methods in turn, aiming to exclude the influence of NMS and comprehensively show the performance of the SPCS module. The related discussion is presented in the experimental section.
Table 1 Ablation studies on CrowdHuman

| NMS type     | Model        | AP (%)        | MR (%)        | Recall (%)    |
|--------------|--------------|---------------|---------------|---------------|
| Original     | YOLO\(^1\)   | 92.99         | 43.29         | 96.63         |
|              | YOLOC\(^2\)  | 93.96 (+0.97) | 41.85 (−1.44) | 97.71 (+1.08) |
| Adaptive\(^3\)| YOLO        | 93.56         | 42.85         | 96.99         |
|              | YOLOC        | 94.31 (+0.75) | 41.44 (−1.41) | 97.84 (+0.85) |
| Soft         | YOLO         | 94.07         | 43.26         | 98.36         |
|              | YOLOC        | 94.69 (+0.62) | 41.83 (−1.43) | 99.01 (+0.63) |

\(^1\)YOLO is the Scaled-YOLOv4
\(^2\)YOLOC is the Scaled-YOLOv4 + SPCS
\(^3\)The density adopted by Adaptive NMS is calculated using the annotation information
Bold values indicate better results than other methods under the current index

Experiments

To verify that the SPCS module can improve the performance in heavily overlapped object detection and boost the information extraction ability, we evaluate our model on two public datasets, i.e., CrowdHuman and WiderPerson. First, we implement ablation studies on the CrowdHuman dataset to verify the effectiveness of the SPCS module on occluded object detection. Second, a density prediction experiment is conducted to verify that the SPCS module can improve the information extraction ability. Finally, we perform comparative experiments on the CrowdHuman and WiderPerson datasets to compare the performance of YOLOC with some SOTA methods on crowded target detection.

Dataset

CrowdHuman CrowdHuman is a public human detection dataset that contains 15,000 images in the training set, 4370 images in the validation set and 5000 images in the test set. There are approximately 470K instances in the training and validation sets, and each image contains 23 instances on average. Each instance has three labels: a full box that surrounds the whole pedestrian including the occluded parts, a visible box and a head box. In our method, we only use the full box labels. Compared with other pedestrian datasets, instances in CrowdHuman are denser: there are on average 2.40 instances per image whose IoU with another instance is greater than 0.5. Results evaluated on CrowdHuman are therefore more convincing for verifying the ability to detect crowded targets, so we perform most of the ablations and comparisons on CrowdHuman. All results are reported on the validation set.
WiderPerson WiderPerson is a crowded human detection dataset. It contains 8000, 1000 and 4382 images in the training, validation and test sets, respectively, and each image contains 28.87 instances on average. The objects in this dataset are annotated with 5 classes: pedestrian, rider, partially visible person, crowd and ignored region. Following the protocol of the official evaluation code, we only use annotations of the first category, i.e., pedestrians, for training and testing, and ignore all annotations of the other categories. All results are reported on the validation set.

Evaluation metric

Average precision (AP) AP is the mainstream evaluation metric for object detection, which takes into account not only precision but also the recall ratio of the detection results. Larger AP means better performance.
MR MR is a metric commonly adopted in pedestrian detection. It is short for the log-average miss rate over false positives per image (FPPI) in the range \([10^{-2}, 10^0]\), which is the same as the official metric of Caltech [43]. MR is very sensitive to false positives. A lower MR means better performance.
Recall Recall is short for the maximum recall among all detection boxes; this metric reflects the proportion of ground truth objects that are predicted as true positives, i.e., how many ground truth objects can be detected properly. It is calculated as follows:
$$\begin{aligned} Recall=\frac{True\ Positive}{True\ Positive + False\ Negative} \end{aligned}$$
(1)
Larger Recall means better performance.

Training settings

We train all models using the SGD optimizer with momentum 0.937; the number of warm-up epochs is 3; all training images are resized to 864, and Mosaic and MixUp [7] are used for image augmentation. The longer edges of the testing images are resized to 896 without any image augmentation. Multiscale training and testing are not adopted. The cosine learning rate [44] scheduling strategy is adopted, which is defined as follows:
$$\begin{aligned} lr_t=(1-\eta )\left( 1-\frac{1}{2}\left( 1-cos\left( \frac{t\pi }{T}\right) \right) \right) lr_{init} \end{aligned}$$
(2)
where \(lr_t\) is the learning rate at epoch t, \(lr_{init}\) is the initial learning rate, T is the total number of training epochs, and \(1-\eta \) controls the lower limit of the learning rate. Similar to the original YOLO, we use the K-means algorithm to compute 9 anchor boxes for the corresponding dataset, and in the training process, an anchor is determined to be positive or negative according to the following:
$$\begin{aligned} anchor = {\left\{ \begin{array}{ll} positive, &{} if\ max\left( \frac{w_a}{w_t},\frac{w_t}{w_a},\frac{h_a}{h_t},\frac{h_t}{h_a}\right) \le 4.0\\ negative, &{} otherwise \\ \end{array}\right. }\nonumber \\ \end{aligned}$$
(3)
where \(w_a\) and \(h_a\) are the width and height of the current anchor box, and \(w_t\) and \(h_t\) are the width and height of a target bounding box. For the sake of fairness, all of our experiments use the same hyper-parameters and image augmentation methods. The results are obtained with the PyTorch 1.7.1 framework using 8 NVIDIA RTX 3090 GPUs.
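The following sketch restates Eq. (2) and Eq. (3) as plain Python helpers; it is only a reading aid for the two rules above (Eq. (2) is implemented exactly as printed), using the hyper-parameter values listed later for CrowdHuman.

```python
import math

def cosine_lr(t, T, lr_init, eta):
    """Eq. (2): cosine-decayed learning rate at epoch t out of T total epochs."""
    return (1.0 - eta) * (1.0 - 0.5 * (1.0 - math.cos(t * math.pi / T))) * lr_init

def is_positive_anchor(w_a, h_a, w_t, h_t):
    """Eq. (3): an anchor is positive for a target box when their width and height
    ratios stay within a factor of 4 in either direction."""
    ratio = max(w_a / w_t, w_t / w_a, h_a / h_t, h_t / h_a)
    return ratio <= 4.0

# usage: a 300-epoch schedule with lr_init = 0.005 and eta = 0.12 (the CrowdHuman settings)
lrs = [cosine_lr(t, 300, 0.005, 0.12) for t in range(300)]
```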

Ablation studies on CrowdHuman

Table 1 shows the ablation experiments for the proposed SPCS module. The baseline is Scaled-YOLOv4 with three different NMS methods. For the original NMS, we set the IoU threshold to 0.6. For Adaptive NMS, for the sake of fairness, we use the ground truth density, calculated from the annotations of the validation dataset, as the IoU threshold. We set the batch size to 32, the initial learning rate \(lr_{init}\) of the cosine scheduling strategy to 0.005, \(\eta \) to 0.12, T to 300 and the momentum to 0.937. The backbone and neck of the models are initialized with Scaled-YOLOv4 weights pre-trained on COCO, and the rest are initialized with a Gaussian distribution with mean 0 and variance 0.2.
We can see that, compared with the original Scaled-YOLOv4, the SPCS module significantly improves crowded object detection. Under the original NMS, AP and Recall increase by 0.97% and 1.08%, and MR decreases by 1.44%; under Adaptive NMS with the ground truth density, AP and Recall increase by 0.75% and 0.85%, and MR decreases by 1.41%; under Soft NMS, AP and Recall increase by 0.62% and 0.63%, and MR decreases by 1.43%. These comprehensive comparative results exclude the influence of the different NMS methods and demonstrate that the SPCS module can improve performance in crowded scenes.
The increased resolution of the pyramid features guarantees that as many targets as possible are preserved in the training process, and the distinguishable multilevel semantic information provided by the SPCS module yields refined semantic information for occluded objects. These two properties are directly reflected in the recall rate. Table 1 shows only the maximum recall rate; however, since the recall rate changes with the confidence score threshold, we also sample recall values at intervals of 0.1 over the confidence threshold range [0, 1], as shown in Fig. 4. Regardless of the NMS process, the recall of the model with the SPCS module outperforms that of the baseline, which benefits from the two properties mentioned above. In addition, the results generated with Soft NMS achieve the best Recall of 99.01%, which means that only very few objects are missed, and the comparison between the original NMS and Soft NMS also demonstrates that many objects are suppressed by the NMS process rather than missed by the detector.

Ablation studies on information extraction ability

In this section, we use Adaptive NMS to design a density prediction experiment that demonstrates that our SPCS module can enhance the information extraction ability for crowded objects. Adaptive NMS utilizes density information predicted by the network. Clearly, density prediction needs to consider information from multiple overlapping objects, and the limited visible pixels of the occluded objects play an important role in precise density prediction. If two objects overlap extremely heavily and only very few pixels of the occluded object are visible, their real density, i.e., the IoU between their bounding boxes, tends towards 1. However, if these few visible pixels are ignored by the detector, which means the detector believes there is only one object there, the predicted density tends towards 0, which is the complete opposite of the truth and produces a large error in Adaptive NMS. Therefore, the quality of the predicted density in Adaptive NMS can be used to indirectly demonstrate the information extraction ability for occluded objects. The performance gap between using the predicted density and the ground truth density in the Adaptive NMS algorithm can be used as a metric for information extraction ability: results closer to those obtained with the ground truth density indicate better performance.
Table 2 Ablation studies on density prediction

| Model | Density source   | AP (%) | MR (%) | Recall (%) | ΔAP  | ΔMR  | ΔRecall |
|-------|------------------|--------|--------|------------|------|------|---------|
| YOLO  | Prediction\(^1\) | 93.20  | 43.14  | 96.73      | 0.36 | 0.29 | 0.26    |
|       | Ground truth\(^2\)| 93.56 | 42.85  | 96.99      |      |      |         |
| YOLOC | Prediction       | 94.11  | 41.75  | 97.75      | 0.20 | 0.31 | 0.09    |
|       | Ground truth     | 94.31  | 41.44  | 97.84      |      |      |         |

\(^1\)The predicted density is generated by the density prediction head shown in Fig. 5b
\(^2\)The ground truth density is calculated using the annotation information
Bold values indicate better results than other methods under the current index
Prediction head In our method, to make the influence of the SPCS module prominent and to prevent an extra complex structure from covering up the shortage of the original network in density prediction, a tiny density prediction branch is adopted. Our density prediction subnet contains only two convolutional layers, i.e., a \(3\times 3\) convolutional layer and a \(5\times 5\) convolutional layer, which is much simpler than the one in [22], as shown in Fig. 5b.
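A minimal sketch of such a head is given below; the intermediate channel width, paddings, per-anchor output and the sigmoid are our assumptions, since the text only specifies the two kernel sizes.

```python
import torch.nn as nn

class TinyDensityHead(nn.Module):
    """Sketch of the tiny density prediction head: a 3x3 and a 5x5 convolution, per the
    text. Channel widths, paddings and the sigmoid squashing the output into [0, 1]
    are assumptions, not values stated in the paper."""

    def __init__(self, in_ch, mid_ch=64, num_anchors=3):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, mid_ch, kernel_size=3, padding=1),
            nn.Mish(),
            nn.Conv2d(mid_ch, num_anchors, kernel_size=5, padding=2),
        )

    def forward(self, x):
        # one density value per anchor per grid cell, in [0, 1]
        return self.head(x).sigmoid()
```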
Density loss In an image with multiple objects, each object may overlap with more than one other, and we choose the maximum IoU value as its density label. The object density is defined as follows:
$$\begin{aligned} td_{i}=max_{b_{j}\in \psi , j \ne i}iou(b_{i}, b_{j}). \end{aligned}$$
(4)
where \(td_{i}\) is the density label of box \(b_{i}\), defined as its maximum bounding box IoU with all other ground truth boxes in the set \(\psi \), and iou(x, y) computes the IoU of the two input boxes x and y.
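A minimal sketch of Eq. (4), assuming axis-aligned boxes in (x1, y1, x2, y2) format and using torchvision's pairwise IoU:

```python
import torch
from torchvision.ops import box_iou

def density_labels(boxes: torch.Tensor) -> torch.Tensor:
    """Eq. (4): the density label of each ground-truth box is its maximum IoU with any
    other ground-truth box of the same image.  `boxes` is (N, 4) in (x1, y1, x2, y2)."""
    if boxes.shape[0] < 2:
        return boxes.new_zeros(boxes.shape[0])   # a lone box has density 0
    iou = box_iou(boxes, boxes)                  # (N, N) pairwise IoU matrix
    iou.fill_diagonal_(0.0)                      # ignore each box's overlap with itself
    return iou.max(dim=1).values
```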
In the NMS process, a candidate box \(b_{i}\) should be suppressed if \(iou(b_i,M) > t_M\), where M is the current proposal box and \(t_M\) is the adaptive IoU threshold of M. Following Adaptive NMS, \(t_M\) is defined as
$$\begin{aligned} t_{M}=max(d_{t},d_{M}) \end{aligned}$$
(5)
where \(d_t\) is the lower bound of the adaptive threshold and is manually set to 0.6 in our method, and \(d_M\) is the predicted density of the proposal box M. The NMS process is summarized as follows for the current proposal box M and the candidate boxes (a minimal sketch of this rule follows the list):
1.
if \(d_t>d_M\), box M is located in a sparse region, and the NMS process follows the traditional procedure, i.e., the boxes whose IoU with M is greater than the fixed threshold \(d_t\) are suppressed and the others are preserved.
 
2.
if \(d_t<d_M\), box M is located in a crowded region, and the boxes whose IoU with M is greater than M's density \(d_M\) are suppressed. This adaptive threshold preserves predicted boxes belonging to different objects even when they heavily overlap with each other.
 
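The sketch below illustrates this adaptive-threshold rule inside a greedy suppression loop; the function and tensor names are ours, and it is a simplified reading of Adaptive NMS rather than the official implementation.

```python
import torch
from torchvision.ops import box_iou

def adaptive_nms(boxes, scores, densities, d_t=0.6):
    """Adaptive-NMS-style suppression sketch: the IoU threshold for each kept proposal is
    max(d_t, predicted density of that proposal), per Eq. (5), so boxes in crowded regions
    tolerate larger overlaps before being suppressed.  All tensors are aligned along dim 0."""
    order = scores.argsort(descending=True)
    keep = []
    while order.numel() > 0:
        m = order[0]
        keep.append(m.item())
        if order.numel() == 1:
            break
        rest = order[1:]
        thr = max(d_t, densities[m].item())                   # Eq. (5)
        ov = box_iou(boxes[m].unsqueeze(0), boxes[rest])[0]   # IoU with remaining boxes
        order = rest[ov <= thr]                               # keep only mildly overlapping ones
    return torch.tensor(keep, dtype=torch.long)
```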
Different from [22], which uses a smooth L1 loss for density prediction, our method uses a focal loss to train the density prediction head, defined as
$$\begin{aligned}&L_{d}=-\sum _{i=0}^{K\times K}\sum _{j=0}^{N}1^{obj}_{ij}[td_i(1-d_i)^{\gamma }log(d_i)\nonumber \\&\qquad \quad +(1-td_i)(d_i)^{\gamma }log(1-d_i)] \end{aligned}$$
(6)
where K is the width or height of the output feature of the SPCS module, N is the number of anchor boxes at each grid, and \(1_{ij}^{obj}\) indicates that the loss function penalizes the corresponding density prediction error only if an object occurs at the position indexed by (i, j). The \(\gamma \) is set to 0.2 in this paper.
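A minimal PyTorch sketch of Eq. (6), where `obj_mask` stands for the indicator \(1_{ij}^{obj}\); the clamping constant is our addition for numerical stability.

```python
import torch

def density_focal_loss(pred_d, target_d, obj_mask, gamma=0.2, eps=1e-7):
    """Sketch of the density loss in Eq. (6): a focal-style binary cross entropy between
    predicted densities and density labels, evaluated only at grid/anchor positions that
    contain an object (obj_mask)."""
    pred_d = pred_d.clamp(eps, 1.0 - eps)
    loss = -(target_d * (1.0 - pred_d) ** gamma * torch.log(pred_d)
             + (1.0 - target_d) * pred_d ** gamma * torch.log(1.0 - pred_d))
    return (loss * obj_mask).sum()
```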
We train Scaled-YOLOv4 with the density prediction head shown in Fig. 5b using the loss function \(L_d\) in Eq. (6) and use the results obtained with Adaptive NMS as the baseline. To measure the quality of the predicted density, we test the model with Adaptive NMS using the ground truth density and the predicted density, respectively. The difference between the results obtained with the predicted density and with the ground truth density can be seen as a metric for the quality of the predicted density: results closer to those of the ground truth density indicate better performance.
As shown in Table 2, when we use the predicted density as the IoU threshold of Adaptive NMS, Scaled-YOLOv4 without the SPCS module achieves 93.20% AP, 43.14% MR and 96.73% Recall; with the ground truth density as the IoU threshold, it achieves 93.56% AP, 42.85% MR and 96.99% Recall. The deviations in AP, MR and Recall are 0.36%, 0.29% and 0.26%, respectively. After the SPCS module is adopted, the AP, MR and Recall are 94.11%, 41.75% and 97.75%, respectively, with the predicted density as the IoU threshold, and 94.31%, 41.44% and 97.84%, respectively, with the ground truth density. The deviations are 0.20%, 0.31% and 0.09%, respectively. It is obvious that the density predicted with the SPCS module is closer to the ground truth density, which proves that the SPCS module is helpful for density prediction. As mentioned above, density prediction requires refined information about several overlapping objects, so the experimental results indirectly prove that the SPCS module is helpful for refined information extraction.
To visually demonstrate the changes brought by the SPCS module, we visualize the pyramid features. The details are shown in Appendix A.

Discussion on parameters and inference speed

We also study the parameter increase and the time cost brought by the SPCS module. Since the results reported in Table 1 are all based on YOLOC integrated with our tiny density prediction head, as shown in Fig. 5b, we only report the inference speed when the density prediction head is adopted, as shown in Table 3.
Table 3 Studies on parameters and inference speed

| Model         | Tiny density head | SPCS          | Parameters (M) | fps  |
|---------------|-------------------|---------------|----------------|------|
| Scaled-YOLOv4 |                   |               | 200.24         | /    |
|               | \(\checkmark \)   |               | 204.29         | 34.5 |
|               |                   | \(\checkmark \)| 220.93        | /    |
|               | \(\checkmark \)   | \(\checkmark \)| 225.26        | 33   |
Table 4 Comparative results on CrowdHuman

| Model                 | Type      | AP (%) | MR (%) | Recall (%) | fps |
|-----------------------|-----------|--------|--------|------------|-----|
| FPN [22]              | Two-stage | 84.71  | 49.73  | 91.27      |     |
| RCNN-FPN [15]         | Two-stage | 85.80  | 42.90  |            |     |
| IterDet [16]          | Two-stage | 88.08  | 49.44  | 95.80      |     |
| PS-RCNN [45]          | Two-stage | 87.94  |        | 95.11      |     |
| PBM [23]              | Two-stage | 89.29  | 43.35  | 93.33      |     |
| DA [20]               | Two-stage |        | 51.79  |            |     |
| V2F-Net [46]          | Two-stage | 91.03  | 42.28  |            |     |
| NOH-NMS [47]          | Two-stage | 89.00  | 43.90  | 92.90      |     |
| CrowdDet [15]         | Two-stage | 90.70  | 41.70  |            | 19  |
| RetinaNet [22]        | One-stage | 80.83  | 63.33  | 93.80      |     |
| RFB-Net [22]          | One-stage | 79.67  | 63.03  | 94.77      |     |
| YOLOC (original NMS)  | One-stage | 93.96  | 41.85  | 97.84      | 33  |
| YOLOC (adaptive NMS)  | One-stage | 94.11  | 41.75  | 97.75      | 33  |
| YOLOC (soft NMS)      | One-stage | 94.69  | 41.83  | 99.01      | 33  |

\(^1\)The inference speed of CrowdDet is tested using its official code on the same platform as our YOLOC
\(^2\)The results generated by the adaptive NMS method take the predicted density as the IoU threshold
Bold values indicate better results than other methods under the current index
The tiny prediction head adds about 4.05 M parameters when the SPCS module is not integrated and 4.43 M parameters when it is involved; the difference arises because the number of channels of the features fed into the density prediction head changes when the SPCS module is adopted. In addition, the SPCS module adds about 21 M parameters; again, the difference between the cases with and without the density prediction head is caused by the change in the input feature channels. The increase in parameters mainly comes from the four dilated convolutions: we concatenate their output features channel-wise rather than adding or multiplying them element-wise, which makes the number of channels of the intermediate features of the SPCS four times larger than before. However, this mechanism preserves as much information as possible, as shown in Table 1. In addition, the added parameters have little effect on the inference speed, and our YOLOC still achieves real-time performance.

Comparative experiment

In this section, we compare the performance of YOLOC with some current SOTA algorithms in terms of precision and inference speed on CrowdHuman and WiderPerson, respectively.
CrowdHuman CrowdHuman is one of the most convincing datasets for testing a model's ability to detect occluded objects. We compare our YOLOC with the newest SOTA methods, and the results are shown in Table 4.
Table 5 Comparative results on WiderPerson

| Model                      | Type      | AP (%) | MR (%) | Recall (%) | fps  |
|----------------------------|-----------|--------|--------|------------|------|
| Faster-RCNN [45]           | Two-stage | 88.89  |        | 93.60      |      |
| Improved Faster-RCNN [29]  | Two-stage |        | 46.06  |            | 0.83 |
| PS-RCNN [45]               | Two-stage | 90.52  |        | 95.61      |      |
| IterDet (Faster-RCNN) [16] | Two-stage | 91.95  | 40.78  | 97.15      |      |
| RetinaNet [29]             | One-stage |        | 48.32  |            | 8.93 |
| IterDet (RetinaNet) [16]   | One-stage | 90.23  | 43.88  | 95.35      |      |
| YOLOC (adaptive NMS)       | One-stage | 93.04  | 50.71  | 98.45      | 33   |

Bold values indicate better results than other methods under the current index
As shown in Table 4, our method achieves the best performance in AP and Recall, and its MR is the second best among all previous SOTA models. Among the one-stage models, YOLOC leads by a large margin in detection performance. Figure 9 shows the PR curves of YOLOC and CrowdDet, which is currently the best detector on CrowdHuman; the other results in Fig. 9 are also generated by the official code of [15]. It is clear that, regardless of which NMS method is used, our YOLOC achieves better performance than CrowdDet. Moreover, benefitting from its one-stage structure, YOLOC achieves real-time inference speed.
WiderPerson For the WiderPerson dataset, we set the batch size to 32, the initial learning rate \(lr_{init}\) of the cosine scheduling strategy to 0.001, \(\eta \) to 0.1, T to 100 and the momentum to 0.937. For the original NMS, we set the IoU threshold to 0.6. The other training configurations are the same as for CrowdHuman.
As shown in Table 5, YOLOC achieves the best performance among all detectors in terms of AP and Recall, which are 93.04% and 98.45%, respectively. However, compared with other SOTA methods, YOLOC falls behind in terms of MR by a large margin.
Discussion of the poor MR performance on WiderPerson As mentioned above, MR is extremely sensitive to the false-positive rate. By inspecting the ground truth annotations used by the official evaluation code, we find that the likely reason why YOLOC performs poorly in MR is that YOLOC detects many heavily occluded objects that are treated as negatives by the official evaluation code. According to our observations, this phenomenon is very common in our test process on WiderPerson, as shown in Fig. 6. In the WiderPerson annotations referenced by the evaluation code, many heavily occluded objects are ignored, which means that if these occluded objects are detected, they are counted as false positives and the MR increases. Our YOLOC has a strong ability to detect not only fully visible objects but also occluded objects with limited visible pixels, which is why we achieve the best Recall. However, many occluded objects detected by YOLOC are counted as false positives during the official evaluation process, which is very harmful to the MR metric. Different from WiderPerson, the CrowdHuman dataset annotates all objects that occur in the image as far as possible, regardless of how many of their pixels are visible, as shown in Fig. 7. Therefore, YOLOC can achieve SOTA performance in MR on the CrowdHuman dataset.

Conclusion

In this paper, we propose a spatial pyramid convolutional shuffle module named SPCS for occluded object detection. Since it is difficult to extract distinguishable representations for heavily occluded objects, at each location of the pyramid features we adopt multiple convolutional kernels with different receptive fields, and the output features are recombined spatially using a pixel-shuffle module to increase the resolution. In this way, four instance predictions can be generated from each location of the pyramid feature, and each of them is distinguishable since they correspond to four different convolutional kernels. Moreover, the multiple convolutional kernels with different receptive fields can extract refined information for each region, which is helpful for detecting occluded objects whose visible pixels are limited. Extensive experimental results demonstrate the effectiveness of the SPCS module for occluded object detection.

Declarations

Competing interests

On behalf of all the authors, the corresponding author states that there is no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Visualization of the ablation studies

For simplicity and intuition, we only consider the distinction between the spatial sub-features. For a pyramid feature \(F\in R^{C\times H\times W}\) that needs to be visualized, we first compute its mean along the channel dimension and then normalize the mean feature. The visualized map \(F_v\in R^{H\times W}\) can be computed using the following equations:
$$\begin{aligned} F_{m} = mean(F) \end{aligned}$$
(A1)
$$\begin{aligned} F_{v} = \frac{F_{m}-min(F_{m})}{max(F_{m})-min(F_{m})} \end{aligned}$$
(A2)
where \(F_m\in R^{H\times W}\) is the mean of F along the channel dimension, and max(x) and min(x) are the maximum and minimum values of the input x, respectively.
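A minimal sketch of Eqs. (A1) and (A2) for a single \(C\times H\times W\) feature map (the small epsilon is our addition to avoid division by zero):

```python
import torch

def visualize_feature(feat):
    """Eqs. (A1)-(A2): average a (C, H, W) pyramid feature over channels and min-max
    normalize the result to [0, 1] for visualization."""
    f_m = feat.mean(dim=0)                                    # (H, W) channel-wise mean
    return (f_m - f_m.min()) / (f_m.max() - f_m.min() + 1e-12)
```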
As presented in Fig. A1, two changes are caused by the SPCS module. First, the adjacent sub-features in the middle-column features are distinguishable. In detail, each \(2\times 2\) grid (red frame A) corresponds to a \(1\times 1\) grid (red frame B), and the mean values of the four sub-features in frame A are unequal, which means two overlapped objects can be distinguished if their center points fall in different grids of frame A, whereas they cannot be distinguished if their centers are located in frame B. Second, the features generated by the SPCS module have higher overall contrast, and the regions where objects are located are more prominent compared with the background regions, which means that the information extraction ability is boosted by the SPCS module. This also explains why the performance of object density prediction is enhanced after the SPCS module is adopted, as shown in the density prediction experiment above.
References
4.
Redmon J, Divvala S, Girshick R, Farhadi A (2016) You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
5.
Redmon J, Farhadi A (2017) YOLO9000: better, faster, stronger. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
8.
Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu C-Y, Berg AC (2016) SSD: single shot multibox detector. In: Computer Vision - ECCV 2016, pp 21-37. Springer, Cham
13.
Lin T-Y, Maire M, Belongie S, Hays J, Perona P, Ramanan D, Dollár P, Zitnick CL (2014) Microsoft COCO: common objects in context. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer Vision - ECCV 2014. Springer, Cham, pp 740-755
16.
Rukhovich D, Sofiiuk K, Galeev D, Barinova O, Konushin A (2021) IterDet: iterative scheme for object detection in crowded environments. In: Structural, Syntactic, and Statistical Pattern Recognition. Springer, Cham, pp 344-354
18.
Zhang S, Wen L, Bian X, Lei Z, Li SZ (2018) Occlusion-aware R-CNN: detecting pedestrians in a crowd. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y (eds) Computer Vision - ECCV 2018. Springer, Cham, pp 657-674
20.
21.
Gählert N, Hanselmann N, Franke U, Denzler J (2020) Visibility guided NMS: efficient boosting of amodal object detection in crowded traffic scenes. arXiv:2006.08547
26.
Shi W, Caballero J, Huszár F, Totz J, Aitken AP, Bishop R, Rueckert D, Wang Z (2016) Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp 1874-1883. https://doi.org/10.1109/CVPR.2016.207
28.
Shao S, Zhao Z, Li B, Xiao T, Yu G, Zhang X, Sun J (2018) CrowdHuman: a benchmark for detecting human in a crowd. arXiv:1805.00123
30.
38.
Jiang B, Luo R, Mao J, Xiao T, Jiang Y (2018) Acquisition of localization confidence for accurate object detection. In: Ferrari V, Hebert M, Sminchisescu C, Weiss Y (eds) Computer Vision - ECCV 2018. Springer, Cham, pp 816-832
46.
47.
Zhou P, Zhou C, Peng P, Du J, Sun X, Guo X, Huang F (2020) NOH-NMS: improving pedestrian detection by nearby objects hallucination. In: Proceedings of the 28th ACM International Conference on Multimedia (MM '20), pp 1967-1975. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3394171.3413617
Metadata
Title: SPCS: a spatial pyramid convolutional shuffle module for YOLO to detect occluded object
Authors: Xiang Li, Miao He, Yan Liu, Haibo Luo, Moran Ju
Publication date: 29.06.2022
Publisher: Springer International Publishing
Published in: Complex & Intelligent Systems / Issue 1/2023
Print ISSN: 2199-4536
Electronic ISSN: 2198-6053
DOI: https://doi.org/10.1007/s40747-022-00786-7
