Published in: Arabian Journal for Science and Engineering 8/2022

30.10.2021 | Research Article-Computer Engineering and Computer Science

A Ship Detection Method in Complex Background Via Mixed Attention Model

Authors: Hao Meng, Fei Yuan, Yang Tian, Hongwei Wei

Abstract

With the development of deep learning, object detection methods have made considerable progress. Unlike simpler visual object detection problems, detecting ships against a complex background is difficult: nearshore vessels are easily confused with background objects. When the ship target is small, feature information from the background strongly interferes with the convolutional neural network's extraction and learning of features from the ship's target region, leading to a ship sample imbalance problem. To address this, this paper proposes a mixed attention model (MAM) that reduces the difficulty of ship detection; it is composed of a pixel attention model (PAM) and a feature attention model (FAM). PAM is a generative adversarial network designed to improve sensitivity to the target area without extra manual annotation, while FAM is a convolutional network designed to increase the utilization of useful features. MAM is not a fixed structure and can be embedded into almost any object detection or classification network: PAM operates on the original image as a preprocessing step, and FAM is inserted where low-level features are aggregated into high-level features. Experimental results show that adding MAM to YOLOv3 increases mean average precision by 2.2%, reaching 0.975, which effectively improves the accuracy of ship detection in complex backgrounds.
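The abstract does not specify the internal layers of FAM, only that it is a convolutional module that reweights useful features between low-level and high-level stages. As a rough illustration of this kind of channel-reweighting idea, the following is a minimal NumPy sketch of a generic squeeze-and-excitation-style attention gate; the function name, weight shapes, and reduction ratio `r` are assumptions for illustration, not the paper's actual FAM design.

```python
import numpy as np

def feature_attention(feature_map, w1, w2):
    """Generic channel-attention gate (squeeze-and-excitation style).

    feature_map: array of shape (C, H, W)
    w1: (C, C // r) weights of the squeeze (bottleneck) layer
    w2: (C // r, C) weights of the excitation layer
    Returns the feature map reweighted channel by channel.
    """
    # Squeeze: global average pooling over the spatial dims -> (C,)
    squeezed = feature_map.mean(axis=(1, 2))
    # Excitation: bottleneck MLP, ReLU then sigmoid, giving a gate in (0, 1)
    hidden = np.maximum(squeezed @ w1, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))
    # Reweight each channel of the original features by its gate value
    return feature_map * gate[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
y = feature_attention(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

Because the sigmoid gate lies strictly between 0 and 1, the module can only attenuate channels, never amplify them; channels judged uninformative are suppressed while useful ones pass through nearly unchanged, which matches the stated goal of increasing the utilization rate of useful features.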


Metadata
Title
A Ship Detection Method in Complex Background Via Mixed Attention Model
Authors
Hao Meng
Fei Yuan
Yang Tian
Hongwei Wei
Publication date
30.10.2021
Publisher
Springer Berlin Heidelberg
Published in
Arabian Journal for Science and Engineering / Issue 8/2022
Print ISSN: 2193-567X
Electronic ISSN: 2191-4281
DOI
https://doi.org/10.1007/s13369-021-06275-2
