
2021 | Original Paper | Book Chapter

BoundaryNet: An Attentive Deep Network with Fast Marching Distance Maps for Semi-automatic Layout Annotation

Authors: Abhishek Trivedi, Ravi Kiran Sarvadevabhatla

Published in: Document Analysis and Recognition – ICDAR 2021

Publisher: Springer International Publishing


Abstract

Precise boundary annotations of image regions can be crucial for downstream applications that rely on region-class semantics. Some document collections contain densely laid out, highly irregular, and overlapping multi-class region instances with a large range of aspect ratios. Fully automatic boundary estimation approaches tend to be data intensive, cannot handle variable-sized images, and produce sub-optimal results for such images. To address these issues, we propose BoundaryNet, a novel resizing-free approach for high-precision semi-automatic layout annotation. The variable-sized, user-selected region of interest is first processed by an attention-guided skip network. The network optimization is guided via Fast Marching distance maps to obtain a good-quality initial boundary estimate and an associated feature representation. These outputs are processed by a Residual Graph Convolution Network optimized using Hausdorff loss to obtain the final region boundary. Results on a challenging manuscript image dataset demonstrate that BoundaryNet outperforms strong baselines and produces high-quality semantic region boundaries. Qualitatively, our approach generalizes across multiple document image datasets containing different script systems and layouts, all without additional fine-tuning. We integrate BoundaryNet into a document annotation system and show that it provides higher annotation throughput than manual and fully automatic alternatives.
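To illustrate the kind of distance-map supervision the abstract mentions: the sketch below builds a signed distance map for a toy region mask using a plain Euclidean distance transform from SciPy. This is a simplified stand-in, not the paper's method; the paper uses the Fast Marching method, which generalizes the distance transform to non-uniform speed functions.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy binary region mask (1 = inside the region instance).
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:6, 2:7] = 1

# Distance of each interior pixel to the region boundary
# (Euclidean stand-in for a Fast Marching distance map).
inside = distance_transform_edt(mask)

# Signed variant: positive inside the region, negative outside.
# Such a map penalizes boundary predictions proportionally to
# how far they stray from the true contour.
signed = inside - distance_transform_edt(1 - mask)

print(signed.shape)  # (8, 8)
```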
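The Hausdorff loss mentioned above measures the worst-case disagreement between a predicted boundary and the ground truth. A minimal NumPy sketch of the symmetric Hausdorff distance between two boundary point sets (the point sets `pred` and `gt` here are made-up toy data):

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two (N, 2) point sets."""
    # Pairwise Euclidean distances between every point in a and b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    # For each set, take the farthest point's distance to its
    # nearest neighbour in the other set; keep the worse of the two.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pred = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
gt = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 2.0]])
print(hausdorff(pred, gt))  # 1.0
```

In practice, differentiable relaxations of this quantity are used as training losses, since the hard max is a poor gradient signal.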


Metadata
Title
BoundaryNet: An Attentive Deep Network with Fast Marching Distance Maps for Semi-automatic Layout Annotation
Authors
Abhishek Trivedi
Ravi Kiran Sarvadevabhatla
Copyright Year
2021
DOI
https://doi.org/10.1007/978-3-030-86549-8_1
