Published in: Fire Technology 3/2019

15.12.2018

Particle Tracking and Detection Software for Firebrands Characterization in Wildland Fires

Authors: Alexander Filkov, Sergey Prohanov

Abstract

Detection and analysis of objects in a frame or a sequence of frames (video) can be used to solve a number of problems in various fields, including fire behaviour and risk. A quantitative understanding of short-distance spotting dynamics, namely the firebrand density distribution as a function of distance from the fire front and how distinct spot fires coalesce in a highly turbulent environment, is still lacking. To address this, custom software was developed to detect the location and number of flying firebrands in a thermal image and then determine the temperature and size of each firebrand. The software consists of two modules, a detector and a tracker. The detector determines the locations of the firebrands in the frame, and the tracker matches firebrands across frames and assigns an identification number to each firebrand. Comparison of the calculated results with data obtained by independent experts and with experimental data showed that the maximum relative error does not exceed 12% for low and medium numbers of firebrands in the frame (fewer than 30), and the software agrees well with experimental observations for firebrands larger than 20 × 10−5 m. It was found that fireline intensity below 12,590 kW m−1 does not significantly change the 2D firebrand flux for firebrands larger than 20 × 10−5 m, while occasional crowning can increase the firebrand flux severalfold. The developed software allowed us to analyse the thermograms obtained during the field experiments and to measure the velocities, sizes and temperatures of the firebrands. It will help to better understand how firebrands can ignite surrounding fuel beds and could be an important tool for investigating fire propagation in communities.
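The detector/tracker split described in the abstract can be illustrated with a minimal sketch: a detector that thresholds a thermal frame and groups hot pixels into firebrand candidates, and a tracker that matches detections across frames by nearest centroid and assigns persistent identification numbers. All names, the temperature threshold, and the matching radius below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a detect-then-track pipeline for thermal frames.
# Frames are 2D lists of temperature values; threshold and max_dist are
# assumed parameters, not values from the paper.
from collections import deque

def detect(frame, threshold=600.0):
    """Return centroids of connected hot-pixel regions (4-connectivity)."""
    rows, cols = len(frame), len(frame[0])
    seen = [[False] * cols for _ in range(rows)]
    centroids = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not seen[r][c]:
                # Flood-fill one connected component of hot pixels.
                queue, pixels = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                centroids.append((cy, cx))
    return centroids

class Tracker:
    """Assign a persistent ID to each detection by nearest-centroid matching."""
    def __init__(self, max_dist=3.0):
        self.max_dist = max_dist
        self.tracks = {}      # id -> last known centroid
        self.next_id = 0

    def update(self, centroids):
        assigned, free = {}, set(self.tracks)
        for c in centroids:
            # Closest unmatched existing track within max_dist, else new ID.
            best = min(((tid, (self.tracks[tid][0] - c[0]) ** 2
                              + (self.tracks[tid][1] - c[1]) ** 2)
                        for tid in free),
                       key=lambda t: t[1], default=None)
            if best is not None and best[1] <= self.max_dist ** 2:
                tid = best[0]
                free.discard(tid)
            else:
                tid, self.next_id = self.next_id, self.next_id + 1
            self.tracks[tid] = c
            assigned[tid] = c
        return assigned
```

A production tracker would also handle occlusion, track termination, and the dense-frame case where greedy nearest-neighbour matching breaks down; this sketch only shows the ID-assignment idea.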


Metadata
Title
Particle Tracking and Detection Software for Firebrands Characterization in Wildland Fires
Authors
Alexander Filkov
Sergey Prohanov
Publication date
15.12.2018
Publisher
Springer US
Published in
Fire Technology / Issue 3/2019
Print ISSN: 0015-2684
Electronic ISSN: 1572-8099
DOI
https://doi.org/10.1007/s10694-018-0805-0
