
2019 | Original Paper | Book Chapter

Deep Learning Locally Trained Wildlife Sensing in Real Acoustic Wetland Environment

Authors: Clement Duhart, Gershon Dublon, Brian Mayton, Joseph Paradiso

Published in: Advances in Signal Processing and Intelligent Recognition Systems

Publisher: Springer Singapore


Abstract

We describe ‘Tidzam’, an application of deep learning that leverages a dense, multimodal sensor network installed at Tidmarsh, a 600-acre former industrial-scale cranberry farm in Southern Massachusetts undergoing a large wetland restoration. Acoustic wildlife monitoring is a crucial metric for post-restoration evaluation of the site, as well as a challenge in such a noisy outdoor environment. This article presents the entire Tidzam system, designed to identify, in real time, the ambient sounds of weather conditions as well as sonic events such as insects, small animals, and local bird species from microphones deployed on the site. The experiment provides insight into the use of deep learning technology in a real deployment. The originality of this work lies in the system’s ability to construct its own database from local audio sampling under the supervision of human visitors and bird experts.
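The pipeline the abstract describes — streaming microphone audio, extracting spectral features, and flagging acoustic events in real time — can be illustrated with a minimal sketch. The front end below (a Hann-windowed log-magnitude STFT) and the energy-threshold detector are illustrative stand-ins for Tidzam's actual deep classifier, which the abstract does not specify; the names `spectrogram` and `detect_events` are invented for this example.

```python
import numpy as np

def spectrogram(signal, frame_len=512, hop=256):
    """Frame the signal and return a log-magnitude STFT.
    A hypothetical front end; Tidzam's real features may differ."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))
    return np.log1p(mag)

def detect_events(spec, threshold=2.0):
    """Flag frames whose peak log-magnitude exceeds a threshold --
    a simple stand-in for the deep classifier described in the paper."""
    return spec.max(axis=1) > threshold

# Usage: one second of quiet 16 kHz noise with a loud 1 kHz burst.
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(16000)
audio[8000:8512] += np.sin(2 * np.pi * 1000 * np.arange(512) / 16000)
events = detect_events(spectrogram(audio))  # boolean flag per frame
```

In a deployment like the one described, the boolean event mask would instead come from a trained network's per-class scores, and flagged segments could be queued for human labeling to grow the local database.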


Metadata
Title
Deep Learning Locally Trained Wildlife Sensing in Real Acoustic Wetland Environment
Authors
Clement Duhart
Gershon Dublon
Brian Mayton
Joseph Paradiso
Copyright Year
2019
Publisher
Springer Singapore
DOI
https://doi.org/10.1007/978-981-13-5758-9_1
