
2025 | Original Paper | Book Chapter

Comparing Training of Sparse to Classic Neural Networks for Binary Classification in Medical Data

Authors: Laura Erhan, Antonio Liotta, Lucia Cavallaro

Published in: Advances in Mobile Computing and Multimedia Intelligence

Publisher: Springer Nature Switzerland


Abstract

Sparse Neural Networks are growing in popularity: they offer compact, efficient models for resource-constrained environments, which are proliferating as the number of IoT devices increases and as the Edge and Fog Computing paradigms gain traction. We investigate and evaluate sparsifying the training of Convolutional Neural Networks for binary classification on medical datasets. We considered low-resolution (i.e., \(28\times 28\)) grey-scale images, which are memory-friendly and suitable for storage and analysis on lightweight devices. We found that high sparsification levels (above 75%) can achieve performance comparable to that of the fully connected counterpart while reducing inference time and peak memory usage, which is beneficial for the resource-constrained environments typical of Edge Computing. It is important to note that, as might be expected, beyond 90% sparsity the performance can oscillate and the results can vary significantly.
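The chapter's training setup is not reproduced on this page, but the idea of running a compact CNN at a fixed sparsity level on \(28\times 28\) grey-scale inputs can be illustrated in code. Below is a minimal sketch, assuming a PyTorch-style pipeline, a hypothetical two-layer SmallCNN architecture, and magnitude-based unstructured pruning as the sparsification mechanism; the authors' actual network and sparse-training strategy may differ.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class SmallCNN(nn.Module):
    """Hypothetical compact CNN for 28x28 grey-scale binary classification."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 28x28 -> 14x14
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(32 * 7 * 7, 1)  # single logit for binary output

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def sparsify(model: nn.Module, sparsity: float) -> nn.Module:
    # Zero out the smallest-magnitude weights in every conv/linear layer.
    for module in model.modules():
        if isinstance(module, (nn.Conv2d, nn.Linear)):
            prune.l1_unstructured(module, name="weight", amount=sparsity)
    return model

model = sparsify(SmallCNN(), sparsity=0.75)   # e.g. 75% of weights set to zero
images = torch.randn(8, 1, 28, 28)            # batch of 28x28 grey-scale inputs
labels = torch.randint(0, 2, (8, 1)).float()  # binary targets
loss = nn.BCEWithLogitsLoss()(model(images), labels)
loss.backward()  # pruned weights stay masked during the forward pass
```

Sweeping the `sparsity` argument over values such as 0.75 and 0.90 mirrors the regimes discussed in the abstract: performance comparable to the dense baseline at high sparsity, with less stable results beyond 90%.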

Metadata
Title
Comparing Training of Sparse to Classic Neural Networks for Binary Classification in Medical Data
Authors
Laura Erhan
Antonio Liotta
Lucia Cavallaro
Copyright Year
2025
DOI
https://doi.org/10.1007/978-3-031-78049-3_10