Issue 3/2023
Contents (11 articles)
Explanation Paradigms Leveraging Analytic Intuition (ExPLAIn)
Nils Jansen, Gerrit Nolte, Bernhard Steffen
Algebraically explainable controllers: decision trees and support vector machines join forces
Florian Jüngermann, Jan Křetínský, Maximilian Weininger
Algebraic aggregation of random forests: towards explainability and rapid evaluation
Frederik Gossen, Bernhard Steffen
Forest GUMP: a tool for verification and explanation
Alnis Murtovi, Alexander Bainczyk, Gerrit Nolte, Maximilian Schlüter, Bernhard Steffen
Towards rigorous understanding of neural networks via semantics-preserving transformations
Maximilian Schlüter, Gerrit Nolte, Alnis Murtovi, Bernhard Steffen
First three years of the international verification of neural networks competition (VNN-COMP)
Christopher Brix, Mark Niklas Müller, Stanley Bak, Taylor T. Johnson, Changliu Liu
Analysis of recurrent neural networks via property-directed verification of surrogate models
Igor Khmelnitsky, Daniel Neider, Rajarshi Roy, Xuan Xie, Benoît Barbot, Benedikt Bollig, Alain Finkel, Serge Haddad, Martin Leucker, Lina Ye
The power of typed affine decision structures: a case study
Gerrit Nolte, Maximilian Schlüter, Alnis Murtovi, Bernhard Steffen
Decision-making under uncertainty: beyond probabilities
Thom Badings, Thiago D. Simão, Marnix Suilen, Nils Jansen
An overview of structural coverage metrics for testing neural networks
Muhammad Usman, Youcheng Sun, Divya Gopinath, Rishi Dange, Luca Manolache, Corina S. Păsăreanu
Analyzing neural network behavior through deep statistical model checking
Timo P. Gros, Holger Hermanns, Jörg Hoffmann, Michaela Klauck, Marcel Steinmetz