Abstract Neural Networks

Sotoudeh, Matthew and Thakur, Aditya V.
27th Static Analysis Symposium (SAS), 2020

Deep Neural Networks (DNNs) are rapidly being applied to safety-critical domains such as drone and airplane control, motivating techniques for verifying the safety of their behavior. Unfortunately, DNN verification is NP-hard, and the runtime of current algorithms grows exponentially with the number of nodes in the DNN. This paper introduces the notion of Abstract Neural Networks (ANNs), which can be used to soundly over-approximate DNNs while using fewer nodes. An ANN is like a DNN except that its weight matrices are replaced by values in a given abstract domain. We present a framework, parameterized by the abstract domain and the activation functions used in the DNN, for constructing a corresponding ANN. We present necessary and sufficient conditions on the DNN's activation functions for the constructed ANN to soundly over-approximate the given DNN. Prior work on DNN abstraction was restricted to the interval domain and the ReLU activation function. Our framework can be instantiated with other abstract domains, such as octagons and polyhedra, as well as other activation functions, such as Leaky ReLU, Sigmoid, and Hyperbolic Tangent.
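To make the idea concrete, below is a minimal sketch (illustrative only, not the paper's construction algorithm) of evaluating a network whose weight matrices have been replaced by intervals, using NumPy. The names interval_layer and abstract_forward are hypothetical, and biases are omitted for brevity; the soundness of applying ReLU directly to the bounds relies on its monotonicity, which is the kind of activation-function condition the paper characterizes.

import numpy as np

def interval_layer(W_lo, W_hi, x_lo, x_hi):
    """Given an interval weight matrix [W_lo, W_hi] and an interval input
    vector [x_lo, x_hi], return sound elementwise bounds on W @ x for
    every concrete W and x inside those intervals."""
    # Each product w_ij * x_j is bounded by the min/max over the four
    # corner products; summing per-term bounds bounds the dot product.
    corners = [W_lo * x_lo, W_lo * x_hi, W_hi * x_lo, W_hi * x_hi]
    lo = np.minimum.reduce(corners).sum(axis=1)
    hi = np.maximum.reduce(corners).sum(axis=1)
    return lo, hi

def abstract_forward(layers, x):
    """Evaluate an interval-weight network on a concrete input x.
    `layers` is a list of (W_lo, W_hi) pairs.  ReLU is monotone, so
    applying it to the lower/upper bounds keeps them sound."""
    lo = hi = np.asarray(x, dtype=float)
    for W_lo, W_hi in layers:
        lo, hi = interval_layer(W_lo, W_hi, lo, hi)
        lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Example: one abstract layer whose interval weights cover several
# concrete weight rows that have been merged into a single abstract node.
W_lo = np.array([[0.5, -1.0]])
W_hi = np.array([[1.5,  0.0]])
print(abstract_forward([(W_lo, W_hi)], np.array([2.0, 3.0])))
# -> (array([0.]), array([3.])), a sound enclosure of the concrete outputs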

PDF | Springer

@inproceedings{SAS20b,
  author = {Sotoudeh, Matthew and Thakur, Aditya V.},
  title = {Abstract Neural Networks},
  booktitle = {27th Static Analysis Symposium (SAS)},
  year = {2020},
  publisher = {Springer},
  doi = {10.1007/978-3-030-65474-0_4}
}