Computing Linear Restrictions of Neural Networks

Sotoudeh, Matthew and Thakur, Aditya V.
Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems (NeurIPS), 2019

A linear restriction of a function is the same function with its domain restricted to points on a given line. This paper addresses the problem of computing a succinct representation for a linear restriction of a piecewise-linear neural network. This primitive, which we call ExactLine, allows us to exactly characterize the result of applying the network to all of the infinitely many points on a line. In particular, ExactLine computes a partition of the given input line segment such that the network is affine on each piece. We present an efficient algorithm for computing ExactLine as well as several applications. First, we show how to exactly determine the decision boundaries of an ACAS Xu neural network. Next, we demonstrate how to exactly compute integrated gradients, which are commonly used for neural network attributions, allowing us to empirically quantify the error of the approximation techniques currently in use and to propose an improved method. Finally, we use ExactLine to empirically investigate a well-known hypothesis about adversarial examples and identify interesting properties of adversarially trained networks.
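To make the primitive concrete, below is a minimal NumPy sketch of the idea for a small fully-connected ReLU network only. The function name exactline_relu, the representation of layers as (W, b) pairs, and the random example network are illustrative assumptions, not the authors' implementation or the algorithm from the paper.

import numpy as np

def exactline_relu(layers, u, v):
    """Return ratios 0 = t_0 < ... < t_k = 1 such that a fully-connected
    ReLU network is affine on every segment of the line x(t) = u + t*(v - u)
    between consecutive ratios, together with the network's outputs at
    those breakpoints.  `layers` is a list of (W, b) pairs."""
    ts = [0.0, 1.0]
    pts = [np.asarray(u, dtype=float), np.asarray(v, dtype=float)]  # images of the breakpoints so far

    for i, (W, b) in enumerate(layers):
        # An affine layer keeps the image piecewise-linear on the same partition.
        pts = [W @ p + b for p in pts]
        if i == len(layers) - 1:
            break  # no ReLU after the output layer

        # Refine the partition wherever some coordinate of the (per-segment
        # linear) image crosses zero, so that ReLU is linear on each piece.
        new_ts, new_pts = [ts[0]], [pts[0]]
        for t0, t1, p0, p1 in zip(ts[:-1], ts[1:], pts[:-1], pts[1:]):
            crossings = sorted({p0[j] / (p0[j] - p1[j])
                                for j in range(len(p0))
                                if (p0[j] < 0) != (p1[j] < 0)})
            for a in crossings:
                if 0.0 < a < 1.0:
                    new_ts.append(t0 + a * (t1 - t0))
                    new_pts.append((1.0 - a) * p0 + a * p1)
            new_ts.append(t1)
            new_pts.append(p1)
        ts, pts = new_ts, new_pts

        # Within each refined piece every coordinate has a constant sign,
        # so applying ReLU preserves piecewise-linearity.
        pts = [np.maximum(p, 0.0) for p in pts]

    return np.array(ts), pts

# Example usage on a random 4 -> 8 -> 8 -> 2 network:
rng = np.random.default_rng(0)
layers = [(rng.standard_normal((8, 4)), rng.standard_normal(8)),
          (rng.standard_normal((8, 8)), rng.standard_normal(8)),
          (rng.standard_normal((2, 8)), rng.standard_normal(2))]
ts, outs = exactline_relu(layers, u=rng.standard_normal(4), v=rng.standard_normal(4))
print(f"{len(ts) - 1} affine pieces along the segment")

Because the network is affine on each returned piece, its gradient is constant there, which is why quantities such as integrated gradients can in principle be computed exactly as a finite sum over the pieces rather than approximated by sampling.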

PDF    

@inproceedings{NeurIPS2019,
  author = {Sotoudeh, Matthew and Thakur, Aditya V.},
  title = {Computing Linear Restrictions of Neural Networks},
  booktitle = {Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems (NeurIPS)},
  year = {2019}
}