2017 Poster Sessions: Verifying Deep Neural Networks

Student Name: Guy Katz
Advisor: None
Abstract:
Deep neural networks have emerged as a widely used and effective means for tackling complex, real-world problems. However, a major obstacle in applying them to safety-critical systems is the great difficulty in providing formal guarantees about their behavior. We present a novel, scalable, and efficient technique for verifying properties of deep neural networks (or providing counterexamples). The technique can also be used to measure a network's robustness to adversarial inputs. Our approach is based on the simplex method, extended to handle the non-convex Rectified Linear Unit (ReLU) activation function, which is a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a prototype deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu). Results show that our technique can successfully prove properties of networks that are an order of magnitude larger than the largest networks verified using existing methods.
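To make the difficulty concrete: a ReLU makes the verification problem non-convex because the constraint y = max(0, x) is a disjunction of two linear phases. The following is a minimal, illustrative sketch of the case-split view of that problem on a toy one-hidden-unit network; the weights and the property checked here are assumptions chosen for illustration, and the actual technique extends the simplex method to avoid eagerly enumerating every phase combination, which is what lets it scale.

```python
# Hedged sketch: checking a simple input/output property of a tiny
# 1-hidden-unit ReLU network, y = w2 * relu(w1*x + b1) + b2, by
# splitting the ReLU into its two linear phases. All weights and the
# property are illustrative assumptions, not taken from ACAS Xu.

def verify_relu_net(w1, b1, w2, b2, lo, hi, bound):
    """Check that y <= bound for all x in [lo, hi].

    Within each ReLU phase the network is linear in x, so over the
    feasible sub-interval the maximum output is attained at an endpoint.
    Returns (True, None) if the property holds, else (False, x) where x
    is a counterexample input.
    """
    for active in (False, True):
        # Intersect [lo, hi] with the phase condition:
        #   inactive: w1*x + b1 <= 0,  active: w1*x + b1 >= 0.
        a, b = lo, hi
        if w1 != 0:
            split = -b1 / w1
            if (w1 > 0) == active:
                a = max(a, split)   # condition holds for x >= split
            else:
                b = min(b, split)   # condition holds for x <= split
        elif (b1 >= 0) != active:
            continue                # phase infeasible for every x
        if a > b:
            continue                # phase infeasible within [lo, hi]
        # Linear in x within the phase: check the endpoints only.
        for x in (a, b):
            y = w2 * max(0.0, w1 * x + b1) + b2
            if y > bound:
                return False, x     # counterexample found
    return True, None
```

For the toy network y = 2 * relu(x - 0.5) on x in [0, 1], the output peaks at 1.0, so the property "y <= 1.0" is proved while "y <= 0.9" yields a counterexample. With n ReLUs this naive split has 2^n cases, which is exactly the blow-up the abstract's simplex extension is designed to sidestep.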

[Based on joint work with Clark Barrett, David Dill, Kyle Julian, and Mykel Kochenderfer. https://arxiv.org/abs/1702.01135]

Bio:
Guy Katz is a postdoctoral fellow at Stanford University, working with Prof. Clark Barrett. He received his Ph.D. from the Weizmann Institute in 2015. His research interests lie at the intersection of Software Engineering and Formal Methods, and in particular in the application of formal methods to lightweight parallel programming models.