Research
I lead the Davis Automated Reasoning Group (DARG), a team of researchers building tools and techniques for developing and deploying correct software. Software is pervasive in our daily lives and is increasingly used in safety- and security-critical systems. Software systems come in a variety of shapes and sizes, from firmware for IoT devices consisting of thousands of lines of code, to operating systems consisting of millions of lines of code, to deep neural networks with billions of parameters. Each such system has different performance, security, safety, and functional requirements. Our research develops theoretical formalisms as well as practical tools to meet these requirements, with the goal of carrying out foundational research and transitioning it to industry to achieve real-world impact.
A short description of some of our current research can be found below; more details can be found in our publications. Our software tools are available on GitHub and listed here. I discussed some of our projects, as well as my research philosophy, in Episode 7 of the Build Better Systems Podcast; the interview took place in August 2020. Some online presentations related to our research are listed here.
Analysis and Repair of Deep Neural Networks
Deep neural networks (DNNs) have become the state of the art in a variety of applications, including image recognition and natural language processing. Moreover, they are increasingly used in safety- and security-critical applications such as autonomous vehicles. This project aims to improve our understanding of DNNs by developing techniques to analyze, verify, and repair them.
Related publications: PLDI 2024 Tutorial, SAIV 2024, PLDI 2023, STTT 2023, PLDI 2021, TACAS 2021, SAS 2020, SRDM 2019, NeurIPS 2019
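To give a flavor of what analyzing a DNN involves, here is a minimal sketch of interval bound propagation, a classic technique for soundly over-approximating a network's output range over a box of inputs. The two-layer ReLU network and its weights are hypothetical, chosen purely for illustration; this is not the symbolic representation our tools actually use.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> W @ x + b.

    Positive weights move with the interval; negative weights flip it.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so it applies to each bound independently."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical two-layer ReLU network (weights made up for illustration).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]])
b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

# Input box: each input ranges over [0, 1].
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
lo, hi = interval_relu(*interval_affine(lo, hi, W1, b1))
lo, hi = interval_affine(lo, hi, W2, b2)
print(lo, hi)  # a sound over-approximation of the network's output range
```

The resulting interval is guaranteed to contain the network's output for every input in the box, which is the basic soundness property that verification techniques build on; the price is that the bounds may be loose.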
Neuro-Symbolic Program Analysis
Neuro-symbolic program analysis augments static program analysis with machine learning to find bugs in large systems software, such as the Linux kernel.
Related publications: SOAP 2024, OOPSLA Onward! 2020, FSE 2019, FSE 2018, ICSE 2018
Software
- PRoViT: Provable Repair of Vision Transformers
- APRNN: Architecture-Preserving Provable Repair of Deep Neural Networks
- PRDNN: Provable Repair of Deep Neural Networks
- SyReNN: Symbolic Representation of Neural Networks
- PIKOS: Parallel Abstract Interpreter
- MIKOS: Memory-Efficient Abstract Interpreter
- DICE*: Verified Measured Boot
- Sifter: Analogy-making in software engineering
- EESI: Effective Error Specification Inference
- Func2Vec: Function Embedding
Industrial Impact
- In collaboration with Microsoft Research, our verified DICE implementation (USENIX Security 2021) has been deployed on Microsoft Azure.
- Our improved method for computing Integrated Gradients (NeurIPS 2019) is now used by Google, the original inventors of the technique.
- Our parallel fixpoint algorithm (POPL 2020) has been integrated into two open-source industrial abstract interpreters: NASA IKOS and Facebook SPARTA.
- Our neuro-symbolic program analysis tools (FSE 2019, FSE 2018) have found over 200 bugs in critical open-source projects, including the Linux kernel, OpenSSL, mbedTLS, Pidgin, and netdata.
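As background for the Integrated Gradients bullet above, here is a sketch of the standard Riemann-sum formulation of the technique, not our improved method from NeurIPS 2019. A toy quadratic function stands in for a real network, and its gradient is supplied analytically; both are illustrative assumptions.

```python
import numpy as np

def integrated_gradients(grad_f, x, baseline, steps=64):
    """Midpoint Riemann-sum approximation of Integrated Gradients:

        IG_i(x) = (x_i - x'_i) * integral_0^1 dF/dx_i(x' + a*(x - x')) da

    where x' is the baseline and grad_f computes the gradient of F.
    """
    alphas = (np.arange(steps) + 0.5) / steps  # midpoints of [0, 1]
    total = np.zeros_like(x)
    for a in alphas:
        total += grad_f(baseline + a * (x - baseline))
    avg_grad = total / steps
    return (x - baseline) * avg_grad

# Toy differentiable "model": F(x) = sum(x_i^2), so grad F(x) = 2x.
f = lambda x: float(np.sum(x ** 2))
grad_f = lambda x: 2.0 * x

x = np.array([1.0, 2.0, 3.0])
baseline = np.zeros_like(x)
ig = integrated_gradients(grad_f, x, baseline)
# Completeness axiom: attributions sum to (approximately) F(x) - F(baseline).
print(ig, ig.sum(), f(x) - f(baseline))
```

The completeness axiom, that the attributions sum to the difference between the model's output at the input and at the baseline, is what makes the method's approximation quality easy to check in practice.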
Online Talks
- Provable Repair of Deep Neural Networks at PLDI 2021; here is a 5-minute version of the same presentation.
- Safe Machine Learning and Artificial Intelligence at the Undergraduate Research Competition 2021 at UC Davis.
- SyReNN: A Tool for Analyzing Deep Neural Networks at TACAS 2021.
- DICE*: A Formally Verified Implementation of DICE Measured Boot at USENIX Security 2021.
- Understanding and Repairing Deep Neural Networks at Waterloo ML+Logic Online Colloquium 2020.
- Efficient Fixpoint Computation for Abstract Interpretation at the Workshop on Research Highlights in Programming Languages at FSTTCS 2020.
- Memory-efficient Fixpoint Computation at SAS 2020.
- Deterministic Parallel Fixpoint Computation at POPL 2020.