Topics in Automated Reasoning and Program Analysis
ECS 289C (Programming Languages and Compilers)
Instructor: Aditya Thakur (email@example.com)
Lecture Time and Location: TR 4:40 - 6:00 pm, Cruess Hall 107
Office Hours Time and Location: Friday 1:30 - 2:30 pm, Academic Surge Building 2073
This course will cover the theory and practice of automated reasoning and program analysis, with a focus on connections between these research areas. Topics in automated reasoning may include: propositional satisfiability, theorem provers, separation logic, and model checking. Topics in program analysis may include: static program analysis (abstract interpretation), dynamic program analysis, automated testing, and symbolic execution.
The course will draw primarily on research readings, and it will contain a mix of lectures and student paper presentations; evaluation will be based on these presentations as well as a course project. This course will provide a reasonable overview of the state of the art in automated reasoning and program analysis; thus, it would benefit graduate students interested in pursuing research in these areas. Moreover, the course project can be tailored to an area of personal interest that might benefit from automated reasoning and program analysis techniques.
Static Analysis At Google
Software bugs cause critical problems for software companies and their users. For example, a bug in Apple’s SSL implementation (“goto fail”) caused it to accept invalid SSL certificates. Heartbleed allowed attackers to read the memory contents of any server that used OpenSSL. A bug related to date formatting caused a large-scale Twitter outage. Many of these bugs are statically detectable, and in fact are obvious with a close reading of the code, yet they make it into production software anyway.
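The "goto fail" bug mentioned above is a good illustration of a statically detectable defect: a duplicated `goto` made the remaining certificate checks unreachable, so verification always reported success. The sketch below is a simplified, hypothetical reconstruction of the pattern (the check functions and return conventions are invented for illustration; the real bug was in Apple's SecureTransport library), not the actual Apple code:

```c
/* Simplified sketch of the "goto fail" pattern. Check names and the
 * 0-means-success convention are hypothetical, for illustration only. */
static int check_a(void) { return 0; }   /* passes */
static int check_b(void) { return 0; }   /* passes */
static int check_c(void) { return 1; }   /* would fail and reject the cert */

static int verify(void)
{
    int err;
    if ((err = check_a()) != 0)
        goto fail;
    if ((err = check_b()) != 0)
        goto fail;
        goto fail;               /* duplicated line: always jumps to fail */
    if ((err = check_c()) != 0)  /* never reached; dead code */
        goto fail;
fail:
    return err;                  /* err is still 0: "verification succeeded" */
}
```

Here `verify()` returns 0 (success) even though `check_c` would have rejected the input. An unreachable-code or misleading-indentation warning, both standard static checks, would have flagged the dead `if` immediately.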
Previous work has reported on reasons why engineers do not use static analysis tools or why they ignore their warnings. These reasons include: (1) tools are not integrated into the workflow, (2) warnings are not actionable, (3) engineers do not trust the tool, (4) "survivor bias" means the bug does not manifest in practice, (5) fixing the bug is too expensive or risky, and (6) the tool's output does not adequately explain the problem.
In this talk, I will describe how we have applied these lessons, as well as lessons from Google's previous experience with the FindBugs static analysis tool, to build a successful static analysis infrastructure that is used daily by almost all engineers at Google. Our tools scale to tens of thousands of engineers and hundreds of millions of lines of code, and detect over 2500 bugs per day, before the problematic code is checked into the codebase. The bugs our tools detect are fixed by engineers by their own choice, rather than because of a mandate from managers, and we solicit and address feedback to ensure that our tools are providing value to users.
About the Speaker
Eddie Aftandilian is a Senior Software Engineer at Google, where he leads the Java Source Tooling, Compilation and Analysis team. He completed his PhD at Tufts University, working with Sam Guyer to help Java developers better understand the memory behavior of their programs. His team helps Google's developers write Java code that is more correct, more readable, and more performant. Their static analysis tool, Error Prone, is integrated into Google's build system and code review tool and catches hundreds of bugs each day.
Informal Methods for Large Code, Formal Methods for Small Code
Galois uses a wide variety of program analysis techniques across our projects. In this talk, I will describe some work we have done at opposite ends of the spectrum of automated program reasoning. First, I will describe work we have done on information flow analysis in Android applications with the aim of identifying colluding applications. This was a large-scale analysis necessitating trade-offs and careful scoping. Second, I will describe work we have done to formally verify parts of s2n, which is an open source implementation of TLS. This work was much more focused on smaller pieces of code, but provides significantly stronger guarantees.
About the Speaker
Dr. Tristan Ravitch is a research lead at Galois, which is a company dedicated to building trustworthy systems. He is currently working on binary analysis and verification projects targeting embedded and control systems. He has also worked on static analysis of Android applications and privacy-preserving computation. Dr. Ravitch completed his Ph.D. at the University of Wisconsin in 2013.