Program
08:00-09:00am | Breakfast
09:00-09:10am | Welcome (Belltown)
09:10-10:00am | Invited Talk: Sriram Rajamani (Microsoft Research India), Probabilistic Programming: A Program Analysis Perspective. Chair: Bor-Yuh Evan Chang
10:30-11:20am | Invited Talk: Andrey Rybalchenko (Microsoft Research Cambridge and Technische Universität München), Solving Quantified Horn Clauses. Chair: Ranjit Jhala
11:20-11:45am | Shuying Liang, Matthew Might, and David Van Horn. AnaDroid: Malware Analysis of Android with User-supplied Predicates.
12:00-01:30pm | Lunch
01:30-02:20pm | Invited Talk: Alex Aiken (Stanford University), Using Learning Techniques in Invariant Inference. Chair: Werner Dietl
02:20-02:45pm | Aditya Thakur, Akash Lal, Junghee Lim, and Thomas Reps. PostHat and All That: Automating Abstract Interpretation.
03:15-04:05pm | Invited Talk: Julian Dolby (IBM T.J. Watson Research Center), The End of Pointer Analysis? Chair: Ben Hardekopf
04:05-04:20pm | Conclusion
Abstracts
Sriram Rajamani
Probabilistic Programming: A Program Analysis Perspective
Probabilistic models, particularly those with causal dependencies, can be succinctly written as probabilistic programs. Recent years have seen a proliferation of languages for writing such probabilistic programs, as well as tools and techniques for performing inference over these programs. In this talk, we show that static and dynamic program analysis techniques can be used to infer quantities of interest to the machine learning community, thereby providing a new and interesting domain of application for program analysis. In particular, we show that static analysis techniques inspired by dataflow analysis and iterative refinement techniques can be used for Bayesian inference. We also show that dynamic analysis techniques inspired by directed testing and symbolic execution can be used for sampling probabilistic programs.
Joint work with Aditya Nori, Andy Gordon, Arun Chaganty, Akash Lal, Johannes Borgström and Guillaume Claret.
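To make the sampling view concrete, here is a minimal sketch (not from the talk; all names are illustrative) of inference-by-execution for a hypothetical two-flip coin model: the probabilistic program is simply run many times, and runs inconsistent with the observation are rejected, so the surviving runs sample the posterior.

```python
import random

def model():
    # Hypothetical model: draw an unknown coin bias, then flip the coin twice.
    bias = random.random()            # prior on the bias: Uniform(0, 1)
    c1 = random.random() < bias
    c2 = random.random() < bias
    return bias, (c1, c2)

def rejection_sample(observe, n=100_000):
    """Run the program n times; keep only runs whose flips match the observation."""
    kept = [bias for bias, obs in (model() for _ in range(n)) if obs == observe]
    return sum(kept) / len(kept)      # empirical posterior mean of the bias

# Observing two heads shifts the posterior mean of the bias above 0.5
# (analytically the posterior is Beta(3, 1), with mean 0.75).
posterior_mean = rejection_sample((True, True))
```

Rejection sampling is the crudest such dynamic technique; the directed-testing and symbolic-execution ideas mentioned in the abstract aim to steer execution toward the (possibly rare) runs that satisfy the observation instead of discarding the rest.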
Andrey Rybalchenko
Solving Quantified Horn Clauses
Quantified Horn clauses can be used to represent proof obligations for a variety of verification and synthesis tasks, e.g., proving existential temporal properties, solving games, and dealing with container data structures. We discuss application scenarios for quantified Horn clauses and briefly overview solving approaches, as well as experience with extending our solver ARMC to support quantification.
Joint work with Tewodros Beyene, Nikolaj Bjørner, Swarat Chaudhuri, Ken McMillan, and Corneliu Popeea.
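As one illustrative instance (not taken from the talk), consider proving that the loop `for j = 0 to n-1 do a[j] := 0` zeroes its array. The proof obligation can be written as Horn clauses over an unknown invariant predicate Inv whose intended interpretation is universally quantified over array indices:

```latex
% Intended meaning: Inv(a, j) \equiv \forall i.\; 0 \le i < j \Rightarrow a[i] = 0
\begin{align*}
  \mathit{true} &\;\Rightarrow\; \mathit{Inv}(a, 0) \\
  \mathit{Inv}(a, j) \wedge j < n \wedge a' = a[j \mapsto 0] &\;\Rightarrow\; \mathit{Inv}(a', j+1) \\
  \mathit{Inv}(a, n) \wedge 0 \le i < n &\;\Rightarrow\; a[i] = 0
\end{align*}
```

What distinguishes this from ordinary Horn solving is that any solution for Inv must itself be a quantified formula; finding such quantified interpretations is the problem the talk addresses.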
Alex Aiken
Using Learning Techniques in Invariant Inference
Arguably the hardest problem in automatic program analysis is designing appropriate techniques for discovering loop invariants (or, more generally, invariants for recursive procedures). Certainly, if invariants are known, the rest of the analysis problem becomes easier. This talk presents a family of invariant inference techniques that use test cases to generate an underapproximation of program behavior and then use machine learning algorithms to generalize that underapproximation to an invariant. These techniques are simpler, much more efficient, and appear to be more robust than previous approaches to the problem. If time permits, some open problems will also be discussed.
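A minimal sketch of the test-then-generalize recipe, under details assumed for illustration (the loop, the template, and the enumeration are not from the talk): states observed at the loop head of a hypothetical summation loop form the underapproximation, and the "learning" step is caricatured as filtering a small template of polynomial equalities against those states.

```python
from itertools import product

def loop_states(n):
    # Hypothetical loop: i = 0; s = 0; while i < n: s += i; i += 1
    # Record the (i, s) state at every visit to the loop head.
    i, s = 0, 0
    states = [(i, s)]
    while i < n:
        s += i
        i += 1
        states.append((i, s))
    return states

# Underapproximation: loop-head states gathered from a handful of test runs.
observations = [st for n in range(1, 8) for st in loop_states(n)]

# "Learn" by keeping every candidate equality
#   c0*s + c1*i + c2*i^2 + c3 == 0
# (coefficients in -2..2) that holds on all observed states.
candidates = []
for c in product(range(-2, 3), repeat=4):
    if c == (0, 0, 0, 0):
        continue
    if all(c[0]*s + c[1]*i + c[2]*i*i + c[3] == 0 for i, s in observations):
        candidates.append(c)

# The invariant 2*s + i - i*i == 0, i.e. s = i*(i-1)/2, survives the filter.
```

In realistic systems the enumeration is replaced by an actual learning algorithm (e.g., solving for null-space coefficients or training a classifier), and a verifier then checks that the guessed formula is genuinely inductive rather than merely consistent with the tests.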
Julian Dolby
The End of Pointer Analysis?
Pointer analysis means computing an approximation of the possible objects to which any program variable may refer; it has traditionally been done by conservatively approximating all possible data flow in the program, resulting in a conservative approximation of the objects held by any variable. This has always been a bit fake---no tool soundly approximates all possible reflective and JNI behavior in Java, for instance---but even the comforting illusion of soundness has become unsustainable in the world of framework- and browser-based Web applications. The frameworks are built on ubiquitous complex reflective behavior, and the browser appears as a large, complex, poorly-specified native API; the frameworks and the applications themselves are written in JavaScript, the lingua franca of the Web, whose dynamic nature gives pointer analysis no help. Whether this world can be analyzed soundly is perhaps technically still an open problem, but the prognosis seems grim at best.
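As a small illustration (rendered in Python rather than JavaScript, and not from the talk) of the kind of reflective construct at issue: when a call target is computed from a runtime string, a sound analysis must treat every attribute of the receiver as a possible target, while a deliberately unsound one might resolve only constant names and simply ignore computed ones.

```python
class Handlers:
    # Hypothetical event-handler object, standing in for a Web framework's.
    def on_click(self):
        return "clicked"
    def on_close(self):
        return "closed"

def dispatch(obj, event_name):
    # The attribute name exists only at runtime ("on_" + some input string),
    # so a sound points-to/call-graph analysis must assume this call may
    # reach *any* method of obj.
    handler = getattr(obj, "on_" + event_name)
    return handler()

result = dispatch(Handlers(), "click")
```

Frameworks use exactly this pattern pervasively (event names, route tables, dependency injection), which is why conservatively approximating all data flow degenerates into "everything may point to everything."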
We have been exploring deliberately unsound analyses that make no attempt to approximate all possible data flow in a program; certain constructs are ignored not because they are unimportant, but simply because they are too hard. The tradeoff is now between how much we can ignore while still providing useful information and how little we can ignore while remaining tractable in practice. The good news so far is that there appear to be good tradeoffs, at least for a range of applications supporting IDE services. I will discuss recent and ongoing work in providing key information for IDE services: callgraphs and smart completions.