Thu 19 Jan 2023 10:20 - 10:45 at Grand Ballroom A - Synthesis II. Chair(s): Benjamin Delaware

\emph{Library learning} compresses a given corpus of programs by extracting common structure from the corpus into reusable library functions. Prior work on library learning suffers from two limitations that prevent it from scaling to larger, more complex inputs. First, it explores too many candidate library functions that are not useful for compression. Second, it is not robust to syntactic variation in the input.
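
To make this concrete, here is a toy illustration of ours (not an example from the paper) of how extracting a shared pattern into a library function compresses a corpus:

```rust
// Toy corpus (s-expression syntax), before library learning:
//   (+ (* 2 3) 1)    5 AST nodes
//   (+ (* 4 5) 1)    5 AST nodes      total: 10
//
// Learned library function capturing the shared structure
// (its definition costs roughly 7 nodes, counting the lambda):
//   (fn f (x y) (+ (* x y) 1))
//
// Corpus rewritten against the library:
//   (f 2 3)          3 AST nodes
//   (f 4 5)          3 AST nodes
//
// With n uses, the rewritten corpus costs 3n + 7 nodes versus 5n
// originally, so the abstraction pays for itself once n >= 4.
```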

We propose \emph{library learning modulo theory} (LLMT), a new library learning algorithm that additionally takes as input an equational theory for a given problem domain. LLMT uses e-graphs and equality saturation to compactly represent the space of programs equivalent modulo the theory, and uses a novel \emph{e-graph anti-unification} technique to find common patterns in the corpus more directly and efficiently.
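
babble's anti-unification runs over e-graphs; as background for that idea, below is a minimal sketch of classic syntactic anti-unification over plain terms (our simplification, not the paper's e-graph algorithm). Wherever the two input terms disagree, a pattern variable is introduced, and reusing one variable per distinct pair of mismatched subterms yields the least general generalization:

```rust
use std::collections::HashMap;

// A first-order term: an operator applied to arguments.
// Pattern variables appear only in the generalizations we produce.
#[derive(Clone, Debug, PartialEq, Eq, Hash)]
enum Term {
    App(String, Vec<Term>),
    Var(String),
}

// Syntactic anti-unification (Plotkin-style): compute the least
// general generalization of `a` and `b`. Where the terms disagree,
// emit a pattern variable; the `vars` map reuses the same variable
// for a repeated (a, b) pair of mismatched subterms.
fn anti_unify(a: &Term, b: &Term, vars: &mut HashMap<(Term, Term), Term>) -> Term {
    match (a, b) {
        (Term::App(f, xs), Term::App(g, ys)) if f == g && xs.len() == ys.len() => Term::App(
            f.clone(),
            xs.iter().zip(ys).map(|(x, y)| anti_unify(x, y, vars)).collect(),
        ),
        _ => {
            let n = vars.len();
            vars.entry((a.clone(), b.clone()))
                .or_insert_with(|| Term::Var(format!("?x{n}")))
                .clone()
        }
    }
}

fn main() {
    // Helpers for building ground terms; literals are nullary operators.
    let app = |f: &str, args: Vec<Term>| Term::App(f.to_string(), args);
    let lit = |n: &str| app(n, vec![]);
    let t1 = app("+", vec![app("*", vec![lit("2"), lit("3")]), lit("1")]);
    let t2 = app("+", vec![app("*", vec![lit("4"), lit("5")]), lit("1")]);
    let mut vars = HashMap::new();
    // Prints the shared pattern: (+ (* ?x0 ?x1) 1)
    println!("{:?}", anti_unify(&t1, &t2, &mut vars));
}
```

The generalization `(+ (* ?x0 ?x1) 1)` is exactly the body of the library function in the toy example above; babble's contribution is doing this over e-classes rather than single terms, so syntactically different but equivalent programs still anti-unify.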

We implemented LLMT in a tool named \textsc{babble}. Our evaluation shows that \textsc{babble} achieves better compression orders of magnitude faster than the state of the art. We also provide a qualitative evaluation showing that \textsc{babble} learns reusable functions on inputs previously out of reach for library learning.

Thu 19 Jan

Displayed time zone: Eastern Time (US & Canada)

10:20 - 12:00
Synthesis II (POPL) at Grand Ballroom A
Chair(s): Benjamin Delaware (Purdue University)
10:20
25m
Talk
babble: Learning Better Abstractions with E-Graphs and Anti-unification
POPL
David Cao (University of California at San Diego), Rose Kunkel (University of California at San Diego), Chandrakana Nandi (Certora), Max Willsey (University of Washington), Zachary Tatlock (University of Washington), Nadia Polikarpova (University of California at San Diego)
10:45
25m
Talk
Combining Functional and Automata Synthesis to Discover Causal Reactive Programs
POPL
Ria Das (Stanford University), Joshua B. Tenenbaum (Massachusetts Institute of Technology), Armando Solar-Lezama (Massachusetts Institute of Technology), Zenna Tavares (Basis; Columbia University)
11:10
25m
Talk
Comparative Synthesis: Learning Near-Optimal Network Designs by Query
POPL
Yanjun Wang (Amazon Web Services, USA), Zixuan Li (Purdue University), Chuan Jiang (Purdue University), Xiaokang Qiu (Purdue University), Sanjay Rao (Purdue University)
11:35
25m
Talk
Top-Down Synthesis for Library Learning
POPL
Maddy Bowers (Massachusetts Institute of Technology), Theo X. Olausson (Massachusetts Institute of Technology), Lionel Wong (Massachusetts Institute of Technology), Gabriel Grand (Massachusetts Institute of Technology), Joshua B. Tenenbaum (Massachusetts Institute of Technology), Kevin Ellis (Cornell University), Armando Solar-Lezama (Massachusetts Institute of Technology)