Thu 19 Jan 2023 16:00 - 16:25 at Grand Ballroom A - Automatic Differentiation Chair(s): Ningning Xie

Optimizing the expected values of probabilistic processes is a central problem in computer science and its applications, arising in fields ranging from artificial intelligence to operations research to statistical computing. Unfortunately, automatic differentiation techniques developed for deterministic programs do not, in general, compute the correct gradients needed for widely used gradient-based optimization methods.

In this paper, we present ADEV, an extension to forward-mode AD that correctly differentiates the expectations of probabilistic processes represented as programs that make random choices. Our algorithm is a source-to-source program transformation on an expressive, higher-order language for probabilistic computation, with both discrete and continuous probability distributions. The result of our transformation is a new probabilistic program, whose expected return value is the derivative of the original program’s expectation. This output program can be run to generate unbiased Monte Carlo estimates of the desired gradient, which can be used within the inner loop of stochastic gradient descent. We prove ADEV correct using logical relations over the denotations of the source and target probabilistic programs. Because it modularly extends forward-mode AD, our algorithm lends itself to a concise implementation strategy, which we exploit to develop a prototype in just a few dozen lines of Haskell (https://github.com/probcomp/adev).
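To make the problem concrete: naively running AD through a sampled execution trace gives a biased (often zero) gradient of the expectation, whereas a correctly derived estimator is unbiased. The sketch below, in Python rather than the paper's Haskell, is not the ADEV transformation itself; it illustrates the kind of output program ADEV produces, here a hand-derived score-function (likelihood-ratio) estimator for d/dθ E[f(x)] with x ~ Bernoulli(θ). The names `f` and `grad_estimate` are illustrative choices, not from the paper.

```python
import random

def f(x):
    # Toy cost function: payoff 1.0 on heads, 0.0 on tails,
    # so E[f(x)] = theta and the exact derivative is 1.0.
    return 1.0 if x else 0.0

def grad_estimate(theta, n, rng):
    """Unbiased Monte Carlo estimate of d/dtheta E_{x~Bernoulli(theta)}[f(x)],
    using the score-function trick: E[f(x) * d/dtheta log p(x; theta)].
    Naive AD through a single sampled trace would return 0 almost surely."""
    total = 0.0
    for _ in range(n):
        x = rng.random() < theta  # sample x ~ Bernoulli(theta)
        # d/dtheta log p(x; theta):  1/theta if heads, -1/(1-theta) if tails
        score = (1.0 / theta) if x else (-1.0 / (1.0 - theta))
        total += f(x) * score
    return total / n

rng = random.Random(0)
est = grad_estimate(0.4, 100_000, rng)  # should concentrate near 1.0
```

ADEV automates the derivation of such estimators compositionally, for higher-order programs mixing discrete and continuous choices, and proves the result unbiased; this sketch only shows the single-distribution special case.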

Thu 19 Jan

Displayed time zone: Eastern Time (US & Canada)

15:10 - 16:25
Automatic Differentiation (POPL) at Grand Ballroom A
Chair(s): Ningning Xie Google Brain / University of Toronto
15:10
25m
Talk
You Only Linearize Once: Tangents Transpose to Gradients
POPL
Alexey Radul Google Research, Adam Paszke Google Research, Roy Frostig Google Research, Matthew J. Johnson Google Research, Dougal Maclaurin Google Research
DOI
15:35
25m
Talk
Efficient Dual-Numbers Reverse AD via Well-Known Program Transformations
POPL
Tom Smeding Utrecht University, Matthijs I. L. Vákár Utrecht University
DOI Pre-print
16:00
25m
Talk
ADEV: Sound Automatic Differentiation of Expected Values of Probabilistic Programs (Distinguished Paper)
POPL
Alexander K. Lew Massachusetts Institute of Technology, Mathieu Huot University of Oxford, Sam Staton University of Oxford, Vikash K. Mansinghka Massachusetts Institute of Technology
DOI Pre-print