The Languages for Inference (LAFI) workshop aims to bring programming-language and machine-learning researchers together to advance all aspects of languages for inference.

Topics include but are not limited to:

  • Design of programming languages for statistical inference and/or differentiable programming
  • Inference algorithms for probabilistic programming languages, including ones that incorporate automatic differentiation
  • Automatic differentiation algorithms for differentiable programming languages
  • Probabilistic generative modelling and inference
  • Variational and differential modeling and inference
  • Semantics (axiomatic, operational, denotational, games, etc) and types for inference and/or differentiable programming
  • Efficient and correct implementation
  • Applications of inference and/or differentiable programming

Sun 15 Jan

Displayed time zone: Eastern Time (US & Canada)

09:00 - 10:30
First Session (LAFI) at Scollay
Chair(s): Steven Holtzen Northeastern University, Christine Tasson Sorbonne Université — LIP6
09:00
5m
Day opening
Opening Comments
LAFI
Christine Tasson Sorbonne Université — LIP6, Steven Holtzen Northeastern University
09:05
60m
Keynote
Introduction to the tensor-programs framework, a PL approach that helps analyse theoretical properties of deep learning (Boston)
LAFI
A: Hongseok Yang KAIST; IBS
10:10
10m
Talk
Exact Inference for Discrete Probabilistic Programs via Generating Functions (Paris)
LAFI
A: Fabian Zaiser University of Oxford, C.-H. Luke Ong University of Oxford
File Attached
10:20
10m
Talk
Exact Probabilistic Inference Using Generating Functions (Boston)
LAFI
A: Lutz Klinkenberg RWTH Aachen University, Tobias Winkler RWTH Aachen University, Mingshuai Chen RWTH Aachen, Joost-Pieter Katoen RWTH Aachen University
File Attached
11:00 - 12:30
Second Session (LAFI) at Scollay
Chair(s): Steven Holtzen Northeastern University, Christine Tasson Sorbonne Université — LIP6
11:00
20m
Talk
What do posterior distributions of probabilistic programs look like? (Boston)
LAFI
Mathieu Huot University of Oxford, A: Alexander K. Lew Massachusetts Institute of Technology, Vikash K. Mansinghka Massachusetts Institute of Technology, Sam Staton University of Oxford
File Attached
11:20
10m
Talk
Semantics of Probabilistic Program Traces (Boston)
LAFI
Alexander K. Lew Massachusetts Institute of Technology, A: Eli Sennesh Northeastern University, Jan-Willem Van De Meent University of Amsterdam, Vikash Mansinghka Massachusetts Institute of Technology
File Attached
11:30
10m
Talk
A convenient category of tracing measure kernels (Boston)
LAFI
A: Eli Sennesh Northeastern University, Jan-Willem Van De Meent University of Amsterdam
File Attached
11:45
5m
Talk
Random probability distributions as natural transformations (Paris)
LAFI
A: Victor Blanchi ENS Paris, Hugo Paquet University of Oxford
File Attached
11:50
5m
Talk
Static Delayed Sampling for Probabilistic Programming Languages (Paris)
LAFI
A: Gizem Caylak KTH Royal Institute of Technology, Daniel Lundén KTH Royal Institute of Technology, Viktor Senderov Naturhistoriska riksmuseet, David Broman KTH Royal Institute of Technology
11:55
5m
Talk
Denotational semantics of languages for inference: semirings, monads, and tensors (Online)
LAFI
Cristina Matache University of Edinburgh, A: Sean Moss University of Oxford, Sam Staton University of Oxford, Ariadne Si Suo University of Oxford
12:10
5m
Talk
Separated and Shared Effects in Higher-Order Languages (Boston)
LAFI
A: Pedro Henrique Azevedo de Amorim Cornell University, Justin Hsu Cornell University
12:15
5m
Talk
On Iteration in Discrete Probabilistic Programming (Boston)
LAFI
A: Mateo Torres-Ruiz, Robin Piedeleu University of Oxford, Alexandra Silva Cornell University, Fabio Zanasi University College London
File Attached
12:20
5m
Talk
Bit-Blasting Probabilistic Programs (Boston)
LAFI
A: Poorva Garg University of California, Los Angeles, Steven Holtzen Northeastern University, Guy Van den Broeck University of California at Los Angeles, Todd Millstein University of California at Los Angeles
File Attached
12:25
5m
Talk
πMPC: Automatic Security Proofs for MPC Protocols (Boston)
LAFI
A: Mako P. Bates University of Vermont, Joseph P. Near University of Vermont
14:00 - 15:30
Third Session (LAFI) at Scollay
Chair(s): Steven Holtzen Northeastern University, Christine Tasson Sorbonne Université — LIP6
14:00
20m
Talk
The Variable Elimination Algorithm as a Let-Term Rewriting (Paris)
LAFI
Thomas Ehrhard CNRS and University Paris Diderot, Claudia Faggian Université de Paris & CNRS, A: Michele Pagani IRIF - Université de Paris Cité
14:20
20m
Talk
Contextual source code AD transformations for sum types (Online)
LAFI
Adam Paszke Google Research, A: Gordon Plotkin Google
File Attached
14:45
5m
Talk
Pitfalls of Full Bayesian Inference in Universal Probabilistic Programming (Online)
LAFI
A: Tim Reichelt University of Oxford, C.-H. Luke Ong University of Oxford, Tom Rainforth Department of Statistics, University of Oxford
File Attached
14:50
5m
Talk
∂ is for Dialectica: typing differentiable programming (Online)
LAFI
A: Marie Kerjean CNRS, Université Sorbonne Paris Nord, Pierre-Marie Pédrot INRIA
15:00
5m
Talk
On the Reparameterisation Gradient for Non-Differentiable but Continuous Models (Boston)
LAFI
C.-H. Luke Ong NTU, A: Dominik Wagner University of Oxford
File Attached
15:05
5m
Talk
Partial Evaluation of Forward-Mode Automatic Differentiation (Boston)
LAFI
A: Oscar Eriksson KTH Royal Institute of Technology, Viktor Palmkvist KTH Royal Institute of Technology, David Broman KTH Royal Institute of Technology
15:10
5m
Talk
Distribution Theoretic Semantics for Non-Smooth Differentiable Programming (Boston)
LAFI
Pedro Henrique Azevedo de Amorim Cornell University, A: Christopher Lam University of Illinois at Urbana-Champaign
15:15
5m
Talk
New foundations for probabilistic separation logic (Boston)
LAFI
A: John Li Northeastern University, Amal Ahmed Northeastern University, USA, Steven Holtzen Northeastern University
File Attached
15:20
5m
Talk
Verified Reversible Programming for Verified Lossless Compression (Boston)
LAFI
A: James Townsend University of Amsterdam, Jan-Willem Van De Meent University of Amsterdam
15:25
5m
Talk
Towards type-driven data-science in Idris
LAFI
Ohad Kammar University of Edinburgh, Katarzyna Marek University of Edinburgh, Minh Nguyen University of Bristol, Michel Steuwer University of Edinburgh, Jacob Walters University of Edinburgh, Robert Wright The University of Edinburgh, UK
16:00 - 18:00
Poster Session (LAFI) at Scollay
Chair(s): Steven Holtzen Northeastern University

Accepted Papers

  • A convenient category of tracing measure kernels (Boston)
  • Bit-Blasting Probabilistic Programs (Boston)
  • Contextual source code AD transformations for sum types (Online)
  • Denotational semantics of languages for inference: semirings, monads, and tensors (Online)
  • Distribution Theoretic Semantics for Non-Smooth Differentiable Programming (Boston)
  • Exact Inference for Discrete Probabilistic Programs via Generating Functions (Paris)
  • Exact Probabilistic Inference Using Generating Functions (Boston)
  • ∂ is for Dialectica: typing differentiable programming (Online)
  • πMPC: Automatic Security Proofs for MPC Protocols (Boston)
  • New foundations for probabilistic separation logic (Boston)
  • On Iteration in Discrete Probabilistic Programming (Boston)
  • On the Reparameterisation Gradient for Non-Differentiable but Continuous Models (Boston)
  • Opening Comments
  • Partial Evaluation of Forward-Mode Automatic Differentiation (Boston)
  • Pitfalls of Full Bayesian Inference in Universal Probabilistic Programming (Online)
  • Random probability distributions as natural transformations (Paris)
  • Semantics of Probabilistic Program Traces (Boston)
  • Separated and Shared Effects in Higher-Order Languages (Boston)
  • Static Delayed Sampling for Probabilistic Programming Languages (Paris)
  • The Variable Elimination Algorithm as a Let-Term Rewriting (Paris)
  • Towards type-driven data-science in Idris
  • Verified Reversible Programming for Verified Lossless Compression (Boston)
  • What do posterior distributions of probabilistic programs look like? (Boston)

Call for Papers

=====================================================================

                 Call for Extended Abstracts

                          LAFI 2023
         POPL 2023 workshop on Languages for Inference

                        January 15, 2023
          https://popl23.sigplan.org/home/lafi-2023

           Submission deadline: October 28, 2022 (EXTENDED)!

====================================================================

** Invited Speaker

Hongseok Yang, professor at the School of Computing, KAIST, Korea

Submission Summary

Deadline: October 28, 2022 (AoE) (EXTENDED)

Link: https://lafi23.hotcrp.com/

Format: extended abstract (2 pages + references)

Call for Extended Abstracts

Inference concerns re-calibrating program parameters based on observed data, and has gained wide traction in machine learning and data science. Inference can be driven by probabilistic analysis and simulation, and through back-propagation and differentiation. Languages for inference offer built-in support for expressing probabilistic models and inference methods as programs, to ease reasoning, use, and reuse. The recent rise of practical implementations as well as research activity in inference-based programming has renewed the need for semantics to help us share insights and innovations.
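
As a concrete, deliberately toy illustration of what "expressing probabilistic models and inference methods as programs" can look like, here is a minimal sketch in plain Python. It is not tied to any existing probabilistic programming language, and the names (Trace, sample, observe, posterior_mean) are purely illustrative: the model is an ordinary function that draws a latent coin bias and scores observed flips, and a generic self-normalised importance sampler re-weights prior samples to approximate the posterior.

    import math
    import random

    class Trace:
        """Records the latent values drawn by one model run and the
        accumulated log-likelihood of its observations."""
        def __init__(self):
            self.values = {}
            self.log_weight = 0.0

        def sample(self, name, sampler):
            self.values[name] = sampler()
            return self.values[name]

        def observe(self, log_prob):
            self.log_weight += log_prob

    def coin_model(trace, flips):
        # Latent parameter: unknown coin bias with a uniform prior on [0, 1].
        p = trace.sample("p", random.random)
        # Condition on the data: each observed flip adds a log-likelihood term.
        for heads in flips:
            trace.observe(math.log(p if heads else 1.0 - p))

    def posterior_mean(model, data, name, num_particles=20_000):
        """Self-normalised importance sampling, using the prior as proposal."""
        traces = []
        for _ in range(num_particles):
            t = Trace()
            model(t, data)
            traces.append(t)
        total = sum(math.exp(t.log_weight) for t in traces)
        return sum(math.exp(t.log_weight) * t.values[name] for t in traces) / total

    # Three heads and one tail: the exact posterior is Beta(4, 2), with mean 2/3.
    print(posterior_mean(coin_model, [True, True, True, False], "p"))

Probabilistic programming languages package exactly this separation of concerns, with the model written as a program and inference supplied as a reusable interpreter or program transformation, and with far more efficient algorithms than this prior-proposal sampler.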

This workshop aims to bring programming-language and machine-learning researchers together to advance all aspects of languages for inference. Topics include but are not limited to:

  • design of programming languages for inference and/or differentiable programming;

  • inference algorithms for probabilistic programming languages, including ones that incorporate automatic differentiation;

  • automatic differentiation algorithms for differentiable programming languages;

  • probabilistic generative modeling and inference;

  • variational and differential modeling and inference;

  • semantics (axiomatic, operational, denotational, games, etc) and types for inference and/or differentiable programming;

  • efficient and correct implementation;

  • and last but not least, applications of inference and/or differentiable programming.

We expect this workshop to be informal, and our goal is to foster collaboration and establish common ground. Thus, the proceedings will not be a formal or archival publication, and we expect to spend only a portion of the workshop day on traditional research talks. Nevertheless, as a concrete basis for fruitful discussions, we call for extended abstracts describing specific and ideally ongoing work on probabilistic and differential programming languages, semantics, and systems.

Submission guidelines

Submission deadline: October 28, 2022 (AoE) (EXTENDED)

Submission link: https://lafi23.hotcrp.com/

Anonymous extended abstracts are up to 2 pages in PDF format, excluding references.

In line with the SIGPLAN Republication Policy, inclusion of extended abstracts in the program is not intended to preclude later formal publication.

Remote participation policy

We plan to coordinate with the POPL conference on remote participation. We would like to support remote participation even though the workshop happens in person. Our goal is to create an inclusive environment that does not require travel.

Hongseok Yang

Introduction to the tensor-programs framework, a PL approach that helps analyse theoretical properties of deep learning.

While deep learning has many remarkable success stories, finding a satisfactory mathematical explanation of why it is so effective is still considered an open challenge. One recent promising direction for this challenge is to analyse the mathematical properties of neural networks in the limit where the widths of the networks' hidden layers go to infinity. Researchers have been able to prove highly non-trivial properties of such infinitely-wide neural networks, such as gradient-based training achieving zero training error (so that it finds a global optimum), and the typical random initialisation of those infinitely-wide networks making them so-called Gaussian processes, which are well-studied random objects in machine learning, statistics, and probability theory.
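
For readers unfamiliar with this line of work, the single-hidden-layer case (originally due to Neal) gives the flavour of the Gaussian-process claim. The following is a generic textbook calculation rather than material from the talk, and the notation is my own. Consider

    f(x) \;=\; b^{(2)} + \sum_{i=1}^{n} v_i\,\phi\big(w_i^{\top}x + b_i\big),
    \qquad v_i \sim \mathcal{N}(0,\sigma_v^2/n),\quad b^{(2)} \sim \mathcal{N}(0,\sigma_b^2),

with the (w_i, b_i) drawn i.i.d. As n goes to infinity, any finite collection of outputs (f(x^{(1)}), ..., f(x^{(m)})) converges in distribution, by the central limit theorem, to a centred multivariate Gaussian with covariance

    K(x,x') \;=\; \sigma_b^2 + \sigma_v^2\,\mathbb{E}_{w,b}\big[\phi(w^{\top}x+b)\,\phi(w^{\top}x'+b)\big],

that is, the network at random initialisation is a Gaussian process in the infinite-width limit. Extending this kind of statement beyond one hidden layer and one architecture is where a compositional, programming-language treatment becomes useful.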

In this talk, I will introduce Greg Yang's tensor-programs framework, which has led to substantial generalisations of prior mathematical results on infinitely-wide neural networks. The framework specifies a programming language for expressing computations of neural networks that are parameterised by the widths of those networks. Although simple, the language is expressive enough to cover both forward and backward computations of networks of nearly all architectures. The most important part of the framework is the so-called master theorem, which says that every program in the framework's language has a well-defined limit as the widths of the associated network go to infinity, and furthermore that the limit can even be defined inductively over the syntax of the program. The tensor-programs framework has been used to generalise results on infinitely-wide neural networks from a few simple network architectures to nearly all architectures.
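
Schematically, and glossing over the precise technical conditions (this is my paraphrase of the published statement, not wording from the talk): if a tensor program built from appropriately scaled random Gaussian matrices produces vectors h^1, ..., h^k in R^n, then for suitable test functions ψ,

    \frac{1}{n}\sum_{\alpha=1}^{n} \psi\big(h^1_\alpha,\ldots,h^k_\alpha\big)
    \;\longrightarrow\;
    \mathbb{E}\,\psi\big(Z^{h^1},\ldots,Z^{h^k}\big)
    \quad\text{almost surely as } n \to \infty,

where the limit random variables Z^{h^i} are constructed by induction on the syntax of the program: roughly, fresh Gaussians for matrix-vector products, and images of earlier Z's under the program's coordinatewise nonlinearities.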

The goal of my talk is to introduce a possibly interesting new research topic for PL researchers. I will not assume any prior knowledge of the theory of neural networks, in particular of infinitely-wide neural networks and Greg Yang's tensor programs. At the end of the talk, I will briefly mention a few research opportunities for PL researchers.

We encourage attending the LAFI workshop in person in Boston. However, for presenters and attendees who are unable to travel (for any of various reasons: visa, work or family constraints, environmental concerns, …), we propose a hybrid meeting with a mix of in-person and virtual talks.

Registration is mandatory. For Paris and online attendees there is a virtual option (a 6-day pass with reduced fees); the Airmeet link will be sent a few days in advance.

Join the Slack channel for remote and asynchronous interaction during the LAFI meeting.

Boston - In-Person Attendees and Speakers

Location
Room Scollay,
50 Park Plaza
Boston
Massachusetts, United States

Paris - Attendees and Speakers

We propose a bi-located event, with speakers and attendees also gathering at Université de Paris Cité.

Location
Salle Leduc, RDC,
43 rue des Saints Pères,
75006 Paris,
Métro Saint Germain des prés
France

To access the building, you need to be on the list of Paris participants and to present an ID: join the Paris local event.

Online Attendees and Speakers

Additional information for Virtual Attendees and Virtual Speakers is now available.

Questions? Use the LAFI contact form.