Mon 16 Jan 2023 14:00 - 15:00 at Arlington - Keynote and best paper Chair(s): Michael Emmi

Datasets can be biased due to societal inequities, human biases, under-representation of minorities, etc. Our goal is to prove that models produced by a learning algorithm are robust in their predictions to potential dataset biases. This is a challenging problem: it entails learning models for a large, or even infinite, number of datasets, ensuring that they all produce the same prediction.

In this talk, I will show how we can adapt ideas from program analysis to prove robustness of a decision-tree learner on a large, or infinite, number of datasets, certifying that each and every dataset produces the same prediction for a specific test point. We evaluate our approach on datasets that are commonly used in the fairness literature, and demonstrate our approach’s viability on a range of bias models.
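To make the robustness property concrete, here is a naive enumeration sketch under one simple bias model (missing data: up to k training points may be absent). It uses a toy depth-1 decision-tree ("stump") learner on 1-D data; all names and the learner itself are illustrative assumptions, and the brute-force loop is exactly the exponential enumeration that the talk's program-analysis approach is meant to avoid.

```python
from itertools import combinations

def learn_stump(data):
    """Toy depth-1 decision-tree learner on 1-D labeled data.

    data: list of (x, label) pairs with labels in {0, 1}.
    Returns a predictor function x -> label, choosing the threshold
    (and side labels) that minimize training misclassifications.
    Illustrative only; not the learner analyzed in the talk.
    """
    xs = sorted(set(x for x, _ in data))
    thresholds = [xs[0] - 1]  # "everything on one side" baseline
    thresholds += [(xs[i] + xs[i + 1]) / 2 for i in range(len(xs) - 1)]
    best = None
    for t in thresholds:
        for left, right in ((0, 1), (1, 0)):
            errs = sum(1 for x, y in data
                       if (left if x <= t else right) != y)
            if best is None or errs < best[0]:
                best = (errs, t, left, right)
    _, t, left, right = best
    return lambda x: left if x <= t else right

def certified_robust(data, test_x, k):
    """Naive certification under a missing-data bias model:
    the prediction at test_x is robust iff every dataset obtained by
    deleting up to k training points yields the same prediction.
    Exponential in k; abstraction-based analysis avoids this blowup."""
    baseline = learn_stump(data)(test_x)
    n = len(data)
    for drop in range(1, k + 1):
        for removed in combinations(range(n), drop):
            subset = [p for i, p in enumerate(data) if i not in removed]
            if learn_stump(subset)(test_x) != baseline:
                return False
    return True
```

For example, on well-separated data the prediction survives any single deletion, while a test point that sits between two isolated, conflicting points does not: `certified_robust([(0, 0), (1, 0), (2, 1), (3, 1)], 2.5, 1)` holds, but `certified_robust([(0, 0), (1, 1)], 0.5, 1)` fails.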


Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30 Keynote and best paper (VMCAI) at Arlington
Chair(s): Michael Emmi (Amazon Web Services)
What Can Program Analysis Say About Data Bias?
Aws Albarghouthi (University of Wisconsin-Madison)

Bayesian Parameter Estimation with Guarantees via Interval Analysis and Simulation
Luisa Collodi (University of Florence)