With the widespread adoption of deep learning and of optimization algorithms inspired by gradient descent, differentiable programming has gained traction. It has since found applications in many other areas, such as scientific computing, robotics, and computer graphics. One of its notoriously difficult problems is the interpretation of programs that are not differentiable everywhere.
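For instance (a standard illustration, independent of any particular calculus), the rectifier
\[
\mathrm{relu}(x) = \max(0, x)
\]
is used pervasively in deep learning yet has no classical derivative at $x = 0$, and programs that branch on real-valued conditions, such as $\mathtt{if}\ x < 0\ \mathtt{then}\ 0\ \mathtt{else}\ 1$, may even denote discontinuous functions.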
In this work we introduce $\lambda_\delta$, a core calculus for non-smooth differentiable programs, and give it a semantics based on concepts from distribution theory, a well-established area of functional analysis. We also show that $\lambda_\delta$ enjoys better equational properties than other existing semantics.
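As a reminder of the underlying idea (standard distribution theory rather than a construction specific to $\lambda_\delta$), a distribution $T$ is differentiated by moving the derivative onto a smooth, compactly supported test function $\varphi$:
\[
\langle T', \varphi \rangle = -\langle T, \varphi' \rangle,
\]
so even the discontinuous Heaviside step function denoted by the branching program above acquires a well-defined derivative, namely the Dirac delta $\delta$, which suggests how non-smooth programs can be given derivatives at all.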