Learning probabilistic termination proofs

Abate A, Giacobbe M, Roy DS

We present the first machine learning approach to the termination analysis
of probabilistic programs. Ranking supermartingales (RSMs) prove that
probabilistic programs halt, in expectation, within a finite number of steps.
While RSMs were previously synthesised directly from source code,
our method learns them from sampled execution traces.
We introduce the neural ranking supermartingale:
we let a neural network fit
an RSM over execution traces and then we
verify it over the source code using satisfiability modulo theories (SMT);
if this step produces a counterexample, we generate
new sample traces from it and repeat learning in a
counterexample-guided inductive synthesis (CEGIS) loop,
until the SMT solver confirms the validity of the RSM (the loop is sketched below).
The result is thus a sound witness of probabilistic termination.
Our learning strategy is agnostic to the source code, and its
verification counterpart supports the widest range of probabilistic
single-loop programs that any existing tool can handle to date.
We demonstrate the efficacy of our method over a range of benchmarks that
include linear and polynomial programs with discrete, continuous, state-dependent,
multivariate, and hierarchical distributions, as well as distributions with
undefined moments.
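
To illustrate the loop described above, the following is a minimal, self-contained sketch, not the authors' implementation. It assumes a toy single-loop program `while x > 0: x = x - Uniform(0, 2)`, a linear candidate V(x) = a*x + b fitted by gradient descent standing in for the neural network, and the numpy and z3-solver packages; the constants EPS and E_U and all function names are illustrative assumptions.

```python
# Sketch of a learner-verifier (CEGIS) loop for a neural-style ranking
# supermartingale, on an assumed toy program: while x > 0: x = x - Uniform(0, 2).
import numpy as np
from z3 import Real, Solver, Or, sat

EPS = 0.5   # required expected decrease of the RSM per loop iteration (assumed)
E_U = 1.0   # E[Uniform(0, 2)], used to compute expectations in closed form

def sample_traces(starts, steps=50, seed=0):
    """Sample execution traces (pairs of consecutive states) of the toy program."""
    rng = np.random.default_rng(seed)
    pairs = []
    for x0 in starts:
        x = float(x0)
        for _ in range(steps):
            if x <= 0:                        # loop guard no longer holds
                break
            nxt = x - rng.uniform(0.0, 2.0)   # probabilistic loop body
            pairs.append((x, nxt))
            x = nxt
    return np.array(pairs)

def fit_candidate(pairs, iters=2000, lr=0.05):
    """Fit V(x) = a*x + b with a hinge loss asking for a decrease of at least
    EPS (plus a margin) along sampled transitions and V >= 0 on sampled states;
    a simple stand-in for the neural learner."""
    a, b, margin = 0.0, 1.0, 0.25
    x, nxt = pairs[:, 0], pairs[:, 1]
    for _ in range(iters):
        viol_dec = a * (x - nxt) < EPS + margin   # decrease condition violated
        viol_pos = a * x + b < 0.0                # non-negativity violated
        grad_a = (-(x - nxt) * viol_dec - x * viol_pos).mean()
        grad_b = (-1.0 * viol_pos).mean()
        a, b = a - lr * grad_a, b - lr * grad_b
    return float(a), float(b)

def verify(a, b):
    """SMT check of the RSM conditions against the source-level transition
    relation; returns None if valid, otherwise a counterexample state."""
    x = Real('x')
    V = a * x + b
    EV_next = a * (x - E_U) + b    # E[V(x')] in closed form, by linearity of V
    s = Solver()
    # Ask for a state inside the loop guard that violates either condition.
    s.add(x > 0, Or(V < 0, EV_next > V - EPS))
    if s.check() == sat:
        return float(s.model()[x].as_fraction())
    return None

# Counterexample-guided inductive synthesis loop.
starts = [5.0, 10.0]
for _ in range(10):
    a, b = fit_candidate(sample_traces(starts))
    cex = verify(a, b)
    if cex is None:
        print(f"valid RSM found: V(x) = {a:.3f}*x + {b:.3f}")
        break
    starts.append(cex)    # resample traces from the counterexample state
```

In this sketch the verifier checks the two RSM conditions symbolically, non-negativity and expected decrease by at least EPS inside the loop guard, and any counterexample state is fed back as a new sampling start, mirroring the counterexample-guided loop described in the abstract.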