Burning Cost is a set of open-source Python libraries for UK personal lines pricing actuaries. The name comes from a basic actuarial concept: burning cost is the pure loss experience rate — actual losses on a risk as a proportion of the subject premium — used for experience rating. We build tools for the problems where Emblem, Radar, and Akur8 stop — causal inference, proxy discrimination auditing, conformal prediction, model governance.
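The burning-cost definition above is a single ratio; a minimal worked illustration (the figures below are hypothetical, not from any real book):

```python
# Burning cost: actual losses on a risk as a proportion of subject premium.
# Hypothetical figures, for illustration only.
incurred_losses = 420_000.0   # claims incurred on the risk
subject_premium = 600_000.0   # premium the experience is rated against

burning_cost = incurred_losses / subject_premium
print(f"burning cost = {burning_cost:.0%}")  # 70%
```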

The name is also a philosophy. Simple, direct, no mystification. That is how we think about tooling.

Built by pricing practitioners who have worked across UK personal lines motor and home books.


What we have built

34 Python libraries covering the full pricing workflow. See the full library index with pip install commands.

UK pricing teams have adopted GBMs (CatBoost is now the dominant choice for most new builds), but many still take GLM outputs to production because GBM outputs are not in a form that rating engines, regulators, or pricing committees can work with. The tools here close that gap, taking you from raw data through to a signed-off rate change with an audit trail. All of it runs on Databricks.

Data & Validation

Model Building

Interpretation

Tail Risk & Distributions

Commercial

Compliance & Governance

Infrastructure


The problem we are solving

UK pricing teams have been building GBMs for years, mostly CatBoost. The models are better than the production GLMs. But many teams are still taking the GLM to production, because the GBM outputs are not in a form that a rating engine, regulator, or pricing committee can work with.

The issue is not technical skill. It is tooling. There is no standard Python library that extracts a multiplicative relativities table from a GBM. There is no standard library that does temporally-correct walk-forward cross-validation with IBNR buffers. There is no standard library that builds a constrained rate optimisation a pricing actuary can challenge. There is no standard library that generates a PRA SS1/23-compliant model validation report.
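To make the first gap concrete: one common way to pull a multiplicative relativities table out of a fitted GBM is a partial-dependence-style sweep over one rating factor, normalised to a chosen base level. The sketch below is an assumption about the general technique, not this project's API; `relativities` and `ToyModel` are hypothetical names, and any object with a `.predict(X)` method returning expected claim frequency would do.

```python
import numpy as np

def relativities(model, X, factor_col, levels, base_level):
    """Average prediction per factor level, normalised so base_level = 1.0.

    Sets every row of X to each level in turn (a partial-dependence sweep),
    averages the model's predicted frequency, then divides by the base level
    to get a multiplicative relativity per level.
    """
    avg = {}
    for level in levels:
        X_swept = X.copy()
        X_swept[:, factor_col] = level        # hold the rest of the row fixed
        avg[level] = model.predict(X_swept).mean()
    return {lvl: avg[lvl] / avg[base_level] for lvl in levels}

class ToyModel:
    """Stand-in model: frequency doubles at level 1 versus level 0."""
    def predict(self, X):
        return 0.05 * (1.0 + X[:, 0])

X = np.zeros((100, 3))
table = relativities(ToyModel(), X, factor_col=0,
                     levels=[0.0, 1.0], base_level=0.0)
print(table)  # {0.0: 1.0, 1.0: 2.0}
```

For a real GBM the sweep is identical, only the model object changes; the normalisation step is what turns raw predictions into something a rating engine can consume as factors.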

We wrote those libraries because we needed them. Then we kept going. Everything is built to run on Databricks: that is where UK pricing teams are working, and where our research demonstrates best practice.


Built for real portfolios

Our benchmarks use synthetic data with known parameters because that is the only way to measure bias — you need ground truth. But the libraries are designed for messy real-world data: fractional exposures from mid-term adjustments, IBNR-contaminated accident years, missing NCD values, vehicle group code changes across ABI revisions, and duplicate records from system migrations. Every API accepts exposure offsets as a first-class parameter. Every model handles missing values through CatBoost’s native treatment rather than requiring imputation. If your portfolio does not look like np.random.default_rng(42), that is what these tools are built for.
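The exposure-offset point above is standard GLM mechanics rather than anything specific to these libraries: with a log link, modelling claim counts with an offset of log(exposure) makes the fitted rate a per-unit-exposure frequency, so fractional exposures from mid-term adjustments are handled exactly. A minimal numpy sketch with hypothetical data, using the closed-form MLE for an intercept-only Poisson model:

```python
import numpy as np

# With offset = log(exposure), the intercept-only Poisson MLE is
#     exp(beta0) = total claims / total exposure,
# i.e. the portfolio claim frequency per full policy-year.
exposure = np.array([1.0, 0.5, 0.25, 1.0])   # fractional years from MTAs
claims   = np.array([1,   0,   0,    2  ])    # observed claim counts

beta0 = np.log(claims.sum() / exposure.sum())
frequency = np.exp(beta0)
print(f"fitted frequency = {frequency:.3f} claims per policy-year")
```

Treating the quarter-year record as a full year instead would understate frequency; the offset is what keeps partial exposures from biasing the fit.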


Get in touch

Start a conversation on GitHub Discussions — that is where we discuss new features, answer questions, and take feedback. For everything else: pricing.frontier@gmail.com.

If you need help getting the libraries into production — adapting examples to your data schema, navigating model risk sign-off, or building a compliant audit trail — see Work with Us.