This page exists because pricing actuaries searching for open-source insurance pricing tools deserve a straight answer, not marketing.

Burning Cost is at the forefront of machine learning and data science research in UK personal lines insurance. We help teams adopt best-practice, best-in-class tooling on Python and Databricks — 43 open-source Python libraries covering the full pricing workflow. We are not trying to compete with Emblem, Radar, Akur8, or DataRobot. Those tools have real strengths: polished UIs, enterprise support contracts, integration with downstream systems, and regulatory track records with insurers who do not want to maintain Python infrastructure.

What we offer is different: research-backed methodology, transparent implementations, version-controllable outputs, and specific focus on UK regulatory requirements. If you are a pricing team working in Python or Databricks, Burning Cost covers the actuarial gaps that general ML libraries do not.

If you need a hosted GUI, enterprise support, or a system that non-technical pricing managers can use without code, the commercial platforms are probably right for you. Both things can be true.


What the commercial platforms do well

Before the comparison table, the honest version of what you are trading away if you go open-source:

Emblem (WTW): Mature GLM platform with decades of actuarial workflow built in. The factor review UI, one-way and two-way analyses, and the signed-off model export workflow are genuinely good. Many UK insurers have 10+ years of Emblem models in production. Migration is not trivial.

Radar (Earnix): Strong on the commercial optimisation side. Rate change simulation and price optimisation tooling that connects to production rating systems. Enterprise support and integration with downstream workflow systems.

Akur8: Machine learning model building with a GUI that lets actuaries interact with GBM outputs without writing code. Good for teams that want GBM predictive power without committing to a Python workflow. Growing adoption in UK and European markets.

DataRobot: General AutoML platform used by some insurers for pricing. Broad model coverage, explainability tools, and MLOps infrastructure. Not insurance-specific, but the enterprise deployment capabilities are mature.

None of these are bad tools. The question is whether the licence cost, vendor dependency, and opaque methodology are the right trade for your team.


Feature comparison

The table below maps pricing workflow areas to Burning Cost libraries and notes what commercial platforms typically offer. For commercial tools, we describe capabilities as they are generally understood — we do not have access to their current feature sets, pricing, or implementation details.

| Feature area | Burning Cost (open-source) | Commercial platforms (typically) |
| --- | --- | --- |
| GLM modelling | statsmodels, scikit-learn TweedieRegressor with insurance-cv for correct temporal splits | GUI-driven GLM with built-in one-way/two-way analysis and factor sign-off workflow |
| GBM modelling | CatBoost, LightGBM, XGBoost with shap-relativities for factor table output | Varies — Akur8 and DataRobot offer GBM with GUI review; Emblem GBM support has historically been limited |
| Interpretable deep learning | insurance-anam — actuarial neural additive model with per-feature shape functions | Limited native support; typically requires custom integration |
| Cross-validation | insurance-cv — walk-forward splits with configurable IBNR buffers, sklearn-compatible scorers | Some platforms implement temporal splits; IBNR handling varies |
| Prediction intervals | insurance-conformal — distribution-free finite-sample coverage guarantees; insurance-quantile — quantile and expectile GBMs | Typically point predictions; interval estimation not standard |
| Rate optimisation | rate-optimiser — LP efficient frontier with movement caps and GIPP constraints; insurance-optimise — SLSQP for large factor spaces | Radar/Earnix specifically targets rate optimisation with integrated demand modelling; Emblem has optimisation add-ons |
| Demand and price elasticity | insurance-demand — conversion and retention modelling; insurance-elasticity — causal DML elasticity estimation | Typically available as part of commercial optimisation modules; methodology often opaque |
| Model validation | insurance-validation — structured PRA SS1/23 reports covering nine sections, HTML and JSON output | Validation reporting features vary; PRA SS1/23 alignment is not typically an explicit feature |
| Model monitoring | insurance-monitoring — exposure-weighted PSI/CSI, A/E ratios, Gini drift z-tests with scheduled alerts | Monitoring dashboards are common in enterprise platforms; insurance-specific metrics vary |
| Causal inference | insurance-causal — double machine learning for deconfounding; insurance-elasticity — CausalForestDML price elasticity | Not typically offered natively; DataRobot has some causal tooling |
| Spatial rating | insurance-spatial — BYM2 postcode-level models borrowing strength from neighbouring areas | GIS and spatial smoothing tools exist in some platforms; BYM2 specifically is uncommon |
| Fairness / discrimination | insurance-fairness — proxy discrimination auditing mapped to FCA Consumer Duty requirements | Fairness tooling is an emerging area; FCA-specific mapping is generally not standard |
| Model governance | insurance-mrm — ModelCard, ModelInventory, GovernanceReport for PRA SS1/23 compliance | Enterprise platforms typically include governance workflows; PRA SS1/23 alignment varies |
| Deployment | insurance-deploy — champion/challenger with shadow mode, rollback, and ICOBS 6B.2 audit trail | Enterprise deployment and A/B testing frameworks are standard in larger platforms |
| Thin-data segments | credibility — Bühlmann-Straub; bayesian-pricing — hierarchical Bayes with PyMC 5 | Credibility weighting is standard in Emblem; Bayesian methods less common |
| Interaction detection | insurance-interactions — CANN, NID, and SHAP-based interaction tests | Two-way analysis standard in GLM platforms; automated detection less common |
| Synthetic data | insurance-synthetic — vine copula portfolio generation; insurance-datasets — UK motor synthetic data | Not typically included; usually requires separate data management tooling |
| Licence cost | Free. MIT licence. No usage caps. | Typically five- to six-figure annual licence costs; pricing is negotiated per contract |
| Support | GitHub issues, documentation, community. No SLA. | Commercial SLAs, dedicated support, implementation consultancy |
| UK regulatory specifics | FCA GIPP (PS21/5), FCA Consumer Duty, PRA SS1/23, ICOBS 6B.2 constraints built into relevant libraries | UK regulatory features vary by vendor; verify specifics with each vendor |
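To make the cross-validation row concrete: the walk-forward-with-buffer idea can be sketched in plain scikit-learn, where `TimeSeriesSplit`'s `gap` parameter drops the periods immediately before each test fold — the same role an IBNR buffer plays, since the most recent claims are not yet fully developed. This is a conceptual sketch, not the insurance-cv API; the data is invented.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# 24 months of (synthetic) policy data, ordered by inception date.
X = np.arange(24).reshape(-1, 1)
y = np.random.default_rng(0).gamma(2.0, 100.0, size=24)

# gap=3 excludes the 3 periods immediately before each test fold,
# mimicking an IBNR buffer between training and evaluation windows.
cv = TimeSeriesSplit(n_splits=4, gap=3)
for train_idx, test_idx in cv.split(X):
    print(f"train up to t={train_idx.max()}, "
          f"test t={test_idx.min()}..{test_idx.max()}")
```

Each fold trains only on data that was fully developed before the test window opened, which is the property that random K-fold splits silently violate on claims data.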

What we do not cover

Being honest about gaps matters more than padding the feature list.

Rating engine integration. Commercial platforms often integrate directly with rating engines (Majesco, Guidewire, Duck Creek). Our libraries produce outputs in standard formats (pandas DataFrames, CSV, JSON) but do not have native connectors to these systems.

GUI. If your pricing team needs actuaries to interact with models without writing Python, Burning Cost is not the right choice. We are a code-first toolkit.

Reserving. We do not cover claims reserving. For open-source reserving tools, look at chainladder-python.

Enterprise support. We do not offer SLAs, implementation consulting, or guaranteed response times. GitHub issues are monitored but there is no commercial support contract.

Data infrastructure. Burning Cost assumes you already have your data in a usable form. We do not provide ETL, data cataloguing, or data quality tooling.


Who this is for

Burning Cost makes sense if:

- your pricing team works in Python or Databricks, or wants to move there
- you want transparent, version-controllable methodology rather than a black box
- UK regulatory specifics (FCA GIPP, Consumer Duty, PRA SS1/23, ICOBS 6B.2) matter to your workflow
- licence cost is a factor and you can live without an SLA

It probably is not the right choice if:

- you need a GUI that non-technical pricing managers can use without code
- you need enterprise support contracts, SLAs, or implementation consultancy
- you need native connectors to rating engines such as Majesco, Guidewire, or Duck Creek
- your data is not yet in a usable form and you need ETL or data quality tooling first


Getting started

All 43 libraries are on PyPI. Install any of them individually:

pip install shap-relativities
pip install insurance-cv
pip install rate-optimiser

The full library index lists every library with pip install commands, links to GitHub repos, and links to relevant blog posts. Each library ships with a Databricks notebook demo on synthetic UK motor data.

If you are moving from Emblem to Python, the training course covers the transition — GLMs in Python, GBM pricing, SHAP relativities, conformal prediction intervals, and constrained rate optimisation. Twelve modules, free and open, written for pricing actuaries who already know what they are doing.
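As a taste of the conformal prediction material mentioned above, here is a minimal split-conformal sketch in plain numpy and scikit-learn — the standard textbook construction, not the insurance-conformal API. The model and data are invented for illustration; the point is that the interval width comes from a calibration quantile, with no distributional assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = X @ np.array([2.0, -1.0, 0.5]) + rng.normal(scale=1.0, size=500)

# Split conformal: fit on one half, calibrate residuals on the other.
X_fit, X_cal, y_fit, y_cal = X[:250], X[250:], y[:250], y[250:]
model = LinearRegression().fit(X_fit, y_fit)

# Conformal quantile of absolute calibration residuals for 90% coverage.
alpha = 0.10
scores = np.abs(y_cal - model.predict(X_cal))
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

x_new = rng.normal(size=(1, 3))
pred = model.predict(x_new)[0]
print(f"90% interval: [{pred - q:.2f}, {pred + q:.2f}]")
```

The finite-sample coverage guarantee holds for any underlying model — swap the linear model for a GBM and the construction is unchanged.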


Frequently asked questions

Is there a free alternative to Emblem for insurance pricing?

Yes, with caveats. Burning Cost covers the statistical modelling workflow that Emblem handles: GLMs, GBMs, factor tables, cross-validation, interaction detection, and credibility weighting. What you lose is Emblem’s GUI, its tight integration with WTW consulting workflows, and its track record with UK insurers’ model governance teams. If your team works in Python and your model governance process is comfortable with code-based evidence, the open-source route is viable.
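To make the GLM point concrete, here is what a pure-premium Tweedie GLM looks like in plain scikit-learn — the kind of model the open-source route replaces Emblem with. The rating factors and effect sizes are invented; nothing here depends on a Burning Cost library.

```python
import numpy as np
from sklearn.linear_model import TweedieRegressor

rng = np.random.default_rng(1)
n = 5000
# Two invented binary rating factors; factor 0 genuinely raises frequency.
X = rng.integers(0, 2, size=(n, 2)).astype(float)
freq = rng.poisson(0.2 * np.exp(0.3 * X[:, 0]))
sev = rng.gamma(2.0, 500.0, size=n)
y = freq * sev  # pure premium: point mass at zero plus gamma-sized claims

# power=1.5 selects a compound Poisson-gamma (Tweedie) objective, the
# standard choice for pure premium; a log link gives multiplicative factors.
glm = TweedieRegressor(power=1.5, link="log", alpha=0.0).fit(X, y)
print(np.exp(glm.coef_))  # relativities per rating factor
```

Exponentiating the coefficients recovers the familiar multiplicative factor-table view actuaries expect from a GLM platform.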

Is there a free alternative to Radar for rate optimisation?

rate-optimiser and insurance-optimise cover constrained rate change optimisation — efficient frontier between loss ratio targets and movement caps, with GIPP constraints. What Radar/Earnix specifically offers around demand integration and rating engine connectivity is harder to replicate without custom work.
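The efficient-frontier idea reduces to a small linear programme. Here is a toy sketch with scipy — three hypothetical segments, per-segment movement caps, and a cap on the portfolio-average rate change. The numbers are invented and there is no demand response; this illustrates the constraint structure, not the rate-optimiser API.

```python
import numpy as np
from scipy.optimize import linprog

# Three hypothetical segments: current premium and margin gained per
# unit of rate change (invented numbers, no demand response modelled).
premium = np.array([1000.0, 2000.0, 1500.0])
margin_gain = np.array([1.2, 0.8, 1.0]) * premium

cap = 0.05        # per-segment movement cap of +/-5%
avg_cap = 0.02    # portfolio-average rate change of at most +2%
weights = premium / premium.sum()

# Maximise margin gain (linprog minimises, so negate the objective).
res = linprog(c=-margin_gain,
              A_ub=[weights], b_ub=[avg_cap],
              bounds=[(-cap, cap)] * 3, method="highs")
print(res.x)  # optimal rate change per segment
```

The solver pushes the highest-value segments to their caps and funds that by pulling the lowest-value segment down — the basic trade the efficient frontier traces out as the caps vary.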

Can Python replace Akur8?

For the modelling part, yes. shap-relativities gives you GBM factor tables in GLM format. insurance-anam gives you interpretable shape functions per rating factor. The difference is that Akur8 provides a GUI where non-technical actuaries can interact with the model outputs without writing code. If your team can work in Jupyter or Databricks notebooks, Python is a reasonable substitute. If you need the GUI, it is not.
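The factor-table idea can be illustrated without any Burning Cost library: fit a GBM, then compute partial-dependence-style relativities by forcing every policy to each level of a factor, averaging the predictions, and normalising to a base level. This is a toy sketch of the concept on invented data — not the shap-relativities API, which the text above says is SHAP-based.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 2000
age_band = rng.integers(0, 4, size=n)   # categorical rating factor, levels 0-3
region = rng.integers(0, 3, size=n)
true_rel = np.array([1.0, 1.2, 1.5, 2.0])
y = 100.0 * true_rel[age_band] * rng.gamma(2.0, 0.5, size=n)

X = np.column_stack([age_band, region]).astype(float)
gbm = GradientBoostingRegressor(random_state=0).fit(X, y)

def factor_relativities(model, X, col, levels, base=0):
    """Average prediction with every row forced to each level,
    normalised to the base level — a GLM-style relativity table."""
    means = []
    for lvl in levels:
        X_lvl = X.copy()
        X_lvl[:, col] = lvl
        means.append(model.predict(X_lvl).mean())
    means = np.array(means)
    return means / means[base]

print(factor_relativities(gbm, X, col=0, levels=range(4)))
```

On this synthetic portfolio the recovered relativities sit close to the true [1.0, 1.2, 1.5, 2.0] pattern, which is the output format a GLM-trained review process can consume.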

What about DataRobot?

DataRobot is a general AutoML platform, not insurance-specific. For pure model-building capability it is competitive in a broad ML sense, but it lacks the actuarial-specific tooling: walk-forward CV with IBNR buffers, Tweedie/Gamma objectives correctly specified for claims data, factor tables in actuarial format, PRA SS1/23 validation reports, and FCA-specific fairness auditing. Burning Cost fills those gaps. DataRobot’s strength is its MLOps infrastructure, which is more mature than what we offer.


All libraries are at github.com/burning-cost. Questions or corrections: pricing.frontier@gmail.com.