predeval

This software identifies changes in a model's output before evaluation data becomes available.

For example, if you build a churn model, you may have to wait weeks before learning whether users actually churned (and can therefore evaluate your churn predictions).

This software cannot guarantee that your model is accurate, but it will alert you when your model's outputs (i.e., predictions) differ from what they have been in the past. A model's output can pass predeval's tests and still be inaccurate, and it can fail them and still be accurate. That said, an unexpected change in model outputs likely signals a change in accuracy.
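The core idea can be sketched in plain Python: summarize a trusted reference batch of predictions, then flag a new batch whose statistics drift too far from that reference. This is only an illustration of the concept, not predeval's actual API; the function names, threshold, and mean-shift check below are all assumptions for the sketch (see the API documentation for the real interface).

```python
import statistics

def build_reference(predictions):
    """Summarize a trusted batch of model outputs (e.g., from validation)."""
    return {
        "mean": statistics.fmean(predictions),
        "stdev": statistics.stdev(predictions),
    }

def check_outputs(reference, new_predictions, tolerance=3.0):
    """Return True if the new batch's mean stays within `tolerance`
    reference standard deviations of the reference mean; False flags drift."""
    new_mean = statistics.fmean(new_predictions)
    shift = abs(new_mean - reference["mean"]) / reference["stdev"]
    return shift <= tolerance

reference = build_reference([0.1, 0.2, 0.15, 0.25, 0.2, 0.18])
print(check_outputs(reference, [0.12, 0.22, 0.17, 0.21]))  # similar batch
print(check_outputs(reference, [0.9, 0.95, 0.85, 0.92]))   # shifted batch
```

Note that a passing check here, as in predeval, says only that the output distribution looks familiar; it says nothing about whether the predictions are correct.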


Installation

Installation is described here:

Example Usage

Examples can be found here:

API Documentation

Documentation of the software can be found here:


Contributing

Info about contributing can be found here:


Changelog

The changelog can be found here:


Contributors

Info about contributors can be found here:

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.