predeval


This software is built to identify changes in a model's outputs before evaluation data becomes available.

For example, if you create a churn model, you may have to wait several weeks before learning whether users actually churned (and before you can evaluate your churn predictions).

This software will not guarantee that your model is accurate, but it will alert you if your model's outputs (i.e., predictions) differ from what they have been in the past. A model's output can pass predeval's tests and still be inaccurate, and it can fail them while remaining accurate. That said, unexpected changes in model outputs likely signal a change in accuracy.
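
A minimal sketch of how this might look for a churn model (this assumes the ContinuousEvaluator class and check_data method shown in the usage docs; see the links below for the actual API and output):

    import numpy as np
    from predeval import ContinuousEvaluator

    # Model output from a period you trust (e.g., last month's churn scores).
    baseline_scores = np.random.uniform(0, 1, size=1000)

    # Fit the evaluator's expectations on the trusted output.
    evaluator = ContinuousEvaluator(baseline_scores)

    # Today's model output, produced before any churn labels are available.
    new_scores = np.random.uniform(0, 1, size=1000)

    # Run the distribution checks (e.g., min, max, mean, KS test) and
    # report which ones the new output passes or fails.
    evaluator.check_data(new_scores)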

Installation

Installation is described here: https://predeval.readthedocs.io/en/latest/installation.html

Example Usage

Examples can be found here: https://predeval.readthedocs.io/en/latest/usage.html

API Documentation

Documentation of the software can be found here: https://predeval.readthedocs.io/en/latest/api.html

Contributing

Info about contributing can be found here: https://predeval.readthedocs.io/en/latest/contributing.html

Changelog

Changelog can be found here: https://predeval.readthedocs.io/en/latest/history.html

Credits


Info about contributors can be found here: https://predeval.readthedocs.io/en/latest/authors.html

This package was created with Cookiecutter and the audreyr/cookiecutter-pypackage project template.