Model drift occurs when the accuracy of predictions produced from new input values "drifts" from the performance observed during the training period. The two main categories of model drift are concept drift, in which the statistical properties of the target variable the model is predicting change, and data drift, in which the statistical properties of the input data change.
As might be expected, both types of drift lead to model performance degradation.
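For illustration, the sketch below shows one common way to check for each category: a two-sample Kolmogorov-Smirnov test on an input feature for data drift, and a comparison of live accuracy against training-time accuracy for concept drift. The feature arrays, thresholds, and synthetic data are illustrative assumptions, not part of any particular product or dataset.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import accuracy_score

def data_drift_detected(train_feature, live_feature, alpha=0.05):
    """Data drift: the distribution of an input feature has shifted.
    A two-sample Kolmogorov-Smirnov test compares training-time values
    with values observed in production."""
    result = ks_2samp(train_feature, live_feature)
    return result.pvalue < alpha  # small p-value -> distributions likely differ

def concept_drift_detected(y_true, y_pred, training_accuracy, tolerance=0.05):
    """Concept drift: the relationship between inputs and the target has
    changed, which surfaces as live accuracy falling below training accuracy."""
    return accuracy_score(y_true, y_pred) < training_accuracy - tolerance

# Purely synthetic example: a shift in the feature's mean simulates data drift.
rng = np.random.default_rng(0)
train_x = rng.normal(0.0, 1.0, size=1_000)
live_x = rng.normal(0.5, 1.0, size=1_000)
print(data_drift_detected(train_x, live_x))  # expected: True
```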
Machine learning models can produce predictive results that change, or "drift," from the behavior established during training. Identifying the category of drift dictates the corrective measures required to bring prediction performance back to an acceptable level. Those corrective measures can include retraining the existing model with new data or replacing the existing model with a new one that performs better.
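The snippet below is a minimal sketch of those two corrective paths: refit a fresh copy of the existing model on recent data, fit a candidate replacement, and keep whichever scores better on a recent holdout set. The candidate model class, split size, and metric are assumptions made for illustration.

```python
from sklearn.base import clone
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def refresh_model(current_model, recent_X, recent_y):
    """Retrain a fresh copy of the current model on recent data, fit a
    candidate replacement, and keep whichever scores better on a holdout set."""
    X_fit, X_hold, y_fit, y_hold = train_test_split(
        recent_X, recent_y, test_size=0.25, random_state=0)

    retrained = clone(current_model).fit(X_fit, y_fit)          # retrain the existing model
    candidate = GradientBoostingClassifier().fit(X_fit, y_fit)  # candidate replacement

    if accuracy_score(y_hold, retrained.predict(X_hold)) >= \
       accuracy_score(y_hold, candidate.predict(X_hold)):
        return retrained
    return candidate
```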
It is critical to monitor for model drift in order to sustain accurate predictions and thereby support continued adoption of AI across an organization. Businesses may employ a variety of techniques to monitor model performance, such as comparing live predictions against incoming ground-truth labels and applying statistical tests that detect shifts in input data distributions; a sketch of both follows below.
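The code below sketches two widely used monitoring signals: rolling accuracy computed against ground-truth labels as they arrive, and the population stability index (PSI), which quantifies how far a feature's live distribution has shifted from its training-time distribution. The window size, bin count, and quoted PSI rules of thumb are illustrative assumptions rather than vendor recommendations.

```python
import numpy as np

def rolling_accuracy(y_true, y_pred, window=500):
    """Accuracy over the most recent `window` labeled predictions."""
    recent_true, recent_pred = np.asarray(y_true)[-window:], np.asarray(y_pred)[-window:]
    return float(np.mean(recent_true == recent_pred))

def population_stability_index(expected, actual, bins=10):
    """PSI compares the binned distribution of a feature at training time
    (`expected`) with its live distribution (`actual`)."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero in empty bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

# A commonly cited rule of thumb for PSI: < 0.1 stable,
# 0.1 to 0.25 moderate shift, > 0.25 significant shift.
```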
Machine learning models in production can be monitored live through SaaS applications built on top of the C3 AI Platform. Depending on the use case, different logic can be built within the C3 AI Platform to handle model drift: raising trigger-based alerts, swapping out models based on performance, retraining the model, or pausing predictions for a live application until the model is updated.
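The following platform-agnostic sketch shows the general shape of such trigger-based logic; it is not the C3 AI Platform API. The DriftPolicy thresholds and the notify, pause_serving, and trigger_retrain callbacks are hypothetical stand-ins for whatever alerting and deployment hooks a given platform provides.

```python
from dataclasses import dataclass

@dataclass
class DriftPolicy:
    alert_threshold: float = 0.05   # accuracy drop that raises an alert and a retrain
    pause_threshold: float = 0.15   # accuracy drop that halts live predictions

def handle_drift(training_accuracy, live_accuracy, policy, notify, pause_serving, trigger_retrain):
    """Apply trigger-based rules when live accuracy falls below training-time accuracy."""
    drop = training_accuracy - live_accuracy
    if drop >= policy.pause_threshold:
        pause_serving()      # stop using predictions until the model is updated
        trigger_retrain()
        notify(f"Severe drift: accuracy dropped by {drop:.2%}; serving paused.")
    elif drop >= policy.alert_threshold:
        trigger_retrain()
        notify(f"Drift detected: accuracy dropped by {drop:.2%}; retraining triggered.")
```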