Precision is one indicator of a machine learning model’s performance – it measures the quality of the model’s positive predictions. Precision is defined as the number of true positives divided by the total number of positive predictions (that is, the number of true positives plus the number of false positives). In a customer attrition model, for example, precision is the number of customers the model correctly predicted would unsubscribe divided by the total number of customers the model predicted would unsubscribe.
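To make the definition concrete, the short sketch below computes precision for a hypothetical set of churn labels and predictions, both directly from the true-positive and false-positive counts and with scikit-learn’s precision_score (scikit-learn and the example data are assumptions for illustration, not part of the source material):

```python
from sklearn.metrics import precision_score

# Hypothetical labels for ten customers: 1 = unsubscribed, 0 = stayed.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
# Hypothetical model predictions for the same ten customers.
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 0, 0]

# Precision = true positives / (true positives + false positives),
# computed by hand from the counts:
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
print(tp / (tp + fp))  # 3 TP / (3 TP + 1 FP) = 0.75

# The same value via scikit-learn:
print(precision_score(y_true, y_pred))  # 0.75
```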
While a perfect machine learning classifier would achieve 100 percent precision and 100 percent recall, real-world models never do, and they inherently trade one off against the other: typically, the higher the precision, the lower the recall, and vice versa. This trade-off usually stems from the model’s decision threshold – raising it makes positive predictions more conservative, which increases precision but reduces recall. In the customer attrition example above, a model tuned for high precision – where each positive prediction is a high-quality prediction – will usually have lower recall; in other words, the model will fail to identify a large portion of the customers who will actually unsubscribe.
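The sketch below illustrates this trade-off on the same hypothetical churn data by sweeping the decision threshold applied to the model’s predicted probabilities (again assuming scikit-learn; the labels and scores are made up for illustration):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score

# Hypothetical labels and model scores (predicted probability of unsubscribing).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
scores = np.array([0.9, 0.4, 0.8, 0.35, 0.1, 0.6, 0.75, 0.2, 0.3, 0.55])

# A higher threshold yields more conservative positive predictions
# (higher precision, lower recall); a lower threshold does the reverse.
for threshold in (0.3, 0.5, 0.7):
    y_pred = (scores >= threshold).astype(int)
    p = precision_score(y_true, y_pred)
    r = recall_score(y_true, y_pred)
    print(f"threshold={threshold:.1f}  precision={p:.2f}  recall={r:.2f}")
```

With these made-up numbers, precision climbs from 0.62 to 1.00 while recall falls from 1.00 to 0.60 as the threshold rises – the trade-off described above.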
The C3 AI Platform and C3 AI Applications provide extensive capabilities to build machine learning models and optimize their performance, including precision, recall, and other metrics. These include capabilities, for example, to measure and optimize model performance metrics such as the F1 score and the Receiver Operating Characteristic (ROC) curve.
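As a rough illustration of these two metrics (using scikit-learn as a stand-in, not the C3 AI Platform’s own APIs, and reusing the hypothetical data above): the F1 score is the harmonic mean of precision and recall, and the area under the ROC curve summarizes score quality across all thresholds rather than at a single one.

```python
from sklearn.metrics import f1_score, roc_auc_score

# Hypothetical labels and scores from the sketches above.
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
scores = [0.9, 0.4, 0.8, 0.35, 0.1, 0.6, 0.75, 0.2, 0.3, 0.55]
y_pred = [1 if s >= 0.5 else 0 for s in scores]

# F1 = harmonic mean of precision and recall, summarizing both in one number.
print(f1_score(y_true, y_pred))        # 2 * (0.8 * 0.8) / (0.8 + 0.8) = 0.80

# ROC AUC evaluates the raw scores across all possible thresholds.
print(roc_auc_score(y_true, scores))   # 0.88
```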