A loss function maps a scenario (one or many values) to a real number that represents the loss/cost/risk/error of that scenario. In machine learning and deep learning, if a set of predictions (a scenario) is far from the set of true labels, the loss function should output a high value; if the predictions are close to the labels, it should output a low value.
There are widely used loss functions for common learning problems such as regression and classification. For regression problems, examples include mean squared error loss, mean absolute error loss, and quantile loss. For classification problems, examples include hinge loss, cross-entropy loss, and KL (Kullback-Leibler) divergence loss.
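As a concrete illustration, some of the losses above can be computed directly. The following is a minimal pure-Python sketch (the function names are ours, not taken from any particular library):

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: average squared difference between labels and predictions."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error: average absolute difference."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Cross-entropy for binary labels (0/1); y_pred are predicted probabilities."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1 - eps)  # clip to avoid log(0)
        total += -(t * math.log(p) + (1 - t) * math.log(1 - p))
    return total / len(y_true)

# Predictions far from the labels yield a larger loss than close ones.
print(mse([1.0, 2.0], [1.1, 2.1]))  # small
print(mse([1.0, 2.0], [3.0, 5.0]))  # large
```

Each function follows the same contract described above: the further the predictions drift from the true labels, the larger the returned value.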
A loss function helps to formulate a learning problem as an optimization problem. With a well-chosen loss function, minimizing (optimizing) it drives the model toward good parameters for the learning problem.
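To make the "minimize the loss" framing concrete, here is a small sketch of gradient descent on mean squared error for a one-parameter linear model (the data and learning rate are illustrative assumptions):

```python
# Fit y = w * x by minimizing L(w) = mean((w*x - y)^2) with gradient descent.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # generated with the true parameter w = 2

w = 0.0    # initial parameter
lr = 0.01  # learning rate

for _ in range(500):
    # dL/dw = mean(2 * x * (w*x - y))
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # step in the direction that decreases the loss

print(round(w, 3))  # converges close to 2.0
```

Minimizing the loss recovers the parameter that generated the data, which is exactly the "good outcome" the optimization view promises.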
In addition, knowing how to customize a loss function for the specific learning problem helps steer learning in the right direction. For example, instead of a monotonic loss function, the user could define a loss function for risk alerts in which the loss is high if the alert fires either too early or too late, and low if the alert falls within a defined period.
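One way such an alert-timing loss might look is sketched below. This is a hypothetical example: the function name, window size, and penalty weights are our own illustrative assumptions, not part of any product or library.

```python
def alert_timing_loss(alert_time, event_time, window=10.0,
                      early_penalty=1.0, late_penalty=5.0):
    """Hypothetical non-monotonic loss for risk alerts.

    The loss is zero when the alert falls inside the acceptable window
    (the `window` time units immediately before the event). Firing too
    early costs `early_penalty` per unit of earliness; firing after the
    event costs `late_penalty` per unit of lateness, reflecting that a
    missed warning is usually worse than a premature one.
    """
    if event_time - window <= alert_time <= event_time:
        return 0.0  # alert arrived within the defined period
    if alert_time < event_time - window:
        return early_penalty * (event_time - window - alert_time)
    return late_penalty * (alert_time - event_time)

print(alert_timing_loss(95.0, 100.0))   # in the window: loss 0.0
print(alert_timing_loss(80.0, 100.0))   # too early
print(alert_timing_loss(103.0, 100.0))  # too late, penalized more heavily
```

Because the loss is asymmetric around the event, minimizing it pushes the model toward alerts that land inside the useful window rather than merely "as early as possible."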
The C3 AI Suite offers commonly used loss functions as MLScoringMetrics that are well integrated with components such as model training and model tuning. In addition, customized loss functions can be constructed easily by extending the base MLScoringMetric type within the C3 type system. These capabilities allow users to develop or choose the right loss functions for their learning problems seamlessly.
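The general extension pattern can be sketched in plain Python. To be clear, this is an analogy only: the class names below are ours and this is not the C3 type system or the actual MLScoringMetric API.

```python
class ScoringMetric:
    """Illustrative base class: subclasses override evaluate(y_true, y_pred)."""
    name = "base"

    def evaluate(self, y_true, y_pred):
        raise NotImplementedError

class MeanAbsoluteError(ScoringMetric):
    """A custom metric implemented by extending the base class."""
    name = "mae"

    def evaluate(self, y_true, y_pred):
        return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def score(metric, y_true, y_pred):
    """A training/tuning loop can accept any ScoringMetric uniformly."""
    return metric.name, metric.evaluate(y_true, y_pred)

print(score(MeanAbsoluteError(), [1.0, 2.0], [1.5, 2.5]))  # ('mae', 0.5)
```

The point of the pattern is that training and tuning code depends only on the base interface, so a problem-specific loss (like the alert-timing example above) plugs in without changing that code.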