Another critical factor to consider during the algorithm prototyping stage is preparing for reviews by model risk committees or model audits that may be required post-production.
The best strategy is to put in place repeatable processes that thoroughly assess machine learning models before they reach production and that keep deployed models open to ongoing audit and management once they are operating in production.
If you do not already have one, we recommend instituting a model review board or process to inspect algorithm details and pipelines before the machine learning model goes into production. The process does not have to be onerous: at its core, the prototyping team prepares a written summary and presentation of the model and the prototyping process, along with supporting documentation.
The prototyping team should present:
The review board then must approve the project before the model is implemented in production. A clearly defined process like this helps the project team think critically about the machine learning problem and ensures that models are properly screened to mitigate potential risks.
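As one hypothetical illustration, the written summary can be captured as a structured record that travels with the model artifacts and is easy for a review board to read and for auditors to retrieve later. The field names and example values below are placeholders, not a prescribed standard:

```python
# Sketch of a pre-production review package; all fields and values are
# illustrative placeholders, not a required or standardized schema.
from __future__ import annotations
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelReviewPackage:
    """Written summary the prototyping team submits to the review board."""
    model_name: str
    version: str
    business_problem: str                 # the problem the model is meant to solve
    training_data_sources: list[str]      # where the training data came from
    evaluation_metrics: dict[str, float]  # e.g., precision/recall on holdout data
    known_limitations: list[str]          # documented risks and failure modes
    approved_by: str | None = None        # filled in by the review board


# Placeholder values for illustration only.
package = ModelReviewPackage(
    model_name="claims-triage",
    version="0.3.1",
    business_problem="Route incoming claims to the correct review queue",
    training_data_sources=["claims_2021_2023.parquet"],
    evaluation_metrics={"precision": 0.91, "recall": 0.87},
    known_limitations=["Underrepresents claims filed outside the US"],
)

# Persist the summary alongside the model artifacts for the audit trail.
with open("review_package.json", "w") as f:
    json.dump(asdict(package), f, indent=2)
```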
The prototyping team should also ensure that appropriate information – including, for example, true/false positives and evidence packages – is logged and scored while the model is running in production. This ensures that an audit of the model’s performance can be readily performed.
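One lightweight way to achieve this is an append-only prediction log that can later be back-filled with observed outcomes, from which true/false positives can be scored. The sketch below assumes a simple JSON-lines file and hypothetical helper names (log_prediction, label_outcome); it is not any particular monitoring product’s API:

```python
# Minimal sketch of production prediction logging for auditability; the file
# format and function names are assumptions made for illustration.
import json
import time
import uuid


def log_prediction(model_version, features, prediction, log_path="predictions.jsonl"):
    """Append one prediction record to an append-only JSON-lines audit log."""
    record = {
        "request_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,
        "features": features,
        "prediction": prediction,
        "ground_truth": None,  # back-filled later to score true/false positives
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["request_id"]


def label_outcome(request_id, ground_truth, log_path="predictions.jsonl"):
    """Back-fill the observed outcome for a logged prediction."""
    with open(log_path) as f:
        records = [json.loads(line) for line in f]
    for record in records:
        if record["request_id"] == request_id:
            record["ground_truth"] = ground_truth
    with open(log_path, "w") as f:
        f.writelines(json.dumps(r) + "\n" for r in records)
```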
To simplify post-production model reviews and audits, the best prototyping teams – and the best prototyping and production software tools – automate or build in data management, model management, AI/ML operations, and model audit processes as part of their software pipelines.
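As a rough sketch of what "building it in" can look like, a training pipeline might fingerprint its input data and append model metadata to a registry as a final step, so the audit record is produced automatically rather than assembled by hand later. The helper names and file paths here are assumptions, not a specific MLOps tool’s API:

```python
# Hedged sketch of adding audit metadata to a training pipeline; register_model
# and the registry file are hypothetical, shown only to illustrate the idea.
import datetime
import hashlib
import json


def fingerprint(path):
    """Hash the training data so auditors can verify exactly what was used."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()


def register_model(model_path, data_path, metrics, registry_path="model_registry.jsonl"):
    """Record model, data, and evaluation metadata as a pipeline step."""
    entry = {
        "model_path": model_path,
        "data_sha256": fingerprint(data_path),
        "metrics": metrics,
        "registered_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    with open(registry_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```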