AI and ML systems need ongoing oversight to ensure they perform ethically, optimally and within anticipated operational thresholds. System decisions, algorithms and data sources also need to be systematically evaluated to ensure compliance with internal policies, external regulations, ethical standards and organizational objectives.

Together, continuous monitoring and auditing assure performance. To make sure the system is performing as expected, you need some form of risk mitigation to help identify risks early. Are there biases? Are the predictions incorrect? Are there potential data privacy issues?

Lastly, you need to take steps to make sure the public has trust in the system. Continuous monitoring and auditing are another means of assuring key stakeholders that the system is functioning and that there is accountability for it.

Following are key steps and strategies for implementing effective monitoring and auditing:

* Set out clear metrics and KPIs to define what successful operation of the AI and ML model means. These metrics should provide reasonable insight into criteria such as accuracy, fairness, privacy or anything else essential to the system (see the first sketch after this list).

* Figure out how to implement real-time monitoring tools. There is a lot of software that can track the system’s operation in real time. You want to make sure it can flag anomalies, alert on changes in performance, and detect changes in usage patterns. This will allow you to set alerts based on your monitoring criteria (see the rolling-window sketch after this list).

* Have an independent party conduct regular audits. You don’t want the team that created the model and put it into production to do the audit. You want an internal or external audit group that can take an unbiased look. If it’s an internal group, it must have the right level of expertise so it doesn’t have to rely on the AI team to understand what’s going on. You want unbiased auditors who can review the use of algorithms, the data sources, the decision-making process, and compliance with the regulations and ethical standards the organization has defined.

* Establish a continuous loop to give the AI and ML teams feedback from monitoring and auditing. Put a mechanism in place to act on and follow up on any issues that are found.

* Set out guidelines for transparency and reporting. Reporting should go to stakeholders and cover things such as the validity of data sources, any findings, and any potential biases. Accountability requires that findings go to the right level, not just to the team designing and operating the model. Also perform an ethical compliance check to make sure operations follow ethical guidelines. That involves assessing the model’s decisions to make sure there is no potential bias or discrimination inherent in the system (see the bias-audit sketch after this list).

* Assess data quality. Continuously monitor to ensure that the data sources used in the model are still relevant and that there is quality control around them (see the data-quality sketch after this list). If you are using bad data to train an AI model, rest assured you’re going to have a bad outcome.

* Implement security monitoring to detect potential breaches, vulnerabilities or abuse of the model (see the abuse-detection sketch after this list).

* Make sure the auditing and monitoring team is trained by the appropriate external sources to spot potential issues. Team members need to stay current on the latest regulations and understand best practices in the use of AI and ML.
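
The sketches below make some of these steps concrete. First, metrics and KPIs: this is a minimal illustration, not a definitive implementation. It assumes labeled outcomes, predictions, and a sensitive attribute are available as NumPy arrays, and the threshold values are hypothetical examples an organization might set.

```python
import numpy as np

def kpi_report(y_true, y_pred, group):
    """Compute two illustrative KPIs: accuracy and a demographic-parity gap."""
    accuracy = float(np.mean(y_true == y_pred))
    # Demographic parity: spread in positive-prediction rates across groups.
    rates = {g: float(np.mean(y_pred[group == g] == 1)) for g in np.unique(group)}
    parity_gap = max(rates.values()) - min(rates.values())
    return {"accuracy": accuracy, "parity_gap": parity_gap}

# Hypothetical KPI thresholds; real values depend on the use case.
THRESHOLDS = {"accuracy": 0.90, "parity_gap": 0.05}

report = kpi_report(
    y_true=np.array([1, 0, 1, 1, 0, 1]),
    y_pred=np.array([1, 0, 1, 0, 0, 1]),
    group=np.array(["a", "a", "b", "b", "a", "b"]),
)
print(report)
print("accuracy ok:", report["accuracy"] >= THRESHOLDS["accuracy"])
print("parity ok:", report["parity_gap"] <= THRESHOLDS["parity_gap"])
```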
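
For real-time monitoring, a rolling window over recent outcomes is one simple way to flag performance drops. This sketch assumes labeled outcomes arrive as a stream; the baseline, tolerance, and window size are illustrative, and the alert is a placeholder for whatever alerting channel the team already uses.

```python
from collections import deque

class PerformanceMonitor:
    """Flag drops in rolling accuracy against a baseline (illustrative values)."""

    def __init__(self, baseline=0.90, tolerance=0.05, window=500):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = incorrect

    def record(self, correct: bool):
        self.outcomes.append(1 if correct else 0)
        if len(self.outcomes) == self.outcomes.maxlen:
            rate = sum(self.outcomes) / len(self.outcomes)
            if rate < self.baseline - self.tolerance:
                self.alert(rate)

    def alert(self, rate):
        # Placeholder: wire this to a real alerting channel.
        print(f"ALERT: rolling accuracy {rate:.2%} is below threshold")

# In production, call monitor.record(...) as each labeled outcome arrives.
monitor = PerformanceMonitor()
```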
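
For the ethical compliance check, one common test is whether error rates differ across groups. The sketch below computes per-group false-positive rates and emits a JSON summary that could be attached to a stakeholder report; the data and group labels are made up for illustration.

```python
import json
import numpy as np

def bias_audit(y_true, y_pred, group):
    """Per-group false-positive rates, a common check for discriminatory behavior."""
    report = {}
    for g in np.unique(group):
        negatives = (group == g) & (y_true == 0)  # ground-truth negatives in this group
        fpr = float(np.mean(y_pred[negatives] == 1)) if negatives.any() else None
        report[str(g)] = {"false_positive_rate": fpr, "n_negatives": int(negatives.sum())}
    return report

findings = bias_audit(
    y_true=np.array([0, 0, 1, 0, 1, 0]),
    y_pred=np.array([1, 0, 1, 0, 1, 1]),
    group=np.array(["a", "a", "a", "b", "b", "b"]),
)
print(json.dumps(findings, indent=2))  # attach to the stakeholder-facing report
```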
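
For data quality, a lightweight batch check can catch schema drift and missing values before they reach training or inference. The expected schema and the missing-value tolerance below are hypothetical; substitute your own.

```python
import pandas as pd

# Hypothetical expected schema and tolerance; adjust to your pipeline.
EXPECTED_COLUMNS = {"age": "int64", "income": "float64", "region": "object"}
MAX_MISSING_FRACTION = 0.02

def data_quality_check(df: pd.DataFrame) -> list:
    """Return a list of data-quality issues found in an incoming batch."""
    issues = []
    for col, dtype in EXPECTED_COLUMNS.items():
        if col not in df.columns:
            issues.append(f"missing column: {col}")
            continue
        if str(df[col].dtype) != dtype:
            issues.append(f"{col}: expected {dtype}, got {df[col].dtype}")
        missing = df[col].isna().mean()
        if missing > MAX_MISSING_FRACTION:
            issues.append(f"{col}: {missing:.1%} missing values")
    return issues

batch = pd.DataFrame({
    "age": [34, 51, None],  # the None coerces this column to float64
    "income": [52000.0, None, 61000.0],
    "region": ["north", "south", "east"],
})
print(data_quality_check(batch))
```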
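
For security monitoring, one narrow but useful signal is a client hammering the prediction endpoint, which can indicate scraping or model-extraction attempts. The rate limit here is a hypothetical example; tune it to your traffic profile.

```python
import time
from collections import defaultdict, deque

class AbuseDetector:
    """Flag clients whose request rate suggests scraping or model extraction."""

    def __init__(self, max_requests=100, per_seconds=60):
        self.max_requests = max_requests
        self.per_seconds = per_seconds
        self.history = defaultdict(deque)  # client_id -> recent request times

    def check(self, client_id: str) -> bool:
        """Record a request; return True if the client should be flagged."""
        now = time.monotonic()
        window = self.history[client_id]
        window.append(now)
        # Drop timestamps that fall outside the sliding window.
        while window and now - window[0] > self.per_seconds:
            window.popleft()
        return len(window) > self.max_requests

detector = AbuseDetector(max_requests=100, per_seconds=60)
if detector.check("client-42"):  # call per incoming request
    print("Flag client-42 for review: unusually high request volume")
```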

Continuous monitoring and regular auditing are key parts of operating AI and ML systems. They’re not something you do once and hope for the best.