The rapid adoption of artificial intelligence and machine learning yields tremendous benefits. But as with any transformational technology that can affect human lives and societal structures, there are attendant governance challenges.
Effective governance of AI and ML requires a blueprint to ensure these technologies are used safely, ethically, and responsibly. Understanding the risks associated with these technologies, such as bias, potential misuse, and privacy concerns, is essential. A governance framework helps ensure that organizations implement AI and ML with transparency and accountability, and that they promote the responsible use of these technologies to avoid misuse or unintended consequences.
Having a framework also helps to build trust among the general public and the organization’s stakeholders regarding the deployment of AI and ML. You need to have a standard against which you will be measured.
Key components you need for an effective AI/ML governance framework include:
* Clear objectives. There should be well-defined goals and principles to ensure that any AI or ML introduced is fair, reduces bias, and adheres to the ethical principles you define.
* Clearly defined roles and responsibilities. Make sure you delineate the roles and responsibilities of everyone involved in developing, deploying, monitoring, and testing AI models.
* Data management. Guidelines on data collection have to be clearly spelled out. What data are being collected? How are data being stored? How are data being processed? How are they being used?
* Transparency. How do you document the processes? How do you document the algorithms and the data sources that are used? This documentation will help you explain the model, and potentially the decisions it makes, if you’re called before a board of directors, a congressional committee, or some other regulatory or governing body. You need to be able to reconstruct what happened, not just from a regulatory point of view, but to ensure there’s nothing wrong with the model.
* Ethical considerations. How do you avoid harm? How do you prevent discrimination and ensure the model produces some societal benefit?
* Regular monitoring and reporting. You need to inventory all of your AI/ML applications. You need to evaluate their impacts and ascertain whether they are working as expected. And you need to report these findings to the relevant governance team so it can understand how things are working.
You also want to establish a channel for continuous feedback from end users and stakeholders, so you understand whether the model works as expected and how it could be improved.
If you’re in a regulated industry and your AI and ML have some unwanted effect, that monitoring and reporting process can act as a flight recorder that allows you to retrace how a decision was made (a minimal sketch of this kind of record keeping follows this list).
* Training and education. When organizations rushed to the cloud, there were many mishaps: information was exposed publicly because people entered the field rapidly without understanding its nuances. We must apply the lessons of that experience when introducing AI and ML. All members of the project team need the required knowledge, and they need to be aware of what the governance and ethics criteria are. If they haven’t been trained before, you must provide them with the training.
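To make the documentation, inventory, and flight-recorder ideas above concrete, here is a minimal sketch in Python. Every name in it (`ModelCard`, `DecisionRecord`, `log_decision`, and their fields) is an illustrative assumption rather than a published standard or library API; a real inventory would live in a governed database with access controls, not in in-memory objects.

```python
# A minimal sketch of the record keeping described above. All names and
# fields here are illustrative assumptions, not a published standard.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Any


@dataclass
class ModelCard:
    """Documentation for one AI/ML application in the inventory."""
    name: str
    owner: str                    # who is accountable for this model
    objective: str                # the business goal it serves
    data_sources: list[str]       # where the training data came from
    data_retention: str           # how long data are kept, and where
    known_limitations: list[str]  # documented biases and edge cases
    last_reviewed: datetime


@dataclass
class DecisionRecord:
    """One logged decision: the 'flight recorder' entry."""
    model_name: str
    model_version: str
    timestamp: datetime
    inputs: dict[str, Any]        # the features the model saw
    output: Any                   # what the model decided
    explanation: str              # human-readable rationale, if available


def log_decision(audit_log: list[DecisionRecord], record: DecisionRecord) -> None:
    """Append-only logging so decisions can be reconstructed later."""
    audit_log.append(record)


# Usage: document a model, then record one of its decisions.
card = ModelCard(
    name="credit-risk-scorer",
    owner="risk-analytics-team",
    objective="Estimate default probability for loan applicants",
    data_sources=["internal loan history, 2015-2023"],
    data_retention="7 years, encrypted at rest",
    known_limitations=["underrepresents thin-file applicants"],
    last_reviewed=datetime.now(timezone.utc),
)
audit_log: list[DecisionRecord] = []
log_decision(audit_log, DecisionRecord(
    model_name=card.name,
    model_version="1.4.2",
    timestamp=datetime.now(timezone.utc),
    inputs={"income": 52_000, "debt_ratio": 0.31},
    output="approve",
    explanation="score 0.12 was below the risk threshold of 0.20",
))
```

The point of the sketch is the shape of the records: a model card that answers the data management questions above, and an append-only decision log you can replay when a regulator, or your own governance team, asks how a decision was made.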
A word of reassurance: There are some noteworthy governance frameworks out there, so you don’t have to build your framework from scratch. Existing blueprints include:
* The European Commission’s Ethics Guidelines for Trustworthy AI, which focus on respect for laws, regulations, ethical principles, and values, as well as the robustness of the system.
* The OECD AI principles, adopted by more than 40 countries, focus on respecting human rights, values, and diversity as you implement these models.
* The Montreal Declaration for a Responsible Development of AI, which focuses on well-being, autonomy, privacy, and other principles meant to ensure AI models do the right thing.
* The IEEE’s Ethically Aligned Design, which focuses on the ethical aspects of designing autonomous systems, such as human rights, well-being, and transparency.
To ensure your governance framework is being implemented correctly, you need the involvement of a broad range of stakeholders: technologists, business leaders, ethicists, and end users. And because the technology is changing so rapidly, you also need regular reviews to ensure your models are still valid and to examine whether there is a better way to do things.
Ensure you have well-documented use cases or specific guidelines for adopting different AI models. A healthcare application would have more stringent data protection and privacy measures than an AI that will help optimize a portfolio.
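One way to make such use-case guidelines actionable is to express them as data that your deployment process can check before a model ships. The sketch below is hypothetical: the profile names, control fields, and retention figures are assumptions for illustration, not drawn from any regulation.

```python
# An illustrative sketch only: the profiles and figures below are
# assumptions for demonstration, not regulatory guidance.
GOVERNANCE_PROFILES = {
    "healthcare-diagnosis": {
        "data_protection": "patient data encrypted in transit and at rest; all access audited",
        "human_review": "required before any clinical action",
        "retention_limit_days": 2555,  # roughly 7 years; varies by jurisdiction
    },
    "portfolio-optimization": {
        "data_protection": "market data under standard access controls",
        "human_review": "periodic sampling of recommendations",
        "retention_limit_days": 365,
    },
}


def required_controls(use_case: str) -> dict:
    """Look up the controls a new model must satisfy for its use case."""
    try:
        return GOVERNANCE_PROFILES[use_case]
    except KeyError:
        raise ValueError(
            f"No governance profile defined for {use_case!r}; "
            "add one before deploying a model for this use case"
        )
```

Encoding the guidelines this way means a missing profile blocks deployment by default, which keeps the stringency decision with the governance team rather than with each project.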
A practical and comprehensive governance framework will show you’re a good steward of the technology – and that you’re implementing it for the right reasons.