With artificial intelligence and machine learning models, transparency is of the utmost importance.

It is essential to have clarity and openness around how these systems operate. Decision-making processes and algorithms must be clear and understandable not only to the data scientists who create them, but to a wide variety of people, regardless of their technical background. You shouldn’t need a PhD to understand what’s going on.

Transparency is important for three reasons: trust-building, accountability and ethical assurance.

When users understand how decisions are made, they’re more likely to feel comfortable with what the system is doing and actually use it. A method should be in place to track any problems and hold the relevant parties accountable. And when stakeholders are thoughtful about how decisions are made and how systems work, and there are no hidden agendas, that sets an ethical bar.

Achieving transparency in these systems is a multifaceted process. 

The first component is explainable AI. You need to invest in the development of models that can explain the decisions they make. You can’t expect everybody to trust what’s going on inside a black box.
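
As a minimal sketch of what that can look like in practice, consider an inherently interpretable model such as a logistic regression, where each feature’s pull on a decision can be read off the coefficients directly. The loan-approval framing, feature names and toy data below are illustrative assumptions, not anything prescribed here.

```python
# A minimal sketch of per-decision explainability, assuming an inherently
# interpretable model (scikit-learn's LogisticRegression). The loan-approval
# framing, feature names and toy data are illustrative, not from the text.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["income_10k", "debt_ratio", "years_employed"]

# Toy training data: each row is an applicant, columns match feature_names.
X = np.array([
    [6.0, 0.25, 5],
    [3.2, 0.55, 1],
    [8.5, 0.10, 9],
    [2.8, 0.60, 0],
])
y = np.array([1, 0, 1, 0])  # 1 = approved, 0 = denied

model = LogisticRegression().fit(X, y)

def explain_decision(x: np.ndarray) -> None:
    """Print the decision and each feature's pull on the log-odds of approval."""
    decision = "approved" if model.predict(x.reshape(1, -1))[0] == 1 else "denied"
    contributions = model.coef_[0] * x  # linear models decompose exactly
    print(f"Decision: {decision}")
    for name, c in sorted(zip(feature_names, contributions), key=lambda p: -abs(p[1])):
        print(f"  {name}: {c:+.3f} toward approval")

explain_decision(np.array([4.5, 0.40, 3]))
```

For more complex models, the same kind of per-decision insight is typically approximated with post-hoc explanation techniques rather than read off the coefficients directly.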

The second component is clear documentation and communication. That means comprehensive design documents and thorough documentation of how the system was developed. What data sources does it use, and why does it use them? How do the key components of the system function? Any links or dependencies to other systems, or anything that’s open source, should be noted as well. Documentation should be written in a manner that is accessible to a non-technical audience so they can understand how the system operates.
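
One hedged sketch of capturing that documentation in a machine-readable form, loosely in the spirit of a “model card”; every field name and value here is a hypothetical example rather than a standard schema.

```python
# A hedged sketch of machine-readable system documentation, loosely in the
# spirit of a "model card"; every field and value is a hypothetical example.
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    purpose: str                    # plain-language description of what it does
    data_sources: list[str]         # what data it uses, and why
    dependencies: list[str]         # links to other systems and open-source components
    known_limitations: list[str]
    plain_language_summary: str     # written for a non-technical audience

card = ModelCard(
    name="loan-approval-v2",
    purpose="Scores loan applications for likelihood of repayment.",
    data_sources=["application forms (consented)", "credit bureau reports"],
    dependencies=["scikit-learn (open source)", "internal feature-store service"],
    known_limitations=["Trained on 2020-2023 data; may lag current conditions."],
    plain_language_summary=(
        "The system compares a new application with past applications and "
        "estimates how likely the loan is to be repaid."
    ),
)
print(card.plain_language_summary)
```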

Stakeholder engagement is important throughout the process. First, define who your stakeholders are: the people you regularly engage with, such as your customers, your regulators and your internal staff. Then have a process for capturing their feedback and addressing any concerns they might have.

Just as you have a security audit, so, too, should you have what I call an ethical audit, which attests that the system operates in an ethical and unbiased manner. This type of audit would review and report on how the AI makes decisions. Usually it would be done by a trusted third party that’s used to dealing with AI and ML. Their brief would be to make sure that no biases have been introduced, to review the governance and all the documentation that goes into transparency, and to come up with their own conclusion. This is not something you want to do internally, because people who build and use these models often get too close to them. You need an outside opinion.

Transparent data practices are another critical component. The documentation process has to be open about data sources, but you also have to demonstrate how data are going to be used and how you’re going to protect the user’s privacy. You should have clear privacy practices that account for your use of AI and ML. When thinking about privacy, we tend to think of it in the common construct: what information are we collecting, how is that information being used, and have we declared how it’s being used? If we’re using it for AI and ML, we need to be clear about that.
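
A small sketch of what declaring data uses up front might look like in code; the fields, purposes and enforcement rule shown are hypothetical, just one of many ways to make the declaration explicit.

```python
# A hypothetical sketch of declaring data uses up front, including AI/ML uses;
# the fields, purposes and enforcement rule are illustrative assumptions.
DATA_USE_DECLARATIONS = {
    "email_address": ["account notifications"],
    "transaction_history": ["fraud detection (ML model)", "loan-approval model training"],
}

def check_use(data_field: str, purpose: str) -> bool:
    """Allow a data field to be used only for a purpose that was declared."""
    allowed = purpose in DATA_USE_DECLARATIONS.get(data_field, [])
    if not allowed:
        print(f"Blocked: {data_field!r} was never declared for {purpose!r}")
    return allowed

check_use("transaction_history", "loan-approval model training")  # allowed
check_use("email_address", "loan-approval model training")        # blocked
```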

The office that handles privacy for your organization needs to have a good understanding of how the model works. More broadly, you want to make sure that anyone working with AI and ML is trained appropriately. By that I mean not only technology and security training, but also training on what it means to be transparent, so that anyone who is part of the development team understands what’s required of them, and not just from a data science perspective.

The last component is regulatory compliance: making sure you’re adhering to regulations that mandate certain levels of transparency. One such example is the GDPR. You would have to be able to explain in plain English how an automated decision was made about a specific individual and how it affected them.
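
A hypothetical sketch of the plain-English explanation an individual might receive; the wording template and the contribution scores are illustrative assumptions, not a compliance recipe or legal advice.

```python
# A hypothetical sketch of the plain-English explanation an individual might
# receive; the wording template and scores are illustrative, not legal advice.
def plain_english_explanation(decision: str, contributions: dict[str, float]) -> str:
    # Surface the two factors with the largest influence, for or against.
    top = sorted(contributions.items(), key=lambda item: -abs(item[1]))[:2]
    reasons = " and ".join(
        f"your {name} {'worked in your favor' if score > 0 else 'counted against you'}"
        for name, score in top
    )
    return f"Your application was {decision} by an automated system. The main factors: {reasons}."

print(plain_english_explanation(
    "denied",
    {"debt-to-income ratio": -1.2, "income": 0.4, "length of employment": 0.1},
))
```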

Transparency is going to be at the heart of AI and ML. It really underpins everything, because transparency leads to trust, to accountability and to ethical practices.

Otherwise you’re talking about developing Skynet.