With the race to integrate artificial intelligence moving at top speed, there is constant talk about the need to protect against the security threats and compliance challenges inherent in this transformative technology, but little in the way of actionable advice. Recently I came across an article written by Mark Orsi, CEO of the Global Resilience Federation, that lays out tactical steps we can take, and I’d like to share its key guidance points.

The Reference Shelf

One thing I found especially important was the reference shelf this article offered. When weighing how to embed security by design across AI project management, security professionals can find guidance in NIST AI RMF 1.0; the MITRE ATLAS framework; the Microsoft Responsible AI Standard (v2); the OWASP AI Security and Privacy Guide; ISO/IEC 23894:2023 and DIS 5338; ENISA’s Cybersecurity of AI and Standardisation; and regulatory frameworks such as the EU AI Act or the U.S. Blueprint for an AI Bill of Rights.

From these materials and other sources, the article proposes a program of action organized into the following categories:

Risk & Compliance

Risk and compliance management for AI systems requires its own specialized best practices because of the systems’ complex and opaque decision-making processes.

* Develop an AI risk assessment framework. Understand the AI threat profile, evaluate cybersecurity considerations, and conduct risk assessments for AI model use cases. Draft guidelines, checklists and templates that can be used to assess the risks associated with different phases of the AI lifecycle (a minimal template sketch follows this list).

* Document and communicate. Establish formal documentation processes to capture AI-associated risks. Develop clear risk profiles and baseline levels of risk across the organization. Communicate these risks to relevant stakeholders.

* Know the regulatory landscape. Understand the specific regulations that apply to your industry, geography and use cases. Stay informed about potential regulations that might affect your integration of AI. Monitor regulatory developments, engage with industry and legal experts, and actively participate in relevant industry forums.

* Integrate security and privacy into the AI lifecycle to mitigate risk without disrupting development. Involve the security team in initial design meetings, and offer data science teams regular training on security and privacy best practices, threat awareness and compliance requirements so they can incorporate security measures into their activities.
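
To make the templates idea concrete, here is a minimal sketch of what a machine-readable risk assessment record for an AI use case might look like. The lifecycle phases, the 1-5 likelihood and impact scales, and the example risks are my own illustrative assumptions, not something prescribed by the article.

```python
from dataclasses import dataclass, field

# Hypothetical lifecycle phases -- illustrative assumptions,
# not taken from the article or the underlying guide.
LIFECYCLE_PHASES = ["design", "data_collection", "training", "deployment", "monitoring"]

@dataclass
class RiskItem:
    description: str   # e.g. "training data may contain PII"
    phase: str         # one of LIFECYCLE_PHASES
    likelihood: int    # 1 (rare) .. 5 (almost certain)
    impact: int        # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

@dataclass
class UseCaseAssessment:
    use_case: str
    owner: str
    risks: list[RiskItem] = field(default_factory=list)

    def risks_by_phase(self, phase: str) -> list[RiskItem]:
        return [r for r in self.risks if r.phase == phase]

    def max_score(self) -> int:
        return max((r.score for r in self.risks), default=0)

assessment = UseCaseAssessment(
    use_case="customer support chatbot",
    owner="support engineering",
    risks=[
        RiskItem("prompt injection exposes internal data", "deployment", 3, 4),
        RiskItem("training set includes customer PII", "data_collection", 4, 5),
    ],
)
print(assessment.max_score())  # 20 -> escalate per your organization's risk baseline
```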

Policy & Governance

Since AI will be used across different business units, ownership of the organization’s policies on AI security should rest with the highest level of leadership, not the CISO. Policies should draw from “responsible AI” principles set down in standards such as the NIST AI RMF, and be drafted by general counsel with input from the CISO organization.

* Prepare education and awareness materials to establish a baseline understanding of risks and get policy buy-in across the organization. Take inventory of known AI security risks and tailor your education materials to your audiences’ level of understanding. Provide accessible awareness and risk management sessions.

AI Bill of Materials

Increasing visibility into where AI resides within the organization is critical to implementing appropriate controls.

* Form a steering committee of cross-functional stakeholders, ideally sponsored by the board, to oversee the documentation, review and approval of AI use cases.

* Explore technical means to detect AI use in the organization. This could include blocking AI applications to see if employees are trying to use them, or automated discovery that surfaces unsanctioned use and compels business units to submit use cases for formal review and approval (see the proxy-log sketch after this list).

* Incorporate questions on vendor use of AI in third-party risk assessments, including AI-based products and services from fourth parties.

* Share with employees the potential risk exposures that introducing AI creates for the organization.

* Update relevant policies with input from multiple stakeholder groups.

* Update third-party due diligence to stay on top of how third parties are implementing security safeguards.

* Consider leveraging federated models and firewalls, and explore privacy-preserving model learning to avoid the potential exposure of sensitive information (a minimal federated averaging sketch also follows below).
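
On automated discovery, here is a minimal sketch of one common approach: flagging outbound requests to known AI service domains in a web proxy log. The domain list and the log format (a CSV with user and host columns) are illustrative assumptions on my part, not details from the article.

```python
import csv
from collections import Counter

# Illustrative detection: flag outbound requests to known AI service
# domains in a web proxy log. The domain list and the CSV log format
# are assumptions for this sketch, not prescriptions from the article.
AI_SERVICE_DOMAINS = {
    "api.openai.com",
    "chat.openai.com",
    "generativelanguage.googleapis.com",
    "api.anthropic.com",
}

def detect_ai_use(proxy_log_path: str) -> Counter:
    """Count requests per (user, domain) pair that hit an AI service."""
    hits: Counter = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match the domain itself or any of its subdomains.
            if any(host == d or host.endswith("." + d) for d in AI_SERVICE_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits

for (user, host), count in detect_ai_use("proxy.csv").most_common():
    print(f"{user} -> {host}: {count} requests; request a use-case review")
```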
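
And to illustrate the federated idea, here is a minimal federated averaging (FedAvg) sketch in which each participant trains locally and shares only model weights, never raw records. The linear model and synthetic data are assumptions for demonstration; only the sample-weighted averaging step reflects the standard FedAvg technique.

```python
import numpy as np

# Each "business unit" trains locally; only weights are aggregated,
# weighted by sample count (the standard FedAvg rule). The linear
# least-squares model and synthetic data are illustrative assumptions.
def local_update(weights, X, y, lr=0.01, epochs=5):
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

def federated_average(global_w, clients):
    total = sum(len(y) for _, y in clients)
    # Sensitive data stays with each client; only weights leave.
    return sum(local_update(global_w, X, y) * (len(y) / total) for X, y in clients)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80):                          # two participants
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=n)))

w = np.zeros(2)
for _ in range(200):                        # federated rounds
    w = federated_average(w, clients)
print(w)                                    # approaches true_w without pooling data
```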

Trust & Ethics

It’s critical to codify where responsibility lies. The security team is not responsible for deciding what data is put into the model, but it can be designated as responsible for advising the business on potential threats to the confidentiality and integrity of data.

The CIA triad of confidentiality, integrity and availability should be applied to AI models in order to support ethical and trustworthy AI. Apply data governance and controls continuously, and limit access to training data sets; the sketch below illustrates two such controls.
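
As a hypothetical example of applying the triad to training data, here is a minimal sketch of two controls: an integrity check that verifies file hashes against a signed-off manifest, and a confidentiality check against an approved-user allowlist. The manifest format, file names and allowlist are assumptions of mine, not the article’s.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical controls on a training data set: integrity (verify file
# hashes against a signed-off manifest) and confidentiality (check the
# requesting user against an allowlist). All names here are assumed.
APPROVED_USERS = {"ml-pipeline", "data-steward"}

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_training_set(manifest_path: str, user: str) -> bool:
    if user not in APPROVED_USERS:                      # confidentiality
        raise PermissionError(f"{user} is not approved for training data")
    manifest = json.loads(Path(manifest_path).read_text())
    for name, expected in manifest["files"].items():    # integrity
        if sha256(Path(name)) != expected:
            raise ValueError(f"tampered or corrupted file: {name}")
    return True

# Usage (assuming a manifest produced at data sign-off):
# verify_training_set("train_manifest.json", user="ml-pipeline")
```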

AI will drive a business revolution. Security and assurance considerations can’t be afterthoughts.

Read the Practitioners’ Guide to Managing AI Security here. It was built in collaboration among GRF, KPMG and security practitioners from more than 20 leading companies, think tanks, academic institutions, and industry organizations.