Shadow IT presents a grab bag of risks, and artificial intelligence is only making it riskier.

Shadow IT refers to employees adopting tools that their IT department hasn’t sanctioned and doesn’t know they’re using. Because the company isn’t monitoring or managing these tools, if they’re compromised, the IT department won’t even know, because it has no controls around them.

AI has compounded that risk because it is turning up in applications where companies didn’t even expect it.

A prime example is LinkedIn, which now has an AI feature that lets people write articles and create copy. But users have to write prompts to get the AI to do its thing, and they can easily embed confidential information in those prompts, which are being stored who knows where. They could be handing information to third parties without the company ever knowing.

I worry that in the future we’re going to have a massive breach of ChatGPT data because it was stored somewhere in the cloud and left unsecured through misconfiguration, which happens all the time.

The potential for a significant security breach is huge because there are no controls around these features. Users will be blissfully unaware, because as far as they’re concerned, the product is doing exactly what they expected it to.

Security practitioners have been dealing with the issues around shadow IT for a while, and it’s one of the main reasons the average company doesn’t allow individual employees to be administrators of their own machines: that way, they can’t install unapproved software.

But now that control barely matters. Most applications are delivered via the cloud, and you can’t block the internet. So you have users with access to a wide swath of applications that help them do their work but create unmeasured IT risk for the company.

Imagine a Zoom call between a lawyer and a client, where the lawyer decides to use Zoom’s AI to capture the contents of what should be a privileged attorney-client conversation. The transcript gets stored in some unknown location. Then tomorrow Zoom suffers a security breach that exposes AI transcripts, along with data from people who used the translation feature.

You have all of this data that could get stolen. And if an IT organization isn’t aware that its users are using those features, there’s no way to monitor the risk.
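
You can claw back at least some visibility from data you already collect, though. Below is a minimal sketch that mines egress logs for traffic to known AI services; it assumes your web proxy or CASB can export logs as CSV with a `domain` column, and the file name, column name, and domain watchlist are all illustrative assumptions rather than any vendor’s actual format.

```python
import csv
from collections import Counter

# Illustrative watchlist of AI service domains -- extend to taste.
AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests to known AI services in a proxy log export.

    Assumes a CSV export with a 'domain' column; adjust the field
    name to match whatever your proxy or CASB actually emits.
    """
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row.get("domain", "").strip().lower()
            if domain in AI_DOMAINS:
                hits[domain] += 1
    return hits

if __name__ == "__main__":
    # "proxy_export.csv" is a placeholder path for your own log export.
    for domain, count in find_shadow_ai("proxy_export.csv").most_common():
        print(f"{domain}: {count} requests")
```

This won’t tell you what users pasted into those services, but it turns “no way to monitor” into a list of which services are in use and how heavily, which is where the conversation with users starts.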

Companies are embedding AI in their applications to beat the competition to the punch, often without regard for the security implications. For CISOs, that means another layer of blocking and tackling.

The typical controls you can put in place to prevent users from running unapproved apps on their machines won’t work anymore. You have to find other methods. These include:

* Initiate conversations and training with users
* Check whether the AI in an application can be turned off
* Articulate the risk to the executive team so you can consider other technologies
* Take the time to catalog the products in use (a minimal sketch of such a catalog follows this list)
* Have conversations with management about the risks of those products
* Understand the security controls that are in place, and be able to articulate them, so you have at least a basic understanding of how risky those applications are to your organization
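
That catalog doesn’t need to be fancier than a spreadsheet, but if you want it to be queryable, a minimal sketch in Python might look like the following. The fields, the sample entry, and its risk rating are illustrative assumptions, not vendor assessments.

```python
from dataclasses import dataclass

@dataclass
class ShadowAIRecord:
    """One entry in a catalog of AI-enabled tools found in the business."""
    product: str              # e.g., "Zoom AI Companion" (illustrative)
    owner: str                # business unit or sponsor that uses it
    data_handled: list[str]   # kinds of data users feed into it
    ai_can_be_disabled: bool  # does the vendor offer an admin off switch?
    vendor_controls: str      # what the vendor publishes (SOC 2, retention, etc.)
    risk_rating: str          # your own coarse rating: "low" | "medium" | "high"

# Hypothetical entry -- the details are placeholders, not an assessment.
catalog = [
    ShadowAIRecord(
        product="Zoom AI Companion",
        owner="Legal",
        data_handled=["meeting transcripts", "client conversations"],
        ai_can_be_disabled=True,
        vendor_controls="Admin feature toggle; retention policy unclear",
        risk_rating="high",
    ),
]

# Surface the entries leadership needs to hear about first.
RISK_ORDER = {"high": 0, "medium": 1, "low": 2}
for record in sorted(catalog, key=lambda r: RISK_ORDER[r.risk_rating]):
    print(f"{record.product}: {record.risk_rating} risk, owned by {record.owner}")
```

Even a rough record like this gives you something concrete to bring to the executive conversations the list above calls for.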

The biggest part of all this actually falls on your CIO and IT department. When users go find another piece of software to do their work, it’s because the product suite they’ve been offered doesn’t work for them. To be clear, shadow IT is a result of tools deployed to the user base that either don’t fit their workstyle or lack the features and functionality they need to be productive. This is one area where CIOs and CISOs can work together to understand the needs of the user base and deploy solutions (or create pathways for secured alternatives) that meet users where they are. This is especially true given today’s remote work reality.

For example, at a former employer I often used Canva, a very popular marketing application, instead of the tools the company offered, because those tools weren’t as good. I wanted something that improved my personal productivity instead of having to work through tools that didn’t meet my needs.

So if users don’t want to use the tools the company provides (the tools that are secured and get the attention of the security organization), the company needs to change course.

I believe that we are just a quarter or two away from a major security breach involving data captured for AI translation. I’m sure it’s going to occur, and a lot of people are going to be shocked that their information was leaked in a way they didn’t foresee.

CISOs and CIOs, therefore, have to work together closely to ensure that applications and tools meet users’ needs, reducing shadow IT as much as possible, and to introduce AI to the organization in a secure and consumable way.