There are lots of examples of how the quick adoption of a technology can be quite dangerous when you haven’t dotted every “i” and crossed every “t.”

Think about the early days of password managers, when they were deployed on individual computers as point solutions with no centralization. The first time a user forgot the master password, the database with all of their stored passwords was effectively lost.

The same risk of quickly adopting a technology before its implications are fully understood holds true with regard to ChatGPT.

ChatGPT opens up a lot of opportunities. It can help with research, write papers, build documents, and even create works of art. It can also transform the way customers interact with the companies they buy from, making those interactions more pointed and precise while adding a more personal touch. But as we’ve already seen, it can also be twisted toward malicious ends by adversaries who have far greater resources than we do.

One of the realities of these programs is that while they can give you answers, they’re also mining your machine, looking for terms, and trying to determine whether PII is present so it can be sent somewhere. There are many opportunities for platforms like this to become silent malware that doesn’t outwardly impact the user but instead works quietly in the background, transferring or leaking information.

Another area security departments have to focus on is the potential for people to submit prompts that contain PII, NPI, or company or trade secrets. The platform might also be used to produce creative work that contains factual discrepancies, depending on the data sources the software pulls from.
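One practical control is a pre-submission filter that flags prompts containing obviously sensitive patterns before they ever reach the platform. The Python below is only a minimal sketch: the patterns, the hypothetical internal codename, and the function name are placeholders, and a real deployment would rely on a proper DLP engine rather than a handful of regular expressions.

```python
import re

# Illustrative patterns only; a real deployment would use a DLP engine tuned
# to the organization's own definitions of PII, NPI, and trade secrets.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Hypothetical internal project codename, standing in for trade secrets.
    "internal_project": re.compile(r"\bproject[- ]phoenix\b", re.IGNORECASE),
}

def flag_sensitive_terms(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize the contract for jane.doe@example.com on Project Phoenix."
hits = flag_sensitive_terms(prompt)
if hits:
    print(f"Prompt blocked; possible sensitive data: {', '.join(hits)}")
else:
    print("Prompt allowed.")
```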

This technology is more like an avalanche than something controllable. So as companies start looking at how to leverage ChatGPT and its variants, it’s important for security practitioners to take the time to study the platform and have clear answers about how it should be consumed.

Much of the technology around these platforms is open source, and open source means numerous branches. Which software, and which branches, should be allowed? You need to investigate where a branch came from to ensure that what you’re pulling from a public source is in fact valid, and that the checksum matches.
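A minimal sketch of that checksum step, assuming the upstream project publishes a SHA-256 digest alongside the artifact; the file name and expected digest here are placeholders.

```python
import hashlib
import sys

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values: substitute the artifact you downloaded and the checksum
# published by the upstream project you trust.
artifact = "model-weights.tar.gz"
published_checksum = "<checksum published by the upstream project>"

if sha256_of(artifact) != published_checksum:
    sys.exit(f"Checksum mismatch for {artifact}; do not install.")
print(f"Checksum verified for {artifact}.")
```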

How are you going to prevent people from installing open source tools on their machines that could be harmful to the environment? How are you going to account for the work people do with AI if something draws the ire of a regulator or a consumer watchdog group? You won’t be able to tell them your source for creating a document was AI, because that will never fly.
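One narrow slice of that prevention problem is simply knowing what is already installed. Below is a minimal sketch that audits locally installed Python packages against a hypothetical allowlist; the approved package names are placeholders, and a real program would pull its policy from a centrally managed source and cover far more than Python packages.

```python
from importlib import metadata

# Hypothetical allowlist; in practice this would come from a centrally managed
# policy maintained by the security team.
APPROVED_PACKAGES = {"requests", "numpy", "pandas"}

def unapproved_packages() -> list[str]:
    """List installed distributions that are not on the approved list."""
    installed = {dist.metadata["Name"].lower() for dist in metadata.distributions()}
    approved = {name.lower() for name in APPROVED_PACKAGES}
    return sorted(installed - approved)

for name in unapproved_packages():
    print(f"Unapproved package installed: {name}")
```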

You’re going to want to sandbox the application to understand how it works, get a handle on risks, and be able to explain to people how to use it.
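A minimal sketch of that kind of sandboxing, assuming Docker is available: the tool under evaluation runs with no network access, a read-only filesystem, capped memory, and all Linux capabilities dropped, so its behavior can be observed without putting the rest of the environment at risk. The image name is a placeholder.

```python
import subprocess

# Hypothetical image name; replace with whatever tool you are evaluating.
image = "internal-registry.example.com/ai-tool-under-review:latest"

# Launch the tool in a locked-down container: no network, read-only
# filesystem, capped memory, and all capabilities dropped.
subprocess.run(
    [
        "docker", "run", "--rm",
        "--network", "none",
        "--read-only",
        "--memory", "512m",
        "--cap-drop", "ALL",
        image,
    ],
    check=True,
)
```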

Security practitioners and CIOs have to be on top of how this technology gets implemented, put up guardrails, and resolve issues in the platform that impact the business. Business continuity is huge here. ChatGPT is not self-contained; it relies on connectivity to an external, cloud-hosted service. If there is an internet outage, or the service itself goes down, the platform’s functionality is affected, and that’s something you have to account for. You have to make sure the business is able to continue to run and thrive.
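A minimal sketch of that kind of graceful degradation, assuming the business calls a hosted AI service over HTTP; the endpoint, payload shape, and fallback message are all placeholders, not any particular vendor’s API.

```python
import time
import requests

# Hypothetical endpoint; substitute whatever hosted AI service the business uses.
AI_ENDPOINT = "https://api.example.com/v1/generate"

def generate_with_fallback(prompt: str, retries: int = 3, timeout: float = 5.0) -> str:
    """Call the external service, but degrade gracefully if it is unreachable."""
    for attempt in range(retries):
        try:
            response = requests.post(AI_ENDPOINT, json={"prompt": prompt}, timeout=timeout)
            response.raise_for_status()
            return response.json()["text"]
        except requests.RequestException:
            time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    # Fallback path: the business process continues without the AI assist.
    return "AI service unavailable; please draft this response manually."
```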

The platform has a tremendous capability to transform the way businesses do business. But new doesn’t always equate to great, and new tends to equate to risky. When companies are thinking about leveraging platforms like this, they have to take into account the relative risk of employing those technologies and the risk of misconfiguring them, which enables abuse. Understanding the risks and communicating them clearly across your company is essential.