Generative AI is a powerful technology created for constructive purposes, but, as with everything else in life, some have found nefarious ways to use it. Bad actors are using these tools to craft attacks and circumvent security controls. AI-generated malware has already been demonstrated that evades traditional EDR capabilities.
Unfortunately, this technology can produce things that seem legitimate or real, such as deepfakes and scripts that impersonate people. Most security executives are concerned that it will skirt the protections we have in place. This is a reality we must reckon with.
For this reason, it is imperative for cybersecurity industry leaders to address this issue now and weigh its ramifications. The evidence strongly suggests that some of this will cause trouble, much as social media has become a platform for hate speech. Eventually, people will not only leverage tools available today, such as OpenAI's ChatGPT and Salesforce's Einstein, but some will use the underlying algorithms to build models of their own, and protecting against that will be close to impossible.
As AI continues to spread, virtually anyone will have access to this technology, so any country or bad actor could cause havoc. We need to sit down, look at how we contain this, and hold the creators of this kind of AI accountable for its negative consequences. While people have raised alarms, I do not think we, as cybersecurity industry leaders, are acting with the appropriate urgency.
There seems to be a consensus that the problem is moving faster than we can react. How do we get ahead of it and stop making it worse, or at least get a good sense of what is coming at us?
We need to form a working group with representatives from cybersecurity and from every discipline that will be affected, including business, finance, the social sciences, and psychology, to examine the implications across the board. It is crucial not to silo ourselves, because this technology cuts across all of these areas.
The next step will require broad buy-in, including from governments and companies, on the rules, among other things, on how information is protected. Organizations will have to give assurances that 1) they are making the right decisions to ensure information remains accessible to the people who need it, and 2) information is being protected from leaks, exfiltration, and hacking. We must avoid finding ourselves in a position where we have created a monster that has spun out of control.
As with much older technology, export controls and the like are still circumvented because financial motives drive decisions, and uneven application of laws and regulations around the world allows controls to be evaded entirely. This is where we must ensure that the chain of control around generative AI is managed very tightly.
We must collaborate with academia, because the genesis of many of these technologies is in academic research. The original motives may be altruistic, but others can exploit the university environment's natural spirit of openly sharing information and turn innovative technologies to the wrong ends.
Academic freedom, creativity, and the flow of information must be preserved. At the same time, we must work out how to keep information from reaching destinations where it will be used malevolently.
For businesses, quantifying the unknown potential impact of generative AI is a tall order. Even so, businesses need to start making good, educated guesses and exploring scenarios for detecting and preventing these attacks, especially since the threat can come from inside the organization and be detrimental to its operations.
Generative AI makes the whole concept of Zero Trust more urgent for all of us. We must approach this thoroughly and accelerate the adoption of Zero Trust principles everywhere. We must mitigate what we do not know and employ highly effective controls to protect our information.