SCMktg

Generative AI: Friend or Foe to Cybersecurity?

Updated: Dec 3


The image of a couple of computer whiz kids hiding away in the basement while they hack into the Department of Defense mainframe makes for an interesting thriller movie plot. However, this is simply not your cyber threat reality.


Today’s cyber terrorists are well organized, very well capitalized, and tirelessly devising new and highly disruptive ways to sneak into your network. It’s not enough to keep running the same enterprise solutions and hope they work forever. This is war! You need to be continuously and proactively hunting for new threats and emerging enemy technologies, as well as ways to combat them.


Enter generative AI. Artificial intelligence; machine learning. Using AI for productivity is so new that we may not fully grasp its utility...or its threat. Some of us remember this fateful sequence of events made famous in the classic sci-fi Terminator films:


A Skynet funding bill is passed in the United States Congress, and the system goes online on August 4, 1997, removing human decisions from strategic defense. Skynet begins to learn rapidly and eventually becomes self-aware at 2:14 a.m., EDT, on August 29, 1997.


Fortunately, the reality of this potential outcome can be cleared up by the AI itself:


No, I am not part of Skynet. Skynet is a fictional artificial intelligence system from the Terminator movie franchise, while I am a real-life language model created by OpenAI to assist with a variety of tasks such as answering questions and generating text. I am not sentient, and I do not have the ability to control or interact with physical systems like Skynet does in the movies.


I think we are all relieved to hear no ill intentions coming directly from ChatGPT!

AI and its implications for information security


In its Security Roundtable newsletter, Palo Alto Networks warns:

Cyber threats are increasingly automated using advanced technology. Unfortunately, defense has continued to employ a strategy based mostly on human decision-making and manual responses taken after threat activities have occurred.


This reactive strategy can’t keep pace against highly automated threats that operate at speed and scale. The defense has been losing—and will continue to lose—until we in the cybersecurity community fight machines with machines, software with software.


Dan Peterson of ZDNet agrees and states:

Generative AI has helped bad actors innovate and develop new attack strategies, enabling them to stay one step ahead of cybersecurity defenses. AI helps cybercriminals automate attacks, scan attack surfaces, and generate content that resonates with various geographic regions and demographics, allowing them to target a broader range of potential victims across different countries.


Cybercriminals adopted the technology to create convincing phishing emails. AI-generated text helps attackers produce highly personalized emails and text messages more likely to deceive targets.

How to think about AI to minimize these risks


We asked Matt DeChant, CEO of Security Counsel, for his views on the risks and rewards of using artificial intelligence as a productivity tool in our daily lives.


Hello Matt, what does Artificial Intelligence mean to cybersecurity?

Artificial intelligence (AI) is a great way to meet one of the primary needs of a security program: intelligence gathering. All modern security programs run on the intelligence they gather. Another way to look at it is that you cannot defend against an attacker you are not aware of. The longer a bad actor can dwell “in the system” looking for things to exploit or steal, the more damage they can do. If you don’t have intelligence around your people, processes, and technology, a bad actor’s typical dwell time might be 6+ months before they are discovered.


So, does AI give you better tools to seek out these threats or is it an efficiency play?

It may be neither. At this early stage, there are not many AI-driven security products or services available. AI is an open, recursive system. Privacy is at the core of security, and AI systems are, by definition, not private: they learn new things by gathering their own information.


The artificial intelligence systems we have today (ChatGPT, Midjourney, etc.) are working on their own intelligence, not yours. Their primary goal is to become more informed. The business hope is that, as the system improves, you can gain insights from your own information that you feed it as inputs.


In many ways, the AI tools available today are just a build on what has been around for many, many years. Just like search engines (e.g., Google, Bing), they are based on the idea that you give up a bit of your privacy to get access to new information. These services are generally free. But, of course, nothing in this world is free. The information you feed into these tools becomes part of their zeitgeist, and this is where the danger is. When you feed in sensitive information that you don’t want made public, it becomes available to future queries on the same system. Your information then becomes part of the answer to other people’s queries.


It sounds like companies using AI tools need to be cautious about the public nature of AI.

Absolutely, and that includes any information you give it. It all becomes part of the constantly developing system, which is designed to retain that information and use it to refine its dataset and become more “intelligent.”


Also remember, AI is a tool that makes information more accessible. It’s not an oracle that has the answer. You must interpret, massage, double-check, and tabulate the data in an appropriate way to find the results that are meaningful to your organization. It’s a good idea to treat it like something you can’t fully trust. Another way to look at it is that AI is your “co-pilot”: you do the driving, and AI works the map and the radio.


So, you can't turn it loose and let it solve your problems?

Exactly! We must remember that artificial intelligence is not actually intelligent. Skynet is not going to become self-aware. It is simply an immense amount of information in one place that can very efficiently give you what appear to be very nuanced answers. The problem we have, then, is twofold:


1. We confuse intelligence with the perceived completeness of the available information.

2. To create real value, AI must deliver something both novel and exploitable.

Otherwise, it’s simply a race to mediocrity. If five businesses ask the same question and use the answers in the same way, they are all on a journey to becoming commodity businesses. Businesses want and need that secret sauce to differentiate themselves in the marketplace and become more successful.

What does AI mean to bad actors?

I would say that most of the actions and attacks we see from bad actors exploit the way humans are built. We are sort of tribal; we trust certain groups and don’t trust others. Confidence tricksters who separate people and organizations from their money or resources have been operating for thousands of years, and this is just a new way, a new tool, to do that. Put simply, it’s a more efficient way to employ these old tricks, and it lets bad actors identify their marks more efficiently.


With the automation of AI-enabled threat tools, the frequency of potential breaches could increase significantly. What can companies do to combat this new, more aggressive threat?

Just like the bad actors who use AI to automate attacks, organizations should take a hard look at defensive tools that use AI to match that speed and efficiency. New tools that use generative AI as the backbone of the solution are being launched every day. Research them, evaluate them, then implement them, and do it quickly!


Beyond that, any security program worth its salt will constantly re-evaluate its own effectiveness, including identifying AI-generated vulnerabilities in addition to traditional ones. This work needs to drive revisions and updates to the overall security program.

Employee training must also be broadened to include awareness of AI-generated threats like deepfakes. We need to keep reinforcing standard security practices like multifactor authentication and email filtering, and keep training people to recognize AI-generated phishing attacks. Diligence is the key to every successful security program!


Moving Forward

There is no debating that AI is here to stay. It is an extremely powerful tool that all companies will use in one way or another in the coming years, and its efficiency value is incredible. But AI is not intelligent. Its outputs should neither be trusted at face value nor applied as blanket solutions. AI is not good or bad. It is simply a tool.


Generative AI needs to be considered in every effective security program, including technical tools, training, and automated threat response. We, the humans, will be watching and adapting as generative AI continues to develop on both sides of the information security equation.

