Headlines about ChatGPT and the updated GPT-4 are everywhere. Even with new updates, these models still hallucinate, and unfortunately, so do people writing articles about this technology. There is quite a bit of circular reporting on this topic especially related to software development and security. I’ve seen outrageous claims about these tools that are more unbelievable than an episode of CSI Cyber. Rather than talk about why we are good at fooling ourselves when making predictions, let’s move past the hype and focus on reality.

 

Reality Check

Let’s have a bit of a reality check about LLMs and their capabilities in general. As a security professional, I’m tech agnostic. I don’t love or hate any technology. I feel this is my job to help people innovate at scale using the technology they prefer. Overall, I’m optimistic about the use of machine learning and deep learning as approaches to solve security challenges and help security professionals accomplish their goals. We are seeing some of this today. Given the scale of the problems and the amount of data, it’s just not possible for a human to dig through all of this data. We need better tools to automate activities and increase our effectiveness. So, I’m on board.

Unfortunately, when it comes to transformer-based LLMs, many have been spouting opinions of a world-changing future, with these tools being touted as impactful on humanity as the printing press. The reality is much more mundane. We continue to see article after article filled with opinions from people who’ve never used these tools, all parroting talking points from something they’ve previously read. When you dig beneath the surface and look at the examples given, they often include research and lab experiments that don’t reflect real-world scenarios. This parroting creates a strange false consensus. Max Fischer, in his book The Chaos Machine, said, “When people think something has become a matter of consensus, psychologists have found, they tend not only to go along, but to internalize that sentiment as their own.” Nobody gets in trouble or loses repetitional points for saying the same thing as everyone else, and so it continues.

When you dig beneath the surface and look at the examples given, they often include research and lab experiments that don’t reflect real-world scenarios.

Time for a step back. Pundits making these claims are purely guessing because these systems aren’t capable of doing what they claim today. They are filling in the gaps about very massive and real problems which haven’t been solved today with transformer-based LLMs. Arguably, there are some of these problems which may not be solvable at all, and this is supported by AI experts. Hence the guessing. If you believe everything is just a matter of scale, then you are probably on the overly optimistic side because why wouldn’t we be able to just make things bigger? For example, would a trillion-parameter model solve all of the current problems? I believe these problems run far deeper than scale. That doesn’t mean these systems can’t be useful, but it depends on the problem and the situation.

Memorization is impressive when a human does it, but not so much when a computer does it. We know that machine learning and deep learning systems have a tendency to memorize their training data. The impact of memorization is a problem with generalizing to new problems or situations. There’s some evidence that GPT-4 also memorizes its training data. This memorization means that if you rephrase a question on one of those impressive tests, GPT-4 will get the answer wrong. Tell this to all of the folks that think LLMs are actually performing reasoning. Do you really want ChatGPT to be your lawyer now?

Besides, the reality of being a security professional isn’t sitting around answering CISSP questions all day. Our job is to apply knowledge to specific situations where it makes sense. But one thing is for sure, humans continue to be bad at constructing tests, and we continue to be fooled by Stochastic Parrots.

Before we start digging into the claims of supercharged attackers or criminal usage, keep in mind It’s not that ChatGPT is incapable of some of these tasks. It’s that it’s not particularly good at some tasks and far from the best tool for the job for others. Now, let’s look at some of the security-focused claims and issues.

It’s not particularly good at some tasks and far from the best tool for the job for others.

Supercharged Attacker Claims

Countless articles have been published about ChatGPT supercharging attackers, making them more efficient and allowing them to operate above their skills and capabilities. The reality just doesn’t line up. Anyone who’s used these systems knows they make mistakes, and confidently. You often need to rephrase the prompt and make adjustments, piecing together a more complex piece of software. You need to have the skills and experience to understand when the system isn’t giving you the correct output. So, in summary, as of today, this isn’t allowing attackers to operate above their skill level. The NCSC has a similar assessment. Besides, if these tools supercharged attackers, well, they’d kind of be supercharged now, and we just aren’t seeing any evidence of that.

Writing Malware

Yes, there have been people who’ve written malware and bypassed EDR with ChatGPT. This sounds impressive and potentially scary on the surface, but digging into the details, it becomes less so. You typically see the code output is in something like Python, a language not typically used for malware. There was also a bunch of trial and error to get the system to properly generate the output. So, these experiments were done for research purposes, specifically to use ChatGPT for the novelty factor. Certainly interesting and great research, but not the most real-world examples of an attacker usage in the wild. Marcus Hutchins has a good breakdown of this.

Finding Vulnerabilities

ChatGPT can read and understand code, and it’s possible to feed a code snippet into the tool and have it find a vulnerability, or maybe not. This functionality is incredibly hit or miss. It seems possible to find some common coding flaws in common languages some of the time and miss things other times. This certainly isn’t as effective or accurate as a dedicated security tool, so it begs the question of why people talk so much about this fact in the first place.

Enhanced Phishing Claims

Another example I see is that attackers can use tools like ChatGPT to conduct better phishing attacks. In reality, this doesn’t seem to pan out. The whole point of creating a convincing phishing email is the “convincing” part. You want to take an official email communication with all of its formatting and make it look as indistinguishable from an official communication as possible. This isn’t something that ChatGPT does.

There are also guardrails in place that prevent you from creating these emails and creating them at scale. Sure, with some work, the guardrails can be bypassed, but all of this means that ChatGPT is a less-than-ideal tool for phishing tasks. With a copy of an official email and a translation tool, you can get much further. Beyond intuition, there’s also some data to support that humans still write better phishing emails.

 

Real Risks

Just because a system like ChatGPT doesn’t supercharge less experienced attackers doesn’t mean that there aren’t real risks from these systems, just maybe not the ones you are thinking about.

The Real Risk to Security: Application Integration

Application integration of LLMs is the largest risk facing security. Today, tools like ChatGPT don’t really “do” much. They can’t reach out into the world and make things happen like scheduling an appointment, ordering groceries, or changing the channel on your television, but that’s about to change. OpenAI announced ChatGPT plugins. This opens up many more possibilities and I can’t wait to see all of the new attacks that result from this increased attack surface.

With the integration of these tools into your applications and search engines, you open up a whole new world of attack surface, turning a previously robust application into one with a wide-open door.

With LLMs, the surface of all possible issues isn’t known at deployment time. It’s not obvious all of the ways the system can fail, failure, in this case, can now affect your application. These unknowns can manifest into security issues in strange ways that you may not be able to fix because these calls are made over an API to a 3rd party provider. I’ve described this as having a single interface with an unlimited number of undocumented protocols, all waiting to be exploited. Wait until someone tricks a previously robust application into ordering a pizza on a celebrities credit card.

A single interface with an unlimited number of undocumented protocols, all waiting to be exploited.

We are already getting a glimpse at this future. Where people are planting prompts in the wild, waiting for these tools to encounter them. A kind of reverse prompt injection. You never know where these prompts may end up. Wait until attackers inject prompts into log files hoping that security tools parsing the data will run into it.

Be extremely cautious when considering the integration of these tools into your applications. Ask yourself what features you are getting and if they are worth the risk for the application. Err on the side of caution and try to use API calls with more restricted functionality, especially if you are just using text summarization or some other specific feature and don’t need the full chat experience.

Privacy and the Unavoidable Chatification of Everything

One thing that we can all agree on is that current LLM technology is terrible for privacy. The data you provide to these systems is verbosely logged, sent to a 3rd party, and potentially viewed by humans as part of the additional training process. There’s nothing stopping your input from being used for further training, being evaluated to improve the system, or ending up on Facebook. Your data is handled with fewer security and privacy protections than it normally would under these conditions. To be used in this context means it needs to be unencrypted, have less access control, and potentially have more copies because people, teams, and systems need access.

Your data is handled with fewer security and privacy protections than it normally would under these conditions.

Today, these features are an opt-in inside of products, but in the inevitable chatification of everything, they may be on by default, making it much harder to control from a security perspective. Think about just how much sensitive data about an organization could be gleaned from having access to data from everyone’s Microsoft Word documents. You are giving a 3rd party visibility into your organization’s workings and trusting them completely with it.

Misinformation

Misinformation risks fall into a strange area. I look at these from two different perspectives, one is the industrialization of misinformation by malicious actors, and the other one is small-scale one-off misinformation, either accidentally by the model itself or on purpose by an individual.

First off, I don’t see the OpenAI product ChatGPT industrializing misinformation at scale for bad actors. The guardrails in place as well as the As-a-Service hosting method, make it unlikely and more of an annoyance for malicious use in this context. But there are other LLMs without such guardrails in place that could be self-hosted and potentially used for this purpose. For example, Meta’s LLaMA model was leaked online and could be repurposed for this activity.

On the second point, we are entering an era where there are no guarantees that these models won’t be self-referential. Meaning there’s no meaningful way to tell if a new model trained on text content isn’t learning based on its previous output or another LLM’s output. This is concerning, especially for use cases where people are trying to use these systems as truth machines. Even small pieces of misinformation can become impactful when re-learned back into a model.

We are entering an era where there are no guarantees that these models won’t be self-referential.

Steve Bannon had a famous quote that is applicable here. He said the goal was to “Flood the zone with…” Well, you know. There is a real risk that LLMs without guardrails could very well be the perfect tool for this task. Time will tell.

 

Realities

Okay, so ChatGPT doesn’t pose a weapons-grade update to attackers, and it won’t supercharge inexperienced attackers to operate above their skill level, but that doesn’t mean that it’s useless in the context of information security. There are many different tasks and efficiency gains for security and development teams.

I believe we are at the beginning of what people will use these tools for. For the most part, ChatGPT and other LLMs are useful for things like text summarization tasks as well as text generation tasks. If you think about it, it makes sense, they are trained with examples of written language. This is why they can imitate writing style so well.

We’ve also seen that these tools have the ability to deal with programming languages in both writing and understanding code. People have used them to help understand obfuscated code and create small programs quickly. Advancements in this area are happening fast.

We often have to deal with large amounts of data, so tasks like parsing texts, summarizing content, writing documentation, and many other tasks are ones that you may find will assist you and your team be much more productive. It may surprise you to learn that we didn’t need ChatGPT for these. These were tasks that LLMs were pretty good at long before ChatGPT.

Just like on the attacker side, your staff needs to have the experience to know when the tool isn’t outputting the right data. Keep in mind that small-scale experiments and successes rarely have applicability in the real world. So, do your own experiments applied to your own use cases and evaluate the results accordingly.

If you have development teams using these tools, it’s important to ensure you have security built into your development processes to catch the potential issues from these tools. There’s no guarantee that the output is safe. For more information, download the Kudelski Security Research white paper Addressing Risks from AI Coding Assistants here: https://resources.kudelskisecurity.com/en/kudelski-security-ai-coding-assistants

 

Conclusion

Overall, I think we should prepare to be surprised, for better or worse. Even though these tools don’t seem to supercharge attackers, we should be mindful of the opportunities and pitfalls they present for defenders. They can open our applications and organizations up to additional attacks and privacy issues, but they can also lend themselves to being a productivity boost and help us streamline activities. With the right balance, we can have the best of both worlds. We just need to ignore the hype and focus on the realities.

Was this article helpful?