Gen AI Security – Why it’s so important to protect more than just the technology

I want you to think about this statement for a minute:

     We take basic security for granted in customer experience.

When you read "customer experience" just now, did you first think of the people you interact with when you walk into a store? The person on the other end of a phone call? Or did you primarily think of the website or app interface on your mobile device? The majority of customer interaction still happens with another person, with technology augmenting how we research, how we purchase, or how we find those people who can help us. So what does this have to do with security or generative AI?

Now I want you to imagine walking into a store. You walk up to a very pleasant sales associate and ask what colors the shirt in the window comes in. As they start to answer, you ask about the different sizes of the pants on the rack in the back. Then you immediately ask about the cost of the jackets on the wall, the next nearest store, etc… After the second or third interruption, the sales associate is likely to stop being very pleasant, and you’ll be asked to leave the store.

There’s an analogy here in cyber security: rate limiting on your website. If a bot starts spamming your website, there is a plethora of techniques, with varying degrees of success, to detect and stop it. That’s exactly what the sales associate did: they gave out a few pieces of information early in the conversation, then stopped once it was clear this was not an ordinary interaction. But what about your generative AI-powered chat interface?
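To make the web-side version concrete, here is a minimal sketch of one common technique, a token-bucket limiter keyed by client. The class name, capacity, and refill rate below are my own illustrative choices, not any particular product’s API.

    import time
    from collections import defaultdict

    class TokenBucket:
        """Allow bursts up to `capacity` requests, refilled at `rate` per second."""

        def __init__(self, capacity: float = 10, rate: float = 1.0):
            self.capacity = capacity
            self.rate = rate
            self.tokens = defaultdict(lambda: capacity)   # per-client token counts
            self.last_seen = defaultdict(time.monotonic)  # per-client last-seen time

        def allow(self, client_id: str) -> bool:
            now = time.monotonic()
            elapsed = now - self.last_seen[client_id]
            self.last_seen[client_id] = now
            # Refill for the time elapsed, capped at the bucket's capacity.
            self.tokens[client_id] = min(
                self.capacity, self.tokens[client_id] + elapsed * self.rate
            )
            if self.tokens[client_id] >= 1:
                self.tokens[client_id] -= 1
                return True
            return False  # out of tokens: throttle, challenge, or block this client

On ordinary HTTP traffic a CDN or API gateway typically does this for you; the interesting question is what happens to traffic that never passes through that path.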

     Current Generative AI guardrails lack “common sense”. Protecting against attacks like jailbreaks, prompt injection, and scope drift is necessary, but not sufficient.

Today, generative AI-powered customer experience applications are rapidly replacing or augmenting those first lines of customer interaction. This is a good thing: it frees people to focus on the more difficult tasks and the things that build better customer relationships. What we have yet to do, though, is codify the common sense that people bring to those interactions. The sense of what “normal” behavior looks like, built from years of talking to customers as they search for their favorite item in the store, doesn’t exist for most generative AI tools yet. And while you might lean on heuristics to flag what’s “abnormal” relative to the training data, that approach carries risks and biases that have already been well documented.

It’s also not just the common sense people bring: we’re relearning common web security lessons all over again. Chances are your website has some form of bot detection, distributed denial of service (DDoS) protection, and logs that cover each interaction at a granular level and in aggregate. However, those protections are not built into many generative AI implementations natively, and some implementations actually bypass them. For example, chat bots implemented over WebSockets can bypass the rate limits placed on your website, especially if the address of the chat bot’s interface can be discovered.
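Closing that gap means counting each message on the socket, not each HTTP request. Below is a minimal sketch assuming a FastAPI-style WebSocket endpoint; the /chat path, the two-second threshold, and the generate_reply() helper are placeholders I’ve invented for illustration.

    import time

    from fastapi import FastAPI, WebSocket, WebSocketDisconnect

    app = FastAPI()
    MIN_INTERVAL = 2.0  # seconds allowed between messages; tune to real traffic


    def generate_reply(message: str) -> str:
        # Placeholder for the actual model call behind the chat bot.
        return f"(model response to: {message!r})"


    @app.websocket("/chat")
    async def chat(websocket: WebSocket):
        await websocket.accept()
        last_turn = 0.0
        try:
            while True:
                message = await websocket.receive_text()
                now = time.monotonic()
                # The connection is long-lived, so per-request gateway limits
                # never see these messages; count each one here instead.
                if now - last_turn < MIN_INTERVAL:
                    await websocket.close(code=1008)  # 1008 = policy violation
                    break
                last_turn = now
                await websocket.send_text(generate_reply(message))
        except WebSocketDisconnect:
            pass  # client went away; nothing further to clean up in this sketch

The same hook is also the natural place to emit the granular, per-interaction logs described above, so the chat channel shows up in your aggregate view too.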


But all is not lost. First, we have to acknowledge these shortcomings and identify ways to address them. That means adding intelligence and security around the business logic to ensure the effectiveness and safety of generative AI. I am partial to using the NIST Cybersecurity Framework, starting with the Govern and Detect functions both in pre-production and as part of security operations. This is one area we at Generative Security are focused on, with testing of common industry attacks against business logic and public information.

Second, we have to realize that generative AI use in customer experience bridges the gap between people and technology in ways we didn’t expect or understand at first. As such, we need to bring people to the forefront of security, integrating human “common sense” into the development and deployment of generative AI behavior. That can involve training AI models to recognize context and make judgments across multiple prompting sessions and beyond narrow windows of behavior. By incorporating human expertise, we can enhance the capabilities of generative AI and reduce the risk of unintended consequences.
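As one hedged sketch of what that session-level “common sense” could look like in code: the SessionBehaviorMonitor below and its thresholds are hypothetical, and the topic label would come from whatever intent classifier or product taxonomy your business logic already uses.

    import time
    from collections import defaultdict, deque


    class SessionBehaviorMonitor:
        """Flag sessions whose pace of topic-hopping no real shopper would match.

        The thresholds are illustrative, not empirically derived; a real
        deployment would tune them against logs of known-good conversations.
        """

        def __init__(self, max_topic_hops: int = 3, window_seconds: float = 60.0):
            self.max_topic_hops = max_topic_hops
            self.window = window_seconds
            self.turns = defaultdict(deque)  # session_id -> deque of (time, topic)

        def record(self, session_id: str, topic: str) -> bool:
            """Record one user turn; return True if the session now looks abnormal."""
            now = time.monotonic()
            turns = self.turns[session_id]
            turns.append((now, topic))
            while turns and now - turns[0][0] > self.window:
                turns.popleft()  # drop turns that fell out of the sliding window
            # Shirts, then pants, then jackets, then store locations inside a
            # minute is the chat equivalent of the interrupting customer above.
            topics = [t for _, t in turns]
            hops = sum(1 for a, b in zip(topics, topics[1:]) if a != b)
            return hops > self.max_topic_hops

What to do when it fires (hand off to a human, slow the bot down, end the session) is exactly the kind of business-logic judgment the Detect function is meant to surface.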

The leads for this effort, though, aren’t your traditional security team. It’s the Loss Prevention teams, the Fraud teams, and the (non-technical) Risk Management teams that need to lead this charge. Because just like you wouldn’t put a sales associate on the floor or a customer service representative on the phone without adequate training, you need to treat your generative AI bot with the same consideration.

About the author

Michael Wasielewski is the founder and lead of Generative Security. With 20+ years of experience in networking, security, cloud, and enterprise architecture, Michael brings a unique perspective to new technologies. Having worked on generative AI security for the past two years, Michael connects the dots between the organizational, the technical, and the business impacts of generative AI security. Michael looks forward to spending more time golfing, swimming in the ocean, and skydiving… someday.