Unlock Gen AI Value Faster by Making Security Your Ally

Let’s jump to the punchline – business leaders and AI developers are in the unique position to prove they have considered meaningful security in their generative AI powered applications before security teams have fully defined the technical controls. How? By strategically focusing security considerations on tangible risks to the application use cases and their outcomes. You can make security your ally by demonstrating you are protecting the risk to what matters most – your data. In doing so, you can actually accelerate your time to value with more confidence and security assurances.

Today, while standards and compliances are still being defined, it can be very difficult for security folks to both articulate the security threats inside generative AI applications and understand the controls and levers generative AI offers to deal with them. Especially with such a new field, and a non-deterministic one at that, security teams are playing catch up. As a result, it makes sense they might be seen as slowing things down. But AI and applications teams are on the front lines, working directly with the data that fuels these powerful new tools. Data scientists in particular are uniquely positioned to understand the sensitivity of this information and the potential impact if it were misused, even if they aren’t the threat specialists.

There’s a unique opportunity for application teams to demonstrate a commitment to security by focusing on the data risk, and the mechanisms they can put in to address the abuse of data shared by the generative AI tool. In doing so, they can earn the trust of the security teams that see security being taken seriously and real world risks being mitigated, even if all the of the technical controls aren’t in place yet.

Here’s where business and product owners can take the lead. They are the ones who own the data and system risk after all, right? Instead of waiting for security to define each control, for each model, for each use case, for each customer (and so on), you can shift the conversation. Focus not on technical controls, but on real-world risk specific to the data and use cases you are implementing in your generative AI powered application.

Ask the questions that matter to the business:

  • What kind of data is this AI system accessing?
  • Could it expose sensitive customer or proprietary information?
  • Can I demonstrate that the risk to the data is being mitigated or managed?

By framing the discussion around business risk, not just technical compliance, you give security something they can measure now, while they work on the technical controls. And because your teams know the data the best, you can bring early threat models against the data to partner with the security teams. You can also use these threat models to define the right guardrails inside your generative AI models beyond the “best practice” ones you may find from the model providers (like here, here, and here). This fosters trust, which can help streamline the approval process, getting your valuable applications into the hands of your customers and employees faster.

We know why this speed matters. Failure to move from development to production for generative applications can cost up to $5M in lost investment, and successful deployments can lead to 50% to 70% faster order time for retailers and 50% reduction in cost with automation.

So what should you do? In the absence of concrete and standardized security control guidance, the next best approach is two fold: 1. Think like an attacker and test against threats to your data through your application, and 2. Assume your protections will fail and have logging and automated responses ready for when it does. This requires putting the right tooling in place early on, and implementing them into your development pipeline as well as your operational rigor. In the deployment pipeline, you want to test for malicious prompts to ensure the guardrails you’ve implemented catch the known bad behaviors in the wild today. <sales> Another great example is how Generative Security can help you integrate a testing platform into your model deployment pipeline and test for threats to the data. Not just with the same jailbreaks and prompt injections every one else is doing, but with industry specific and use case specific Abuse Cases directly related to your chat bots. </sales> But assuming threats are evolving faster than we can keep up (a safe bet), you also want to monitor your active application and the prompts for signs of abuse. Large amounts of data being exfiltrated, query speed faster than reasonable, and prompts that trip sensors even if they aren’t sure why. Then build automation to both address the threat and feed a feedback mechanism into new guardrails.

The landscape of generative AI security is still in its infancy, and will continue to evolve for a while. When your AI and applications teams are empowered to take ownership of data risk and demonstrate real-world security measures, you’re not just appeasing concerns – you’re providing the concrete assurance your security teams, and the business overall, need to confidently move forward. This proactive approach allows you to capitalize on the transformative power of generative AI with speed and confidence, turning a potential obstacle into a strategic advantage.

About the author

Michael Wasielewski is the founder and lead of Generative Security. With 20+ years of experience in networking, security, cloud, and enterprise architecture Michael brings a unique perspective to new technologies. Working on generative AI security for the past 2 years, Michael connects the dots between the organizational, the technical, and the business impacts of generative AI security. Michael looks forward to spending more time golfing, swimming in the ocean, and skydiving… someday.