Top 3 lessons generative AI security can learn from the cloud

For anyone who’s been in the industry for a while, we know that IT is cyclical. The technology changes, but the patterns remain the same. During the early cloud days, I heard from former COBOL and Unix programmers about the progression from renting time on early mainframes to personal computers, and back to running on someone else’s computer again. And they weren’t wrong. But it’s not just the technology. As an industry we tend to have to relearn the early lessons from these technological evolutions. Noisy neighbors, shared resource contention, and security issues unique to shared services came back all over again, and we invented “new” ways to solve with them (again).

Today, generative AI is having its cloud moment. Enterprises are finding ways to both increase their bottom line while also saving money in the process. And just like with Cloud adoption, security is often left behind, forced to play catch up at best, and at worst fighting to slow down adoption because of a lack of understanding. But the good news is we don’t have to start from scratch. Let’s take a look at the Top 3 hard learned security lessons from cloud adoption that we can repurpose and “jump the S-curve” with generative AI security.

1. Shared Responsibility and DevSecOps – Security can’t do it alone (or dictate to others)

One of the early struggles with cloud adoption was a strong understanding of who was responsible for what, leading to security folks being the blocker to adoption more often than not. Physical security was pretty obvious, but then things got muddy. Patching, encryption, not having wide open data stores, and don’t think to mention compliance. Shared Responsibility (and later Shared Fate) helped to create a template of understanding that people could reference and make better decisions on. Fast forward a few years and we see another shift in security with DevSecOps. Often delivered from the cloud, the integration of security responsibility into the DevOps team, and more importantly the integration of security tooling into the software deployment lifecycle again shifted responsibility for security. Through the recent evolutions the role of security professionals has changed dramatically. Control definition is now down in concert with business executives focused on more than just Confidentiality, security approvals can be accelerated more with automation and less with a person reviewing a configuration, and threat remediation is done with the application owners right there with the security folks.

As generative AI continues to gain adoption, the role of security needs to again morph. We’ll do another blog to talk about the statistics, but anecdotally we can all point to people using generative AI at work when they’re not supposed to, or tools using generative AI that we didn’t know about until it was already in the environment. Let alone discussing the cool proof of concept that got fast tracked into production before security got a good look at the details. This all sounds pretty familiar. What we can do now, though, is embrace the successes of secure cloud adoption by taking a collaborative approach to generative AI. Security can’t do it alone, but with peers from Corporate Risk, with the building of multi-functional governance teams, and the integration of security tooling into the model development and deployment lifecycles we can catch up a lot faster. Eric Brandwine did a fantastic talk referencing the mission statement for the Security team at AWS in 2017, and how maximizing the customer value was just as much a part of the job of security as it was a part of the service teams. Teams responsible for generative AI security are learning this lesson again 7+ years later.

Artists wooden table with paints and colored paper. Art and artwork concept. Hands hold colorful markers and draw. Markers in male and female hands draw cute drawings on white paper

2. Shadow IT is now Shadow AI – But we have the tools to adapt

Speaking of governance, if you lived through the original adoption of Cloud, you likely shudder at the thought of Shadow IT. Folks using their credit cards to build applications on some completely unattached platform with no oversight while corporate moved back and forth with no visibility. It happened for a reason though, as new innovations were born and value skyrocketed (for a while). And if we’re honest, security folks aren’t what did the best job of stopping or finding Shadow IT, it was the finance teams. Large credit card reimbursements and cloud costs going well beyond expectations were the far more impactful signal than IT could quickly find. So to solve for the problem while still keeping the benefits, companies implemented governance. Endorsed by the hyperscalers, cloud adoption frameworks implemented governance methodologies not just to benefit security, but a wide range of stakeholders. But this was the lifeline security needed to force relevance even in organizations where they didn’t have the most power.

Surprise, surprise, Shadow AI is now rearing its ugly head. But once again, security teams have an opportunity to not be the Dr. No’s of the organization, but enablers focused on maximizing business value. Companies like Capgemini are already seeing and sharing the value of proper governance around generative AI to both deliver business outcomes while also protecting the brand reputation for those delivering the capabilities. And while most organizations have stopped individuals from using their own credit cards to buy cloud and AI services, that doesn’t mean lines of business, product teams, and individual developers aren’t taking advantage of AI services that could put your business at risk. A blanket block though won’t work today. There are too many ways around it, and too much risk to NOT using generative AI and falling behind your competition. So security needs to partner with peers in finance, in corporate risk, and in IT to build comprehensive guardrails that enable innovation while protecting the company’s data, reputation, and bottom line.

3. Visibility, Good Hygiene, then Operations

Just as Cloud Access Security Brokers (CASBs) evolved into what they are today, there will be an evolution into the security tooling for generative AI. If we follow the trends from cloud security, we see it begins with getting visibility into the usage of IaaS, PaaS, then SaaS services, followed by securing access to these services through network and identity controls, and evolving into controls for those services themselves. Next, we see the evolution of security control visibility and implementation. Early open source tools like Cloud Custodian provided analysis of foundational controls in cloud environments based on published best practices, and later tools like Prowler took it to another level becoming more prescriptive and advanced beyond baseline infrastructure controls. For generative AI, we are starting to see this play out again in the early stages. Many companies are still working on detection of generative AI in their environment just so they can get a handle on how big their risk exposure is. And once they gain visibility, they need to implement security controls to secure that access. We already see companies like Palo Alto trying to build Posture Management for AI into existing tooling. However, if we again look into the past, for these early days it’s often more cost efficient and effective to compile your own tool suite than to buy from a single vendor.

Once you gain the right visibility and access, consistently getting the right security implemented was the next problem. Early cloud days struggled with this as people would build their own solutions from scratch, inconsistently applying security best practices even if they were aware of them. The advent of Infrastructure as Code (IaC) levelled up security’s ability to empower teams to be secure on Day 1. From my experience, the security teams that sat with the infrastructure and development teams and collaborate on implementing the right security controls early had the most success, not only in getting security implemented but also in building great relationships with their peers that reduced the risk of Shadow IT and surprises later on. Feedback loops were then implemented to ensure both teams had open communication mechanisms to improve their services. Once again, we will need to see the same collaboration in generative AI. Adding security after the fact has never worked, and with generative AI the risks are amplified because so much money goes into the early training and development that it’s impossible to cost-effectively apply security later on. But taking lessons from IaC, we see opportunities to empower data scientists and application developers with the right security early on. Data classification tagging, baseline generative AI guardrails, secure architectural patterns, and automated security testing including fuzzing and prompt testing are essential early tools security teams must collaborate on to get early adoption and accelerate generative AI implementations.

Then, once you have your visibility and your ability to consistently implement security controls, we turn to secure operations. For early cloud days, this took a lot longer than we may want to admit. Baseline service like AWS CloudTrail existed off the bat, but native cloud security tooling required you to build your own analysis engines, or integrate with existing SIEM tools. It wasn’t until much later that tools like GuardDuty or Security Command Center came around. Fast forward 2+ years, and today GPC and Azure are consolidating this operational capabilities into their SIEM/SOAR solutions Security Operations (formerly Chronicle)and Sentinel respectively. This follows the larger industry trend of SIEM/SOAR integrations. Unfortunately, I think it’s too soon to know what security operations will look like in generative AI. In organizations where the application teams own the generative AI development, we can lean on traditional DevSecOps practices and help train SOC analysts on how to properly analyze and decode threats to AI models as well as the applications hosting them. But if the ownership sits with a data science team, we don’t have those same bridges or muscle memory to work with. So there will be growing pains – but one obvious conclusion is that security teams will not be able to do this alone.

There is a lot more we can cover about how to take the lessons from cloud security. But looking back at the evolution of cloud security it’s important we don’t try to everything all at once. If we do, we will get overwhelmed, security teams will alienate those who are building generative AI solutions, and we will build on shaky foundations that will incur more tech debt than is necessary. So first, start by building bridges with your peers so you’re working collaboratively, not at odds with each other. Second, gain visibility in your environments to understand not just your security exposure, but also where generative AI is actually driving success so you can focus on the right solutions instead of every problem. And third, start with the basics around control visibility, proper standards, and consistent implementation in the services being used. Implementing these lessons from cloud security adoption will help you jump to S-curve of security and accelerate both you adoption of generative AI and your security of it.

About the author

Michael Wasielewski is the founder and lead of Generative Security. With 20+ years of experience in networking, security, cloud, and enterprise architecture Michael brings a unique perspective to new technologies. Working on generative AI security for the past 2 years, Michael connects the dots between the organizational, the technical, and the business impacts of generative AI security. Michael looks forward to spending more time golfing, swimming in the ocean, and skydiving… someday.