In our previous blog, we explored the Platform-level risks associated with generative AI, emphasizing the importance of securing the models themselves as well as the broader ecosystem in which they operate. Today, we shift our focus to Systemic-level risks, which encompass both technical and non-technical challenges. While the systemic technical risks are significant, it is likely the non-technical risks that will pose the more existential threat to organizations in the future.
So what is a systemic risk to begin with? Most definitions of systemic risk focus on the risk of collapse of an entire system or structure because of the collapse of a single participating entity. For finance, we can look back to 2008, when an underlying weakness (sub-prime mortgages) directly caused the failure of one institution, which then cascaded into a failure impacting the entire global economy. Steve Jones at Capgemini has been talking about how enterprises need to consider the systemic risks associated with generative AI technologies, including the potential need for an AI Resources Department.
Similar to how HR manages employee risk (on behalf of the company, not the employee), there’s a need for a single lens looking across all of the functions, compliance obligations, security, and lifecycle of generative AI agents in the enterprise. Why, you may ask? Imagine a large enterprise with several product teams or lines of business, each with their own generative AI implementations and governance around them. Each team may be managing its risks, such as data exposure, legal liability, and regulatory compliance, at an acceptable level. However, the aggregate legal and financial risk may exceed the risk appetite of the enterprise as a whole. As another consideration, each team might appropriately use only the data necessary to support its customers, but when all of that data is combined, the organization as a whole may run afoul of regulations like the EU AI Act, GDPR, or other consumer protection schemes.
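To make that first aggregation problem concrete, here is a minimal, purely illustrative sketch. Every figure, threshold, and team name below is an invented assumption, not real data; the point is only that exposures which pass each team's own check can still breach an enterprise-wide ceiling:

```python
# Illustrative only: per-team risk exposures that are individually acceptable
# can still exceed the enterprise-wide risk appetite when aggregated.
# All figures and thresholds are invented assumptions.

TEAM_RISK_LIMIT = 10_000_000           # each team's acceptable exposure (USD, hypothetical)
ENTERPRISE_RISK_APPETITE = 25_000_000  # enterprise-wide ceiling (USD, hypothetical)

team_exposure = {
    "customer_support_bot": 8_000_000,
    "marketing_content_gen": 9_500_000,
    "sales_copilot": 9_000_000,
}

# Every team passes its own check...
assert all(v <= TEAM_RISK_LIMIT for v in team_exposure.values())

# ...but the aggregate breaches the enterprise appetite.
total = sum(team_exposure.values())
print(f"Aggregate exposure: ${total:,}")
if total > ENTERPRISE_RISK_APPETITE:
    print("Enterprise risk appetite exceeded despite each team being within its limit.")
```

Nothing in any single team's governance would flag this; only a function looking across all of the teams at once would see it.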
We can also see scenarios where AI agents are embedded throughout the organization, serving the same roles as individuals. Imagine a scenario where bots replace or support humans in customer support and sales roles. The chatbots can handle repetitive tasks, freeing up human employees to focus on more strategic activities. However, the bots are not incentivized the same way people are, and ethical collaboration could present significant challenges. For example, consider a customer support bot that is programmed to prioritize efficiency and “knows” what metrics it’s expected to hit, like time to close.
While it may resolve issues quickly, it could also alienate customers who feel their concerns are not being genuinely addressed, and actively work against the human agent whose involvement might push it outside acceptable metric levels. This would not be a fault, but rather “by design”, and as a result could have company-wide implications despite the chatbot existing solely within the support function. These considerations are not technical in nature, and they need to be tracked and addressed at an organizational governance level, similar to the “AI Resources Department” Steve Jones talks about.
There are systemic technical risks as well. On top of the obvious common components like Identity, Network, and Infrastructure, there’s a lot of risk around data. If we take the example where multiple products or lines of business access pieces of the same or adjacent data stores, we can see scenarios where the aggregate data represents a greater risk than any one team may realize. Let’s say that within a pharmacy chain, one team uses anonymized data on customer locations to report how many customers are near each store, a second team uses anonymized sales data to report top sellers, and a third team uses anonymized data mapping prescribed family planning medications as part of a government grant. Each data set is anonymized on its own, but if you combine them and can query all three at once using natural language, the barrier to identifying the stores where sensitive medications are most frequently dispensed, and the populations most likely to be taking them, becomes negligible. This could lead to a data leak or to the de-anonymization of data by third parties, violating data protection regulations.
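A minimal sketch of that join shows how low the barrier really is once the datasets can be queried together. The store IDs, column names, and figures here are all invented for illustration; each table is "anonymized" on its own, yet the combination points straight at the sensitive stores:

```python
# Hypothetical illustration of the aggregation risk described above.
# All store IDs, columns, and values are invented for the example.
import pandas as pd

# Team 1: anonymized customer-location data (customers near each store)
locations = pd.DataFrame({
    "store_id": ["S1", "S2", "S3"],
    "nearby_customers": [12000, 3400, 800],
})

# Team 2: anonymized sales data (top-selling category per store)
sales = pd.DataFrame({
    "store_id": ["S1", "S2", "S3"],
    "top_category": ["vitamins", "family_planning", "cold_and_flu"],
})

# Team 3: anonymized prescription mapping from the grant programme
prescriptions = pd.DataFrame({
    "store_id": ["S1", "S2", "S3"],
    "family_planning_scripts": [15, 420, 5],
})

# Individually harmless; joined together, they identify the stores (and the
# surrounding populations) where sensitive medication is most dispensed.
combined = locations.merge(sales, on="store_id").merge(prescriptions, on="store_id")
print(combined.sort_values("family_planning_scripts", ascending=False).head(1))
```

A natural-language interface over the combined data makes this even easier: the question “which stores fill the most family planning prescriptions, and how many people live nearby?” requires no SQL at all.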
Another technical risk is global compliance. When training underlying models that will be deployed globally, you can trace the data used for training back to its origin to ensure residency and privacy compliance. However, if different lines of business, countries, or products train their own models with their own data, keeping subsequent consolidations or expansions of those models compliant becomes very daunting. A simple misstep in one region could result in global legal repercussions, fines, and reputational damage.
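As a rough sketch of what a pre-consolidation check might look like, the snippet below flags models whose training data originated in regions that a (hypothetical) policy does not allow for global deployment. The registry structure, region codes, and policy are all assumptions made for illustration; real residency and privacy checks are considerably more involved:

```python
# Hypothetical pre-consolidation residency check. The model registry shape,
# region codes, and policy below are invented assumptions for illustration.

# Regions whose data may be used in globally deployed models under this
# invented policy.
GLOBAL_DEPLOY_ALLOWED = {"us", "ca"}

# Per-model record of where its training data originated.
model_registry = {
    "support_bot_emea": {"training_data_regions": {"de", "fr"}},
    "sales_copilot_na": {"training_data_regions": {"us", "ca"}},
}

def global_consolidation_blockers(models: dict) -> list[str]:
    """Return models whose training-data origins block a global rollout."""
    return [
        name
        for name, record in models.items()
        if not record["training_data_regions"] <= GLOBAL_DEPLOY_ALLOWED
    ]

print(global_consolidation_blockers(model_registry))
# ['support_bot_emea'] -- folding this model into a global deployment would
# move EU-origin training data outside its permitted residency.
```

The value of a check like this is less the code than the requirement behind it: someone has to maintain the registry of where every model's training data came from, across every line of business, before consolidation is even on the table.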
The key takeaway from all of this is that you need to look at the holistic implementation of generative AI in your organization, and not treat each implementation as an isolated effort. As AI agents see broader use across organizations, having top-level governance to manage the sprawl and the associated risks is critical to staying on top of the systemic-level risks this disruptive technology introduces. Not to introduce a new idea at the end of the blog, but for those of us who lived through early cloud adoption, much of this might sound familiar, and we can (re)learn some lessons from that technology shift as well. We’ll dive into that in another blog post.
If you’re interested in having a conversation about systemic risk in your organization, don’t hesitate to reach out to us at questions@generativesecurity.ai. While we don’t have a solution (yet), we’re happy to help you formulate your thoughts and introduce you to some of our partners who can help. In the next blog we’ll shift to the bottom of our diagram and start focusing on Gen AI In Security. We’ll talk about the empowerment of security teams, especially the SOC, with generative AI. We look forward to seeing you there.

About the author
Michael Wasielewski is the founder and lead of Generative Security. With 20+ years of experience in networking, security, cloud, and enterprise architecture, Michael brings a unique perspective to new technologies. Having worked on generative AI security for the past two years, Michael connects the dots between the organizational, technical, and business impacts of generative AI security. Michael looks forward to spending more time golfing, swimming in the ocean, and skydiving… someday.