The Risk Wheelhouse
The Risk Wheelhouse is designed to explore how RiskTech is transforming the way companies approach risk management today and into the future. The podcast aims to provide listeners with valuable insights into integrated risk management (IRM) practices and emerging technologies. Each episode will feature a "Deep Dive" into specific topics or research reports developed by Wheelhouse Advisors, helping listeners navigate the complexities of the modern risk landscape.
S6E9: Why Legacy Risk Platforms Break Under AI Pressure
A slick AI demo can make any risk platform look like the future, but architecture is destiny. We unpack the dangerous boardroom illusion where leaders treat radically different “AI GRC” products as interchangeable, then we map what is actually changing under the hood in governance, risk, and compliance technology. If you are a CRO, CISO, chief compliance officer, or audit leader signing multi-year renewals, this conversation is about avoiding the most expensive misread of the AI disruption curve.
We walk through the three tiers of enterprise software that shape risk outcomes: system of record, system of engagement, and the emerging system of action. From there, we explain why classic workflow automation is so vulnerable: it is rigid, stateless, and provides no cognitive value once generative AI agents can read unstructured evidence directly, synthesize context, and update the compliance record without a human-friendly interface.
Next we zoom in on agentic GRC, why it delivers real ROI, and why it still hits a hard boundary. Risk reasoning lives across four integration points: policies, goals, processes, and assets. A policy-focused agent can be brilliant and still remain blind to strategic objectives, operational workflows, and technology asset exposure. We use the AuditBoard to Optro rebrand and Optro’s AI governance acquisition as a real-time case study of vendors trying to cross that boundary, then we compare structural proximity advantages held by platforms rooted in ITSM and ERP.
Finally, we define the destination: fully stateful autonomous IRM that connects GRC, ERM, ORM, and TRM into one governed decision architecture. We introduce the agent proliferation paradox, the city grid metaphor for risk agency, and the four hard procurement questions that keep you out of the integration trap. If this helps you pressure test a vendor claim or reframe your roadmap, subscribe, share the episode with a risk leader, and leave a review with the toughest question you ask in pitches.
Visit www.therisktechjournal.com and www.rtj-bridge.com to learn more about the topics discussed in today's episode.
Subscribe at Apple Podcasts, Spotify, or Amazon Music. Contact us directly at info@wheelhouseadvisors.com or visit us at LinkedIn or X.com.
Our YouTube channel also delivers fast, executive-ready insights on Integrated Risk Management. Explore short explainers, IRM Navigator research highlights, RiskTech Journal analysis, and conversations from The Risk Wheelhouse Podcast. We cover the issues that matter most to modern risk leaders. Every video is designed to sharpen decision making and strengthen resilience in a digital-first world. Subscribe at youtube.com/@WheelhouseAdv.
Ori Wellington: As you look at your own tech stack right now, especially if you are sitting in the C-suite, if you are a chief risk officer, a CISO, a chief compliance officer, or a VP of audit, there is a very specific, very dangerous illusion happening across the boardroom table when you evaluate new technology.
Sam Jones: Oh, absolutely. It's a massive blind spot, right?
Ori Wellington: Just picture the scene. You are sitting across from a vendor, and they are pitching you this brand new, supposedly AI-driven risk platform. They are throwing around these high-altitude phrases like "system of action" or "autonomous decision making." The buzzwords are everywhere, and the demo they show you looks absolutely incredible, right? It's writing control tests on the fly. It's synthesizing these dense, 400-page regulatory updates from the EU in seconds. And it's mapping those updates directly to your internal policies.
Sam Jones: Which looks like magic to a buyer.
Ori Wellington: You are just mesmerized. So you sign the multi-million-dollar contract thinking you just bought the future. But the reality is, if you don't intimately understand the underlying architecture of what you just purchased, you haven't bought the future at all. Not even close. You have essentially just bought a very fast, very expensive train locked onto a fixed track, and it's a track the market is currently in the process of tearing up.
Sam Jones: It is, without a doubt, the single most expensive miscalculation a risk leader can make in today's environment. We are looking at a market that is fundamentally misreading the disruption curve. You have these highly sophisticated organizations, and they're treating radically different AI risk platforms as if they are entirely interchangeable.
Ori Wellington:like it's all just AI, so it's all the same.
Sam Jones: Yes, they operate under this wild assumption that adding a generative AI agent to a legacy workflow tool is the exact same thing as buying a purpose-built, natively autonomous system. Structurally, it absolutely is not. The underlying infrastructure of risk technology is shifting beneath our feet right now, and the buyers who don't understand the basic physics of this shift are going to find themselves holding millions of dollars in technical debt within the next 24 months.
Ori Wellington: Which is exactly why we need to map this out today. Because the governance, risk, and compliance market, the GRC market, has hit a massive architectural inflection point. Our mission for this deep dive is to map out the AI disruption curve hitting RiskTech and to clearly explain the direct investment implications that most buyers are completely missing.
Sam Jones: It's vital we get into the weeds on this.
Ori Wellington: It is. So we are basing today's analysis on a highly critical research note titled The Path to Autonomous IRM, published by Wheelhouse Advisors on March 16, 2026. And I want to emphasize to you, the listener, this isn't just theoretical whiteboard architecture we're talking about here.
Sam Jones: No, we have actual, real-world, high-profile market signals to decode.
Ori Wellington: Right. The biggest one being the recent massive rebrand of AuditBoard to Optro.
Sam Jones:We really need to be incredibly clear about that rebrand right out of the gate, because it sets the stakes for everything we are going to discuss today. I mean, when a platform of that size, with that much legacy market penetration just completely sheds its corporate identity to become optro and explicitly claims that AI is fundamentally transforming GRC, you have to pay attention. You have to recognize that this is not a marketing pivot. It is not just a fresh coat of paint to sell more seats. That is the highest profile public signal we have that the very category of agentic GRC is now a defined, unavoidable architectural reality, its validation of the disruption curve. But, and this is the absolute trap for the buyer. If you take these bold marketing declarations at face value without ruthlessly interrogating the data models that live beneath them, you are walking blind into a massive integration failure.
Ori Wellington: Let's start by framing the overarching architecture, then, because to understand what Optro is attempting to navigate, and what you as a buyer are trying to navigate, we have to look at the fundamental architecture of enterprise software over the last three decades. We have to look at the map. The premise here is that market advancement doesn't happen in giant, magical leaps. It happens across bridges. And here is the critical rule of infrastructure we need to establish: each bridge carries the structural foundation of the system it spans from. That is the golden rule right there. So according to the research, enterprise software has historically been organized into three distinct systems. Let's start with the bedrock, the system of record. If we go back to, say, the early 2000s, what exactly was a system of record in the context of risk?
Sam Jones: A system of record is your absolute foundation. Structurally, it is designed to store, organize, and retrieve structured information, period. Think about the early iterations of platforms like RSA Archer or MetricStream, or even before that, when risk was just managed in these massive, brittle Microsoft Access databases.
Ori Wellington:Oh, man, the access days, right?
Sam Jones:Nightmare. But system of record. Is essentially just a digital filing cabinet. Its entire value proposition to the enterprise was based on reliability, immutability and auditability, like what specific evidence was collected on Tuesday, which policy did the employee attest to, what control was tested, and what was the binary result pass or fail?
Ori Wellington:Very deterministic,
Sam Jones:highly deterministic. It relies on relational databases, neat rows and columns. And the defining characteristic of a system of record is that it is entirely static. The infrastructure, the data schema, the way it expects to receive information, it's exactly the same on day 1000 as it is on day one. It does not learn. It merely holds.
Ori Wellington:It's the ultimate ledger. But you know, as enterprise complexity exploded in the 2010s nobody wanted to just stare at a static ledger all day, the friction of getting data into that Ledger was basically paralyzing organizations. Compliance teams were drowning, drowning in spreadsheets in endless email chains, just trying to gather the evidence to put into the record. So the software market built the second tier. Yeah, the system of engagement. And importantly, they built it directly on top of the record.
Sam Jones:Yes, the presentation layer, right?
Ori Wellington: This is where we got the beautiful user interfaces, the dynamic dashboards, the task queues, the automated approval workflows. It was the gamification of risk management, essentially. It was designed to help human beings actually interact with that static data without completely losing their minds.
Sam Jones: But we have to look at the mechanics of what a system of engagement actually does, because this is where the illusion of progress begins for a lot of buyers. A system of engagement feels highly active. It sends you a push notification. It escalates a ticket from a level-one analyst to a level-two manager. It turns a dashboard light from red to green when a task is done.
Ori Wellington:it feels like it's doing work.
Sam Jones: It feels like work, but underneath that shiny interface, it is fundamentally static and stateless. At its core, it is merely adding a variable interaction layer on top of a rigid, unchanging record. And here is the most crucial distinction for our AI discussion today: in a system of engagement, the human being remains the sole reasoning layer.
Ori Wellington:Okay, unpack that. The human is the reasoning layer, right?
Sam Jones: The system routes the task. It basically moves the digital paper from your desk to my desk, but the human must read the paper, apply contextual judgment, and make the actual decision. The system has zero cognition. It only has routing rules: if this, then that. Which brings us to the third category, the emerging tier that is currently terrifying and thrilling the market all at once: the system of action.
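That "if this, then that" routing layer can be sketched in a few lines of Python. The ticket types, severities, and queue names here are invented for illustration; the point is that the rules are pure lookups with no understanding of content:

```python
# A system of engagement routes work; it never reasons about it.
# Routing rules are plain "if this, then that" lookups (illustrative values).
ROUTING_RULES = {
    ("vulnerability", "high"): "level-2-manager",
    ("vulnerability", "low"): "level-1-analyst",
    ("policy-attestation", "any"): "compliance-queue",
}

def route(ticket_type: str, severity: str) -> str:
    """Move the digital paper to the right desk. The body of the ticket
    is never read; the human at the desk is the reasoning layer."""
    key = (ticket_type, severity)
    if key in ROUTING_RULES:
        return ROUTING_RULES[key]
    # Fall back to a wildcard rule, else a default triage queue.
    return ROUTING_RULES.get((ticket_type, "any"), "triage-queue")

print(route("vulnerability", "high"))    # level-2-manager
print(route("policy-attestation", "x"))  # compliance-queue
```

Notice there is no branch anywhere that depends on what the vulnerability actually is; swap the ticket body for anything and the routing is identical.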
Ori Wellington: This is where AI actually steps in to drive operational and strategic decisions in near real time. A system of action doesn't just route a task to a human for a decision; the system itself feeds risk intelligence directly into the business flow and executes the action.
Sam Jones: It bypasses the human bottleneck entirely.
Ori Wellington: So we have these three distinct architectures: record, engagement, action. What's fascinating here is how the Wheelhouse IRM 50 AI Disruption Risk Index maps current risk technology to these systems, because it reveals an incredibly harsh truth for buyers who are heavily invested in legacy tech right now.
Sam Jones: It is a brutal truth, particularly for procurement teams who maybe just signed three-year renewals. Let's break down the disruption exposure. The index looks at workflow automation, which is essentially any platform whose primary function is bridging the system of record into the system of engagement. If your tool's main selling point is that it automatically routes assessments and manages approval queues, your AI disruption risk is classified as HIGH, capital H-I-G-H. You are in the immediate crosshairs of obsolescence.
Ori Wellington: Now wait, let me push back on this classification of workflow automation as HIGH risk, because I want to play devil's advocate for the CRO or the VP of audit listening right now. If you are sitting in that seat, you are dealing with Big Four auditors constantly, right? You are dealing with SOC 2 Type II compliance. You are dealing with ISO 27001 certifications. Auditors, by their very nature, absolutely despise dynamic systems. They want a deterministic, predictable paper trail. Workflow automation gives exactly that. It's a rigid track. You know exactly what happens at step A, step B, and step C every single time. So isn't this rigidity, the absolute predictability of a workflow tool, exactly what makes it feel safe and necessary for enterprise compliance? Isn't that static nature a feature, not a bug?
Sam Jones: It is a feature for the legacy auditor, absolutely. But it is a terminal vulnerability for the business trying to survive an AI transition. That feeling of safety you just described is precisely the illusion we are warning against today, because safe, rigid infrastructure is exactly what AI breaks first. Let's look at the mechanics. Workflow automation is entirely context-agnostic. When a legacy workflow tool moves a ticket regarding, say, a critical cloud vulnerability, the system doesn't actually understand what a cloud vulnerability is. It just sees a ticket. It doesn't know if it's a minor misconfiguration or a catastrophic zero-day exploit that's going to sink the company. It only knows that ticket type C must be routed to user group four based on a rigid, predefined rule. It is fundamentally stateless.
Ori Wellington: Meaning what, in this context?
Sam Jones: A stateless system processes a transaction, then immediately forgets it. It's like a vending machine. You put in a dollar, you get a soda, the machine resets. It has no memory of what you bought yesterday, and it cannot predict what you want tomorrow.
Ori Wellington: So it has zero persistent memory of the operational environment.
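The stateless-versus-stateful distinction can be made concrete with a toy sketch. The class names and event strings are illustrative, not any vendor's API:

```python
# Stateless vs. stateful, in miniature.

class StatelessWorkflow:
    """The vending machine: process a transaction, then forget it."""
    def handle(self, ticket: str) -> str:
        return f"routed:{ticket}"  # no memory survives this call

class StatefulAgent:
    """Keeps a persistent picture of the environment across events."""
    def __init__(self):
        self.memory: list[str] = []

    def handle(self, event: str) -> str:
        self.memory.append(event)  # state persists between calls
        seen = self.memory.count(event)
        if seen > 1:
            # A repeat of the same finding changes the response.
            return f"escalate:{event} (seen {seen} times)"
        return f"observe:{event}"

w, a = StatelessWorkflow(), StatefulAgent()
print(w.handle("s3-bucket-public"))  # routed:s3-bucket-public
print(w.handle("s3-bucket-public"))  # routed:s3-bucket-public (identical: no memory)
print(a.handle("s3-bucket-public"))  # observe:s3-bucket-public
print(a.handle("s3-bucket-public"))  # escalate:... (the repeat changes the answer)
```

The workflow gives the same output on the hundredth occurrence as on the first; the agent's accumulated state is what lets it respond differently to a pattern.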
Sam Jones: Zero. Now introduce generative AI agents into that enterprise. When autonomous agents arrive, they do not need fixed tracks to navigate data. They don't need to log into a beautiful UI dashboard. They don't wait patiently in task queues. Agents can connect directly to the unstructured data: the raw AWS logs, the Slack conversations, the raw regulatory text. They can read it themselves. They can synthesize the context, apply judgment, and maintain the compliance record entirely without the infrastructure that was built for human-mediated record management.
Ori Wellington: This is a critical point. The agents bypass the engagement interface entirely. I mean, the engagement layer was built because humans couldn't read machine data efficiently, but AI agents can read machine data perfectly.
Sam Jones: Exactly. When an AI agent can go directly to the raw unstructured data, understand it deeply, and update the system of record on its own, the entire system of engagement, that workflow tool you paid millions of dollars for just to reduce human friction, becomes obsolete overnight. You don't need a UI if a human isn't the one doing the reading. The agents commoditize the record layer and completely bypass the human engagement interface simultaneously. So that static, stateless nature of workflow automation doesn't protect it. It actually accelerates its own disruption, because it offers absolutely zero cognitive value to the agent.
Ori Wellington: That is a massive architectural paradigm shift. It completely invalidates the ROI models of most traditional GRC purchases. So if workflow automation carries a HIGH disruption risk, what about the next phase? The research defines the next architectural bridge as agentic GRC. This bridge runs from the system of engagement toward the system of action, and the Disruption Risk Index classifies agentic GRC as carrying MODERATE disruption risk. Why moderate? Why isn't it low risk if it's utilizing these new AI agents?
Sam Jones: It is classified as moderate because agentic GRC is dynamic and somewhat stateful. Instead of just routing dumb tasks, it actually replaces the human reasoning layer with AI agents that can ingest unstructured data, adapt their behavior mid-workflow based on what they find, and apply real contextual judgment. Which sounds great, and it is. That is a genuine, massive advance over workflow automation. But this is the absolute core of the bridge metaphor we talked about. Agentic GRC carries the structural architecture of the system of engagement with it. The agents are reasoning, yes, but because of the underlying data models they are built on, they are inherently bounded by the specific domain they operate in. They are isolated pockets of intelligence.
Ori Wellington: We need to dive deeply into this idea of being bounded, because to a buyer, to a compliance manager who has been doing manual reviews for a decade, an AI that can read a dense, 400-page regulatory update from the SEC, instantly understand it, and automatically map it to the company's internal control framework, that feels like magic. It feels boundless. It feels like the AI can do anything. But you are saying it is structurally restricted by its architecture.
Sam Jones: It is strictly restricted. And to understand why, we have to look at the enterprise as a whole, not just the compliance department. The Wheelhouse IRM Navigator model maps this out beautifully. Risk technology is not a single monolith. An enterprise operates across four distinct operational contexts, or integration points, where reasoning needs to occur. We call these the four agentic bridges.
Ori Wellington: Okay, I don't want to just list these off like a textbook. I want us to build an operational narrative around them so the listener can actually see where these boundaries end. Let's use a real-world scenario. Say a massive enterprise, a global financial services firm, decides to deploy a new custom-built generative AI coding assistant across its entire engineering team. Ten thousand developers are suddenly using an LLM to write production code. That is a massive enterprise event. How does that event trigger the four agentic bridges?
Sam Jones: Perfect scenario. Let's start with the first one, agentic GRC. The engineers deploy the gen AI coding assistant. In the agentic GRC bridge, the integration point is policies. The AI agents operating on this bridge are tasked purely with obligations, controls, and compliance frameworks. So the agentic GRC system looks at this deployment and asks: does this new coding assistant violate our internal acceptable use policy? Does it comply with the EU AI Act? Do we have a mapped control for AI-generated code review? The agent reasons perfectly, but strictly within the boundary of regulatory compliance.
Ori Wellington: But the business doesn't just run on compliance, which brings us to the second bridge: agentic ERM, or enterprise risk management.
Sam Jones: Right. In agentic ERM, the integration point is goals. The agents reasoning here do not care about the text of the EU AI Act. They care about strategic objectives and corporate risk appetite. So the ERM agent looks at the deployment of the coding assistant and asks: our strategic goal for Q3 is to increase product delivery speed by 20 percent to capture market share. Does this AI tool accelerate that goal? And does the risk of the AI generating faulty code exceed our board-approved risk appetite for product failure? The context is entirely strategic performance.
Ori Wellington:Okay, and then we hit the ground floor, the actual day to day operations, the third bridge, agentic ORM or operational risk management.
Sam Jones: Here the integration point is processes. The ORM agents are focused on the actual workflows of the business. The ORM agent asks: if the engineers are using this AI to write code, how does this alter our daily code-commit process? Does it fundamentally change the peer-review workflow? If the AI causes a massive bug that takes down the customer portal, what is the exact operational remediation path? It is reasoning about friction and flow on the factory floor, so to speak.
Ori Wellington: And finally, the fourth bridge, which might be the most acute in this specific scenario: agentic TRM, or technology risk management.
Sam Jones: The integration point for TRM is assets. We are talking about hard technology exposures: network vulnerabilities, vendor security posture, identity and access management. The TRM agent looks at the new coding assistant and asks: does this LLM have access to our AWS production environment? Is it inadvertently ingesting personally identifiable customer data? What are the specific cyber vulnerabilities introduced by giving this vendor's API access to our secure network?
Ori Wellington: So we have policies, goals, processes, and assets. Four distinct integration points, four entirely different languages of risk. And here is where it gets incredibly important for the buyer. The research makes a point that I think is going to be a very bitter pill for a lot of tech vendors to swallow. It states that agentic GRC is bounded. The fact that your shiny new GRC agent only understands the policy domain and is completely blind to the goals, processes, and assets domains is not a design flaw. It's not a bug that a vendor can just engineer away with a software patch next quarter. It is the correct architecture for this specific bridge.
Sam Jones: It is a fundamental truth of data architecture. You cannot just command a GRC agent, which is trained on policy documents and regulatory schemas, to suddenly understand the complex relational database of your enterprise strategy or your network topology.
Ori Wellington:because the data models don't align
Sam Jones: The structural context determines the decision. It is identical to how human specialists are organized in the real world. Think about it: a brilliant, high-priced compliance lawyer who understands every single nuance of a data privacy policy cannot just step onto the manufacturing floor and substitute for an operational manager who understands the day-to-day supply chain processes. Nor can that lawyer step into the security operations center and substitute for a CISO who understands cloud asset vulnerabilities. A GRC agent reasoning about policy simply cannot substitute for an ERM agent reasoning about strategic enterprise loss. The underlying data models are entirely incompatible.
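The bounded-domain argument can be reduced to a toy model. The four integration points come from the episode; the agent names, function, and return strings are invented for illustration:

```python
# Four bounded reasoning domains, sketched as agents that can only
# answer questions that fall inside their own data model.
DOMAINS = {
    "grc": {"policies"},   # obligations, controls, compliance frameworks
    "erm": {"goals"},      # strategic objectives, risk appetite
    "orm": {"processes"},  # day-to-day operational workflows
    "trm": {"assets"},     # servers, vendors, identities, vulnerabilities
}

def ask(agent: str, integration_point: str, question: str) -> str:
    """A hypothetical query interface to a single bounded agent."""
    if integration_point not in DOMAINS[agent]:
        # Not a bug: the agent's data model simply has no representation
        # of this context, so it cannot reason about it.
        return f"{agent}-agent: out of scope ({integration_point})"
    return f"{agent}-agent: reasoning about {question}"

print(ask("grc", "policies", "EU AI Act control mapping"))  # in scope
print(ask("grc", "assets", "AWS production exposure"))      # blind spot
```

The brilliant GRC agent and the blind GRC agent in the transcript are the same object here; only the integration point of the question changes.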
Ori Wellington: Yet when we look at the market context provided in the sources, the venture capital investment and the vendor roadmaps are overwhelmingly, almost dangerously hyper-concentrated at that single agentic GRC bridge. Oh, absolutely. They're building a massive bridge to nowhere. Look at the evidence in the market right now. You have a compliance startup raising a $20 million Series A led by Google Ventures in February 2026, specifically to deploy over 30 GRC-focused AI agents for things like third-party risk and high-priority workflows. You have Anecdotes launching Agent Studio in January 2026 for no-code custom agents, mostly aimed at audit readiness. You have MetricStream framing almost their entire 2026 product outlook around agentic AI for compliance.
Sam Jones: The smart money is absolutely pouring into the policy integration point. And let's be fair, there is a very logical short-term reason for it: automating audit fieldwork, control mapping, and regulatory change management yields immediate, highly measurable ROI. You can literally calculate the manual hours saved. It is highly attractive to a CFO looking for cost reductions.
Ori Wellington: But that is exactly why we are having this conversation. If all this smart money is pouring into agentic GRC and a CRO buys into this hype, if they spend $5 million on the best agentic GRC platform available thinking they have comprehensively solved their overarching AI risk problem, what is the operational blind spot they are left with?
Sam Jones: Their blind spot is total cross-domain paralysis. Let's say you buy that elite agentic GRC platform. It is dynamic. It is stateful within the policy domain. But because it structurally cannot natively communicate with the other three domains, goals, processes, and assets, your organizational maturity is structurally capped. The IRM Navigator curve explicitly calls this being capped at the embedded stage. Let's go back to our scenario. If the new gen AI coding assistant suddenly introduces a massive vulnerability into an AWS server, an asset issue at the TRM level, your new agentic GRC platform might eventually flag a control failure a week later when a scheduled policy scan runs. But it cannot automatically reason about how that specific server failure impacts your strategic quarterly revenue goals at the ERM level, or trigger an emergency operational shift on the developer floor at the ORM level. The GRC AI is blind to the rest of the business. You still require a human being to manually pull reports from four different systems, sit in a meeting, and connect those dots. You have optimized one single bridge, but you haven't connected the city. You still have human bottlenecking at the macro level.
Ori Wellington: Which brings us perfectly to the catalyst of this entire architectural debate, the real-time case study of market architecture playing out in front of us: the AuditBoard-to-Optro rebrand. We shouldn't treat this just as a vendor review. We need to treat this as a live demonstration of a massive platform hitting that exact structural boundary we just discussed and trying desperately to navigate it.
Sam Jones: It is a fascinating and highly instructive case study. We have to acknowledge what AuditBoard was. AuditBoard was wildly successful. They built a phenomenal system of engagement on top of a solid system of record foundation. They revolutionized the user experience for auditors. They really did. But by completely shedding that brand, rebranding to Optro, and releasing their Accelerate AI suite, they are publicly acknowledging that their highly successful static, stateless infrastructure is fundamentally vulnerable to the AI disruption curve. They know they have to move to the system of action.
Ori Wellington: Let's pressure-test this pivot, because this is where the buyer has to be ruthless. The source material acknowledges that Optro's Accelerate AI suite, which delivers continuous monitoring, automated audit fieldwork, and AI-driven evidence synthesis, is a highly credible set of agentic GRC capabilities, a very real step forward within the policy domain. But, and this is a massive but, Optro is declaring in their market positioning that their destination is to be a full system of action.
Sam Jones: And that is precisely where the buyer must separate the marketing ambition from the cold, hard structural reality of the database. To be a true system of action, to achieve what the Wheelhouse model calls risk agency, a platform must be dynamic and fully stateful across all four integration points simultaneously. Currently, if you look at Optro's core legacy architecture, they effectively hold native dominance in only one: policies. Their foundation is audit and compliance. They do not natively hold the enterprise strategy, the ERP workflows, or the IT infrastructure.
Ori Wellington: Right. But they aren't stupid. They know this gap exists, which is why we have to look at their strategic corporate moves. Optro acquired FairNow, an AI governance platform. Now, if you don't understand the four pillars, that just looks like a standard tech acquisition. But if you understand the architecture, this acquisition makes perfect sense. AI governance is a unique beast. It sits right at the chaotic intersection between policy, which is your GRC compliance; goals, which is your ERM strategy; and assets, which is your TRM tech stack. When you deploy an AI agent in your enterprise, governing that agent is simultaneously a compliance issue, a strategic performance issue, and a hardcore technology asset issue.
Sam Jones: Exactly. Acquiring FairNow is Optro attempting to buy what the research calls context adjacency. It allows their core GRC platform to start peeking over the fence from the isolated policy domain into the complex goals and assets domains. It is a directionally correct, aggressive move for a platform that desperately wants to reach system of action status. But, and this is where technical engineering reality ruins marketing dreams, buying an adjacent capability does not mean you have natively integrated the data models. The integration friction here is massive.
Ori Wellington: Let's get into the weeds on that friction. What does it actually look like for the engineers trying to merge these systems?
Sam Jones: Well, you have a legacy GRC tool running on a traditional relational database. It's highly structured, heavily schema-dependent, optimized for static evidence logging. And then you have the newly acquired AI governance platform, which relies on unstructured data, vector embeddings, continuous API monitoring, and real-time statefulness. Completely different languages. You can't just plug them together. You face massive schema mismatches, API rate limiting when trying to pull continuous telemetry into a static database, severe data latency. Best case scenario, you build a brittle API bridge that gives you a dashboard view of the AI tool within the GRC platform. But that is not native integration. That does not constitute full autonomous IRM integration. You are still dealing with siloed reasoning engines trying to talk to each other through translators.
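What that brittle bridge looks like can be sketched in miniature. This is a hypothetical translation layer with invented field names, showing how a fixed relational-style schema drops the context that a richer AI governance feed carries:

```python
import json

# Illustrative "brittle API bridge": rigid relational-style columns on one
# side, unstructured AI-governance telemetry on the other. All field names
# and thresholds are invented for illustration.
GRC_SCHEMA = ("control_id", "status", "last_tested")  # the only columns that exist

def bridge(telemetry_json: str) -> dict:
    """Flatten rich telemetry into the static schema. Anything the schema
    has no column for is simply dropped in transit."""
    event = json.loads(telemetry_json)
    return {
        "control_id": event.get("mapped_control", "UNMAPPED"),
        "status": "fail" if event.get("risk_score", 0) > 0.7 else "pass",
        "last_tested": event.get("observed_at", "unknown"),
    }

telemetry = json.dumps({
    "model": "codegen-assistant",
    "risk_score": 0.91,
    "observed_at": "2026-03-16T10:00:00Z",
    "mapped_control": "AI-GOV-4",
    "context_embedding": [0.12, -0.4, 0.77],  # no relational column exists for this
})
print(bridge(telemetry))  # the embedding and model context never arrive
```

The GRC side ends up with a tidy pass/fail row, which is exactly a dashboard view: the stateful context that made the finding meaningful is lost at the translator.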
Ori Wellington: And while purpose-built GRC vendors are fighting this internal integration battle, they are going to face fierce competition from an angle most compliance buyers don't even think about. If we look at the broader competitive landscape and structural proximity, the research points out that legacy behemoths, specifically ServiceNow IRM and SAP GRC in 2026, actually have a massive hidden architectural advantage in this race toward autonomous IRM.
Sam Jones: They do, and it all comes down to their historical origin point. Think about where ServiceNow and SAP live in the enterprise. ServiceNow is fundamentally rooted in IT service management, the ITSM, and the configuration management database, the CMDB. They natively hold structural proximity to the assets context. They already know every server, every endpoint, every software license in your company. They own the TRM layer. And SAP is fundamentally rooted in enterprise resource planning, or ERP. They natively hold structural proximity to the processes context. SAP already runs your supply chain, your financial ledgers, your human resources workflows. So they own ORM. A purpose-built GRC platform like Optro has to build complex API connectors or buy external companies just to begin understanding an enterprise's operational processes or IT assets. SAP and ServiceNow don't have to integrate with the business data. They are the business data. When they deploy AI agents, those agents are already swimming in the native context of assets and processes.
Ori Wellington:So let me offer a provocation here, a direct challenge for the listener who might be evaluating a massive contract right now. Optro and vendors like them are putting out aggressive press releases declaring they are fundamentally transforming GRC, positioning themselves as the definitive path toward a system of action. But structurally, architecturally, today they are essentially an early-stage agentic GRC platform with native dominance in one integration point and one adjacent acquisition bolted on. For a buyer evaluating them, or any similar pure-play GRC vendor right now, isn't the gap between their marketing declaration of system of action and their actual underlying siloed data model the single most dangerous blind spot in the entire procurement process?
Sam Jones:It is not just a blind spot. It is the critical failure point of modern tech procurement, and it is the most important question any evaluator should press on during a vendor pitch. If a vendor sits in your boardroom and says, we are a system of action, we offer autonomous AI, your immediate, non-negotiable response must be: show me your persistent statefulness across policies, goals, processes, and assets simultaneously. Show me the integration. Do not show me a dashboard that pulls an API feed. Show me how a change in our SAP supply chain process natively triggers a reevaluation of our strategic risk appetite in your database. If they can only show you automated policy mapping or faster audit fieldwork, they are selling you a bridge. They are not selling you the destination.
Ori Wellington:So we have spent a lot of time defining what the destination isn't and what the boundaries are. Let's talk about what the destination actually looks like. Let's talk about autonomous IRM. The Wheelhouse note is very precise in its definitions here. Autonomous IRM is not a third bridge; you don't just build another lane of traffic. It is the complete, holistic integration of all three systems, record, engagement, and action, and all four agentic bridges, GRC, ERM, ORM, and TRM, into a single, unified, governed architecture.
Sam Jones:And achieving that requires what data architects call full statefulness.
Ori Wellington:Okay, let's translate dynamic and fully stateful into operational buyer reality, because full statefulness sounds like dry engineering jargon, but in the context of enterprise risk, it is actually the Holy Grail. Paint the picture for us, what does a fully stateful, autonomous IRM enterprise actually look like in practice?
Sam Jones:Let's use a macro-level example. Imagine your board of directors decides to drastically revise a strategic enterprise goal. Say, in response to competitor movements, they decide to accelerate expansion into a highly regulated new European market by six months, and they enter that strategic shift into the system. In a fully stateful, autonomous IRM architecture, that single strategic goal revision at the ERM level instantaneously and automatically recalibrates the regulatory policies and scope at the GRC level. It instantly pulls in all relevant EU directives and maps them to your controls. Simultaneously, a signal from an IT asset, say a cloud server in Frankfurt showing a new unpatched vulnerability at the TRM level, instantaneously updates the operational risk picture at the ORM level, which feeds directly back up to show the board exactly how that specific server vulnerability threatens their new strategic timeline for the European launch.
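Sam's scenario can be sketched in a few lines of Python. This is a toy model, not any vendor's implementation: a shared state store where every pillar subscribes to every change, so a revised ERM goal and a TRM asset signal each ripple across domains with no human relay in between. All pillar names, keys, and reactions are hypothetical.

```python
# Toy sketch of cross-domain statefulness (all names hypothetical): a shared
# store where a change in one pillar immediately recalibrates the others.
class SharedRiskState:
    def __init__(self):
        self.state = {"ERM": {}, "GRC": {}, "ORM": {}, "TRM": {}}
        self.subscribers = []           # reactions that fire on any pillar change

    def subscribe(self, callback):
        self.subscribers.append(callback)

    def update(self, pillar, key, value):
        self.state[pillar][key] = value
        for react in self.subscribers:  # no human middleware between cycles
            react(pillar, key, value, self.state)

store = SharedRiskState()

def recalibrate_grc(pillar, key, value, state):
    # A revised strategic goal (ERM) pulls the matching regulatory scope (GRC).
    if pillar == "ERM" and key == "market":
        state["GRC"]["scope"] = f"EU directives for {value}"

def reassess_timeline(pillar, key, value, state):
    # A TRM asset signal flows back up into the board-level ERM picture.
    if pillar == "TRM" and key == "frankfurt-server":
        state["ERM"]["timeline_risk"] = f"launch threatened by {value}"

store.subscribe(recalibrate_grc)
store.subscribe(reassess_timeline)

store.update("ERM", "market", "Germany")                  # board revises the goal
store.update("TRM", "frankfurt-server", "CVE unpatched")  # asset signal arrives

print(store.state["GRC"]["scope"])          # EU directives for Germany
print(store.state["ERM"]["timeline_risk"])  # launch threatened by CVE unpatched
```

The design point is that both reactions read and write one shared state, which is what distinguishes this from two siloed systems exchanging API snapshots.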
Ori Wellington:And the absolute kicker here, the thing that separates this from everything we do today, is that no human orchestration is required between those cycles.
Sam Jones:None. Zero human middleware. The system maintains persistent, continuously updated context across all four pillars simultaneously. A signal from one domain updates the risk picture across all other domains in real time, automatically. This is the state the IRM Navigator curve defines as risk agency. It is not humans doing the work for the machine, and it's not machines running wild without oversight. It is human and machine agency operating together seamlessly within mathematically validated guardrails.
Ori Wellington:And this leads us to what I found to be the most counterintuitive, genuinely surprising insight in the entire source material. We established earlier that workflow automation has a high AI disruption risk and agentic GRC has a moderate disruption risk. But autonomous IRM, which is the most advanced, most AI-heavy architecture imaginable, carries the lowest AI disruption risk. Why? It comes down to something the research calls the agent proliferation paradox. Let's unpack this.
Sam Jones:It is a brilliant observation of market dynamics if you think about it. Fundamentally, why do legacy software systems get disrupted by AI? They get disrupted because AI agents are built to replace the specific functions those systems provide. A workflow tool routes tickets; an AI agent can do the routing itself, so the workflow tool dies. But autonomous IRM isn't providing a discrete task function that an individual agent can replace. Autonomous IRM is the macro architecture that governs the agents.
Ori Wellington:This agent proliferation, the fact that every single department in your company is suddenly spinning up 50 new isolated AI agents to do their specific work, buying shadow IT tools, launching custom LLMs. That chaos is actually a massive demand driver for autonomous IRM.
Sam Jones:Not a threat to it, exactly. This is the compounding risk of isolated agents. Think about the SolarWinds hack or the CrowdStrike outage, where a single localized point of failure cascaded through an entire global supply chain because the systems were tightly coupled but structurally blind to each other. A nightmare. Now imagine that with AI agents. Every single time a business unit deploys a new agent to optimize their workflow, they are creating a new, unpredictable source of risk and complex new governance obligations across all four pillars. How does this new marketing AI agent impact our data privacy compliance? How does it affect our core operational processes? Is it creating a back-door vulnerability in our cloud assets? Autonomous IRM is the only architecture capable of governing that level of dynamic, cross-domain complexity, because it's fully stateful. The more localized agents an enterprise deploys, the more absolutely indispensable the overarching autonomous IRM architecture becomes to prevent a catastrophic cascade failure.
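The compounding arithmetic behind this argument is simple enough to sketch. In this hypothetical example (the registry, agent names, and pillar checklist are all illustrative), each agent reviewed only inside its home silo leaves ungoverned blind spots in the other three pillars, and those gaps multiply with the fleet.

```python
# Hypothetical sketch: every deployed agent creates governance obligations
# across all four pillars; silo-only reviews leave blind spots that compound.
PILLARS = ("GRC", "ERM", "ORM", "TRM")

registry = []  # a central agent inventory, the opposite of shadow IT

def register_agent(name, coverage):
    """coverage maps each pillar to a governance answer; missing pillars are gaps."""
    gaps = [p for p in PILLARS if p not in coverage]
    registry.append({"name": name, "coverage": coverage, "gaps": gaps})
    return gaps

# A marketing agent reviewed only for policy compliance:
gaps = register_agent("marketing-copy-agent", {"GRC": "privacy review done"})
print(gaps)  # ['ERM', 'ORM', 'TRM'] -- three ungoverned domains

# Fifty such agents compound: count the blind spots across the whole fleet.
for i in range(49):
    register_agent(f"unit-agent-{i}", {"GRC": "policy-mapped"})

total_gaps = sum(len(a["gaps"]) for a in registry)
print(total_gaps)  # 150 blind spots from 50 locally "compliant" agents
```

Each agent passes its local check, yet the fleet-level exposure grows linearly with every deployment, which is the demand driver Sam describes.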
Ori Wellington:I want to introduce an analogy to really ground this abstraction, because the concept of stateful architecture can get dense, and we need to visualize it. I want to build out the city grid metaphor. Think about an agentic GRC platform like a highly advanced self-driving car. It is an amazing piece of technology. It reads the road perfectly, optimizes its own route to save fuel, and uses LIDAR to avoid obstacles. It has agency within its specific, localized context of driving from point A to point B. But autonomous IRM is not the car. Autonomous IRM is the massive, city-wide intelligent grid governing the traffic lights, tracking the pedestrian flow patterns, mapping the emergency vehicle routes, and managing the power grid, all simultaneously, in real time. And let's take it further. In this enterprise city, the traffic lights are your identity and access management gates, the pedestrians are your end users doing their daily jobs, and the emergency vehicles are your critical patch deployments. If you are a city planner, or a CRO, and you just buy 10,000 self-driving cars without investing a single dollar in the intelligent city grid to govern them, you completely cap your efficiency and exponentially increase your risk. You're just asking for a disaster. You might get an individual compliance report done faster, but you haven't solved the macro traffic problem, and you certainly haven't prevented a multi-car pile-up at a blind intersection. You have isolated intelligence creating systemic chaos. So if this fully stateful city grid architecture is so vital, and the demand is only going to skyrocket with the explosion of agent proliferation, I have to ask the obvious question: why hasn't any commercial platform achieved it yet? Why can't a CRO just write a check and buy autonomous IRM today?
Sam Jones:Because the barrier to achieving it is fundamentally architectural, not technological. This is what buyers don't understand. We have the AI technology right now. We have incredibly powerful LLMs. We have the compute power. What we don't have is the foundational data models.
Ori Wellington:the actual database structure
Sam Jones:Exactly. Aligning four radically distinct reasoning contexts, policies, goals, processes, and assets, into a unified, governed decision architecture cannot be retrofitted. You cannot run a massive relational database optimized purely for policy compliance for 10 years, suddenly bolt an LLM onto the side of it, and expect that system to natively understand the complex, fluid reality of your operational supply chain processes. It is structurally impossible. You have to start over. Autonomous IRM must be designed from the ground up, from the very first line of code, for pervasive statefulness, and rebuilding from the ground up is something legacy vendors are terrified to do because it breaks their existing revenue models.
Ori Wellington:And that reality creates a treacherous, highly deceptive landscape for the buyer, which brings us to the final and perhaps most important piece of this deep dive: the actionable blueprint. How does an organization actually navigate from a stage of risk dysfunction, or mere coordinated maturity, toward true risk agency? Because the Wheelhouse research warns of a very specific, very common danger here, something they call the integration trap.
Sam Jones:The integration trap is the most common way sophisticated risk leaders waste vast amounts of capital. We have to be honest and acknowledge that the localized efficiency gains of agentic GRC are very real and highly tempting. If you deploy an AI agent to automate your audit fieldwork or your third-party vendor risk assessments, you will save hundreds of hours and hundreds of thousands of dollars.
Ori Wellington:The ROI is undeniable.
Sam Jones:The temptation for the buyer is to deploy an agentic GRC platform as a standalone point solution, a mere efficiency upgrade to replace your old, clunky workflow tool.
Ori Wellington:You buy it just to make the compliance team faster, without thinking about the rest of the business.
Sam Jones:But if you do that, you are simply replicating the legacy integration trap in a shiny new agentic form. You are optimizing one single silo. You are reinforcing the boundary by treating it as a point solution. You cap your organization at embedded maturity, and you leave yourself with absolutely zero architectural path to ever connect your policy data natively with your strategic goals, your operational processes, or your technology assets. You bought a faster, sleeker car, but you're still stuck on a bridge that doesn't reach the mainland.
Ori Wellington:So how do you avoid the trap? How do you buy the car while still building the grid? The Wheelhouse research lays out four hard, essential questions that every risk leader, every CRO, must force their team to answer before they sign a tech deal in this current market. Let's walk through them in detail. Question number one: is the IRM architecture defined before the technology is selected?
Sam Jones:This should be a non-negotiable rule of procurement. Organizations that select a vendor without defining their own target architecture first are actively planning to fail. If you just look at what the vendor is offering and mold your processes to fit their tool, you are letting the vendor's structural limitations dictate your enterprise risk strategy. You need a comprehensive design for all four bridges, a blueprint for how you will eventually connect policies to goals, processes, and assets, long before you buy a tool that only solves for policy.
Ori Wellington:Buying the technology before defining the architecture is like buying 10 tons of drywall before you've even drawn the blueprints for the house. You're just gonna end up with a pile of expensive materials that don't fit together. Question number two, does the platform's data model have a credible roadmap, backed path toward the other agentic domains?
Sam Jones:Notice the specific phrasing there: credible, roadmap-backed path. During a pitch, every single vendor will look you in the eye and tell you they can do it all eventually. It's on the roadmap, they say. Always on the roadmap. You cannot accept that. You need to demand to see the underlying data model. If their architecture functionally terminates at the GRC domain, if their database fundamentally cannot contextualize a strategic ERM goal without requiring custom, brittle, expensive API integrations built by third-party consultants, that is a fundamentally different, far riskier investment than buying into a platform that is natively building toward holistic statefulness.
Ori Wellington:Question three: is the transition away from workflow automation being treated as a profound architectural opportunity rather than just a risk to manage?
Sam Jones:This is a mindset shift. When your legacy workflow system inevitably reaches end of life, or when the vendor maintenance costs get extortionately high, do not just execute a lift and shift to an AI version of the exact same siloed workflow. The immense disruption pressure that generative AI is putting on static workflow tools is actually a massive gift. It is runway. It is the perfect, rare opportunity to tear down the silos and redesign your enterprise risk data model across all four integration points simultaneously. Don't waste the crisis just buying a faster version of the same broken process.
Ori Wellington:And finally, question four: is the organization measuring progress toward risk agency, or only measuring GRC efficiency?
Sam Jones:Metrics drive organizational behavior. It is that simple. If you only measure and report on how many manual hours you saved on control testing this quarter, you are only confirming progress on the policy bridge. You are celebrating a local maximum. To actually reach risk agency, you have to track complex integration metrics. How quickly did an obscure asset vulnerability, identified in AWS, automatically update our strategic, board-level risk posture? That is the metric that proves you are building the city grid.
Ori Wellington:That is the ultimate synthesis of this entire discussion. Efficiency metrics prove you're making exceptionally good time driving across the bridge. Integration metrics prove you are actually heading toward the right destination.
Sam Jones:If we summarize the core thesis of this massive market shift, there is absolutely no room for hedging or half measures. Workflow automation as an entire software category is highly vulnerable: static, flawed, and rapidly accelerating toward total obsolescence. It's done. Agentic GRC is a real, highly valuable, but fundamentally bounded step forward. It is a necessary bridge, but it is not the end state, not the destination. The only architecture that survives and thrives in an AI-dense future is fully stateful, autonomous IRM.
Ori Wellington:So the mandate from this research is blindingly clear. For organizations, you must architecturally design toward risk agency before you select a single piece of technology. And for the vendors listening, you have to build the natively stateful architecture now, before the market fully understands what it demands, or you will face terminal disruption the moment the market realizes your platform structurally cannot connect the dots.
Sam Jones:It is a brutal, adapt-or-die moment for the entire risk tech category.
Ori Wellington:Which leaves us with a rather chilling, highly personal thought to close on. As you sit there today evaluating your own enterprise risk posture and looking at the vendor contracts on your desk, ponder this structural question based on the reality we've just outlined: if the city grid architecture of autonomous IRM is fundamentally required to govern the compounding, cascading risks of thousands of AI agents, what happens to the personal legal liability of a CISO or a CRO who authorizes the deployment of hundreds of isolated AI agents today, knowing full well that their current risk architecture structurally cannot govern, monitor, or predict their cross-domain interactions?
Sam Jones:That is the multi-million-dollar question keeping smart executives awake at night.
Ori Wellington:Thank you for joining us for this deep dive into the structural dynamics of the risk tech market. We will continue exploring the reality shaping your decisions next time.