The Risk Wheelhouse

S5E1: When AI manages risk, who manages the AI?

Wheelhouse Advisors LLC Season 5 Episode 1

Autonomous IRM is moving from the lab into the core of enterprise risk, compliance, and security, and the stakes couldn't be higher. When a self-learning agent flags threats, scores claims, or polices policy violations, who is accountable, how do we intervene, and what proof can we show regulators and customers? We unpack the three frameworks shaping credible answers: ISO/IEC 42001 as a certifiable management system that embeds AI governance into everyday processes, the EU AI Act as hard law with high-risk tiers and eye-watering fines, and the NIST AI Risk Management Framework as a practical playbook for building trustworthy systems.

We start with the boardroom view: why ISO 42001 pays off in demonstrable maturity, how the EU AI Act elevates AI to enterprise risk with penalties up to seven percent of global turnover, and where NIST establishes a common language (fairness, transparency, security, and accountability) that unites legal, risk, and engineering. Then we translate strategy into execution. You’ll hear how to build an AI Management System on PDCA, run gap assessments for high‑risk use cases, design human-in/on‑the‑loop oversight, and stand up continuous monitoring, logging, and post‑market incident reporting. We also break down NIST’s Govern‑Map‑Measure‑Manage flow so teams can pilot on a few use cases, validate bias and robustness, and scale with confidence.

Finally, we tackle the accountability puzzle of autonomous agents. ISO demands end‑to‑end auditability and explainability across the lifecycle. The EU AI Act limits unchecked autonomy, mandates human oversight, and bans dangerous applications like social scoring and manipulative systems. NIST frames the agent as a socio‑technical system that needs named owners, security guardrails, bias evaluation, and contingency plans. Through scenarios (cyber threat detection in banking, fraud triage in insurance, and an autonomous IRM assistant) we show how to layer the frameworks: law sets the what, ISO and NIST deliver the how.

If you’re a leader or operator wrestling with when to certify, where to place the human, and how to future‑proof global deployments, this conversation gives you a clear path forward. Subscribe, share with your risk and engineering teams, and leave a review with the one governance action you’re committing to this quarter.



Visit www.therisktechjournal.com and www.rtj-bridge.com to learn more about the topics discussed in today's episode.

Subscribe at Apple Podcasts, Spotify, or Amazon Music. Contact us directly at info@wheelhouseadvisors.com or visit us at LinkedIn or X.com.

Our YouTube channel also delivers fast, executive-ready insights on Integrated Risk Management. Explore short explainers, IRM Navigator research highlights, RiskTech Journal analysis, and conversations from The Risk Wheelhouse Podcast. We cover the issues that matter most to modern risk leaders. Every video is designed to sharpen decision making and strengthen resilience in a digital-first world. Subscribe at youtube.com/@WheelhouseAdv.


Sam:

Welcome to the deep dive. You know, almost every major enterprise today is rapidly integrating artificial intelligence into its core operations. It's happening everywhere.

Ori:

It really is. But I think the biggest shift isn't just uh using AI, it's actually empowering it.

Sam:

Empowering it. How so?

Ori:

Well, we're seeing the emergence of what some call autonomous integrated risk management or autonomous IRM. These are like self-learning agents that are independently monitoring risks, flagging issues, and sometimes even making decisions.

Sam:

Right. And that idea where the system itself makes consequential decisions, that opens up a huge governance challenge, doesn't it?

Ori:

A massive one. If these AI agents are operating independently, then executives, boards, they fundamentally need some kind of structural guarantee.

Sam:

A guarantee that these systems are trustworthy and compliant.

Ori:

Exactly. Trustworthy, compliant, and really aligned with the company's strategic goals. That's the core problem.

Sam:

Okay. So providing that guarantee, but without drowning the C-suite in complex regulatory details, that's our mission for this deep dive. We've looked at sources covering the three really critical global AI governance frameworks.

Ori:

That's right. We're going to do a comparative analysis of ISO/IEC 42001, the EU AI Act, and the NIST AI Risk Management Framework, the RMF.

Sam:

And we want to cut straight to what matters for you listening. What does executive leadership absolutely need to prioritize?

Ori:

And then how can your operational teams actually implement these rules across all sorts of different systems?

Sam:

And crucially, how does each framework specifically tackle that really unique accountability challenge posed by an autonomous AI agent? That's key. Definitely. Okay, let's maybe start unpacking this with the strategic view, kind of the top-down perspective. If you're in the C-suite or perhaps on a board, these frameworks probably look quite different, at least on paper. They do. Let's begin with ISO/IEC 42001. Now, this is described as a voluntary standard that immediately sounds, well, softer than a mandatory law. So why should executives really invest serious time and, let's be honest, capital in adopting it if it's just voluntary?

Ori:

Yeah, that's absolutely the right question to ask. So ISO 42001, it's the AI management system standard or AIMS. And yes, it's voluntary, but the key thing is it is certifiable.

Sam:

Okay, certifiable.

Ori:

And the strategic payoff, I think, is really twofold. First, it actually requires top management involvement. They have to formally integrate AI governance right into the existing business processes. It's about building that necessary internal scaffolding.

Sam:

Yeah, I see. So it's less a compliance burden and maybe more an organizational enabler, helps structure things internally.

Ori:

Precisely. And the second payoff is all about demonstrability. Adopting it sends a clear signal to your stakeholders, customers, investors, regulators, too. It shows you're committed to responsible AI. And critically, if you're planning to comply with, say, the EU AI Act down the line.

Sam:

Which many global companies will have to.

Ori:

Then 42001 gives you a ready-made auditable management system. You can use that to map your processes and actually prove compliance. It significantly lowers that future friction.

Sam:

Okay, that makes sense. Now let's contrast that with the EU AI Act. You mentioned it, and it sounds anything but soft.

Ori:

No, definitely not.

Sam:

This is a binding law. It's coming soon, based on risk tiers. What's the absolute non-negotiable strategic demand from this legislation?

Ori:

Well, the board simply must treat AI risk as a core enterprise-level risk. Full stop. The strategic imperative here, honestly, feels very similar to the introduction of GDPR a few years back for privacy.

Sam:

GDPR, right.

Ori:

If you use or you provide what the Act defines as high-risk AI systems, and it's quite specific about what those are, you absolutely must ensure comprehensive compliance from day one. There's no grace period, really.

Sam:

Why the comparison to GDPR? Was it the potential impact, the fines?

Ori:

Exactly that. The financial penalties are structured specifically to command board-level attention. Noncompliance could trigger fines up to €35 million, or, and this is the kicker, 7% of global annual turnover, whichever figure is higher.

Sam:

7%? Wow. For a large global company, that's not trivial.

Ori:

It's absolutely not a rounding error. For many, it could be a catastrophic risk. So executives really have to establish proper AI oversight governance now, primarily to protect the firm's bottom line and of course its reputation.

Sam:

Okay, understood. And the third one, the NIST AI RMF 1.0. This comes from the U.S., also voluntary, like ISO, but you mentioned it's not certifiable. So what's its strategic value then?

Ori:

Yeah, NIST RMF plays a really crucial role, particularly in places, you know, without binding AI laws yet. It's rapidly becoming a kind of global benchmark for best practice. For boards, adopting the NIST framework demonstrates a proactive stance. It shows they're fulfilling their fiduciary responsibility around managing emerging risks.

Sam:

So it helps guide them.

Ori:

It guides leadership to ask the right, quite specific questions about their AI risk exposure.

Sam:

And what are those right questions generally centered around? What's the focus?

Ori:

Fundamentally, trustworthiness. The NIST framework is really built around ensuring AI systems are fair, secure, transparent, and accountable. Leadership can use it to develop a common language, a shared understanding around AI risk within the organization. It essentially provides a blueprint for self-regulation, helping evaluate potential harms even before specific rules exist.

Sam:

So it structures the conversation at the highest level.

Ori:

Precisely. It gets everyone on the same page about what good looks like.

Sam:

Okay, that lays out the strategic foundations well. Now let's shift gears a bit. Let's talk about where the rubber meets the road implementation readiness. What about the operational teams, the risk, compliance, legal, IT folks? How do they take these strategic mandates and turn them into like an actual operational checklist?

Ori:

Good question. For teams implementing ISO 42001, it's mostly about process integration. The standard gives you a structured AI management system, the AIMS, using that familiar Plan-Do-Check-Act model that allows for continuous improvement.

Sam:

PDCA, right? Many teams know that.

Ori:

Exactly. And the real efficiency boost comes if your teams are already following, say, ISO 27001 for security or maybe 27701 for privacy.

Sam:

Ah, so they're not starting completely from scratch. They can sort of layer in the AI-specific controls onto existing systems.

Ori:

That's the idea. Teams get 38 specific controls and a set of required AI policies. That provides a really clear, objective checklist for auditing and making improvements. And if the organization does decide to go for certification, that certificate acts as a powerful objective benchmark that's recognized globally. It proves you've done the work.
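To make that checklist idea concrete, here is a minimal sketch of how an operational team might track control status as part of the PDCA "Check" step. The control IDs and titles are placeholders, not the standard's actual Annex A wording.

```python
from collections import Counter

# Illustrative control register; the real ISO/IEC 42001 Annex A control IDs and wording differ.
controls = [
    {"id": "CTRL-01", "title": "AI policy defined and approved", "status": "implemented"},
    {"id": "CTRL-02", "title": "AI system impact assessment", "status": "in_progress"},
    {"id": "CTRL-03", "title": "Data quality management for AI", "status": "not_started"},
]

def audit_summary(register: list) -> Counter:
    """Count controls per status: a simple 'Check' view for internal audit."""
    return Counter(c["status"] for c in register)

print(audit_summary(controls))
```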

Sam:

Okay. Now preparing for the EU AI Act implementation sounds well significantly more demanding.

Ori:

Uh-huh.

Sam:

What are the immediate actions for operational teams who know they're facing mandatory compliance soon?

Ori:

Yeah, the workload is definitely heavy there. For any system deemed high-risk, teams have to implement a continuous AI risk management system. And that's across the entire AI lifecycle. Plus, they need a full quality management system wrapped around it. This means really robust data governance.

Sam:

What does that entail practically?

Ori:

Things like auditing your data sets for bias, making sure your data quality pipelines are sound, detailed record keeping, logging all system activity, and defining very rigid human oversight protocols.
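For the record-keeping and logging piece, here is a minimal sketch of what a per-decision audit record could look like. The schema, file name, and the log_decision helper are illustrative assumptions, not anything mandated by the Act.

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_id: str, model_version: str, inputs: dict,
                 output: dict, reviewer: str | None = None) -> dict:
    """Append one auditable record per AI decision to a JSON Lines log (hypothetical schema)."""
    record = {
        "record_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,            # or a hash/reference if the raw inputs are sensitive
        "output": output,
        "human_reviewer": reviewer,  # populated when a person intervenes or approves
    }
    with open("ai_decision_log.jsonl", "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: record a claims-triage decision so it can be reconstructed in an audit.
log_decision(
    model_id="claims-fraud-triage",
    model_version="2025.03.1",
    inputs={"claim_id": "C-1042", "features_hash": "sha256:placeholder"},
    output={"fraud_score": 0.87, "action": "route_to_human"},
)
```

The point is less the format than the habit: every consequential output carries a traceable record that a reviewer or auditor can later reconstruct.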

Sam:

Okay, so if you're on a risk or legal team listening right now, what should you be doing like today to prepare?

Ori:

Honestly, conduct rigorous gap assessments immediately, like right now. You absolutely must identify all the systems that are likely to meet the high-risk criteria defined in the act. Think about systems impacting credit scoring, hiring decisions, insurance eligibility.

Sam:

High stakes areas.

Ori:

Exactly. Once you've identified them, the teams need to start designing those human-in-the-loop or human-on-the-loop mechanisms and ensure there's comprehensive cross-functional training involving legal, IT, risk, everyone. Enforcement is getting closer, and this requires serious resource allocation starting now.
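As one possible starting point for that gap assessment, here is a minimal sketch of an AI system inventory with a rough high-risk screen. The use-case list and field names are illustrative; the Act's own classification of high-risk systems is far more detailed and needs legal review.

```python
from dataclasses import dataclass, field

# Illustrative shortlist of uses often treated as high-risk; the legal
# classification in the EU AI Act is more detailed than this.
LIKELY_HIGH_RISK_USES = {"credit_scoring", "hiring", "insurance_eligibility"}

@dataclass
class AISystemRecord:
    name: str
    use_case: str
    owner: str                       # named accountable human
    has_human_oversight: bool = False
    gaps: list = field(default_factory=list)

def assess(system: AISystemRecord) -> AISystemRecord:
    """Flag obvious gaps for systems that look high-risk; a starting point, not legal advice."""
    if system.use_case in LIKELY_HIGH_RISK_USES:
        if not system.has_human_oversight:
            system.gaps.append("design human-in-the-loop or human-on-the-loop controls")
        system.gaps.append("document data governance, logging, and bias testing")
    return system

inventory = [
    AISystemRecord("resume-screener", "hiring", owner="HR Risk Lead"),
    AISystemRecord("chat-summarizer", "internal_productivity", owner="IT"),
]
for s in map(assess, inventory):
    print(s.name, "->", s.gaps or "no gaps flagged")
```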

Sam:

Got it. And what about the operational side for teams adopting the NIST AI RMF? You mentioned a playbook, which sounds quite practical.

Ori:

It really is the most flexible of the three, I'd say. NIST is designed to be immediately usable and highly tailorable. You can adapt it to pretty much any organizational context, doesn't matter the industry or size.

Sam:

That adaptability is key.

Ori:

And it deliberately comes with practical resources like that playbook you mentioned, and also crosswalks showing how NIST maps to other standards, like ISO or even principles in the EU AI Act. This is hugely helpful for operational teams who need, you know, usable guidance, not just dense legislative text.

Sam:

But how easily can, say, my existing risk management teams who might be more used to traditional frameworks actually pick up and use NIST RMF? Do they need specialized AI engineering training?

Ori:

That's actually one of the strengths of its design. It's structured around four high-level, sort of process-oriented core functions: govern, map, measure, and manage. Teams usually start by mapping their AI systems to these functions. This allows for a staged rollout. You can pilot the framework on maybe one or two use cases first.

Sam:

Learn as you go.

Ori:

Exactly. Refine your internal processes for things like bias testing or model validation on a smaller scale before you try to apply it everywhere. It helps build that readiness organically.
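To illustrate what a staged pilot against the four functions might look like, here is a minimal readiness-tracking sketch. The function names come from the framework; the checklist items and numbers are entirely illustrative.

```python
# One pilot use case tracked against the four NIST AI RMF core functions.
pilot = {
    "use_case": "fraud-model validation pilot",
    "govern":  ["named risk owner assigned", "AI policy applies to this system"],
    "map":     ["intended purpose documented", "affected groups identified"],
    "measure": ["bias metrics computed per segment", "robustness tests run"],
    "manage":  ["monitoring thresholds set", "rollback plan documented"],
}

def readiness(plan: dict, completed: set) -> dict:
    """Share of checklist items completed per function, for a simple status view."""
    return {
        fn: sum(item in completed for item in items) / len(items)
        for fn, items in plan.items()
        if fn != "use_case"
    }

done = {"named risk owner assigned", "intended purpose documented"}
print(readiness(pilot, done))
```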

Sam:

Right, builds the muscle memory. Okay, that makes perfect sense. Now let's move to what feels like the real cutting edge here. Section three. Governing the autonomous AI agent. When systems are making these self-learning risk decisions, potentially without direct human input moment to moment, how do these frameworks possibly ensure accountability?

Ori:

Yeah, this is where the philosophies of the three frameworks really start to diverge, I think, quite significantly. ISO 42001 tends to address autonomy through its focus on continuous life cycle management. It mandates ongoing monitoring, auditing, and improvement processes, specifically because those self-learning models adapt over time.

Sam:

So it tracks the evolution.

Ori:

Right. And for an autonomous agent under ISO, it requires that basically every decision it makes must be explainable, fully auditable, and demonstrably free from prohibited biases. That's needed to satisfy both internal reviews and potentially external auditors.

Sam:

So ISO demands that even if the decision is autonomous, the process has to be reconstructible and justifiable after the fact.

Ori:

That's a good way to put it.

Sam:

Okay. What about the EU AI Act's stance on unchecked autonomy? It sounds like they might be warier.

Ori:

Much warier.

Sam:

Yeah.

Ori:

The EU AI Act puts a fundamental check on autonomy, especially for anything classified as high risk. For any high-stakes AI decisions in those categories, it explicitly mandates human oversight protocols.

Sam:

Human oversight meaning.

Ori:

Meaning either human in the loop, where a person has to actively approve the decision before it's executed, or human on the loop, where a human retains the ability to step in to intervene and stop or override the system's decision.
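The distinction between the two oversight patterns can be shown with a minimal sketch; the function names, fields, and risk threshold below are assumptions for illustration only, not language from the Act.

```python
from typing import Callable

def human_in_the_loop(decision: dict, approve: Callable[[dict], bool]) -> dict:
    """Nothing executes until a person explicitly approves the AI's proposed action."""
    decision["status"] = "executed" if approve(decision) else "rejected"
    return decision

def human_on_the_loop(decision: dict, alert: Callable[[dict], None],
                      risk_threshold: float = 0.8) -> dict:
    """The system acts on its own, but a person is alerted and can step in on high-risk calls."""
    decision["status"] = "executed"
    if decision.get("risk_score", 0.0) >= risk_threshold:
        alert(decision)  # the human can intervene, pause, or roll back
    return decision

# Example: an on-the-loop alert simply prints for a reviewer to pick up.
human_on_the_loop({"action": "block_account", "risk_score": 0.91},
                  alert=lambda d: print("review needed:", d))
```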

Sam:

So it sets very clear, non-negotiable boundaries on just how much autonomy is actually permitted in sensitive areas.

Ori:

Absolutely. And beyond just oversight, the act goes further. It outright prohibits certain autonomous use cases entirely, things deemed just too dangerous to society.

Sam:

Like what?

Ori:

Things like autonomous social scoring by governments, or systems designed specifically to manipulate human behavior in ways that could cause psychological or physical harm. Those are banned. And furthermore, because autonomous systems can, by their nature, evolve unpredictably, the act requires providers of high-risk AI to implement really rigorous post-market monitoring. They have to watch how it performs in the real world and report any serious incidents immediately.

Sam:

Constant vigilance required. Okay. And how does the NIST framework approach governing these autonomous risk agents?

Ori:

NIST definitely acknowledges the unique risks here, particularly unpredictability and the sort of black box problem where you don't always know why the AI did what it did. It strongly urges organizations to view the AI agent not just as a piece of software, but as a socio-technical system.

Sam:

Sociotechnical. Meaning it involves people and processes around the tech.

Ori:

Precisely. This holistic view means the organization has to implement strong governance guardrails around the agent. Things like assigning clear accountability to specific human owners, implementing mandatory security measures, setting up rigorous bias evaluation pipelines before deployment, and having strong contingency plans ready in case the agent fails or, you know, drifts unexpectedly off course. Rigorous testing is critical.
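One way to make those guardrails tangible is to record them as structured metadata for each agent. This is a minimal sketch with entirely illustrative field names; it is not a NIST-defined artifact.

```python
from dataclasses import dataclass, field

@dataclass
class AgentGuardrails:
    """Illustrative governance record for one autonomous agent (hypothetical schema)."""
    agent_name: str
    accountable_owner: str            # named human owner
    security_controls: list = field(default_factory=list)
    bias_evaluation_report: str = ""  # e.g. link to the pre-deployment evaluation
    contingency_plan: str = ""        # what happens if the agent drifts or fails
    kill_switch: bool = True          # can a human disable it immediately?

guardrails = AgentGuardrails(
    agent_name="autonomous-irm-assistant",
    accountable_owner="Chief Compliance Officer",
    security_controls=["least-privilege access", "input filtering"],
    bias_evaluation_report="reports/bias-eval-2025-q1.md",
    contingency_plan="fail over to manual triage queue",
)
print(guardrails)
```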

Sam:

Testing for fairness, robustness, explainability, all those trustworthiness elements again.

Ori:

Exactly, before you let it run autonomously.

Sam:

Okay, that frames the governance challenge really well. Let's try to bring it all together now by looking at maybe three quick operational scenarios. I think this will help illustrate how you might need a hybrid approach in practice.

Ori:

Sounds good. Let's start with, say, technology risk. Imagine a global bank using generative AI and maybe large language models to help flag sophisticated cyber threats in real time. This system obviously introduces significant risks if it's wrong.

Sam:

Okay, how would they govern that kind of system using these frameworks?

Ori:

Well, they'd likely use the NIST AI RMF as their foundational blueprint. That helps them establish rigorous controls for robustness and accuracy. They'd specifically need to test the AI with adversarial scenarios, trying to fool it to minimize both false positives, which could shut systems down unnecessarily, and false negatives, which miss real threats.
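As a simple illustration of that testing goal, here is a minimal sketch that scores false positive and false negative rates on a labeled adversarial test set; the data and names are made up.

```python
def error_rates(labels: list, predictions: list) -> dict:
    """False positive and false negative rates from labeled test outcomes (1 = real threat)."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 0)
    negatives = labels.count(0) or 1  # guard against dividing by zero on tiny samples
    positives = labels.count(1) or 1
    return {"false_positive_rate": fp / negatives,
            "false_negative_rate": fn / positives}

# Toy adversarial test set: crafted benign traffic (0) mixed with disguised attacks (1).
labels      = [0, 0, 0, 1, 1, 1, 1]
predictions = [0, 1, 0, 1, 0, 1, 1]
print(error_rates(labels, predictions))
```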

Sam:

So NIST for the technical rigor.

Ori:

Right. Then they'd probably layer ISO 42001 on top. That helps structure the continuous monitoring processes and ensures the internal feedback loops for improvement are correctly set up and audited.

Sam:

And the EU AI Act?

Ori:

Well, even if they aren't legally required to comply yet in a specific jurisdiction, their compliance team would likely ensure alignment with EU AI Act principles. That means meticulously documenting the AI's decision logic and making sure they maintain defined human intervention capabilities for any really critical security decisions. It's about future-proofing and best practice.

Sam:

Okay, that's a very practical layering approach. Let's take scenario two. Operational risk. Maybe an insurance company automating parts of its claims assessment process, perhaps using AI to check for potential fraud. Given the high impact on people's finances, that sounds like it would almost certainly be classified as high risk under the EU AI Act, right?

Ori:

Oh, definitely. That's a classic example. So here, the EU AI Act controls become non-negotiable and mandatory. They'd need that formal risk management system we talked about, continuous data quality checks, specifically looking for biases that could unfairly deny claims, and absolutely mandatory human oversight for any contested claim decisions. The AI can't have the final say if disputed.

Sam:

So the law dictates the core requirements. How do the others help?

Ori:

To actually make this operational, they'd probably lean on ISO 42001. It helps structure the required AI impact assessment and guides the implementation of specific bias mitigation measures within their workflow. And finally, the NIST RMF would provide valuable guidance to their technical teams on how to conduct validation testing, making sure the fraud flags are genuinely reliable and that the system is technically robust and secure.

Sam:

Interesting. So the binding law, the EU Act, sets the 'what' that must be done. And the voluntary standards, ISO and NIST, help define the auditable 'how' for the teams executing it.

Ori:

That's a great way to summarize it. And yes, it means managing potentially three overlapping sets of requirements simultaneously. That's just the reality for many global enterprises now.

Sam:

Okay. Final scenario. Let's look at the GRC space itself: governance, risk, and compliance. Imagine an autonomous IRM assistant, an AI agent that's autonomously scanning, say, employee communications to flag potential policy violations.

Ori:

This involves both operational risk like data handling and also HR compliance considerations. Very sensitive.

Sam:

So how do you govern that? Seems tricky.

Ori:

It requires extreme clarity on accountability. Here, the NIST AI RMF would likely form the bedrock foundation. It ensures clear accountability is assigned up front, meaning a specific compliance officer, a human, must be named as the ultimate owner and oversight mechanism.

Sam:

So a human is always responsible.

Ori:

Always. That human reviews any high-stakes alerts generated by the AI and maybe overrides decisions, ensuring the AI's outputs remain subject to human judgment, especially for disciplinary actions. Then the organization's ISO 42001 management system would require that the agent's underlying algorithms are fully auditable and transparent. This allows internal audit teams, for example, to confirm the system is only looking for what it's explicitly authorized to find and not overreaching.

Sam:

What about the regulatory angle, like the EU AI Act principles?

Ori:

Well, even for an internal tool like this, the company might choose to voluntarily follow key EU AI Act principles. Things like maintaining detailed documentation of how the system works and ensuring transparency to affected employees about the monitoring process itself.

Sam:

Why do that voluntarily?

Ori:

It provides essential ethical reassurance both to the board and to employees. And frankly, it also prepares the firm for potential future regulations expanding into these sensitive internal use cases. It's prudent.

Sam:

Okay, that makes a lot of sense. So synthesizing all this information from our sources, for the executive listener, what's the single most important takeaway message here?

Ori:

Yeah, I think the common, really critical theme across all these frameworks and scenarios is the absolute necessity of building governance structures from the top down, but combining that with genuine cross-functional readiness on the ground. You simply cannot delegate AI governance entirely to the tech department or assume it'll just happen. It needs deliberate structure.

Sam:

So leadership has to drive it.

Ori:

Absolutely. Executives must prioritize establishing some kind of foundational framework, whether that's going for ISO 42001 certification, building comprehensive compliance programs for the EU AI Act, or maybe adopting the NIST RMF as the core internal guideline. And crucially, this means defining crystal-clear roles (who on the board has oversight, who the designated risk owners are, which technical teams handle compliance) and mandating continuous monitoring and reporting.

Sam:

That clarity seems like the only way you can gain the confidence needed to actually harness the power of autonomous AI effectively and safely.

Ori:

I believe so.

Sam:

But, you know, it does raise a final really thought-provoking question for you, the listener, to perhaps mull over this week. If organizations are increasingly relying on these sophisticated AI agents for autonomous risk management tools that, as we've heard, are specifically required to be auditable and transparent, what new ethical frameworks and new accountability protocols do we need? Specifically, what do human compliance officers need to develop to truly manage a system that isn't static, but is constantly learning and potentially adjusting its own definitions of risk?

Ori:

That's the deep question, isn't it? The human role seems to shift. It's less about just assessing the risk directly and more about auditing the machine that audits the risk. And that machine is constantly moving the goalposts through its learning. It really becomes a challenge of managing the risk of the AI managing itself.

Sam:

A fascinating challenge indeed. A perfect closing thought for this deep dive. Thank you so much for guiding us through these essential and complex frameworks today.