The Risk Wheelhouse

S4E10: From Boardroom to Code Base - How the EU AI Act Reshapes Business Strategy

Wheelhouse Advisors LLC Season 4 Episode 10

Use Left/Right to seek, Home/End to jump to start or end. Hold shift to jump forward or backward.

0:00 | 26:33

Artificial intelligence stands at a crossroads of breathtaking innovation and urgent need for responsible guardrails. Every breakthrough brings questions about safety, fairness, and accountability that can no longer be afterthoughts. The European Union has responded with the AI Act – the world's first comprehensive legal framework for artificial intelligence – and its General Purpose AI Code of Practice has already secured commitments from tech giants like OpenAI, Google, Microsoft, and Anthropic.

We unpack what this means for anyone building, deploying, or investing in AI systems. The EU's risk-based approach categorizes AI into four tiers, from banned practices (social scoring, emotion detection in workplaces) to high-risk applications requiring strict oversight (recruitment, medical devices) to systems needing basic transparency. For general purpose AI models, key requirements include detailed documentation using specific templates, energy consumption reporting, comprehensive copyright compliance including respecting robots.txt opt-outs, and robust security measures.

The stakes couldn't be higher – violations can trigger fines up to €35 million or 7% of global annual turnover. This isn't just another compliance exercise; it represents a fundamental shift in how organizations must approach AI governance. We outline a practical roadmap for implementation, from urgent model inventories to establishing cross-functional AI risk councils and integrating these requirements into existing risk management frameworks aligned with standards like NIST AI RMF and ISO 42001.

Whether you're a CFO allocating budget for new compliance measures, a CRO assessing emerging risks, or a developer navigating technical requirements, this deep dive provides actionable insights to transform regulatory challenges into strategic advantages. The tension between rapid innovation and responsible deployment defines our AI future – understanding these new rules provides essential context for shaping that future wisely.



Visit www.therisktechjournal.com and www.rtj-bridge.com to learn more about the topics discussed in today's episode. 

Subscribe at Apple Podcasts, Spotify, or Amazon Music. Contact us directly at info@wheelhouseadvisors.com or visit us at LinkedIn or X.com

Our YouTube channel also delivers fast, executive-ready insights on Integrated Risk Management. Explore short explainers, IRM Navigator research highlights, RiskTech Journal analysis, and conversations from The Risk Wheelhouse Podcast. We cover the issues that matter most to modern risk leaders. Every video is designed to sharpen decision making and strengthen resilience in a digital-first world. Subscribe at youtube.com/@WheelhouseAdv.


Sam Jones

Welcome back to the Deep Dive. So imagine you're steering the ship at an organization, right, You're pushing the boundaries, with AI innovating like crazy. But then suddenly you're staring at this maze of new regulation.

Ori Wellington

Yeah, it's a real challenge.

Sam Jones

It really feels like every single week there's some amazing new AI breakthrough, but all that excitement it brings this really urgent need for guardrails, you know.

Ori Wellington

Absolutely. How do we manage all this power responsibly? That's the core question.

Sam Jones

Exactly, and today that's what we're diving deep into the EU AI Act and specifically its General Purpose AI Code of Practice, or GPA code for short.

Ori Wellington

The EU's really laid down a significant marker here and, yeah, it's definitely making waves already.

Sam Jones

It truly is. And look, this isn't just about us reading some dry legal text.

Ori Wellington

No, not at all.

Sam Jones

It's about unpacking what this code actually means for you, listening, whether you're a developer actually building these things, or a business leader deploying AI, or maybe an investor scouting the next big opportunity.

Ori Wellington

The implications really do stretch far and wide.

Sam Jones

So our mission today is pretty clear we want to take all the complex details of this EU AI code of practice based on some solid expert analysis we've looked at and just boil it down.

Ori Wellington

Get to the core insights.

Sam Jones

Right, give you the shortcut to being properly informed. We'll highlight some surprising bits, the really crucial operational stuff you need to know, but without hopefully getting everyone bogged down in jargon, makes sense. So if you're building, deploying, investing in or honestly even just curious about AI, understanding these new rules isn't just ticking a compliance box. It's really about having strategic foresight in this whole AI landscape.

Ori Wellington

Couldn't agree more. Shall we start laying the groundwork. Let's do it in this whole AI landscape. Couldn't agree more. Shall we start laying the groundwork.

Sam Jones

Let's do it so the code. It sits within the bigger EU AI Act. Where do we start with that?

Understanding the EU AI Act Framework

Ori Wellington

Okay, so the EU AI Act itself. It officially entered into force August 1st 2024. The goal is full applicability by August 2nd 2026.

Sam Jones

Right two years.

Ori Wellington

But and this is key it's not just one big deadline way off in the future. It's actually a very carefully phased rollout. There are critical milestones, some of which are already passed or coming up very quickly.

Sam Jones

OK, so these aren't just dates to circle on a calendar. They're real deadlines with real consequences.

Ori Wellington

Certain AI uses are just outright prohibited and some basic AI literacy duties kicked in. Okay, then, from August 2nd 2025, which, as you say, is practically upon us that's when obligations for general purpose AI providers really start, things like new transparency rules, copyright requirements. They become active then.

Sam Jones

Got it and then looking further ahead.

Ori Wellington

Fast forward to August 2nd 2026,. That's when the rest of the act becomes fully applicable. Got it, and A grace period Makes sense Now you mentioned, the EU's approach is risk-based. Yes, and that's really interesting. Instead of a sort of one size fits all rule, they've tried to tailor the regulations based on the actual potential harm an AI system could cause. It's tiered.

Sam Jones

Tiered how? What's the top tier?

Ori Wellington

Top tier is unacceptable risk. These are AI practices just flat out banned. Think things like government social scoring or untargeted scraping of facial images to build databases, or using AI to detect emotions in workplaces or schools. Basically, stuff deemed too invasive or dangerous.

Sam Jones

Okay, so those are just off the table, completely Correct. So what about AI that isn't banned outright but still carries, you know, significant risk?

Ori Wellington

That's the high risk category and these systems face really strict requirements. We're talking high data quality standards, very thorough documentation, traceability, mandatory human oversight, robustness the works.

Sam Jones

And what kind of AI falls into that high-risk bucket?

Ori Wellington

Think AI used in critical infrastructure, energy grids, transport or medical devices, even things like recruitment software or credit scoring systems stuff that could seriously impact someone's safety, livelihood or fundamental rights.

Sam Jones

Right Makes sense. Okay, so unacceptable high risk. What's next?

Ori Wellington

Below high risk you have limited risk systems.

Sam Jones

Yeah.

Ori Wellington

Here the main thing is transparency.

Sam Jones

Transparency meaning.

Ori Wellington

Meaning you need to make it clear when someone's interacting with AI. So a chatbot has to say it's a chatbot. Ai generated content like deepfakes needs to be labeled. It's about ensuring people aren't misled. Gotcha.

Sam Jones

And the lowest tier.

Ori Wellington

That's minimal risk, and these systems are, for the most part, unregulated. The idea is to let innovation happen where the risks are really negligible.

GPAI Code of Practice Explained

Sam Jones

Okay, that tiered approach seems logical, so let's zoom in now on the code of practice itself, this voluntary document right Published mid-2025.

Ori Wellington

Exactly Published July 10th 2025. It's a voluntary, general purpose AI code of practice put together by 13 independent experts after a lot of stakeholder discussion.

Sam Jones

And what's its main job, this voluntary code?

Ori Wellington

Well, its core purpose is to give GPAI model providers a practical way to show they're complying with certain key parts of the AI Act, specifically Articles 53 and 55. It acts as a sort of bridge until the official harmonized EU standards are fully developed.

Sam Jones

So signing up helps companies, how it basically streamlines things. If you follow the code, your interactions with the central AI office should be smoother. It reduces the administrative headache compared to, say, having to submit completely custom documentation every time to prove you're compliant.

Ori Wellington

Okay, and this sounds important. It's not a get-out-of-jail-free card, right? It's not a legal safe harbor.

Sam Jones

Absolutely crucial point. It is not a legal safe harbor. It doesn't automatically mean you are compliant or give you immunity. It's more like a recognized, structured method to demonstrate your compliance efforts. A guide, not a shield.

Ori Wellington

Right, a way to show you're playing by the expected rules.

Sam Jones

Precisely, and to help with that, the commission also put out some guidelines to clarify what counts as GPAI and, importantly, a mandatory template for summarizing your training data publicly.

Ori Wellington

A mandatory template. Okay, that sounds pretty concrete.

Sam Jones

It is. It's a big step towards more transparency about what's actually gone into training these models.

Ori Wellington

Interesting, and who's actually signed up to this code so far?

Sam Jones

Any big names.

Ori Wellington

Oh yeah, quite a few heavy hitters OpenAI, google, microsoft, mistral, servicenow, anthropic IBM, amazon, cohere they're all signatories.

Sam Jones

Hmm, anyone holding out?

Ori Wellington

Well, interestingly, XAI only signed the chapter on safety and security.

Sam Jones

Oh, so what does that mean for them?

Ori Wellington

It means they'll need to prove their compliance on transparency and copyright using other methods which might be, you know, more work or less straightforward than just following the code structure for those parts.

Sam Jones

That decision kind of hints at some underlying debate, doesn't it? Is everyone happy with this code?

Ori Wellington

Not universally. No, there has been some pushback. Groups like CCIA Europe for example, have raised concerns about the burden, the timing, questioning if it's all proportionate, especially parts of the safety chapter. They worry it might stifle innovation.

Sam Jones

Yeah, I can see that tension. Is it too much red tape or is it just the necessary price for building trust and safety in AI?

Ori Wellington

That's the million dollar question, isn't it? The EU perspective is clear these guardrails are vital for public trust and preventing harm, which ultimately helps AI adoption. But the industry concern about balancing compliance speed with innovation speed is also very real.

Sam Jones

So the code is trying to sort of thread that needle, provide a path.

Ori Wellington

That's the idea, a clear the voluntary pathway forward in the interim.

Key Requirements: Transparency & Copyright

Sam Jones

Okay, so let's define terms. What exactly counts as general purpose AI or GPAI under this whole thing?

Ori Wellington

Good question. Basically, it's an AI model that shows significant generality, meaning it's pretty versatile, can be plugged into lots of different downstream systems and adapted for various tasks.

Sam Jones

Is there a technical threshold?

Ori Wellington

There's a practical indicator. The commission suggests, yeah, training compute. If a model took more than 1023 FLOPs to train, that's a massive amount of computation combined with having certain advanced capabilities like complex language understanding or generation, it's likely considered GPAI 10 to the 23.

Sam Jones

Wow, Okay. And then there's an even higher level GPI with systemic risk.

Ori Wellington

That's right. This is for the real frontier models. A model is presumed to have systemic risk if its training compute hits 125 FLOPs, so a hundred times more compute than the GPI indicator, or if the commission designates it because it has a similarly huge impact.

Sam Jones

And if you build one of those?

Ori Wellington

Then you have a strict notification duty. You must tell the commission immediately Well, within two weeks anyway when you hit that threshold, or even when you anticipate hitting it. It's a mandatory heads up.

Sam Jones

No wiggle room there. What about open source AI? Is there any kind of break for them?

Ori Wellington

There is an open source exception. Yes, it applies to some of the technical documentation duties in Article 53. If you release your model under a free and open source license and you make the weights, architecture and usage info public, you might be exempt from those specific documentation requirements.

Sam Jones

Ah, but there's a catch a bit.

Ori Wellington

There's a big catch too. Actually, this exception does not apply if the model has systemic risk and, crucially, it does not get you off the hook for copyright compliance or potential product liability. Open source isn't a free pass on everything.

Sam Jones

Got it. So pulling this together, this voluntary kind of seems like a useful roadmap for navigating the act helps reduce some uncertainty, maybe.

Ori Wellington

Exactly. It provides a recognized way to approach compliance, which is valuable but, like we stress, understanding its limits. That it's not a legal shield is absolutely key. It's about building a legal shield is absolutely key. It's about building a defensible, transparent approach.

Sam Jones

Okay, let's get down to the brass tacks. Then the code moves from principles to actual practical actions, doesn't it Like documenting energy use, handling, copyright? Let's break this down, starting with transparency.

Ori Wellington

Right On transparency. Providers need to be pretty meticulous. They have to use this specific model documentation form.

Sam Jones

And what goes in that form.

Ori Wellington

A lot Detailed specs of the model, characteristics of the training data used, what the model is intended for and, importantly, what is not designed for the out-of-scope uses, plus the compute power consumed during training and this is quite notable the energy consumption.

Sam Jones

Energy consumption. That's interesting. Why mandate that specifically?

Ori Wellington

Well, it signals a broader focus beyond just function. It forces consideration of the environmental footprint and you know, looking ahead, it could potentially feed into future carbon pricing or green AI incentives. It makes sustainability part of the performance picture.

Sam Jones

Hmm, makes sense, and this documentation isn't a one-off.

Ori Wellington

No, it has to be kept up to date. Yeah, and you need to be ready to share it with downstream developers who integrate your model and with the AI office, if they ask. Though, there are provisions to protect legitimate trade secrets, of course.

Sam Jones

What if you don't know the exact energy figure? Maybe for an older model?

Ori Wellington

Estimations are allowed in that case, but you have to be transparent about it. You need to disclose the method you use for the estimate and point out any gaps in your data. The key word is still transparency.

Sam Jones

And supporting downstream users.

Ori Wellington

Yeah, that's important too. Providers need to give integrators good info on the model's capabilities, its limitations, how to integrate it safely. And there's a clear point about fine-tuning If someone downstream significantly modifies your model, they effectively become the provider for that modified version, inheriting the responsibilities.

Sam Jones

Right Passing the baton responsibly. Okay, that covers transparency. Now let's tackle the big one copyright compliance Always a thorny issue with AI.

Ori Wellington

Indeed. The code requires providers to have a solid internal copyright policy. This needs to cover how they lawfully get training data, how they respect opt-outs, how they build safeguards into the model's outputs to try and prevent infringement, and how they handle complaints.

Sam Jones

And respecting opt-outs. How specific does it get?

Ori Wellington

Very specific. It explicitly mentions respecting machine-readable opt-outs like the standard robotstxt file websites use. If a site says don't crawl for AI training, you have to honor that when gathering web data. That's a big operational change for many.

Sam Jones

Yeah, that sounds like it requires significant technical adjustments. What about summarizing the training data you mentioned? A mandatory template?

Ori Wellington

Yes, the mandatory template from the commission. Providers must publish a summary of the content used for training. It needs to be detailed enough to actually help rights holders understand what might be in there. Think identifying major data sets used, listing top domain names that were scraped, that kind of thing.

Sam Jones

Oh for models trained on. You know the vastness of the Internet over years. Pulling that summary together sounds incredibly challenging.

Ori Wellington

It absolutely is, especially for older models where record keeping might not have been as rigorous, but it represents a fundamental shift. We're moving away from rights holders having to guess and sue towards providers having to proactively disclose and justify their data sources.

Sam Jones

Yeah.

Ori Wellington

It really empowers rights holders.

Sam Jones

And complaint handling.

Ori Wellington

Also required. You need designated contact points and clear procedures so rights holders can actually reach out, file a complaint about potential infringement and get a response.

Sam Jones

Okay, transparency, copyright. What's the third pillar? Safety and security right, Especially for those high compute systemic risk models.

Ori Wellington

Exactly. This chapter really zeroes in on those most powerful, potentially riskiest models. The obligations here are quite demanding. Such as. Providers need to conduct thorough model evaluations, including adversarial testing, often called red teaming basically trying to break the model or find harmful capabilities before release. Red teaming basically trying to break the model or find harmful capabilities before release. They need ongoing processes to assess and mitigate systemic risks post-deployment.

Sam Jones

And reporting issues.

Ori Wellington

Yes, Mandatory tracking and prompt reporting of any serious incidents to the AI office and relevant national authorities, Plus ensuring state-of-the-art cybersecurity, not just for the model itself but for the whole infrastructure it runs on. And again, those notification duties kick in if you hit the 1025 FLOP's compute threshold.

Sam Jones

Okay, so taking all this in, what's the real? So what for businesses? We're talking major changes, right? It sounds like AI governance is really moving out of the tech basement and into the boardroom.

Ori Wellington

That's the absolute bottom line. This fundamentally shifts AI governance from being just an IT or maybe a legal problem to being a core line of business responsibility. C-suite needs to be involved.

Sam Jones

And for, say, the chief risk officer or the CFO. What are the concrete operational impacts?

Ori Wellington

Huge impacts. Think about disclosure and attestation. You now need repeatable evidence for things like training, data, origins, compute, usage, energy consumption. So the CFO needs to find budget to actually build the systems to measure and assure this data, potentially aligning it with existing ESG reporting or internal controls, and they need to be ready for the AI office asking tough questions.

Sam Jones

So it's not just reporting, it's funding the measurement infrastructure itself.

Ori Wellington

Precisely, and copyright compliance. That becomes a real cost center in the controllership function. You need budget for crawler controls, for that robotstxt compliance, potentially for licensing data sources, for filtering out illegal content, for running those complaint workflows.

Sam Jones

And pushing it down the supply chain.

Ori Wellington

Yes, Contracts with suppliers. Data providers, cloud providers need to be updated to flow these responsibilities down. You need assurance they're compliant too.

Sam Jones

And for companies working with those really big systemic risk models. What's the budget hit there?

Ori Wellington

They need to brace for significant spending on independent evaluations, those intensive red teaming exercises, setting up serious incident response teams and playbooks and seriously hardening the cybersecurity around these critical AI assets. Coordinating those compute threshold notifications with cloud providers also needs careful planning and process.

Sam Jones

Wow, Okay. And if companies? Well, if they get it wrong, the penalties we talked about earlier are Truly serious.

Ori Wellington

We're talking maximum fines up to 35 million euros or 7% of global annual turnover, whichever is higher, for using prohibited AI or breaching certain other core obligations. That's GDPR level stuff. It could be existential for some businesses 7% of global turnover. Yeah, and other major breaches like violating GPA obligations can hit 15 million euros or 3%. Violating GPA obligations can hit 15 million euros or 3%. Even just providing incorrect information to authorities could cost 7.5 million euros or 1%. These fines have real teeth.

Sam Jones

And just to recap the timing on those fines, the GPII rules themselves start August 2025, but the commission's power to actually levy fines for GPII breaches starts August 2026, right.

Ori Wellington

Correct August 2nd 2026 for the fines related to GPI obligations and remember those legacy models have until August 2nd 2027 to comply before facing fines.

Sam Jones

So, given how high the stakes are financially and reputationally, businesses really can't just see this code as another compliance checkbox, can they?

Ori Wellington

Absolutely not. It's far beyond that. It demands a fundamental rethinking of operational strategy, especially for risk and finance leaders. It's pushing toward a much more proactive, integrated way of managing AI risk across the entire organization.

Sam Jones

Right, that integrated approach. Let's talk about how to actually achieve that, because the code, the guidelines, that mandatory template, they're not just ideas anymore, are they? They're about creating auditable proof.

Implementation Roadmap & Strategic Recommendations

Ori Wellington

Exactly. It shifts the whole game from talking about AI principles to demonstrating auditable processes and artifacts. It makes providers accountable for managing risk throughout the AI lifecycle. You have to show your work.

Sam Jones

And you mentioned integrated risk management. Irm is the way to do this. How does that help structure things?

Ori Wellington

Yeah, irm really provides the practical framework, the operating backbone to weave all these new duties into how a company already manages risk. It connects the dots between enterprise risk ERM operational risk, orm, technology risk, trm and governance risk and compliance GRC.

Sam Jones

Does it align with other standards people might already be using? Yes, perfectly.

Ori Wellington

It aligns very well with established frameworks like the NIST AI Risk Management Framework, which is widely respected globally, and also ISO 42001, the international standard specifically for AI management systems. So you're building on recognized best practices.

Sam Jones

Can you give us a concrete example? How would IRM handle, say that model documentation form requirement?

Ori Wellington

Sure, so within an IRM framework. That model documentation form requirement? Sure, so within an IRM framework, that model documentation form isn't just some standalone document floating around. It gets tagged against core IRM objectives like assurance and compliance. Okay, then it plugs into specific risk functions. It becomes an input for technology risk management, helping manage the AI model as a documented asset. It informs GRC processes, ensuring policies around model development and use are being followed. The end result is you build the central, connected register of your models, their compute logs, their energy use, making everything much easier to track, audit and manage.

Sam Jones

That makes a lot of sense, connecting it into existing structures. So for the CFOs, the CROs listening right now, feeling maybe a bit overwhelmed, what's a practical starting roadmap? What should they be doing like now?

Ori Wellington

Okay, let's break it down In the first 30 days or so, urgently start inventorying your AI models which ones touch the EU market, identify potential GP AI models and flag any candidates for systemic risk. At the same time, start deploying ways to measure compute and energy use, or at least document your estimation methods clearly, as the code allows, and, if you are working on frontier models, get that systemic risk documentation, evaluation, planning and notification process sketched out now.

Sam Jones

Okay, that's a busy first month. What about the next three months? The first quarter?

Ori Wellington

In the next 90 days, the focus shifts to governance and policy. Stand up a cross-functional AI risk council. Get finance tech, risk, legal product ops leaders in the room together. Give them ownership of overseeing the model documentation form process and that public training data summary. Critically publish your internal copyright policy. Get your web crawlers configured to respect those machine-readable opt-outs like robotstxt, and set up your intake mechanism for rights holder complaints.

Sam Jones

Right Getting the core policies and processes in place and looking further out. Six months a year.

Ori Wellington

Over the next six to 12 months. It's about embedding this. Integrate the model documentation and the training data summaries into your regular internal audit cycles and your board reporting packs. Start formally aligning your internal IRM controls with the NIST AI RMF structure and maybe begin looking at ISO 42001, readiness to show maturity and beyond the first year, etc. To explicitly flow down these transparency and copyright requirements. Make sure your partners are aligned.

Sam Jones

That's a really clear, step-by-step approach, excellent. So if we had to boil this entire deep dive down to just a few key takeaways, the absolute must-do actions, what would they be?

Ori Wellington

Okay, four key recommendations. One fund the basics now. Seriously Allocate budget for model inventories, going those model documentation forms filled out, setting up compute and energy metering and establishing that public summary process for training data. Don't wait.

Sam Jones

Okay, number two.

Ori Wellington

Two institutionalize copyright compliance. Make respecting machine-readable opt-out standard practice. Set up clear channels for rights holders to contact you. Implement output filtering. Make sure someone is clearly accountable for this across the organization.

Sam Jones

Got it Third.

Ori Wellington

Three plan for systemic risk, even if you think it doesn't apply to you today. Design your model evaluation processes, your adversarial testing plans, your incident response runbooks now so they're ready to scale if your models or models you rely on from suppliers cross those compute thresholds later. Be prepared Makes sense.

Sam Jones

And the final recommendation.

Final Thoughts & Global Implications

Ori Wellington

Four adopt IRM as your operating backbone. Don't treat this as a separate silo. Map these new code requirements directly onto your existing integrated risk management objectives Performance, resilience, assurance, compliance. Integrate them properly across ERM, orm, trm and GRC. Make it part of how you already manage risk.

Sam Jones

Fantastic. So there we have it. We've really journeyed through the weeds of the EU AI Act and its code of practice today, from the rollout phases and risk levels right down to the nitty gritty of transparency, copyright and safety rules.

Ori Wellington

Yeah, and the key point is this isn't just more regulation for the sake of it. It's driving a fundamental shift. It demands that integrated risk management approach and, yes, some significant operational and financial adjustments.

Sam Jones

But getting ahead of the curve, adopting that proactive IRM approach early, that could actually turn these compliance hurdles into a real strategic advantage, couldn't it? Building trust, making operations more efficient.

Ori Wellington

I definitely think so, and you know, this whole EU effort really throws a spotlight on a massive global tension, doesn't it? How do we keep AI innovation moving at this incredible pace while also making sure it's safe, accountable and stays within ethical lines?

Sam Jones

That's the core challenge.

Ori Wellington

The EU has drawn its line in the sand here, and you can bet the ripples from this will influence AI development everywhere. What does that mean for companies trying to navigate different rules in different countries? How will it shape the design of AI systems themselves going forward? Lots to think about.

Sam Jones

Absolutely Well. Hopefully, this deep dive has given you, our listeners, the knowledge you need to start having those vital conversations inside your own organizations. We really encourage you to think about your own AI practices and how these insights might shape your strategy.

Ori Wellington

Definitely food for thought.

Sam Jones

Thanks so much for joining us for the deep dive today. We'll catch you next time for another essential exploration.