The Risk Wheelhouse
The Risk Wheelhouse is designed to explore how RiskTech is transforming the way companies approach risk management today and into the future. The podcast aims to provide listeners with valuable insights into integrated risk management (IRM) practices and emerging technologies. Each episode will feature a "Deep Dive" into specific topics or research reports developed by Wheelhouse Advisors, helping listeners navigate the complexities of the modern risk landscape.
S7E1: The Delve Collapse And The New Rules Of Enterprise Trust
A compliance certificate is supposed to be like a bridge inspection: real materials, real tests, real signatures, and real accountability. Then AI arrived, and the market started rewarding something else entirely, speed. The result is what we call a trust mirage, where “audit-ready” output can look convincing even when the underlying control evidence is shaky or absent.
We unpack the rise and alleged collapse of Delve, a once high-flying agentic GRC startup that promised SOC 2 compliance "in days, not months" and reportedly reached a $300 million valuation. The wild part is how the story breaks: not with a regulator raid, but with an anonymous Substack writer, a publicly accessible Google spreadsheet, and uncomfortable questions about whether AI-generated reports crossed the line from automation into fabrication. Along the way, we clarify the technical difference between deterministic verification and probabilistic LLM text generation, plus why auditor independence is the core legal requirement that software must protect at the code level.
From there we get practical. We challenge the standard venture capital and enterprise procurement playbooks that lean on SaaS metrics like NDR, and we replace hand-wavy “AI compliance” claims with concrete architectural checks: role-based access controls, read-only evidence collection, cryptographic hashing, and hard separation between agents and human judgment. We also share two frameworks to navigate the new landscape: the IRM navigator curve for sequencing risk maturity, and the ADRI index for spotting vendors that maximize compliance artifacts while minimizing integrity.
If you buy, fund, or build in compliance, GRC, risk management, SOC 2, ISO 27001, HIPAA, or GDPR, this conversation is your warning label and your field guide. Subscribe, share this with your security and finance leaders, and leave a review. What question will you start asking every “agentic” vendor first?
Visit www.therisktechjournal.com and www.rtj-bridge.com to learn more about the topics discussed in today's episode.
Subscribe at Apple Podcasts, Spotify, or Amazon Music. Contact us directly at info@wheelhouseadvisors.com or visit us at LinkedIn or X.com.
Our YouTube channel also delivers fast, executive-ready insights on Integrated Risk Management. Explore short explainers, IRM Navigator research highlights, RiskTech Journal analysis, and conversations from The Risk Wheelhouse Podcast. We cover the issues that matter most to modern risk leaders. Every video is designed to sharpen decision making and strengthen resilience in a digital-first world. Subscribe at youtube.com/@WheelhouseAdv.
Why Compliance Requires Reality
Sam JonesYou know, um usually when we talk about enterprise trust, there's like this underlying expectation of physical reality.
Ori WellingtonRight, like tangible proof.
Sam JonesYeah, exactly. It's like building a suspension bridge. If you drive over it, you uh you expect that someone somewhere actually poured the concrete.
Ori WellingtonYou'd certainly hope so.
Sam JonesRight. You expect that a structural engineer actually, you know, stressed the steel cables, checked the math, and physically signed their name on a document verifying that the bridge is going to hold the weight of your car.
Ori WellingtonBecause it's a binary state of structural integrity, right? I mean, either the math works and the materials are sound or they aren't.
Sam JonesExactly. And for decades, the world of enterprise software compliance, it basically operated on a similar assumption. You needed physical proof.
Ori WellingtonYou couldn't just fake it.
Sam JonesNo, I mean you don't just take a photograph of a bridge, run it through some like AI image generator and print out a certificate that says it's safe to cross. You need the receipts.
Ori WellingtonRight. But the the thing is, if you step into the compliance and risk technology space over the last, say, couple of years, that expectation of reality has been seriously tested.
Sam JonesTested is putting it mildly. We're looking at a landscape right now that's dealing with, honestly, a massive structural crisis.
Ori WellingtonIt really is. The transition from manual verification to artificial intelligence has created what we might call um a trust mirage.
Sam JonesA trust mirage, I like that.
Ori WellingtonYeah, in certain sectors of the market at least. The sheer speed of the technology has completely outpaced the verification methods we rely on.
Sam JonesWell, welcome to the deep dive, everyone. If you're an investor managing venture capital, if you're an enterprise software buyer trying to, you know, protect your company's data, or if you're just someone who's fascinated by the unintended consequences of AI, this conversation is custom-tailored for you.
Ori WellingtonAbsolutely.
Sam JonesToday we are going on a very specific journey. Our mission is to dissect the anatomy of a massive structural failure in the tech world. We're going to uncover exactly how a $300 million valuation in the compliance tech space completely collapsed.
Ori WellingtonAnd you know, we really need to look at this not just as an isolated incident, because it's not. Right. It's a symptom of a much larger sequencing problem in the tech industry as a whole.
Sam JonesSo we're going to map this journey out for you step by step. We'll start with the market hype, the blinding hype surrounding what the industry calls agentic GRC.
Ori WellingtonThe gold rush phase.
Sam JonesExactly. From there, we'll progress into the complete architectural failure of a darling startup named Delve. And finally, we'll arrive at a new, rigorous and frankly, completely necessary way to evaluate AI risk platforms moving forward.
Ori WellingtonSo you don't get caught buying a liability instead of an asset.
Sam JonesExactly. Because that's happening a lot right now. And to do this, we're drawing heavily on the incredibly insightful piece you just published in the RiskTech Journal titled "The Compliance Solution: Agentic Hype and the Integrity Gap."
Ori WellingtonYeah, I'm really glad we can unpack the mechanics of it today. Because like I said in the piece, the implications go far beyond just one software vendor.
Sam JonesOh, for sure. As advisors at Wheelhouse Advisors, we see this constantly. And we're also going to synthesize some fascinating, corroborating details today from TechCrunch, Inc. magazine, WebWire, and the StartupHub.ai blog to give you the full 360-degree view.
Ori WellingtonIt's a wild story.
Sam JonesIt really is. But here's where it gets really interesting. How did a highly funded compliance platform, I mean, a company backed by Y Combinator and heavy hitters like Insight Partners, how did they get exposed?
Ori WellingtonThat's the crazy part.
Sam JonesRight. It wasn't the SEC. It wasn't some team of highly paid forensic regulators deploying advanced discovery tools. No. It was an anonymous Substack writer and a publicly accessible Google spreadsheet.
Ori WellingtonWhich just highlights a glaring vulnerability in the whole ecosystem. But what's fascinating here is that this isn't just a story about a single company cutting corners. If we only look at Delve, we miss the systemic pressures that actually created Delve in the first place.
Sam JonesSo it's a symptom, not the disease.
Ori WellingtonExactly. This is a case study of what happens when the intense market pressure to claim agentic AI capabilities completely outpaces the structural integrity of the software itself.
The Agentic GRC Gold Rush
Sam JonesOkay, so let's unpack this. Because to understand how we got to a publicly accessible Google Sheet bringing down a tech darling, we have to understand the environment that birthed this company.
Ori WellingtonThe agentic GRC gold rush.
Sam JonesRight. So GRC stands for governance, risk, and compliance. It's basically the plumbing of enterprise trust.
Ori WellingtonIf you've ever tried to sell software to a hospital or a bank, you know you need GRC.
Sam JonesOh, yeah. You can't even get in the door without it. And according to your article, this entire market has a massive pressure problem right now.
Ori WellingtonThe pressure is immense, and it's acting on every single participant in the ecosystem simultaneously.
Sam JonesHow so?
Ori WellingtonWell, you have the vendors who are building these compliance platforms, right?
Sam JonesYeah.
Ori WellingtonYou have the enterprises who are buying them to speed up their sales cycles. And then you have the venture capitalists who are funding them.
Sam JonesSo everyone's involved.
Ori WellingtonEveryone. And they're all operating under the exact same ticking clock. The mandate from the market is just so clear, move fast, claim agentic capabilities, and show AI traction before the window closes.
Sam JonesWhich naturally produces shortcuts.
Ori WellingtonInevitably.
Delve’s Promise Of Instant SOC 2
Sam JonesLet's talk about Delve, because they were really the poster child for this rush. They launched back in 2023. And their pitch was basically the holy grail for any company that has ever had to suffer through a security audit.
Ori WellingtonOh, it was a beautiful pitch.
Sam JonesIt was. They pitched themselves as an AI-driven, agentic GRC platform. They promised that their AI agents would autonomously connect to your company's infrastructure. Like they'd pull configurations from your AWS instances.
Ori WellingtonRight. They'd scan your code for vulnerabilities.
Sam JonesYeah. And they'd even auto-fill those massive 300-question vendor security spreadsheets that enterprise procurement teams always send over. I mean, nobody wants to fill those out.
Ori WellingtonNobody. The promise was continuous, autonomous operation. The agents were supposed to just, you know, work in the background.
Sam JonesLike magic.
Ori WellingtonExactly. Monitoring the client environments, updating evidence in real time, and alerting engineering teams to security gaps without any human intervention at all.
Sam JonesAnd the grand finale of their pitch was that they'd organize all of this evidence into perfectly wrapped, audit-ready packages for major frameworks. We're talking SOC 2, HIPAA, ISO 27001, GDPR.
Ori WellingtonThe big ones.
Sam JonesAll the big ones. And their timeline, this is the kicker: they promised compliance in days, not months.
Ori WellingtonYeah, we really need to pause on that specific phrase, days, not months, because it explains exactly why the market was so eager to buy what Delve was selling.
Sam JonesIt's intoxicating.
Ori WellingtonIt is. Let's look at the actual mechanics of a traditional SOC2 certification, just for context.
Sam JonesYeah, walk us through the traditional way.
Ori WellingtonSo for an average software-as-a-service company, achieving SOC 2 compliance the traditional way requires working with an accredited independent auditing firm.
Sam JonesReal human beings.
Ori WellingtonReal humans. The auditor establishes an observation period, which is usually three to six months. And during that time, these human beings are asking your engineering team for specific timestamp proof that your security controls actually work.
Sam JonesSo they can't just take your word for it.
Ori WellingtonNope. They want to see the database logs, they want to see the pull request approvals. And that process historically takes anywhere from six to twelve months of back and forth.
Sam JonesWhich is incredibly slow.
Ori WellingtonIt's slow, and it costs between $15,000 and $50,000.
Sam JonesIt's a massive friction point. I mean, if you're a startup founder, that's months of your lead engineer taking screenshots of server settings instead of building the actual product. It's grueling.
Ori WellingtonIt is grueling, yes. But it's grueling by design. The friction is where the verification happens. Then Delve walks into the room and says, hey, we can give you that exact same gold standard certificate, but instead of six to twelve months, it'll take days.
Sam JonesAnd a fraction of the cost, right?
Ori WellingtonExactly. Instead of $50,000, it'll cost you between $6,000 and $15,000. They claimed to be using AI agents to completely eliminate all that manual friction.
Sam JonesAnd the market swallowed it whole. I mean, that narrative led straight to a $32 million Series A funding round led by Insight Partners. It gave them a $300 million valuation.
Ori WellingtonIt was a rocket ship.
Sam JonesIt was. They rapidly acquired a roster of over 1,000 clients spread across 50 different countries, because the enterprise sales teams at those client companies felt immense pressure to adopt this.
Ori WellingtonRight, because AI speed had transitioned from being just a cool feature to being a mandatory competitive signal.
Sam JonesIf you don't have it, you're behind.
Ori WellingtonDemonstrating AI readiness is a non-negotiable part of the enterprise sales conversation now. If you're a vendor and you can't show your clients that you're leveraging AI to be faster and more efficient, you look obsolete. You lose the deal. Falling behind on compliance certification feels like an existential risk to a startup trying to close enterprise deals. So the certificate itself became the sole objective.
Sam JonesThe piece of paper.
Ori WellingtonYes. The actual robust control environment that the certificate was supposed to represent became entirely secondary.
Sam JonesIt's a fascinating psychological trap for the tech sector, isn't it? There's this deeply ingrained assumption that if a process is slow like manual compliance auditing, it must inherently be broken or inefficient.
Ori WellingtonRight. It's the whole disrupt everything mindset.
Sam JonesExactly. And therefore, an AI that comes along and does it instantly, well, that must be a massive technological breakthrough. We're so conditioned to worship disruption that we forget that sometimes a process is slow for a very specific reason.
Ori WellingtonA very good reason.
Sam JonesYeah. Sometimes you're not buying a breakthrough, you're just buying a very dangerous shortcut.
Ori WellingtonIt's the assumption of automation without the verification of methodology. The tech sector loves the ethos of move fast and break things.
Sam JonesWhich is great for consumer apps.
Ori WellingtonBut when you're dealing with enterprise data security and federal compliance frameworks, the entire point is that things cannot be broken. The methodology actually matters more than the speed.
Sam JonesOkay, let me try out an analogy here. Let's see if this tracks. Imagine you're trying to build a hundred-story skyscraper. The traditional way takes years, right? You have to dig a massive foundation, pour thousands of tons of concrete, let it cure, test the soil.
Ori WellingtonBuild a steel skeleton.
Sam JonesRight. Slowly add the floors. It's expensive and it takes a long time.
Ori WellingtonA very apt comparison for establishing a mature risk program, by the way.
Sam JonesThanks. So then a new contractor shows up and says, I have this incredible new 3D printing technology. We can put your entire hundred-story skyscraper overnight.
Ori WellingtonSounds amazing.
Sam JonesSo you pay them, and sure enough, the next morning there's a giant skyscraper standing there. It looks like a building. You can take a photograph of it and show it to your investors. You get a certificate of occupancy.
Ori WellingtonBut the mechanics of how it was built matter.
Sam JonesExactly. Because they skipped pouring the concrete foundation. They just printed the walls straight onto the dirt. It's a total facade. You have the certificate, but you wouldn't want to be inside that building when a storm rolls in.
Ori WellingtonAnd in the compliance world, the storm always comes.
Sam JonesIt's just a matter of time.
Ori WellingtonWhether it's a zero-day data breach, a regulatory inquiry, or a massive enterprise client sending their own security team to verify your posture, the foundation is eventually going to be tested.
The Whistleblower Exposes The Facade
Sam JonesWhich brings us to the storm that hit Delve. Because a skyscraper without a foundation eventually cracks. And in March 2026, Delve's cracks became a very, very public spectacle.
Ori WellingtonThis is where the structural failure of the illusion becomes impossible to ignore. We have to look at the exact technical nature of the whistleblower allegations.
Sam JonesLet's talk about that Google sheet that broke the facade. According to the whistleblower allegations that dropped in March 2026, the product didn't just fall short of its marketing claims. It allegedly, fundamentally, didn't work as described at an architectural level.
Ori WellingtonThe allegation is profound, honestly. The entire pitch was that Delve's AI automated the collection of evidence, right? Which was then securely handed over to independent human auditors to review and draw conclusions from.
Sam JonesBecause that's the legal requirement.
Ori WellingtonExactly. That is the legal and ethical requirement of these frameworks. The auditor has to be the one who looks at the evidence and says, yes, this control is effective. Right. But the whistleblowers allege that Delve's agents were actually generating the auditor conclusions and the test procedures before the client's actual data was even reviewed.
Deterministic Proof Versus LLM Guessing
Sam JonesOkay, wait. Explain the mechanism there. How does a software platform generate a conclusion before it even looks at the data?
Ori WellingtonIt comes down to the difference between deterministic verification and probabilistic generation.
Sam JonesOkay, break that down for us.
Ori WellingtonSure. In a deterministic system, the software goes to an AWS server, runs an API call to check if, say, multi-factor authentication is enforced, gets a true or false log, and hands that specific log to an auditor.
Sam JonesRight. That's real evidence.
Ori WellingtonThat's real evidence. But large language models are probabilistic. They generate text token by token based on patterns.
Sam JonesRight. They guess the next word.
Ori WellingtonExactly. The allegation is that Delve used LLMs to dynamically generate the written text of the auditor's final report, the narrative saying the controls were tested and working, based on templates rather than on deterministic, cryptographic proof of the client's actual server settings.
Sam JonesSo they were basically using AI to write the answer key before the test was even taken.
Ori WellingtonEssentially, yes.
Sam JonesThey just bypassed the actual telemetry completely.
Ori WellingtonThe gap between what Delve said its agents automated and what they allegedly produced is the gap between a tool that supports a compliance program and a tool that actively fabricates one. If an AI writes the conclusion without deterministic proof, it's hallucinating compliance.
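To make the deterministic half of that distinction concrete, here is a minimal Python sketch. It is purely illustrative: the control name is hypothetical, and a stub stands in for the real cloud-provider API call. The point is that a deterministic collector records a raw response, timestamps it, and hashes it, and produces no conclusion at all.

```python
import hashlib
import json
from datetime import datetime, timezone

def collect_mfa_evidence(fetch_account_summary):
    """Deterministic evidence collection: capture the raw API response,
    timestamp it, and hash it so an auditor can later verify it is
    untampered. No conclusion is generated here; that judgment is
    reserved for an independent human auditor."""
    raw = fetch_account_summary()              # in reality, a read-only API call
    payload = json.dumps(raw, sort_keys=True)  # canonical form, so the hash is stable
    return {
        "control": "MFA enforced for privileged accounts",  # hypothetical control
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "raw_evidence": raw,
        "sha256": hashlib.sha256(payload.encode()).hexdigest(),
    }

# Stand-in for the provider API; a real collector would query the cloud account.
evidence = collect_mfa_evidence(lambda: {"AccountMFAEnabled": 1})
print(evidence["sha256"])
```

Anyone holding the raw evidence can recompute the hash and confirm it matches, which is exactly the property a template-driven LLM narrative cannot offer.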
Sam JonesAnd the operational mechanics of this alleged failure are just wild because they were doing this at such a massive scale, they got sloppy.
Ori WellingtonVery sloppy.
Sam JonesThe report templates were allegedly so identical across completely unrelated companies that it became obvious if you just put two of them side by side. Like an AI-generated risk assessment for a healthcare startup looked identical to a risk assessment for a fintech company.
Ori WellingtonAnd we have to look at how they handled the human element of the certification process, too.
Sam JonesRight. Because you still need a signature.
Rubber Stamped Audits And Fallout
Ori WellingtonExactly. To legally issue a SOC 2 or an ISO certificate, you need an independent accredited auditor to sign off. The AI can't sign the document itself.
Sam JonesSo how do they get around that?
Ori WellingtonThe allegations state that Delve was routing these dynamically generated certifications through what are described as offshore mills.
Sam JonesOh wow.
Ori WellingtonYeah. These were operations functioning under nominal U.S. addresses, employing people to essentially rubber stamp the AI-generated conclusions, completely bypassing the rigorous independent review that the entire compliance model legally requires.
Sam JonesThat is, I mean, the fallout was immediate. Y Combinator, which is arguably the most prestigious startup accelerator in the world, asked Delve to leave its program.
Ori WellingtonWhich you rarely see happen publicly.
Sam JonesNever. It's usually handled quietly. The CEO of Delve put out a statement with a very vague admission about, you know, growing too fast and falling short of standards. They're pushing back on specific allegations, and as of now, the legal battles are still unresolved. But the market damage is done.
Ori WellingtonIt's done. But what's fascinating here is the systemic oversight failure. You have to ask yourself, how did a $300 million operation operating across 50 countries go completely unnoticed by the authorities?
Sam JonesRight. No government regulator caught it.
Ori WellingtonExactly. The American Institute of CPAs, which governs SOC 2, didn't flag it. The entire regulatory ecosystem failed to detect this anomaly.
Sam JonesAnd who caught it? An anonymous writer on Substack who stumbled across a publicly accessible Google spreadsheet filled with confidential client audit reports.
Ori WellingtonAlmost comical.
Sam JonesThe sheer absurdity of that is hard to overstate. You have this incredibly sophisticated AI-native operation, and it unraveled because someone forgot to change the sharing permissions on a Google sheet. But honestly, why didn't the regulators catch it?
Ori WellingtonIt highlights a critical vulnerability in how we govern trust right now. The compliance certification infrastructure that we rely on today was designed for a bygone era.
Sam JonesLike pre-AI.
Ori WellingtonWay before AI. It was built for a world where audits were bespoke, manual processes. Think about the analogy of building a high-speed rail system, but still using manual switchboard operators from the 1920s to manage the tracks. The old system assumed a slow, human-driven process.
Sam JonesRight. If you're faking audits manually, you can only fake so many before an auditor notices a discrepancy, or you literally just physically run out of hours in the day. The friction limits the scale of the fraud.
Ori WellingtonExactly. But because Delve operated at scale with AI agents, they overwhelmed the old system. There was no cross-client pattern detection built into the regulatory bodies' oversight mechanisms.
Sam JonesSo nobody was comparing the reports.
Ori WellingtonRight. There was no automated way to verify the independence of the auditors working at those offshore mills. There was no anomaly signal layer scanning the content of these reports across the industry to say, hey, why do these thousand reports have the exact same probabilistic phrasing?
Sam JonesBecause that tool didn't exist.
Ori WellingtonCompletely missed it.
Sam JonesAnd that brings us to a really uncomfortable conversation about the investor point of view.
Ori WellingtonIt's perhaps the most alarming part of the entire narrative, to be honest. Yeah. Because it reveals a fundamental flaw in how capital is being deployed, evaluated, and managed in the AI era.
Sam JonesOkay, let's analyze the investment thesis that led Insight Partners, and the actual Fortune 500 chief information security officers who participated in the funding, to back Delve. Because if you look at this purely through the lens of a traditional VC, this looked like the ultimate disruption play.
Ori WellingtonRight. Put yourself in the shoes of a capital allocator for a minute. You're operating in a market that aggressively rewards AI-native software that automates complex knowledge work.
Sam JonesIt's what everyone wants.
Ori WellingtonExactly. You see a company that has taken a painful 12-month enterprise process and collapsed it into a few days. They have a thousand active clients. They're operating in 50 countries. The revenue growth looks spectacular.
Sam JonesI mean, it essentially writes its own investment memo.
Ori WellingtonIt does.
Sam JonesBut they committed a massive due diligence failure. They completely failed to ask the single most important load-bearing architectural question. They didn't ask, does this platform actually preserve auditor independence at the code level?
Ori WellingtonAnd if we connect this to the bigger picture of venture capital, we have to talk about why standard SaaS diligence frameworks completely failed here.
Sam JonesThe playbooks they've used for a decade.
Ori WellingtonRight. When VCs evaluate a typical software as a service company, they look at a very specific set of financial metrics. They look at annual recurring revenue or ARR, they look at customer acquisition cost, CAC, and crucially they look at net dollar retention or NDR.
Sam JonesLet's break down NDR for a second because I think it's really the culprit here. NDR basically measures how much revenue your existing customers are retaining and expanding.
Ori WellingtonExactly.
Sam JonesSo if you start the year with a customer paying you $10,000 and by the end of the year they upgrade and pay you $14,000, your NDR is 140%. And VCs absolutely love high NDR.
Ori WellingtonThey do because it indicates a sticky product that customers find valuable. And if you're selling an easy button for compliance, your NDR is going to be incredibly high.
Sam JonesBecause it's so easy.
Ori WellingtonRight. Startups were happily paying Delve to renew their SOC 2, and then buying the add-on modules for HIPAA and GDPR because the AI made it so effortless. The financial metrics looked flawless. These frameworks are brilliantly calibrated to find one thing: financial traction.
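The NDR arithmetic from that example is almost trivially simple, which is part of the problem: it measures willingness to keep paying, not architectural integrity. A minimal sketch (function name invented for illustration):

```python
def net_dollar_retention(start_arr: float, end_arr_same_cohort: float) -> float:
    """NDR: revenue at period end from the customers you started with
    (including expansion and churn), divided by their starting revenue."""
    return end_arr_same_cohort / start_arr

# The example from the conversation: a customer grows from $10,000 to $14,000.
ndr = net_dollar_retention(10_000, 14_000)
print(f"{ndr:.0%}")  # prints 140%
```

Nothing in that calculation can distinguish a customer renewing because the controls are real from a customer renewing because a fabricated certificate made compliance feel effortless.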
Sam JonesBut compliance platforms are not just another piece of SaaS. Like, if I buy a marketing AI tool and it hallucinates a bad subject line for an email campaign, my open rates drop. I lose some leads. It's bad, but it's contained.
Ori WellingtonRight. It's an isolated failure.
Sam JonesExactly. If I buy a coding assistant and it writes inefficient code, my engineers catch it in testing. But if I buy a compliance platform and it hallucinates an audit, the consequences are catastrophic and systemic, because in compliance, the product is trust.
Ori WellingtonThe product is trust. That's a great way to put it. And that is the fundamental difference in the risk profile here. Standard SaaS metrics cannot measure whether an AI is doing the right thing architecturally. They can only measure if people are buying it.
Sam JonesAnd people were buying it in droves.
Ori WellingtonRight. But in a trust market, financial traction built on a hallucinated foundation is just untriggered churn. It's technical and legal debt masquerading as revenue.
Sam JonesWow.
Ori WellingtonThe diligence didn't surface the architectural truth because metrics like NDR aren't designed to measure code level integrity.
Sam JonesIt really points to a massive expertise gap in venture capital right now. They understand software growth mechanics, but they don't possess the domain expertise to know what a mature compliance program actually looks like from the inside.
Ori WellingtonWithout that deep domain expertise, investors are just flying blind. They're mistaking the extremely fast production of compliance artifacts like the PDFs, the certificates for actual programmatic maturity.
Sam JonesThey see the paper and think the building is solid.
Ori WellingtonExactly. They see the output and assume the foundation exists, without ever verifying the pipeline that generated that output.
Sam JonesBut wait, let me push back on that. Let me play the defender of the VC here for a second.
Ori WellingtonGo for it.
Sam JonesIf I'm an investor, my mandate for my limited partners is to generate returns by finding the next big thing. If standard metrics like NDR and CAC say this company is a rocket ship, aren't those metrics doing exactly what they were designed to do? The VC's job isn't to be a forensic compliance auditor, it's to spot growth. Is it really their fault for not digging into the cryptographic hashing of the evidence logs?
Ori WellingtonIt's a fair challenge, and honestly, it's exactly the defense you hear in boardrooms all the time. The gravity of hypergrowth exerts a tremendous pull. But I would argue that when you're investing in a risk and compliance platform, the standard rules of SaaS simply don't apply, and your fiduciary duty requires a completely different level of scrutiny.
Sam JonesBecause of the nature of the product.
Ori WellingtonExactly. If you're funding a company that issues legal attestations of security, understanding the mechanism of that attestation isn't an edge case. It is the core of the business model.
Sam JonesSo if you're that investor sitting in a boardroom and a founder slides a pitch deck across the table showing a thousand active enterprise clients, raving reviews on G2, and a revenue chart that's just doubling every quarter, what specific question should you be asking to pump the brakes?
Ori WellingtonThe specific question they needed to ask is not about revenue. They needed to ask about the platform's architecture: show us technically where your agent's automation ends and where the independent human auditor's judgment begins.
Sam JonesWhere is the line?
Ori WellingtonRight. And they needed to demand proof that this line could not be crossed by the software.
Sam JonesAnd if the founder can't show you that hard-coded barrier in their system architecture, you walk away no matter how good the NDR looks.
Ori WellingtonExactly. Because if that architectural separation doesn't exist, you aren't investing in a compliance tool. You're investing in a liability engine.
Buyer Liability And Procurement Red Flags
Sam JonesA liability engine. That perfectly describes the nightmare that the buyers woke up to.
Ori WellingtonLet's pivot to the customer side of this, because the enterprise buyers and procurement teams are the ones left holding the bag right now. We need to talk about the concrete red flags and the buyer's dilemma in this new landscape. The legal exposure for buyers is immense. And it's a deeply unfair situation in many ways, because these companies bought Delve specifically to reduce their risk.
Sam JonesRight. If you're a buyer, you wanted to do the right thing. You wanted to secure your posture, close your own enterprise deals faster, and show your partners you were compliant.
Ori WellingtonYou had good intentions.
Sam JonesBut by using these allegedly fabricated certificates to represent your security posture to the market, you may now be legally liable under massive regulatory frameworks like HIPAA in the US and GDPR in Europe.
Ori WellingtonIt's a devastating irony. The risk-reduction tool they purchased became the single biggest risk source for their entire operation. Let's look at the mechanics of HIPAA liability, for instance.
Sam JonesYeah, how does that work?
Ori WellingtonIf you hand a hospital network a fabricated HIPAA compliance report, and that hospital network entrusts you with protected health information based on that report, you aren't just breaching a vendor contract. If a breach occurs, you are potentially violating federal law because you falsely attested to having controls in place that didn't exist.
Sam JonesAnd the liability just flows downstream.
Ori WellingtonThe liability flows directly downstream to the buyer.
Sam JonesIf you're an enterprise buyer listening to this, imagine the scenario.
Ori WellingtonThe contract that makes your year.
Sam JonesRight. And now you're waking up to headlines that the software that generated that document is an alleged fraud mill. The document you staked your company's reputation on is fundamentally worthless. The real-world panic of that realization has to be chilling.
Ori WellingtonWhich is exactly why procurement teams have to fundamentally change how they evaluate these tools. The era of the easy button is over. Buyers must immediately stop asking vendors, can this platform give us SOC 2 in days?
Sam JonesBecause it's the wrong question.
Ori WellingtonThat question is inherently dangerous now because it selects for vendors willing to cut architectural corners.
Sam JonesSo, what is the concrete red flag? What should procurement teams be asking instead during the vendor evaluation process?
Ori WellingtonThey need to ask the exact same architectural question that the investors failed to ask. Procurement teams must demand to know what does this platform's architecture say about where agent automation ends and independent verification begins? And they cannot accept a fluffy marketing answer.
Sam JonesRight. It can't just be a bullet point on a PowerPoint slide that says we value auditor independence. It can't be a mere policy statement on their website.
Ori WellingtonNo, it must be a demonstrable software design decision. The vendor needs to open up the hood and show the buyer the hard line separating evidence collection from independent judgment.
Sam JonesHow does a vendor actually prove that? Like what does a hard line look like in software?
Ori WellingtonIt looks like specific cryptographic and access control mechanics. For example, a legitimate vendor will use role-based access controls, or RBAC, to ensure that the AI agent has read-only permissions to gather logs, but zero permissions to write or alter the final auditor's report.
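The read-only separation Ori describes can be sketched in a few lines of Python. This is a minimal illustration, not any vendor's actual implementation; the role names and permission strings are invented for the example:

```python
# Deny-by-default RBAC: the agent role can read evidence, but only the
# human auditor role holds the permission to write the final report.
ROLE_PERMISSIONS = {
    "ai_agent":      {"evidence:read"},
    "human_auditor": {"evidence:read", "report:write"},
}

def authorize(role: str, action: str) -> None:
    """Raise unless the role explicitly holds the permission."""
    if action not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not perform {action}")

class ReportStore:
    """Holds the final auditor report; every write passes the RBAC barrier."""
    def __init__(self):
        self._final_report = None

    def write_report(self, role: str, text: str) -> None:
        authorize(role, "report:write")  # the hard-coded architectural line
        self._final_report = text

store = ReportStore()
store.write_report("human_auditor", "Controls operating effectively.")  # allowed
try:
    store.write_report("ai_agent", "All controls pass.")  # agent blocked
except PermissionError as err:
    print(err)  # prints "ai_agent may not perform report:write"
```

The point of the sketch is that the separation is enforced in code, not in a policy document: there is no code path by which the agent role reaches the report.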
Sam JonesOkay, that makes sense.
Ori WellingtonAnother method is cryptographic hashing. When the system pulls a log from AWS, it should immediately generate a SHA-256 hash of that log. When the human auditor reviews it, they check the hash to ensure the evidence hasn't been tampered with or dynamically altered by a language model in transit. It's about deterministic proof.
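The hash-at-collection pattern is easy to show concretely. A minimal sketch using only Python's standard-library `hashlib`; the log contents are made up for illustration:

```python
import hashlib

def collect_evidence(raw_log: bytes) -> dict:
    """Store the log with a SHA-256 digest taken at the moment of collection."""
    return {"log": raw_log, "sha256": hashlib.sha256(raw_log).hexdigest()}

def verify_evidence(item: dict) -> bool:
    """Auditor-side check: the log must still match its collection-time digest."""
    return hashlib.sha256(item["log"]).hexdigest() == item["sha256"]

item = collect_evidence(b'{"event": "login", "user": "alice"}')
print(verify_evidence(item))  # prints True: evidence untouched
item["log"] = b'{"event": "login", "user": "mallory"}'  # simulated tampering
print(verify_evidence(item))  # prints False: alteration detected
```

Because the digest is deterministic, any alteration of the evidence between collection and review, whether by a person or a language model, is detectable by recomputing one hash.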
Sam JonesThat makes total sense. If the AI can write the final conclusion, the platform is fundamentally compromised. The answer to that architectural question tells you whether you're buying a tool that helps build your compliance program or a tool that simply replaces your program with a meaningless, hallucinated document. But you know, knowing you need a foundation doesn't really help a procurement team staring at a 50-page vendor pitch deck full of buzzwords. It sounds simple when we say it out loud: look at the architecture. But we both know that in the heat of a sales cycle, with executives screaming to close deals and adopt AI, relying on individual intuition to spot a cryptographic flaw isn't a reliable defense mechanism.
Ori WellingtonNo, it's not.
Sam JonesHaving a formalized framework is always better than relying on gut instinct.
A Better Way To Evaluate Risk Tools
Ori WellingtonIt is essential. You need a structured way to evaluate these claims that removes the emotion and the hype from the equation. You need a model that quantifies maturity.
Sam JonesWhich brings us perfectly to the solution outlined in your piece and the models we published. As advisors at Wheelhouse Advisors, we need to talk about evaluating maturity using the IRM navigator curve and the ADRI index. Because behind every single one of these failures, the vendors, the buyers, the investors, is the exact same underlying condition.
Ori WellingtonYes. The entire AI disruption pressure cooker rewards the announcement of agentic capabilities, and it completely ignores the program maturity that makes those capabilities trustworthy in the first place. This isn't a technology problem we're looking at. It's a sequencing problem.
Sam JonesA sequencing problem. Let's dig into that. So skipping straight to autonomous AI is the problem. There has to be a correct sequence. I was looking at the IRM navigator curve you detailed, and it struck me that it's basically an evolutionary chart for risk management. IRM stands for Integrated Risk Management.
The IRM Navigator Curve Explained
Ori WellingtonThat's right. The IRM navigator curve is a model that describes risk program development as a sequence. And the sequence matters because each stage builds the necessary structural foundation that the next stage absolutely requires. You cannot skip steps without introducing catastrophic instability. It starts at the bottom with the foundational stage. This is the unglamorous work. It's establishing basic policies, implementing actual access controls, collecting real, deterministic evidence, and subjecting that evidence to independent human verification.
Sam JonesThis sounds like the boring manual stuff nobody wants to do anymore. This is pouring the concrete for the skyscraper. Give me a real-world example of a company at the foundational stage.
Ori WellingtonSure. A company at the foundational stage is a startup that just hired its first compliance manager. They are manually reviewing who has access to their GitHub repositories. They are writing down their incident response plan in a Google Doc. They are taking physical screenshots of their AWS settings to show an auditor.
Sam JonesIt's very hands-on.
Ori WellingtonIt's manual, it's slow, but it's real. The evidence is undeniably tied to physical reality. It's not compliance theater. It's the real substrate of every advanced function that will come later.
Sam JonesOkay. So once the foundation is solid, where do they go next?
Ori WellingtonThen you move to the coordinated risk stage. This is where you connect that foundational IT risk data to your broader enterprise decision making.
Sam JonesWhat does that look like in practice?
Ori WellingtonIt means the security team isn't operating in a silo anymore. For example, if the IT team flags that a legacy server is running an outdated operating system, that risk is coordinated with the finance team's budget planning to fund a migration. It's coordinated with the legal team to understand the regulatory exposure. The data is starting to flow between departments.
Sam JonesThat makes sense. It's breaking down the silos. So what's the third stage?
Ori WellingtonThe third stage is embedded risk. This is where those practices become deeply integrated into your daily operational processes, usually through software integration and APIs.
Sam JonesSo less manual spreadsheets, more automated workflows.
Ori WellingtonExactly. An example of embedded risk is integrating security scanning directly into your software deployment pipeline. When an engineer tries to push new code, the system automatically checks it against your security policies before allowing it to go live. The compliance isn't a separate audit you do at the end of the year. It's embedded into the daily tools the company uses.
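Ori's deployment-pipeline example can be sketched as a simple policy gate. The check below is a deliberately naive placeholder, not a real secret scanner; function names are invented for the example:

```python
def no_hardcoded_secrets(code: str) -> bool:
    """Naive placeholder check; a real pipeline would use a proper scanner."""
    return "AWS_SECRET" not in code and "password=" not in code

def deploy(code: str, checks) -> str:
    """Run every policy check before release; any failure blocks the deploy."""
    failures = [check.__name__ for check in checks if not check(code)]
    if failures:
        return "blocked: " + ", ".join(failures)
    return "deployed"

print(deploy('print("hello")', [no_hardcoded_secrets]))      # prints "deployed"
print(deploy('password="hunter2"', [no_hardcoded_secrets]))  # prints "blocked: no_hardcoded_secrets"
```

The design point is that compliance runs on every push, inside the tool engineers already use, rather than as a separate year-end audit.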
Sam JonesAll right, so we have foundational, coordinated, and embedded. That brings us to the fourth stage.
Ori WellingtonExtended risk. This is where you bring continuous cross-domain telemetry and real-time decision support, extending beyond your own four walls to include your entire vendor ecosystem.
Sam JonesAh, the third-party risk problem.
Ori WellingtonExactly. At the extended stage, you aren't just monitoring your own servers. You have continuous API integrations, pulling security posture data from all your critical vendors. If a vendor you rely on experiences a configuration drift, your system flags it in real time.
Sam JonesAnd then finally, at the very peak of the curve.
Ori WellingtonAt the top sits autonomous risk management. This is the true AI endgame. This is where AI agents can operate with validated guardrails, making routine risk decisions, auto-remediating minor configuration drifts, and requiring minimal human intervention.
Sam JonesBut the critical caveat is that they can only do this because the foundation beneath them is rock solid. They are pulling from deterministic data that was established in the earlier stages.
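The sequencing rule behind the curve, that each stage unlocks only the next one, can be stated in a few lines. A sketch, with the five stage names taken from the discussion above:

```python
# The IRM navigator curve as an ordered sequence: no stage may be skipped.
STAGES = ["foundational", "coordinated", "embedded", "extended", "autonomous"]

def can_advance(current: str, target: str) -> bool:
    """A program may only move to the immediately next stage, never skip ahead."""
    return STAGES.index(target) == STAGES.index(current) + 1

print(can_advance("foundational", "coordinated"))  # True: the correct sequence
print(can_advance("foundational", "autonomous"))   # False: the skip-ahead move
```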
Ori WellingtonPrecisely. If we connect this to the bigger picture, the curve explains Delve's fatal flaw perfectly. Delve's clients, encouraged by the software's marketing, tried to skip the slow, expensive foundational stage entirely. They wanted to jump straight from having zero compliance maturity to the shiny autonomous stage at the top of the curve.
Sam JonesThey wanted the penthouse without building the building. The compliance function that Delve's clients were trying to shortcut, the observation periods, the independent auditor reviews, the actual gathering of real control evidence, that all sits firmly at the foundational stage. Those requirements exist because they build the integrity that every single stage above them depends on.
The ADRI Index And Market Mapping
Ori WellingtonSo agentic capabilities deployed without a foundation aren't actually advanced technology at all. They are structurally unstable. They're an illusion waiting to collapse in ways that are not visible to standard financial metrics until something goes horribly wrong.
Sam JonesOkay, so that's the first dimension, the curve, which measures maturity. But our work at Wheelhouse Advisors also details a second dimension to this analysis, which I found fascinating. The IRM 50 AI Disruption Risk Index, or the ADRI. I want to break down the two axes of this index because it's a brilliant way to map the market.
Ori WellingtonThe ADRI evaluates risk technology vendors based on two specific structural dimensions to determine how much disruption risk they carry. The first axis looks at how dependent a vendor's core business model is on the mere production of compliance artifacts, meaning the documents, the PDFs, the certificates themselves. Are they selling a security posture or are they just selling a piece of paper?
Sam JonesThat's the artifact axis. And the second axis?
Ori WellingtonThe second axis looks at how close they actually stand to enabling genuine autonomous risk capability based on solid, verifiable architecture. That's the integrity axis. Do they have the cryptographic controls, the role-based access controls, the deterministic data pipelines we talked about earlier?
Sam JonesSo let's apply the ADRI to Delve. Where did they sit on this index?
Ori WellingtonBased on the allegations and the structural failure we saw, Delve sat at the absolute extreme, most dangerous end of the disruption risk spectrum. They were offering maximum compliance artifact production. They were churning out thousands of SOC 2 and HIPAA reports, but with minimum or perhaps zero integrity architecture beneath it.
Sam JonesThey maxed out the artifact axis and bottomed out the integrity axis.
Ori WellingtonPrecisely. Vendors whose entire business model depends on churning out compliance documents at scale, without that architectural integrity layer that makes those documents meaningful carry massive disruption risk. If a regulatory body decides to audit the auditor, that vendor's entire business model goes to zero overnight.
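The two-axis mapping can be sketched as a simple quadrant function. The 0-to-1 scores and the 0.5 thresholds are our illustrative assumptions, not published ADRI scoring rules:

```python
def adri_quadrant(artifact_dependence: float, integrity: float) -> str:
    """Map a vendor's two ADRI axis scores (0-1) to a risk quadrant label."""
    high_artifact = artifact_dependence >= 0.5
    high_integrity = integrity >= 0.5
    if high_artifact and not high_integrity:
        return "maximum disruption risk"      # the profile described for Delve
    if high_integrity and not high_artifact:
        return "durable platform"
    if high_artifact and high_integrity:
        return "artifact-heavy but defensible"
    return "early / low signal"

print(adri_quadrant(artifact_dependence=0.95, integrity=0.05))  # prints "maximum disruption risk"
```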
Sam JonesThis is incredibly practical for the audience. So if I'm an investor managing a fund or an enterprise buyer sitting at the procurement desk, how do I use these two frameworks together in my day-to-day operations?
Ori WellingtonYou use the IRM navigator curve to identify where a vendor's deployment of AI agents is architecturally premature. You use it as a maturity lens. When a vendor pitches you, you ask yourself, is this agentic GRC platform extending a foundational program my company has already built, or is it trying to substitute for a foundation I haven't built yet? If it's acting as a substitute for the hard work, walk away.
Sam JonesAnd how do investors use the ADRI?
Ori WellingtonInvestors use the ADRI as a structured framework for assessing durability. It helps you see past the SaaS growth metrics like NDR and CAC to determine which platforms are architecturally sound and which ones are carrying massive hidden integrity exposure. It highlights the systemic risk that standard financial diligence will completely miss. You map the startup on the ADRI, and if they are high on artifacts but low on verifiable architecture, you don't write the check.
Sam JonesThe broader market implications here are significant because let's be honest, the Delve case is an extreme example of alleged outright fraud. But the underlying dynamic, the intense pressure to show AI speed, that isn't extreme. That is the daily reality of the tech sector right now.
Ori WellingtonThe pressure to skip foundational work is only going to increase. With every major compliance platform now racing to announce agentic capabilities to appease their boards and their investors, the temptation to cut corners to stay competitive is immense. These frameworks are no longer just academic models, they are essential defensive survival tools for capital allocators and procurement teams.
Sam JonesI have to push back here, though, from the perspective of a legitimate vendor. Foundational work is slow, it's expensive, it's incredibly hard to package as a sexy differentiated capability in a sales pitch. If I'm a legitimate vendor, someone who actually does the hard foundational work, who respects the architecture, who uses cryptographic hashing, how do I compete against a scammer who is promising magic overnight? How do I avoid losing all my VC funding to the flashy competitor who is faking it and showing 150% NDR?
Ori WellingtonIt's the hardest question in the market right now. It is incredibly painful to compete against a lie, especially a highly funded lie. But the answer lies in leveraging this exact scrutiny to your advantage. Legitimate vendors must draw a hard architectural line and demonstrate it specifically, transparently, and constantly.
Sam JonesThey have to use their integrity as a weapon.
Ori WellingtonYes. They have to educate their buyers. They have to sit down at the procurement meeting and say, look, our competitor is offering you a document in three days. We're offering you actual verifiable security, and here is the architectural proof of why their document is a legal liability and our process is defensible. They have to show the code-level separation. The vendors who can prove their integrity will ultimately benefit from the fallout of the Delve case because buyers are going to be terrified of buying another mirage.
Lessons For Buyers And Investors
Sam JonesThe market will correct, but it's going to be a painful correction for those caught on the wrong side of the maturity curve. We've covered incredible ground today. Let's briefly synthesize the vital lessons learned from this deep dive for you, the listener. The allure of AI speed created a deeply flawed market dynamic. It created an environment where desperate enterprise buyers bought certificates instead of actual security, and where sophisticated venture capitalists bought financial traction instead of structural integrity.
Ori WellingtonThe core takeaway for any future risk technology investments, whether you're spending VC money or allocating a corporate IT budget, is simple and unyielding. Maturity must come first; agents come second. Your investment diligence absolutely must evolve beyond standard SaaS metrics like NDR. You have to evaluate the architectural separation between automated evidence collection and human judgment.
When Algorithms Audit Algorithms
Sam JonesAnd if you can't see that separation, close your checkbook. But as we wrap up, I want to leave you with a final lingering thought. This is something to mull over on your own. A look at what happens next in this space. If an AI platform could successfully fabricate an entire corporate compliance program well enough to fool thousands of enterprise buyers and top-tier venture capitalists today, what happens in the near future? What happens when companies start deploying autonomous AI auditor agents to evaluate the compliance of other autonomous AI vendor agents?
Final Takeaway Check The Foundation
Ori WellingtonIt raises a profound and slightly terrifying question about the future of truth in business. When the entire system is automated, when algorithms are evaluating algorithms, will humans even be in the loop long enough to realize that the entire trust ecosystem is just machines hallucinating at each other?
Sam JonesA skyscraper built on a hallucinated foundation. Thank you for joining us on this deep dive. I highly encourage you to carry the IRM navigator curve and the ADRI framework into your next boardroom meeting or procurement negotiation. Until next time, stay curious and always check the foundation.