The Risk Wheelhouse

S7E1: The Delve Collapse And The New Rules Of Enterprise Trust

Wheelhouse Advisors LLC Season 7 Episode 1



A compliance certificate is supposed to be like a bridge inspection: real materials, real tests, real signatures, and real accountability. Then AI arrived, and the market started rewarding something else entirely: speed. The result is what we call a trust mirage, where “audit-ready” output can look convincing even when the underlying control evidence is shaky or absent.

We unpack the rise and alleged collapse of Delve, a once high-flying agentic GRC startup that promised SOC 2 compliance in “days, not months” and reportedly reached a $300 million valuation. The wild part is how the story breaks: not with a regulator raid, but with an anonymous Substack writer, a publicly accessible Google spreadsheet, and uncomfortable questions about whether AI-generated reports crossed the line from automation into fabrication. Along the way, we clarify the technical difference between deterministic verification and probabilistic LLM text generation, plus why auditor independence is the core legal requirement that software must protect at the code level.

From there we get practical. We challenge the standard venture capital and enterprise procurement playbooks that lean on SaaS metrics like NDR, and we replace hand-wavy “AI compliance” claims with concrete architectural checks: role-based access controls, read-only evidence collection, cryptographic hashing, and hard separation between agents and human judgment. We also share two frameworks to navigate the new landscape: the IRM Navigator curve for sequencing risk maturity, and the ADRI index for spotting vendors that maximize compliance artifacts while minimizing integrity.

If you buy, fund, or build in compliance, GRC, risk management, SOC 2, ISO 27001, HIPAA, or GDPR, this conversation is your warning label and your field guide. Subscribe, share this with your security and finance leaders, and leave a review. What question will you start asking every “agentic” vendor first?



Visit www.therisktechjournal.com and www.rtj-bridge.com to learn more about the topics discussed in today's episode. 

Subscribe on Apple Podcasts, Spotify, or Amazon Music. Contact us directly at info@wheelhouseadvisors.com, or visit us on LinkedIn or X.com.

Our YouTube channel also delivers fast, executive-ready insights on Integrated Risk Management. Explore short explainers, IRM Navigator research highlights, RiskTech Journal analysis, and conversations from The Risk Wheelhouse Podcast. We cover the issues that matter most to modern risk leaders. Every video is designed to sharpen decision making and strengthen resilience in a digital-first world. Subscribe at youtube.com/@WheelhouseAdv.


Why Compliance Requires Reality

Sam Jones

You know, um usually when we talk about enterprise trust, there's like this underlying expectation of physical reality.

Ori Wellington

Right, like tangible proof.

Sam Jones

Yeah, exactly. It's like building a suspension bridge. If you drive over it, you uh you expect that someone somewhere actually poured the concrete.

Ori Wellington

You'd certainly hope so.

Sam Jones

Right. You expect that a structural engineer actually, you know, stressed the steel cables, checked the math, and physically signed their name on a document verifying that the bridge is going to hold the weight of your car.

Ori Wellington

Because it's a binary state of structural integrity, right? I mean, either the math works and the materials are sound or they aren't.

Sam Jones

Exactly. And for decades, the world of enterprise software compliance, it basically operated on a similar assumption. You needed physical proof.

Ori Wellington

You couldn't just fake it.

Sam Jones

No, I mean you don't just take a photograph of a bridge, run it through some like AI image generator and print out a certificate that says it's safe to cross. You need the receipts.

Ori Wellington

Right. But the the thing is, if you step into the compliance and risk technology space over the last, say, couple of years, that expectation of reality has been seriously tested.

Sam Jones

Tested as putting it mildly. We're looking at a landscape right now that's dealing with honestly a massive structural crisis.

Ori Wellington

It really is. The transition from manual verification to artificial intelligence has created what we might call um a trust mirage.

Sam Jones

A trust mirage, I like that.

Ori Wellington

Yeah, in certain sectors of the market at least. The sheer speed of the technology has completely outpaced the verification methods we rely on.

Sam Jones

Well, welcome to the deep dive, everyone. If you're an investor managing venture capital, if you're an enterprise software buyer trying to, you know, protect your company's data, or if you're just someone who's fascinated by the unintended consequences of AI, this conversation is custom-tailored for you.

Ori Wellington

Absolutely.

Sam Jones

Today we are going on a very specific journey. Our mission is to dissect the anatomy of a massive structural failure in the tech world. We're going to uncover exactly how a $300 million valuation in the compliance tech space completely collapsed.

Ori Wellington

And you know, we really need to look at this not just as an isolated incident, because it's not. Right. It's a symptom of a much larger sequencing problem in the tech industry as a whole.

Sam Jones

So we're going to map this journey out for you step by step. We'll start with the market hype, the blinding hype surrounding what the industry calls agentic GRC.

Ori Wellington

The gold rush phase.

Sam Jones

Exactly. From there, we'll progress into the complete architectural failure of a darling startup named Delve. And finally, we'll arrive at a new, rigorous and frankly, completely necessary way to evaluate AI risk platforms moving forward.

Ori Wellington

So you don't get caught buying a liability instead of an asset.

Sam Jones

Exactly. Because that's happening a lot right now. And to do this, we're drawing heavily on the incredibly insightful piece you just published in the Risk Tech Journal titled The Compliance Solution: Agentic Hype and the Integrity Gap.

Ori Wellington

Yeah, I'm really glad we can unpack the mechanics of it today. Because like I said in the piece, the implications go far beyond just one software vendor.

Sam Jones

Oh, for sure. As advisors at Wheelhouse Advisors, we see this constantly. And we're also going to synthesize some fascinating, corroborating details today from TechCrunch, Inc. magazine, WebWire, and uh startuphub.a blog to give you the full 360-degree view.

Ori Wellington

It's a wild story.

Sam Jones

It really is. But here's where it gets really interesting. How did a highly funded compliance platform, I mean, a company backed by Y Combinator and heavy hitters like Insight Partners, how did they get exposed?

Ori Wellington

That's the crazy part.

Sam Jones

Right. It wasn't the SEC. It wasn't some team of highly paid forensic regulators deploying advanced discovery tools. No. It was an anonymous Substack writer and a publicly accessible Google spreadsheet.

Ori Wellington

Which is just it highlights a glaring vulnerability in the whole ecosystem. I know, I know. But what's fascinating here is that this isn't just a story about a single company cutting corners. Like if we only look at Delve, we miss the systemic pressures that actually created Delve in the first place.

Sam Jones

So it's a symptom, not the disease.

Ori Wellington

Exactly. This is a case study of what happens when the intense market pressure to claim agentic AI capabilities completely outpaces the structural integrity of the software itself.

The Agentic GRC Gold Rush

Sam Jones

Okay, so let's unpack this. Because to understand how we got to a publicly accessible Google Sheet bringing down a tech darling, we have to um we have to understand the environment that birthed this company.

Ori Wellington

The agentic GRC gold rush.

Sam Jones

Right. So GRC stands for governance, risk, and compliance. It's basically the plumbing of enterprise trust.

Ori Wellington

If you've ever tried to sell software to a hospital or a bank, you know you need GRC.

Sam Jones

Oh, yeah. You can't even get in the door without it. And according to your article, this entire market has a massive pressure problem right now.

Ori Wellington

The pressure is immense and it's acting on every single participant in the ecosystem simultaneously.

Sam Jones

How so?

Ori Wellington

Well, you have the vendors who are building these compliance platforms, right?

Sam Jones

Yeah.

Ori Wellington

You have the enterprises who are buying them to speed up their sales cycles. And then you have the venture capitalists who are funding them.

Sam Jones

So everyone's involved.

Ori Wellington

Everyone. And they're all operating under the exact same ticking clock. The mandate from the market is just so clear, move fast, claim agentic capabilities, and show AI traction before the window closes.

Sam Jones

Which naturally produces shortcuts.

Ori Wellington

Inevitably.

Delve’s Promise Of Instant SOC 2

Sam Jones

Let's talk about Delve, because they were really the poster child for this rush. They launched back in 2023. And their pitch was basically the holy grail for any company that has ever had to suffer through a security audit.

Ori Wellington

Oh, it was a beautiful pitch.

Sam Jones

It was. They pitched themselves as an AI-driven, agentic GRC platform. They promised that their AI agents would autonomously connect to your company's infrastructure. Like they'd pull configurations from your AWS instances.

Ori Wellington

Right. They'd scan your code for vulnerabilities.

Sam Jones

Yeah. And they'd even auto-fill those massive 300-question vendor security spreadsheets that enterprise procurement teams always send over. I mean, nobody wants to fill those out.

Ori Wellington

Nobody. The promise was continuous, autonomous operation. The agents were supposed to just, you know, work in the background.

Sam Jones

Like magic.

Ori Wellington

Exactly. Monitoring the client environments, updating evidence in real time, and alerting engineering teams to security gaps without any human intervention at all.

Sam Jones

And the grand finale of their pitch was that they'd organize all of this evidence into perfectly wrapped, audit-ready packages for major frameworks. We're talking SOC 2, HIPAA, ISO 27001, GDPR.

Ori Wellington

The big ones.

Sam Jones

All the big ones. And their timeline, this is the kicker: they promised compliance in days, not months.

Ori Wellington

Yeah, we really need to pause on that specific phrase, days, not months, because it explains exactly why the market was so eager to buy what Delve was selling.

Sam Jones

It's intoxicating.

Ori Wellington

It is. Let's look at the actual mechanics of a traditional SOC 2 certification, just for context.

Sam Jones

Yeah, walk us through the traditional way.

Ori Wellington

So for an average software-as-a-service company, achieving SOC 2 compliance the traditional way requires working with an accredited, independent auditing firm.

Sam Jones

Real human beings.

Ori Wellington

Real humans. The auditor establishes an observation period, which is usually three to six months. And during that time, these human beings are asking your engineering team for specific timestamp proof that your security controls actually work.

Sam Jones

So they can't just take your word for it.

Ori Wellington

Nope. They want to see the database logs, they want to see the pull request approvals. And that process historically takes anywhere from six to twelve months of back and forth.

Sam Jones

Which is incredibly slow.

Ori Wellington

It's slow, and it costs between $15,000 and $50,000.

Sam Jones

It's a massive friction point. I mean, if you're a startup founder, that's months of your lead engineer taking screenshots of server settings instead of building the actual product. It's grueling.

Ori Wellington

It is grueling, yes. But it's grueling by design. The friction is where the verification happens. Then Delve walks into the room and says, hey, we can give you that exact same gold standard certificate, but instead of six to twelve months, it'll take days.

Sam Jones

And a fraction of the cost, right?

Ori Wellington

Exactly. Instead of fifty thousand dollars, it'll cost you between six thousand and fifteen thousand dollars. They claimed to be using AI agents to completely eliminate all that manual friction.

Sam Jones

And the market swallowed it whole. I mean, that narrative led straight to a $32 million Series A funding round led by Insight Partners. It gave them a $300 million valuation.

Ori Wellington

It was a rocket ship.

Sam Jones

It was. They rapidly acquired a roster of over 1,000 clients spread across 50 different countries. Because the enterprise sales teams at these client companies felt immense pressure to adopt this.

Ori Wellington

Right, because AI speed had transitioned from being just a cool feature to being a mandatory competitive signal.

Sam Jones

If you don't have it, you're behind.

Ori Wellington

Demonstrating AI readiness is a non-negotiable part of the enterprise sales conversation now. If you're a vendor and you can't show your clients that you're leveraging AI to be faster and more efficient, you look obsolete. You lose the deal. Exactly. Falling behind on compliance certification feels like an existential risk to a startup trying to close enterprise deals. So the certificate itself became the sole objective.

Sam Jones

The piece of paper.

Ori Wellington

Yes. The actual robust control environment that the certificate was supposed to represent became entirely secondary.

Sam Jones

It's a fascinating psychological trap for the tech sector, isn't it? There's this deeply ingrained assumption that if a process is slow like manual compliance auditing, it must inherently be broken or inefficient.

Ori Wellington

Right. It's the whole disrupt everything mindset.

Sam Jones

Exactly. And therefore, an AI that comes along and does it instantly, well, that must be a massive technological breakthrough. We're so conditioned to worship disruption that we forget that sometimes a process is slow for a very specific reason.

Ori Wellington

A very good reason.

Sam Jones

Yeah. Sometimes you're not buying a breakthrough, you're just buying a very dangerous shortcut.

Ori Wellington

It's the assumption of automation without the verification of methodology. The tech sector loves the ethos of move fast and break things.

Sam Jones

Which is great for consumer apps.

Ori Wellington

But when you're dealing with enterprise data security and federal compliance frameworks, the entire point is that things cannot be broken. The methodology actually matters more than the speed.

Sam Jones

Okay, let me try out an analogy here. Let's see if this tracks. Imagine you're trying to build a hundred-story skyscraper. The traditional way takes years, right? You have to dig a massive foundation, pour thousands of tons of concrete, let it cure, test the soil.

Ori Wellington

Build a steel skeleton.

Sam Jones

Right. Slowly add the floors. It's expensive and it takes a long time.

Ori Wellington

A very apt comparison for establishing a mature risk program, by the way.

Sam Jones

Thanks. So then a new contractor shows up and says, I have this incredible new 3D printing technology. We can print your entire hundred-story skyscraper overnight.

Ori Wellington

Sounds amazing.

Sam Jones

So you pay them, and sure enough, the next morning there's a giant skyscraper standing there. It looks like a building. You can take a photograph of it and show it to your investors. You get a certificate of occupancy.

Ori Wellington

But the mechanics of how it was built matter.

Sam Jones

Exactly. Because they skipped pouring the concrete foundation. They just printed the walls straight onto the dirt. It's a total facade. You have the certificate, but you wouldn't want to be inside that building when a storm rolls in.

Ori Wellington

And in the compliance world, the storm always comes.

Sam Jones

It's just a matter of time.

Ori Wellington

Whether it's a zero-day data breach, a regulatory inquiry, or a massive enterprise client sending their own security team to verify your posture, the foundation is eventually going to be tested.

The Whistleblower Exposes The Facade

Sam Jones

Which brings us to the storm that hit Delve. Because a skyscraper without a foundation eventually cracks. And in March 2026, Delve's cracks became a very, very public spectacle.

Ori Wellington

This is where the structural failure of the illusion becomes impossible to ignore. We have to look at the exact technical nature of the whistleblower allegations.

Sam Jones

Let's talk about that Google sheet that broke the facade. According to the whistleblower allegations that dropped in March 2026, the product didn't just fall short of its marketing claims. It allegedly, fundamentally, didn't work as described at an architectural level.

Ori Wellington

The allegation is profound, honestly. The entire pitch was that Delve's AI automated the collection of evidence, right? Which was then securely handed over to independent human auditors to review and draw conclusions from.

Sam Jones

Because that's the legal requirement.

Ori Wellington

Exactly. That is the legal and ethical requirement of these frameworks. The auditor has to be the one who looks at the evidence and says, yes, this control is effective. Right. But the whistleblowers allege that Delve's agents were actually generating the auditor conclusions and the test procedures before the client's actual data was even reviewed.

Deterministic Proof Versus LLM Guessing

Sam Jones

Okay, wait. Explain the mechanism there. How does a software platform generate a conclusion before it even looks at the data?

Ori Wellington

It comes down to the difference between deterministic verification and probabilistic generation.

Sam Jones

Okay, break that down for us.

Ori Wellington

Sure. In a deterministic system, the software goes to an AWS server, runs an API call to check if, say, multi-factor authentication is enforced, gets a true or false log, and hands that specific log to an auditor.

Sam Jones

Right. That's real evidence.

Ori Wellington

That's real evidence. But large language models are probabilistic. They generate text token by token based on patterns.

Sam Jones

Right. They guess the next word.

Ori Wellington

Exactly. The allegation is that Delve used LLMs to dynamically generate the written text of the auditor's final report. The narrative saying the controls were tested and working based on templates rather than basing it on deterministic cryptographic proof of the client's actual server settings.

Sam Jones

So they were basically using AI to write the answer key before the test was even taken.

Ori Wellington

Essentially, yes.

Sam Jones

They just bypassed the actual telemetry completely.

Ori Wellington

The gap between what Delve said its agents automated and what they allegedly produced is the gap between a tool that supports a compliance program and a tool that actively fabricates one. Wow. If an AI writes the conclusion without deterministic proof, it's hallucinating compliance.
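The deterministic/probabilistic distinction Ori draws can be sketched in a few lines of code. This is a minimal illustration under our own assumptions, not Delve's or any vendor's actual implementation; the `Evidence` record and `check_mfa_enforced` function are hypothetical names:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class Evidence:
    """A deterministic evidence record: the raw value an auditor can inspect,
    plus when it was collected. (Hypothetical structure for illustration.)"""
    control_id: str
    passed: bool
    raw_value: object
    collected_at: str

def check_mfa_enforced(account_config: dict) -> Evidence:
    """Checks one concrete control (is MFA enforced?) against actual telemetry.
    Same input always yields the same pass/fail verdict. Contrast with an LLM,
    which generates a plausible-sounding conclusion token by token, with no
    guarantee it reflects this data at all."""
    enforced = account_config.get("mfa_enforced") is True
    return Evidence(
        control_id="CC6.1-mfa",
        passed=enforced,
        raw_value=account_config.get("mfa_enforced"),
        collected_at=datetime.now(timezone.utc).isoformat(),
    )
```

Run against a live configuration pull, the same settings always produce the same verdict and preserve the raw log for the auditor; there is nothing to hallucinate.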

Sam Jones

And the operational mechanics of this alleged failure are just wild because they were doing this at such a massive scale, they got sloppy.

Ori Wellington

Very sloppy.

Sam Jones

The report templates were allegedly so identical across completely unrelated companies that it became obvious if you just put two of them side by side. Like an AI-generated risk assessment for a healthcare startup looked identical to a risk assessment for a fintech company.

Ori Wellington

And we have to look at how they handled the human element of the certification process, too.

Sam Jones

Right. Because you still need a signature.

Rubber Stamped Audits And Fallout

Ori Wellington

Exactly. To legally issue a SOC 2 or an ISO certificate, you need an independent accredited auditor to sign off. The AI can't sign the document itself.

Sam Jones

So how do they get around that?

Ori Wellington

The allegations state that Delve was routing these dynamically generated certifications through what are described as offshore mills.

Sam Jones

Oh wow.

Ori Wellington

Yeah. These were operations functioning under nominal U.S. addresses, employing people to essentially rubber stamp the AI-generated conclusions, completely bypassing the rigorous independent review that the entire compliance model legally requires.

Sam Jones

That is, I mean, the fallout was immediate. Y Combinator, which is arguably the most prestigious startup accelerator in the world, asked Delve to leave its program.

Ori Wellington

Which you rarely see happen publicly.

Sam Jones

Never. It's usually handled quietly. The CEO of Delve put out a statement with a very vague admission about, you know, growing too fast and falling short of standards. They're pushing back on specific allegations, and as of now, the legal battles are still unresolved. But the market damage is done.

Ori Wellington

It's done. But what's fascinating here is the systemic oversight failure. You have to ask yourself, how did a $300 million operation operating across 50 countries go completely unnoticed by the authorities?

Sam Jones

Right. No government regulator caught it.

Ori Wellington

Exactly. The American Institute of CPAs, which governs SOC2, didn't flag it. The entire regulatory ecosystem failed to detect this anomaly.

Sam Jones

And who caught it? An anonymous writer on Substack who stumbled across a publicly accessible Google spreadsheet filled with confidential client audit reports.

Ori Wellington

Almost comical.

Sam Jones

The sheer absurdity of that is hard to overstate. You have this incredibly sophisticated AI-native operation, and it unraveled because someone forgot to change the sharing permissions on a Google sheet. But honestly, why didn't the regulators catch it?

Ori Wellington

It highlights a critical vulnerability in how we govern trust right now. The compliance certification infrastructure that we rely on today was designed for a bygone era.

Sam Jones

Like pre-AI.

Ori Wellington

Way before AI. It was built for a world where audits were bespoke, manual processes. Think about the analogy of building a high-speed rail system, but still using manual switchboard operators from the 1920s to manage the tracks. The old system assumed a slow, human-driven process.

Sam Jones

Right. If you're faking audits manually, you can only fake so many before an auditor notices a discrepancy, or you literally just physically run out of hours in the day. The friction limits the scale of the fraud.

Ori Wellington

Exactly. But because Delve operated at scale with AI agents, they overwhelmed the old system. There was no cross-client pattern detection built into the regulatory body's oversight mechanisms.

Sam Jones

So nobody was comparing the reports.

Ori Wellington

Right. There was no automated way to verify the independence of the auditors working at those offshore mills. There was no anomaly signal layer scanning the content of these reports across the industry to say, hey, why do these thousand reports have the exact same probabilistic phrasing?

Sam Jones

Because that tool didn't exist.

Ori Wellington

Completely missed it.
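The "anomaly signal layer" Ori describes doesn't require exotic tooling. A crude sketch, with hypothetical function names and a naive exact-match fingerprint (a real system would use fuzzier similarity measures across report phrasing):

```python
import hashlib
import re
from collections import defaultdict

def report_fingerprint(text: str) -> str:
    """Lowercase a report body, collapse whitespace, and hash it, so
    boilerplate reused verbatim across clients collapses to one fingerprint."""
    normalized = re.sub(r"\s+", " ", text.lower()).strip()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def flag_duplicate_reports(reports: dict) -> list:
    """Group client IDs whose reports share a fingerprint. Any group with
    more than one client is a cross-client anomaly worth human review."""
    groups = defaultdict(set)
    for client_id, body in reports.items():
        groups[report_fingerprint(body)].add(client_id)
    return [clients for clients in groups.values() if len(clients) > 1]
```

Even a check this simple, run industry-wide, would surface a thousand reports sharing the same template language; the point is that no oversight body was positioned to run it.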

Sam Jones

And that brings us to a really uncomfortable conversation about the investor point of view.

Ori Wellington

It's perhaps the most alarming part of the entire narrative, to be honest. Yeah. Because it reveals a fundamental flaw in how capital is being deployed, evaluated, and managed in the AI era.

Sam Jones

Okay, let's analyze the investment thesis that led Insight Partners and actual Fortune 500 chief information security officers who participated in the funding to back Delve. Because if you look at this purely through the lens of a traditional VC, this looked like the ultimate disruption play.

Ori Wellington

Right. Put yourself in the shoes of a capital allocator for a minute. You're operating in a market that aggressively rewards AI native software that automates complex knowledge work.

Sam Jones

It's what everyone wants.

Ori Wellington

Exactly. You see a company that has taken a painful 12-month enterprise process and collapsed it into a few days. They have a thousand active clients. They're operating in 50 countries. The revenue growth looks spectacular.

Sam Jones

I mean, it essentially writes its own investment memo.

Ori Wellington

It does.

Sam Jones

But they committed a massive due diligence failure. They completely failed to ask the single most important load-bearing architectural question. They didn't ask, does this platform actually preserve auditor independence at the code level?

Ori Wellington

And if we connect this to the bigger picture of venture capital, we have to talk about why standard SaaS diligence frameworks completely failed here.

Sam Jones

The playbooks they've used for a decade.

Ori Wellington

Right. When VCs evaluate a typical software as a service company, they look at a very specific set of financial metrics. They look at annual recurring revenue or ARR, they look at customer acquisition cost, CAC, and crucially they look at net dollar retention or NDR.

Sam Jones

Let's break down NDR for a second because I think it's really the culprit here. NDR basically measures how much revenue your existing customers are retaining and expanding.

Ori Wellington

Exactly.

Sam Jones

So if you start the year with a customer paying you $10,000 and by the end of the year they upgrade and pay you $14,000, your NDR is 140%. And VCs absolutely love high NDR.

Ori Wellington

They do because it indicates a sticky product that customers find valuable. And if you're selling an easy button for compliance, your NDR is going to be incredibly high.

Sam Jones

Because it's so easy.

Ori Wellington

Right. Startups were happily paying Delve to renew their SOC 2, and then buying the add-on modules for HIPAA and GDPR because the AI made it so effortless. The financial metrics look flawless. These frameworks are brilliantly calibrated to find one thing: financial traction.
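Net dollar retention really is that simple, which is part of why it's so seductive. A sketch using the hosts' own numbers (the function name is ours, not a standard API):

```python
def net_dollar_retention(start_arr: float, end_arr_same_cohort: float) -> float:
    """NDR: period-end revenue from the customers you started the period with,
    as a percentage of what they paid at the start. Expansion pushes it above
    100; churn and downgrades pull it below."""
    return end_arr_same_cohort / start_arr * 100

# The episode's example: a $10,000 customer upgrades to $14,000.
ndr = net_dollar_retention(10_000, 14_000)  # 140.0
```

And that is the hosts' critique in miniature: nothing in this formula can tell you whether the product customers are expanding on is architecturally sound, only that they are buying more of it.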

Sam Jones

But compliance platforms are not just another piece of SaaS. Like if I buy a marketing AI tool and it hallucinates a bad subject line for an email campaign, my open rates drop. I lose some leads. It's bad, but it's contained.

Ori Wellington

Right. It's an isolated failure.

Sam Jones

Exactly. If I buy a coding assistant and it writes inefficient code, my engineers catch it in testing. But if I buy a compliance platform and it hallucinates an audit, the consequences are catastrophic and systemic because in compliance, the product is trust. The product is trust. That's a great way to put it.

Ori Wellington

That is the fundamental difference in the risk profile here. Standard SaaS metrics cannot measure whether an AI is doing the right thing architecturally. They can only measure if people are buying it.

Sam Jones

And people were buying it in droves.

Ori Wellington

Right. But in a trust market, financial traction built on a hallucinated foundation is just untriggered churn. It's technical and legal debt masquerading as revenue.

Sam Jones

Wow.

Ori Wellington

The diligence didn't surface the architectural truth because metrics like NDR aren't designed to measure code level integrity.

Sam Jones

It really points to a massive expertise gap in venture capital right now. They understand software growth mechanics, but they don't possess the domain expertise to know what a mature compliance program actually looks like from the inside.

Ori Wellington

Without that deep domain expertise, investors are just flying blind. They're mistaking the extremely fast production of compliance artifacts like the PDFs, the certificates for actual programmatic maturity.

Sam Jones

They see the paper and think the building is solid.

Ori Wellington

Exactly. They see the output and assume the foundation exists without ever verifying the pipeline that generated that output.

Sam Jones

But wait, let me push back on that. Let me play the defender of the VC here for a second.

Ori Wellington

Go for it.

Sam Jones

If I'm an investor, my mandate for my limited partners is to generate returns by finding the next big thing. If standard metrics like NDR and CAC say this company is a rocket ship, aren't those metrics doing exactly what they were designed to do? The VC's job isn't to be a forensic compliance auditor, it's to spot growth. Is it really their fault for not digging into the cryptographic hashing of the evidence logs?

Ori Wellington

It's a fair challenge, and honestly, it's exactly the defense you hear in boardrooms all the time. The gravity of hypergrowth exerts a tremendous pull. But I would argue that when you're investing in a risk and compliance platform, the standard rules of SaaS simply don't apply. And your fiduciary duty requires a completely different level of scrutiny.

Sam Jones

Because of the nature of the product.

Ori Wellington

Exactly. If you're funding a company that issues legal attestations of security, understanding the mechanism of that attestation isn't an edge case. It is the core of the business model.

Sam Jones

So if you're that investor sitting in a boardroom and a founder slides a pitch deck across the table showing a thousand active enterprise clients, raving reviews on G2, and a revenue chart that's just doubling every quarter, what specific question should you be asking to pump the brakes?

Ori Wellington

The specific question they had to ask is not about revenue. They had to ask about the platform's architecture. They needed to ask, show us technically where your agent's automation ends and where the independent human auditor's judgment begins.

Sam Jones

Where is the line?

Ori Wellington

Right. And they needed to demand proof that this line could not be crossed by the software.

Sam Jones

And if the founder can't show you that hard-coded barrier in their system architecture, you walk away no matter how good the NDR looks.

Ori Wellington

Exactly. Because if that architectural separation doesn't exist, you aren't investing in a compliance tool. You're investing in a liability engine.

Buyer Liability And Procurement Red Flags

Sam Jones

A liability engine. That perfectly describes the nightmare that the buyers woke up to.

Ori Wellington

Let's pivot to the customer side of this because the enterprise buyers and procurement teams are the ones left holding the bag right now. We need to talk about the concrete red flags and the buyer's dilemma in this new landscape. The legal exposure for the buyers right now is immense. And it's a deeply unfair situation in many ways because these companies bought Delve specifically to reduce their risk.

Sam Jones

Right. If you're a buyer, you wanted to do the right thing. You wanted to secure your posture, close your own enterprise deals faster, and show your partners you were compliant.

Ori Wellington

You had good intentions.

Sam Jones

But by using these allegedly fabricated certificates to represent your security posture to the market, you may now be legally liable under massive regulatory frameworks like HIPAA in the US and GDPR in Europe.

Ori Wellington

It's a devastating irony. The risk reduction tool they purchased became the single massive risk source for their entire operation. Let's look at the mechanics of HIPAA liability, for instance.

Sam Jones

Yeah, how does that work?

Ori Wellington

If you hand a hospital network a fabricated HIPAA compliance report, and that hospital network entrusts you with protected health information based on that report, you aren't just breaching a vendor contract. If a breach occurs, you are potentially violating federal law, because you falsely attested to having controls in place that didn't exist.

Sam Jones

And the liability just flows downstream.

Ori Wellington

The liability flows directly downstream to the buyer.

Sam Jones

If you're an enterprise buyer listening to this, imagine the scenario.

Ori Wellington

The contract that makes your year.

Sam Jones

Right. And now you're waking up to headlines that the software that generated that document is an alleged fraud mill. The document you staked your company's reputation on is fundamentally worthless. The real-world panic of that realization has to be chilling.

Ori Wellington

Which is exactly why procurement teams have to fundamentally change how they evaluate these tools. The era of the easy button is over. Buyers must immediately stop asking vendors, can this platform give us SOC 2 in days?

Sam Jones

Because it's the wrong question.

Ori Wellington

That question is inherently dangerous now because it selects for vendors willing to cut architectural corners.

Sam Jones

So, what is the concrete red flag? What should procurement teams be asking instead during the vendor evaluation process?

Ori Wellington

They need to ask the exact same architectural question that the investors failed to ask. Procurement teams must demand to know: where, in this platform's architecture, does agent automation end and independent verification begin? And they cannot accept a fluffy marketing answer.

Sam Jones

Right. It can't just be a bullet point on a PowerPoint slide that says we value auditor independence. It can't be a mere policy statement on their website.

Ori Wellington

No, it must be a demonstrable software design decision. The vendor needs to open up the hood and show the buyer the hard line separating evidence collection from independent judgment.

Sam Jones

How does a vendor actually prove that? Like what does a hard line look like in software?

Ori Wellington

It looks like specific cryptographic and access control mechanics. For example, a legitimate vendor will use role-based access controls, or RBAC, to ensure that the AI agent has read-only permissions to gather logs, but zero permissions to write or alter the final auditor's report.
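As a minimal sketch of the read-versus-write split Ori is describing, here is what that RBAC boundary can look like in code. The role names and permission flags are purely illustrative assumptions, not any real vendor's schema:

```python
from enum import Flag, auto

class Permission(Flag):
    READ_EVIDENCE = auto()   # gather logs and configurations
    WRITE_REPORT = auto()    # author or alter the final auditor's report

# Hypothetical role table: the AI agent is read-only by construction;
# only the human auditor role can ever touch the report.
ROLES = {
    "ai_agent": Permission.READ_EVIDENCE,
    "human_auditor": Permission.READ_EVIDENCE | Permission.WRITE_REPORT,
}

def is_allowed(role: str, needed: Permission) -> bool:
    """Return True only if the role's grant includes the needed permission."""
    return needed in ROLES.get(role, Permission(0))
```

The point of the sketch is that the separation lives in the access model itself, not in a policy document: no code path exists by which the agent role acquires report-writing rights.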

Sam Jones

Okay, that makes sense.

Ori Wellington

Another method is cryptographic hashing. When the system pulls a log from AWS, it should immediately generate a SHA-256 hash of that log. When the human auditor reviews it, they check the hash to ensure the evidence hasn't been tampered with or dynamically altered by a language model in transit. It's about deterministic proof.
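The hashing check Ori describes can be sketched in a few lines with Python's standard library. Function names here are illustrative:

```python
import hashlib

def hash_evidence(raw: bytes) -> str:
    """Compute a SHA-256 digest of an evidence artifact at collection time."""
    return hashlib.sha256(raw).hexdigest()

def verify_evidence(raw: bytes, recorded_digest: str) -> bool:
    """At review time, the auditor recomputes the hash.

    Any alteration of the evidence, even a single byte changed in
    transit, produces a different digest and fails verification.
    """
    return hashlib.sha256(raw).hexdigest() == recorded_digest
```

Because SHA-256 is deterministic, the same bytes always yield the same digest, which is what lets a human auditor independently confirm the evidence is exactly what was collected.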

Sam Jones

That makes total sense. If the AI can write the final conclusion, the platform is fundamentally compromised. The answer to that architectural question tells you whether you're buying a tool that helps build your compliance program or a tool that simply replaces your program with a meaningless, hallucinated document. But knowing you need a foundation doesn't really help a procurement team staring at a 50-page vendor pitch deck full of buzzwords. It sounds simple when we say it out loud: look at the architecture. But we both know that in the heat of a sales cycle, with executives screaming to close deals and adopt AI, relying on individual intuition to spot a cryptographic flaw isn't a reliable defense mechanism.

Ori Wellington

No, it's not.

Sam Jones

Having a formalized framework is always better than relying on gut instinct.

A Better Way To Evaluate Risk Tools

Ori Wellington

It is essential. You need a structured way to evaluate these claims that removes the emotion and the hype from the equation. You need a model that quantifies maturity.

Sam Jones

Which brings us perfectly to the solution outlined in your piece and the models we published. As advisors at Wheelhouse Advisors, we need to talk about evaluating maturity using the IRM navigator curve and the ADRI index. Because behind every single one of these failures, the vendors, the buyers, the investors, is the exact same underlying condition.

Ori Wellington

Yes. The entire AI disruption pressure cooker rewards the announcement of agentic capabilities, and it completely ignores the program maturity that makes those capabilities trustworthy in the first place. This isn't a technology problem we're looking at. It's a sequencing problem.

Sam Jones

A sequencing problem. Let's dig into that. So skipping straight to autonomous AI is the problem. There has to be a correct sequence. I was looking at the IRM navigator curve you detailed, and it struck me that it's basically an evolutionary chart for risk management. IRM stands for Integrated Risk Management.

The IRM Navigator Curve Explained

Ori Wellington

That's right. The IRM navigator curve is a model that describes risk program development as a sequence. And the sequence matters because each stage builds the necessary structural foundation that the next stage absolutely requires. You cannot skip steps without introducing catastrophic instability. It starts at the bottom with the foundational stage. This is the unglamorous work. It's establishing basic policies, implementing actual access controls, collecting real, deterministic evidence, and subjecting that evidence to independent human verification.

Sam Jones

This sounds like the boring manual stuff nobody wants to do anymore. This is pouring the concrete for the skyscraper. Give me a real-world example of a company at the foundational stage.

Ori Wellington

Sure. A company at the foundational stage is a startup that just hired its first compliance manager. They are manually reviewing who has access to their GitHub repositories. They are writing down their incident response plan in a Google Doc. They are taking physical screenshots of their AWS settings to show an auditor.

Sam Jones

It's very hands-on.

Ori Wellington

It's manual, it's slow, but it's real. The evidence is undeniably tied to physical reality. It's not compliance theater. It's the real substrate of every advanced function that will come later.

Sam Jones

Okay. So once the foundation is solid, where do they go next?

Ori Wellington

Then you move to the coordinated risk stage. This is where you connect that foundational IT risk data to your broader enterprise decision making.

Sam Jones

What does that look like in practice?

Ori Wellington

It means the security team isn't operating in a silo anymore. For example, if the IT team flags that a legacy server is running an outdated operating system, that risk is coordinated with the finance team's budget planning to fund a migration. It's coordinated with the legal team to understand the regulatory exposure. The data is starting to flow between departments.

Sam Jones

That makes sense. It's breaking down the silos. So what's the third stage?

Ori Wellington

The third stage is embedded risk. This is where those practices become deeply integrated into your daily operational processes, usually through software integration and APIs.

Sam Jones

So less manual spreadsheets, more automated workflows.

Ori Wellington

Exactly. An example of embedded risk is integrating security scanning directly into your software deployment pipeline. When an engineer tries to push new code, the system automatically checks it against your security policies before allowing it to go live. The compliance isn't a separate audit you do at the end of the year. It's embedded into the daily tools the company uses.
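A toy version of that embedded pipeline gate might look like the following. The severity scale and findings format are hypothetical assumptions, standing in for whatever a real security scanner would emit:

```python
# Hypothetical pre-deploy gate: block a release when any scan finding
# meets or exceeds the configured severity threshold.
SEVERITY = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def deploy_allowed(findings: list[dict], block_at: str = "high") -> bool:
    """Return True if the deployment may proceed.

    Each finding is a dict with a "severity" key; the gate fails closed
    on anything at or above the block_at level.
    """
    threshold = SEVERITY[block_at]
    return all(SEVERITY[f["severity"]] < threshold for f in findings)
```

The design point is the one Ori makes: compliance stops being a year-end audit and becomes a check the deployment tooling runs on every push.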

Sam Jones

All right, so we have foundational, coordinated, and embedded. That brings us to the fourth stage.

Ori Wellington

Extended risk. This is where you bring in continuous cross-domain telemetry and real-time decision support, extending beyond your own four walls to include your entire vendor ecosystem.

Sam Jones

Ah, the third-party risk problem.

Ori Wellington

Exactly. At the extended stage, you aren't just monitoring your own servers. You have continuous API integrations, pulling security posture data from all your critical vendors. If a vendor you rely on experiences a configuration drift, your system flags it in real time.

Sam Jones

And then finally, at the very peak of the curve.

Ori Wellington

At the top sits autonomous risk management. This is the true AI endgame. This is where AI agents can operate within validated guardrails, making routine risk decisions, auto-remediating minor configuration drifts, and requiring minimal human intervention.

Sam Jones

But the critical caveat is that they can only do this because the foundation beneath them is rock solid. They are pulling from deterministic data that was established in the earlier stages.

Ori Wellington

Precisely. If we connect this to the bigger picture, the curve explains Delve's fatal flaw perfectly. Delve's clients, encouraged by the software's marketing, tried to skip the slow, expensive foundational stage entirely. They wanted to jump straight from having zero compliance maturity to the shiny autonomous stage at the top of the curve.

Sam Jones

They wanted the penthouse without building the building. The compliance function that Delve's clients were trying to shortcut, the observation periods, the independent auditor reviews, the actual gathering of real control evidence, that all sits firmly at the foundational stage. Those requirements exist because they build the integrity that every single stage above them depends on.

The ADRI Index And Market Mapping

Ori Wellington

So agentic capabilities deployed without a foundation aren't actually advanced technology at all. They are structurally unstable. They're an illusion waiting to collapse in ways that are not visible to standard financial metrics until something goes horribly wrong.

Sam Jones

Okay, so that's the first dimension, the curve, which measures maturity. But our work at Wheelhouse Advisors also details a second dimension to this analysis, which I found fascinating. The IRM 50 AI Disruption Risk Index, or the ADRI. I want to break down the two axes of this index because it's a brilliant way to map the market.

Ori Wellington

The ADRI evaluates risk technology vendors based on two specific structural dimensions to determine how much disruption risk they carry. The first axis looks at how dependent a vendor's core business model is on the mere production of compliance artifacts, meaning the documents, the PDFs, the certificates themselves. Are they selling a security posture or are they just selling a piece of paper?

Sam Jones

That's the artifact axis and the second axis.

Ori Wellington

The second axis looks at how close they actually stand to enabling genuine autonomous risk capability based on solid, verifiable architecture. That's the integrity axis. Do they have the cryptographic controls, the role-based access controls, the deterministic data pipelines we talked about earlier?
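To make the two-axis idea concrete, here is a rough sketch of how a vendor could be placed on the index. The 0.0–1.0 scores, the 0.5 cutoffs, and the quadrant labels are illustrative simplifications, not the published ADRI scoring methodology:

```python
def adri_quadrant(artifact_dependence: float, integrity: float) -> str:
    """Map a vendor onto the two ADRI axes (each scored 0.0 to 1.0).

    artifact_dependence: how much the business model rests on producing
        compliance documents themselves.
    integrity: how much verifiable architecture (RBAC, hashing,
        deterministic pipelines) sits beneath those documents.
    """
    high_artifact = artifact_dependence >= 0.5
    high_integrity = integrity >= 0.5
    if high_artifact and not high_integrity:
        return "maximum disruption risk"   # documents without architecture
    if high_artifact and high_integrity:
        return "artifact-heavy but defensible"
    if not high_artifact and high_integrity:
        return "durable platform"
    return "early / low signal"
```

The danger zone the episode describes is the first branch: maximum artifact production sitting on minimum integrity architecture.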

Sam Jones

So let's apply the ADRI to Delve. Where did they sit on this index?

Ori Wellington

Based on the allegations and the structural failure we saw, Delve sat at the absolute extreme, most dangerous end of the disruption risk spectrum. They were offering maximum compliance artifact production. They were churning out thousands of SOC 2 and HIPAA reports, but with minimum or perhaps zero integrity architecture beneath it.

Sam Jones

They maxed out the artifact axis and bottomed out the integrity axis.

Ori Wellington

Precisely. Vendors whose entire business model depends on churning out compliance documents at scale, without the architectural integrity layer that makes those documents meaningful, carry massive disruption risk. If a regulatory body decides to audit the auditor, that vendor's entire business model goes to zero overnight.

Sam Jones

This is incredibly practical for the audience. So if I'm an investor managing a fund or an enterprise buyer sitting at the procurement desk, how do I use these two frameworks together in my day-to-day operations?

Ori Wellington

You use the IRM navigator curve to identify where a vendor's deployment of AI agents is architecturally premature. You use it as a maturity lens. When a vendor pitches you, you ask yourself, is this agentic GRC platform extending a foundational program my company has already built, or is it trying to substitute for a foundation I haven't built yet? If it's acting as a substitute for the hard work, walk away.

Sam Jones

And how do investors use the ADRI?

Ori Wellington

Investors use the ADRI as a structured framework for assessing durability. It helps you see past the SaaS growth metrics like NDR and CAC to determine which platforms are architecturally sound and which ones are carrying massive hidden integrity exposure. It highlights the systemic risk that standard financial diligence will completely miss. You map the startup on the ADRI, and if they are high on artifacts but low on verifiable architecture, you don't write the check.

Sam Jones

The broader market implications here are significant because let's be honest, the Delve case is an extreme example of alleged outright fraud. But the underlying dynamic, the intense pressure to show AI speed, that isn't extreme. That is the daily reality of the tech sector right now.

Ori Wellington

The pressure to skip foundational work is only going to increase. With every major compliance platform now racing to announce agentic capabilities to appease their boards and their investors, the temptation to cut corners to stay competitive is immense. These frameworks are no longer just academic models, they are essential defensive survival tools for capital allocators and procurement teams.

Sam Jones

I have to push back here, though, from the perspective of a legitimate vendor. Foundational work is slow, it's expensive, it's incredibly hard to package as a sexy differentiated capability in a sales pitch. If I'm a legitimate vendor, someone who actually does the hard foundational work, who respects the architecture, who uses cryptographic hashing, how do I compete against a scammer who is promising magic overnight? How do I avoid losing all my VC funding to the flashy competitor who is faking it and showing 150% NDR?

Ori Wellington

It's the hardest question in the market right now. It is incredibly painful to compete against a lie, especially a highly funded lie. But the answer lies in leveraging this exact scrutiny to your advantage. Legitimate vendors must draw a hard architectural line and demonstrate it specifically, transparently, and constantly.

Sam Jones

They have to use their integrity as a weapon.

Ori Wellington

Yes. They have to educate their buyers. They have to sit down at the procurement meeting and say, look, our competitor is offering you a document in three days. We're offering you actual verifiable security, and here is the architectural proof of why their document is a legal liability and our process is legally defensible. They have to show the code-level separation. The vendors who can prove their integrity will ultimately benefit from the fallout of the Delve case, because buyers are going to be terrified of buying another mirage.

Lessons For Buyers And Investors

Sam Jones

The market will correct, but it's going to be a painful correction for those caught on the wrong side of the maturity curve. We've covered incredible ground today. Let's briefly synthesize the vital lessons learned from this deep dive for you, the listener. The allure of AI speed created a deeply flawed market dynamic. It created an environment where desperate enterprise buyers bought certificates instead of actual security, and where sophisticated venture capitalists bought financial traction instead of structural integrity.

Ori Wellington

The core takeaway for any future risk technology investments, whether you're spending VC money or allocating a corporate IT budget, is unyielding. Maturity must come first, agents come second. Your investment diligence absolutely must evolve beyond standard SaaS metrics like NDR. You have to evaluate the architectural separation between automated evidence collection and human judgment.

When Algorithms Audit Algorithms

Sam Jones

And if you can't see that separation, close your checkbook. But as we wrap up, I want to leave you with a final lingering thought. This is something to mull over on your own. A look at what happens next in this space. If an AI platform could successfully fabricate an entire corporate compliance program well enough to fool thousands of enterprise buyers and top-tier venture capitalists today, what happens in the near future? What happens when companies start deploying autonomous AI auditor agents to evaluate the compliance of other autonomous AI vendor agents?

Final Takeaway Check The Foundation

Ori Wellington

It raises a profound and slightly terrifying question about the future of truth in business. When the entire system is automated, when algorithms are evaluating algorithms, will humans even be in the loop long enough to realize that the entire trust ecosystem is just machines hallucinating at each other?

Sam Jones

A skyscraper built on a hallucinated foundation. Thank you for joining us on this deep dive. I highly encourage you to carry the IRM navigator curve and the ADRI frameworks into your next boardroom meeting or procurement negotiation. Until next time, stay curious and always check the foundation.