Essay 01
Why AI Projects Fail — And Why It's Not What You Think
By Steve Cannon
I've spent six years watching AI projects fail inside real
companies. Not fail because the technology didn't work. Not fail
because the talent wasn't there.
Fail because the organization didn't have the structure to act
on what the model was telling them.
Here's what that looks like in practice.
My team built a model at a Medicare insurance company losing
$1–2M per year on $79M in revenue. What we found:
- 3 of 50 states were causing $9.7M per year in losses
- Recommendation: stop buying leads in those three states
- Modeled opportunity: +$8.7M per year in EBITDA
The model was right. The data was clear.
The company didn't act. Why? Because leadership couldn't
explain why those three states were loss-making. The model
found the pattern. It couldn't provide the story. And without
the story, nobody moved.
Same losses. Year after year.
That's not a technology problem. That's not a talent problem.
That's an agency problem.
What the agency problem actually is
The agency problem is simple. It's what happens when the people
making decisions don't bear the full consequences of those
decisions. Managers protect their budgets, their teams, their
explanations. A model that threatens any of those things gets
ignored — not out of bad faith, but because the incentive
structure makes ignoring it the rational choice for the person
holding the budget.
This happens everywhere. The only structure I've seen actually
solve it is private equity.
Think about what PE actually gives you:
- Controlling interest — the authority to act on what the data says, even when management resists
- Rigorous metrics — the language to defend model-driven decisions
- Patient capital — the time to let the model be right
PE wasn't invented for AI. It was invented to solve the agency
problem between managers and owners. But it turns out those are
the same problem.
Most funds just aren't using their toolkit this way yet.
That's the gap system8.ai exists to close.
Steve Cannon, Founder & CEO, system8.ai
Essay 02
Don't Clone Humans Into Agents
By Steve Cannon
Every AI firm right now is selling the same idea.
Map each human role to an AI equivalent. Build the agent that
replaces the analyst. The agent that replaces the customer
service rep. The agent that replaces the operations manager.
It's intuitive. It's visual. It almost always fails.
Here's why.
A job description is not a workflow. When you try to replace a
human role with an AI agent, you inherit everything that role
actually involves:
- The messy data inputs the human was quietly cleaning
- The edge cases handled with judgment that was never written down
- The organizational dependencies nobody documented
- The failure modes — which in an agent are harder to predict and harder to explain than in a human
The result is a project that is technically ambitious,
organizationally disruptive, deeply expensive — and produces
results roughly equivalent to what the human was doing before.
That is not value creation.
The right question isn't "which roles can we replace?" It's
"where are decisions being made badly?"
Those are different questions. They lead to very different
places.
The wins in Phase 2 consistently come from three places:
- Decision optimization — find the decisions made hundreds of times a day and make them better with math. Lead buying. Pricing. Scheduling. Inventory. The human stays. The decision quality improves dramatically. EBITDA moves.
- Workflow redesign — instead of replicating an existing workflow with AI, ask what the workflow would look like if you designed it from scratch today. The answer is almost never the same workflow with a human swapped out. It's usually something fundamentally better.
- Automation of the genuinely repeatable — data entry, report generation, routine communications. Fast to automate, easy to verify, immediately visible in headcount economics. And it frees people to do work that actually requires them.
The test before any AI initiative
- Are we replacing a role or improving a decision?
- Are we replicating a workflow or redesigning it?
- Is this task complex or just time-consuming?
For a PE fund manager, this matters for a specific reason.
Clone-the-human projects look ambitious, consume significant
capital, generate organizational resistance, and produce
results that are hard to measure.
Decision optimization projects move EBITDA. They build on each
other. They generate a documented value creation story that
holds up in diligence.
That's the difference between an AI initiative that ends up in
a footnote and one that ends up in the exit narrative.
Steve Cannon, Founder & CEO, system8.ai