The Thinking

The operating playbook for AI in PE portfolios — built from the inside out.

Steve Cannon has spent 25 years building what most firms are still trying to figure out. Steve publishes in two formats. Field Notes are quick observations. Essays are the full argument. Both are worth your time.

Point of View

Most of the conversation about AI in private equity is happening at the wrong level.

Fund managers are reading frameworks from consulting firms that have never actually pushed a model through organizational resistance. Portfolio companies are running pilots that will never scale because no one built the infrastructure those pilots need to run on.

The hard-won lessons about what actually works — what the quant funds figured out over decades, what the high-volume operators discovered by failing and adapting — aren't in those frameworks. They're in the people who were there.

This is where Steve publishes what he's learned.

Featured Thinking

Field Notes

Note 01

If You're Retaining Equity and AI Is Coming For Your Market — Read This First

For founders considering their liquidity options, the choice of capital structure matters more than most people realize when AI disruption is on the horizon.

Most liquidity options — debt, strategic sale, IPO — come with constraints that make it hard to fund the kind of deep, multi-year AI transformation that companies in disrupted markets actually need. Debt limits investment capacity. Public markets tie you to quarterly results. Strategic buyers expect regular cash generation.

Private equity is structurally different. The 3–5 year timeline, the patient capital, and the operational control create the conditions that AI transformation actually requires. For founders who suspect their market is about to change — and want to retain meaningful equity through that change — that structural difference matters enormously.

Note 02

Why I Built system8.ai — And Why Private Equity Is the Answer I Wasn't Expecting

After six years helping companies adopt AI, the pattern became impossible to ignore.

The reasons AI fails inside companies aren't technical. They're psychological and structural. Learning to manage without stories. Getting data into a state where it can actually be trusted. Finding the patience to endure payback timelines that don't fit a quarterly reporting cycle.

Private equity was built exactly for this kind of problem — not for AI specifically, but for deep, difficult, long-timeline operational transformation. The financial rigor, the patient capital, the controlling interest — these are precisely the tools that push a model's recommendation through the organizational resistance that kills AI projects everywhere else.

That realization is why system8.ai exists.

Note 03

From AI Victim to AI Survivor — Why PE Is the Only Structure That Gets It Done

Applying AI to a real company — even with committed leadership and a great culture — is genuinely hard to pull off without the right structure.

Independent companies lack the capital patience. Public companies are trapped by quarterly results. Strategic buyers want cash generation, not transformation. Private equity is the exception — the one structure with the financial rigor, the timeline, and the operational authority to push quant, AI, and process automation all the way through.

A fund manager with those tools can prevent a portfolio company from becoming an AI victim. The end state — data cleaned, insights mined, inefficiencies resolved, systems automated — is achievable. But only with the right structure pushing it through.

Original Essays

Longer pieces. The arguments behind the playbook.

Essay 01

Why AI Projects Fail — And Why It's Not What You Think

I've spent six years watching AI projects fail inside real companies. Not fail because the technology didn't work. Not fail because the talent wasn't there.

Fail because the organization didn't have the structure to act on what the model was telling them.

Here's what that looks like in practice.

My team built a model at a Medicare insurance company losing $1–2M per year on $79M in revenue. What we found:

  • 3 of 50 states were causing $9.7M per year in losses
  • Recommendation: stop buying leads in those three states
  • Modeled opportunity: +$8.7M per year in EBITDA

The model was right. The data was clear.

The company didn't act. Why? Because leadership couldn't explain why those three states were loss-making. The model found the pattern. It couldn't provide the story. And without the story, nobody moved.

Same losses. Year after year.

That's not a technology problem. That's not a talent problem. That's an agency problem.

What the agency problem actually is

The agency problem is simple. It's what happens when the people making decisions don't bear the full consequences of those decisions. Managers protect their budgets, their teams, their explanations. A model that threatens any of those things gets ignored — not out of bad faith, but because the incentive structure makes ignoring it the rational choice.

This happens everywhere. The only structure I've seen actually solve it is private equity.

Think about what PE actually gives you:

  • Controlling interest — the authority to act on what the data says, even when management resists
  • Rigorous metrics — the language to defend model-driven decisions
  • Patient capital — the time to let the model be right

PE wasn't invented for AI. It was invented to solve the agency problem between managers and owners. But it turns out those are the same problem.

Most funds just aren't using their toolkit this way yet.

That's the gap system8.ai exists to close.

Steve Cannon, Founder & CEO, system8.ai

Essay 02

Don't Clone Humans Into Agents

Every AI firm right now is selling the same idea.

Map each human role to an AI equivalent. Build the agent that replaces the analyst. The agent that replaces the customer service rep. The agent that replaces the operations manager.

It's intuitive. It's visual. It almost always fails.

Here's why.

A job description is not a workflow. When you try to replace a human role with an AI agent, you inherit everything that role actually involves:

  • The messy data inputs the human was quietly cleaning
  • The edge cases handled with judgment that was never written down
  • The organizational dependencies nobody documented
  • The failure modes — which in an agent are harder to predict and harder to explain than in a human

The result is a project that is technically ambitious, organizationally disruptive, deeply expensive — and produces results roughly equivalent to what the human was doing before.

That is not value creation.

The right question isn't "which roles can we replace?" It's "where are decisions being made badly?"

Those are different questions. They lead to very different places.

In my experience, the wins consistently come from three places:

  • Decision optimization — find the decisions made hundreds of times a day and make them better with math. Lead buying. Pricing. Scheduling. Inventory. The human stays. The decision quality improves dramatically. EBITDA moves.
  • Workflow redesign — instead of replicating an existing workflow with AI, ask what the workflow would look like if you designed it from scratch today. The answer is almost never the same workflow with a human swapped out. It's usually something fundamentally better.
  • Automation of the genuinely repeatable — data entry, report generation, routine communications. Fast to automate, easy to verify, immediately visible in headcount economics. And it frees people to do work that actually requires them.

The test before any AI initiative

  • Are we replacing a role or improving a decision?
  • Are we replicating a workflow or redesigning it?
  • Is this task complex or just time-consuming?

For a PE fund manager, this matters for a specific reason. Clone-the-human projects look ambitious, consume significant capital, generate organizational resistance, and produce results that are hard to measure.

Decision optimization projects move EBITDA. They build on each other. They generate a documented value creation story that holds up in diligence.

That's the difference between an AI initiative that ends up in a footnote and one that ends up in the exit narrative.

Steve Cannon, Founder & CEO, system8.ai

Anchor Piece · The Reference Point

What 40 Years of Automation Looks Like in the Numbers

Interactive Brokers started automating trading operations in the 1980s. The structural advantage that investment created is still visible today.

Steve spent a decade at Interactive Brokers building the data engineering framework that made fully automated trading work at scale. The firm didn't achieve its margin advantage by deploying the latest AI tools. It achieved it by making a sustained, long-horizon commitment to letting math and data drive decisions — and building the infrastructure to make that commitment real.

  • Data team size: IB ~7 vs. other brokers 100+
  • Pre-tax margin: IB ~79% vs. other brokers ~15%

That gap is what happens when automation gets pushed all the way through the operating model. Not as a pilot. Not as a proof of concept. As the way the business actually runs.

It is the gap PE is structurally positioned to close — for the companies in your portfolio. The question is whether you move before your competitors do.

More Thinking

Coming regularly.

Steve publishes when he has something worth saying — not on a schedule. If you want to follow the thinking as it develops, the best place is LinkedIn.

Follow Steve on LinkedIn

Your Move

The thinking is free. The application is where the work happens.

If something on this page resonated — if it described something you've seen in your own portfolio or your own investment committee — that's worth a conversation.