I’m currently building an AI-powered incident investigation product that automatically debugs and triages production issues when incidents are declared.

We’re one of many new entrants in the “Agentic SRE” space (yuck). Working here has given me a front-row seat to the competitive dynamics of the AI ecosystem, which I’m increasingly seeing as the innovator’s dilemma writ large.

The dilemma describes how successful companies can fail precisely because they listen to their customers and invest in the projects that promise the highest returns. This rational behaviour makes them vulnerable to smaller companies that initially serve smaller, less profitable markets but eventually grow to threaten the incumbent’s core business.

In today’s AI landscape, this dynamic is supercharged. Startups with products that can be quickly adjusted to capture AI-enabled use cases have significant advantages over incumbents, in ways that will make AI one of the most disruptive technological shifts in recent history.

This is all standard posturing you’ll see from most AI ‘influencers’, so I’ll jump into what I’m actually seeing working with this technology day-to-day.

Exponentially higher cost to GA

Large incumbents face a critical problem with AI products: they must solve for enormous scale from day one, while startups like ours can focus narrowly and ship earlier. This difference in go-to-market requirements creates a surprising advantage for smaller players.

The economics are challenging, as the investment required to build a sophisticated AI product scales with both the size and diversity of your customer base. That scaling isn’t linear; it’s multiplicative, and the cost isn’t just what appears on your AI bill.

As you scale an AI system to larger problem-spaces, two things happen:

  1. You require vastly more evaluation datasets and testing
  2. The scope of the problem can explode before your team gets traction solving it

When building our incident investigation agent, we began by focusing exclusively on solving for ourselves, starting with an agent that finds the code changes that caused our incidents. Making that system good meant building training data, running repeated evals, and constantly testing against our modest datasets. At our scale, spend on salaries far outweighs our AI bills, and the resulting agent generalises to a decent chunk of our userbase.
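For a sense of what that eval loop looks like, here’s a minimal sketch. Everything in it is hypothetical: the function name, the dataset format, and the top-3 scoring rule are stand-ins for our real tooling.

```python
# Minimal sketch of an eval loop for a "which change caused this incident?"
# agent. All names and the scoring rule are invented for illustration.

def find_suspect_changes(incident: dict) -> list[str]:
    # Stand-in for the real agent: in practice this calls the LLM with the
    # incident context plus recent deploys, and returns candidate commits.
    return ["abc123", "999fff"]

# Each case pairs a historical incident with the commit we know caused it.
DATASET = [
    {"incident": {"id": "INC-101", "summary": "checkout 500s"}, "culprit": "abc123"},
    {"incident": {"id": "INC-102", "summary": "queue backlog"}, "culprit": "def456"},
]

def run_evals(dataset: list[dict]) -> float:
    hits = 0
    for case in dataset:
        candidates = find_suspect_changes(case["incident"])
        # Count a hit if the known culprit appears in the agent's top 3.
        if case["culprit"] in candidates[:3]:
            hits += 1
    return hits / len(dataset)

print(run_evals(DATASET))  # 0.5; rerun on every prompt or model change
```

Running a suite like this on every meaningful change is what turns prompt tweaking into engineering, and it’s also where the costs come from.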

Now imagine you’re PagerDuty or GitHub, both companies where an agentic SRE would fit well into their offerings. To make this agent worth building, it would need to work for such a large number of customers that you’d be drowning in diverse tech stacks and different coding practices. Achieving consistent performance across that variety of use cases would require an eval suite that is prohibitively expensive to run at the frequency your team will need to iterate effectively. This often leads larger companies to delay investment while waiting for technical improvements that promise to reduce these costs.
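To make the multiplicative scaling concrete, here’s an illustrative back-of-envelope. Every figure is invented for the example, not taken from any real bill.

```python
# Illustrative numbers only: the point is the shape of the curve.
tech_stacks = 50       # distinct customer environments to support
cases_per_stack = 40   # eval cases needed to cover each environment
cost_per_case = 0.25   # dollars of inference per eval case
runs_per_week = 20     # how often the team iterates

weekly_cost = tech_stacks * cases_per_stack * cost_per_case * runs_per_week
print(f"${weekly_cost:,.0f}/week")  # $10,000/week, before anyone's salary

# A startup covering 2 stacks at the same cadence pays $400/week: the bill
# scales with stacks x cases x iteration rate, not any one factor alone.
```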

This creates a window where companies like ours can build, deploy, and iterate on AI systems while larger competitors are still figuring out how to make the economics work. By the time they solve the scale problem, we’ll have multiple generations of improvement and real-world feedback baked into our systems.

That headstart is even more crucial because…

Initial buy-in is the same

I recently wrote a blog post called “Beyond the AI MVP” that discusses how easily you can build a credible-looking AI prototype that turns out to be useless in practice. You only realise this when people start genuinely using the product.

It’s at that point that the real work starts, because in order to build AI systems effectively you need:

  • A team with experience deploying AI for real-world uses to real people
  • Investment in tooling driven by your specific company use cases
  • Time for your team to learn the new tools

Whether you’re building an agentic AI system for large-scale use cases or a more modest customer profile, the buy-in required before your team can begin effectively implementing the system is similar. That buy-in includes building the tools you need to be effective with AI, such as eval suites, backtests, and scorecards. It’s also about changing how you work and upskilling your team.
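As an example of the scorecard half of that tooling, here’s a hypothetical sketch. The dimensions and weights are invented, but the idea is to grade each agent investigation consistently, so backtests over historical incidents produce comparable numbers across prompt and model versions.

```python
# Hypothetical scorecard for a single agent investigation. Dimensions and
# weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Scorecard:
    found_culprit: bool       # surfaced the change that caused the incident?
    cited_evidence: bool      # linked logs/dashboards supporting the claim?
    seconds_to_answer: float  # latency matters to responders mid-incident

    def score(self) -> float:
        # Weighted aggregate; the weights here are arbitrary.
        s = 0.6 * self.found_culprit + 0.3 * self.cited_evidence
        s += 0.1 * (1.0 if self.seconds_to_answer < 60 else 0.0)
        return s

# A backtest replays historical incidents through a candidate prompt or
# model and compares mean scores against the current production version.
cards = [Scorecard(True, True, 42.0), Scorecard(True, False, 95.0)]
print(sum(c.score() for c in cards) / len(cards))  # 0.8
```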

All of this learning and investment is slow and, crucially, doesn’t get much faster by throwing more resources at the problem. You can put the entire company on it, but that’s unlikely to accelerate the learning curve of any individual engineer, and the chaos of everyone building competing internal tools without clear direction is unlikely to help. We found it hard to parallelise this investment beyond two or three people.

Startups that can deploy AI systems into simpler customer environments will build a meaningful headstart in expertise and internal tooling, and avoid the AI MVP trap.

Maybe larger companies will pay the buy-in late but eventually catch up, at which point their resources let them overtake? Well…

Small team advantage

One advantage of a larger company is people, whom you can (presumably) throw at a problem to make things go faster.

If you look around, though, the AI break-out companies like Cursor aren’t massive: they’re small, with fewer than 20 people at $100M ARR! The standard explanation is “first-mover advantage in a new market,” but I think it’s more fundamental than that. We simply haven’t cracked how to build compelling AI tools with large teams.

“Feature factories” will not be able to evolve agentic systems into a polished experience. Not now, at least: in these early days, you need a small core team who can hold the entire system in their heads, so they can catch the often unpredictable impact of small prompt tweaks in a system where errors compound exponentially. They leverage tools like evals to help control the chaos, but “craftsmanship” has become more important to a quality AI experience than it has been in product engineering for the last decade or more.

For incumbents, this is a problem. They’ll need much smaller, skunkworks-esque teams to do the initial discovery and development, and established companies are typically worse at that execution model than the startups who live and breathe it.

Even if they create these teams…

Dogfooding and iteration speed

The most powerful feedback loop in AI development is dogfooding: using the product yourself in real-world scenarios. Actively using our own product pushed us to build speculative agent tool calling to make chat interactions fast enough for people to enjoy them, for example, and caught everything from the AI misunderstanding European names (thanks Leo Sjöberg!) to weird prompt misdirections.
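Speculative tool calling deserves a quick sketch. The idea, shown here in a minimal, hypothetical form with stand-in functions and latencies, is to start the tool call you predict the model will make while the model is still deciding, so the two latencies overlap.

```python
import asyncio

async def fetch_recent_deploys(service: str) -> str:
    await asyncio.sleep(1.0)  # stand-in for a slow API call
    return f"recent deploys for {service}"

async def model_picks_tool(prompt: str) -> str:
    await asyncio.sleep(0.8)  # stand-in for LLM decision latency
    return "fetch_recent_deploys"

async def answer(prompt: str, service: str) -> str:
    # Optimistically start the tool we predict the model will choose,
    # concurrently with the model actually making its choice.
    speculative = asyncio.create_task(fetch_recent_deploys(service))
    chosen = await model_picks_tool(prompt)

    if chosen == "fetch_recent_deploys":
        # Right guess: the ~1.0s tool call overlapped the ~0.8s model
        # latency, so we wait ~0.2s more instead of the full second.
        return await speculative
    speculative.cancel()  # wrong guess: throw away the wasted work
    return f"run {chosen} instead"

print(asyncio.run(answer("why is checkout failing?", "checkout")))
```

The trade-off is wasted inference and API calls on wrong guesses, which is only worth paying once dogfooding tells you which tools the model reaches for most often.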

We haven’t yet scaled these systems to work for the largest customers, though, and incumbents are locked out of this kind of dogfooding precisely because of their size and complexity. If iteration is the only reliable way to learn what it takes to build these systems, being unable to dogfood is a massive handicap.

Red tape at larger companies compounds this problem. When every iteration requires multiple approvals, security reviews, and compliance checks, the essential rapid feedback loop is broken. A change that takes hours at a startup might take weeks or months at an enterprise.

This “iteration advantage” isn’t just about moving faster; it’s about learning faster. In AI product development, the quality of your solution depends heavily on how many real-world scenarios you’ve encountered and solved for. Every iteration that incorporates real feedback compounds, improving the system far faster than theoretical improvements made in isolation.

By the time incumbents have built the infrastructure to test at scale, startups will have gone through dozens of iterations with real users, developing an instinct for what works that’s nearly impossible to shortcut.

So what if these larger companies enact ‘war-time’ policies, dropping all the red tape? Even then…

Traditional moats don’t protect incumbents

Those are a bunch of novel, AI-specific tailwinds for smaller companies. But if you’ve worked in tech long enough, you’ve heard the standard narrative about why established companies are protected: they have the data, the integrations, and access to the best tech.

AI is rewriting this playbook in counterintuitive ways.

Data moats have moved

In our situation, PagerDuty is a good example of an incumbent that doesn’t have the right data to build a moat. Our product has always seen far more of the incident process than theirs: every message posted in the incident channel, attachments, incident follow-ups, tickets, and post-mortems.

AI creates totally new product opportunities, allowing you to solve problems that were previously out of reach. Incumbents won’t have considered these problems because they weren’t solvable before, so it’s likely they never started collecting the relevant data.

The only “moat” that matters is how fast you can iterate on your AI system, and that requires a feedback loop with your customers today. Again, this plays into the dilemma: incumbents delay starting until the tech lets them address the enterprise, while upstarts are already getting hands-on experience and building themselves a headstart.

Integration advantage flips to the newcomer

Established players usually have an integration advantage – everyone already connects to them. Surprisingly, in AI, this dynamic reverses.

Modern startups have strong incentives to connect into existing ecosystems of tools, but incumbents have the luxury of saying “connect to us”. That leaves young startups with fewer integrations, but what they have is under their control.

As AI changes how systems communicate, a breadth of existing integrations is only an advantage if you can effectively improve and change them. Startups will be fine, but the long wait for integrator ecosystems to make changes becomes an anchor holding incumbents back.

Cutting-edge technology is being commoditised

The sharp edge of AI R&D is happening inside the mega-scale AI companies like OpenAI, Anthropic, and Google, not (or only rarely) in the companies building AI products.

It’s highly unusual for cutting-edge technology to be as available to a consumer purchaser as to a massive corporation, or for access to be metered in a pay-as-you-go fashion that disrupts capital advantages too. The AI startup launching on ProductHunt this week will use the same models as Google itself: it’s an unusually even playing field.

Startups are going to win this

This is going to be a major disruptive shift, and it’s happening extremely quickly. Generative AI changes enough of what is possible, and how to achieve it, that the rules have changed: almost every traditional advantage of incumbency has been neutralised or inverted.

To recap:

  • Building for large, diverse customer bases is exponentially more expensive than focusing on narrow verticals, creating space for startups to innovate and making it hard for incumbents to justify the investment.
  • This headstart matters because the initial investment in AI requires a similar time commitment regardless of company size, creating an opportunity for startups to build expertise and tooling.
  • Small, cohesive teams have an advantage in developing AI products since we don’t yet have a playbook for effectively scaling AI development across large teams, countering incumbents’ biggest advantage (their resources).
  • Dogfooding and rapid iteration are only possible at certain scales, giving startups the ability to learn from real-world usage and improve faster than incumbents locked out by their own complexity.
  • Traditional moats around data, integrations, and the distribution of cutting-edge tech no longer apply.

This isn’t to say incumbents are doomed. The smartest ones will respond by creating autonomous AI divisions with different economics and timelines, acquiring promising startups early, or pivoting to platform plays that leverage their scale in new ways.

But there’s a window of opportunity right now that’s unusually wide. For perhaps the next 2-3 years, we’ll see startups solving problems that should “rightfully” belong to established players. Just look at how quickly Cursor has become the dominant IDE, or the speed at which ElevenLabs hit $100M ARR, in spaces you’d expect JetBrains or Google to dominate.

What happens after that window closes? My guess is that we’ll see a consolidation phase where the successful startups either get acquired or grow into the new incumbents. But by then, the market will have been fundamentally reshaped, and we’ll think about AI very differently than we do right now.

It’s a crazy time to be working in tech. If you’re building an AI startup, though, don’t be intimidated by competitors with deeper pockets. You have a surprising number of systemic advantages, provided you can move quickly enough to use them.

If you liked this post and want to see more, follow me at @lawrjones.