Features

Jun 9, 2025

Why We Built Replicate When Sentry Exists

A founder's honest take on building in a space where giants already roam

"Why didn't you just build a feature for Sentry?"

I've heard this question about fifty times since we launched Replicate. It's a fair question. Sentry is an incredible tool that's helped millions of developers catch and fix errors. They've built something genuinely valuable, and honestly, we're fans.

But here's the thing: we built Replicate precisely because Sentry exists and does what it does so well. Let me explain.

The Gap That 3 AM Debugging Sessions Revealed

Picture this: It's 3 AM, and you're staring at a Sentry error that says "Cannot read property 'user' of undefined" with a stack trace pointing to line 247 of your auth middleware. Helpful? Absolutely. But you still have no idea why this happened.

Was the user logged out? Did they refresh mid-session? Were they clicking rapidly on a button that should have been disabled? Did the race condition happen because of slow network connectivity, or was there an edge case in your state management?

Sentry tells you the what. But as developers, we spend most of our debugging time figuring out the why.

This gap hit me hard when I was building my previous product. We had solid error monitoring, but I was still getting Slack messages like:

"Hey, the dashboard is broken again 😞"

Followed by me asking:

"Can you tell me exactly what you clicked?"

Followed by:

"I don't remember, I tried a bunch of stuff"

Sound familiar?

What Sentry Does Brilliantly (And Why We're Not Trying to Replace It)

Let's be clear: Sentry excels at what it was built for. Error monitoring, performance tracking, release health – these are foundational needs that every development team has. They've created an incredible platform that captures technical errors across virtually every stack imaginable.

But Sentry was designed for developers, by developers, with a developer-first mindset. It assumes you want raw data, stack traces, and technical context. Which makes perfect sense when you're dealing with server errors, unhandled exceptions, or performance bottlenecks.

The challenge comes when you're dealing with user-reported issues that don't necessarily throw errors. Or when the error is technically correct, but the user experience that led to it is the real problem.

The "Works on My Machine" Problem Sentry Can't Solve

Here's a real example from my own experience:

User report: "I can't submit my application. The button just doesn't work."

Sentry logs: Clean. No errors. No exceptions. The submit endpoint was working fine.

Reality: After two hours of investigation, we discovered the user was rapidly clicking the submit button while the form was still validating. Our loading state wasn't clear enough, so they thought it was broken. Technically, everything worked correctly. From a user experience perspective, it was completely broken.
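For illustration, here's a minimal React-style sketch (hypothetical code, not our actual form) of the kind of loading feedback that was missing – the button ignores repeat clicks and makes its busy state impossible to miss:

```tsx
import { useState } from "react";

// Hypothetical example: a submit button that swallows rapid re-clicks
// while validation/submission runs, and shows an explicit loading label.
export function SubmitButton({ onSubmit }: { onSubmit: () => Promise<void> }) {
  const [submitting, setSubmitting] = useState(false);

  const handleClick = async () => {
    if (submitting) return; // ignore extra clicks instead of failing silently
    setSubmitting(true);
    try {
      await onSubmit();
    } finally {
      setSubmitting(false);
    }
  };

  return (
    <button onClick={handleClick} disabled={submitting} aria-busy={submitting}>
      {submitting ? "Submitting…" : "Submit"}
    </button>
  );
}
```

None of this would ever show up in an error log, because nothing ever threw.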

This is where traditional error monitoring hits a wall. The issue wasn't in our code – it was in the gap between what we built and how users actually interact with it.

Visual Context Changes Everything

When we started building Replicate, we had a simple thesis: most "bugs" aren't bugs in the traditional sense. They're misunderstandings between developers and users.

Users don't think in terms of API responses and state management. They think in terms of "I clicked this, then this happened, and it didn't match what I expected."

So instead of starting with error logs, we started with user experience. What if you could see exactly what the user saw? What if you could watch their session unfold like a movie? What if AI could analyze that session and explain not just what went wrong, but why the user experienced it as broken?

That's fundamentally different from error monitoring. It's experience debugging.

The AI Layer: From Data to Understanding

This is where things get interesting. Sentry gives you incredible data. But data isn't understanding.

When our AI (we call her Quanta) analyzes a session replay, she doesn't just tell you what happened. She explains things like:

  • "The user clicked 'Submit' 4 times in 2 seconds because the loading state wasn't visible on their screen resolution"

  • "This error occurred because the user navigated back and forward rapidly, causing a race condition in your state management"

  • "The user experienced this as a bug because they expected the form to auto-save, but your validation cleared their input"

That's not error monitoring – that's user experience intelligence.

Why We Didn't Just Build This as a Sentry Integration

We actually considered this early on. Why not build Replicate as a layer on top of existing error monitoring?

The answer comes down to data architecture and philosophy. Error monitoring tools are designed to be lightweight and performance-focused. They capture specific moments when things go wrong.

Session replay and AI analysis require capturing entire user journeys, understanding interaction patterns, and maintaining context across multiple page loads. That's a fundamentally different technical challenge that requires different infrastructure.
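To make that architectural difference concrete, here's a rough sketch of the two data shapes. The types are purely illustrative – they're neither Sentry's nor Replicate's real schemas:

```ts
// Error monitoring captures a point-in-time event:
interface ErrorEvent {
  message: string;      // e.g. "Cannot read property 'user' of undefined"
  stackTrace: string[]; // frames at the moment of failure
  release: string;
  timestamp: number;
}

// Experience debugging captures the whole journey around that moment:
interface SessionRecording {
  sessionId: string;
  pages: { url: string; enteredAt: number }[];           // context across page loads
  interactions: {
    type: "click" | "input" | "navigate";                // what the user actually did
    target: string;
    timestamp: number;
  }[];
  errors: ErrorEvent[];                                   // the same events, now with context
}
```

One is a snapshot of a failure; the other is a timeline of behavior. Storing, indexing, and analyzing timelines is a different problem from storing snapshots.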

More importantly, it requires a different mental model. Instead of "what errors are happening," the question becomes "what experiences are confusing users?"

The Complementary Approach

Here's what's interesting: many of our users also use Sentry. The two tools are solving different problems.

Sentry catches the technical issues – the unhandled exceptions, the performance regressions, the server errors that need immediate attention.

Replicate catches the experiential issues – the user flows that feel broken even when the code works perfectly, the edge cases that only surface with real user behavior, the UX gaps that create support tickets.

They serve different purposes: one monitors your code, the other helps you understand your users.

The Real Competition Isn't Sentry

Our actual competition isn't error monitoring tools. It's the status quo of developers spending hours trying to reproduce user-reported issues.

It's the endless Slack threads of "it's broken" followed by "can you give me more details?"

It's the assumption that users will somehow become better at reporting bugs if we just ask them nicely.

It's the idea that good developers should be able to debug any issue with just a stack trace and some determination.

We're competing against the inefficiency of the current debugging workflow, not against the tools that power it.

Where We're Headed

Sentry has done something remarkable: they've made error monitoring so good that it's become infrastructure. Every serious development team uses some form of error monitoring because the value is obvious and immediate.

We want to do the same thing for user experience debugging. We want it to become obvious that understanding user behavior is just as critical as monitoring errors.

Imagine a world where "it's broken" reports come with complete visual context and AI-powered explanations. Where developers spend their time building features instead of playing detective. Where user experience bugs are caught and fixed as quickly as server errors.

That's the world we're building toward.

The Honest Truth

Could Sentry build what we've built? Absolutely. They have incredible talent and resources.

But they're solving a different problem. They're focused on helping developers monitor and debug their code. We're focused on helping developers understand their users.

Both are valuable. Both are necessary. And honestly, both make each other better.

When you can see both the technical error (via error monitoring) and the user experience that caused it (via session replay and AI analysis), you get the complete picture. That's when debugging transforms from guesswork into understanding.

We built Replicate not because error monitoring was wrong, but because it was incomplete. The future of debugging isn't choosing between technical data and user context – it's having both.

If you're curious about what user experience debugging looks like in practice, try Replicate free for 14 days.
