Product Drift

A year ago, the hardest part of building software was writing code. Now, the hard part is knowing whether it’s any good.

Agents can generate features faster than you can read them. Sometimes they produce whole flows before you’ve decided what comes next. At first, this feels amazing — like the codebase can grow on its own. But quickly you notice something: the product doesn’t just move forward. It starts to drift.

Product drift is subtle at first. Every new feature shifts something. Small, barely noticeable changes accumulate. A button behaves differently here, a flow no longer matches expectations there. CI passes and the code looks right, but underneath it's a coin flip: you can't feel totally confident the code will hold until it's exercised in the real product. The product slowly stops behaving like itself. What once seemed stable starts to wobble quietly.

This challenge isn’t just philosophical — it’s already happening. When features become cheap to produce, the bar for what gets shipped quietly drops. It’s easy to prompt something into existence, much harder to ask whether it should exist at all. The temptation we’re all feeling is to prototype first and think later.

When a design is slightly off, the path of least resistance is to patch around it. In the past, that friction may have forced us to step back and recalibrate. Now the agent can absorb the “hackiness” for you, and the incentive to fix the underlying design fades.

Paradoxically, teams often aren’t even moving faster. They’re just producing more.

Most of our team meets with customers and hears feedback regularly, so everyone has a running list of user papercuts and potential changes. Before our background agent, those went into the #product-feedback Slack channel, where they sparked debate and variations until there was enough momentum to merit being added to the engineering backlog. Once our background agent launched, a sentence about the idea was all it took: we could all tag @Yogi and kick off the work immediately.

In some cases, seeing the new feature in action is very helpful for shaping the discussion around it and spawning more iterations before it is merged. In others, the feedback makes sense, but the shape of the solution isn't thought through well enough: when the prompt was "allow a user to do X", the background agent defaulted to adding yet another button to an interface that was already densely packed. The right solution required taking a step back, thinking through the user's actual motivation, and finding an opportunity to offer that action at the moment they needed it.

The lack of friction for new features also means that they are sometimes larger than even the owner of the change realizes. In one case, a component we thought we were adding to internal tooling made it into the external site as well. The prompt didn’t specify that it shouldn’t be public. The code reviewer didn’t realize that it was unintentional. And when the screenshot was pitched in #product-discussion, the team member who kicked it off didn’t know that the agent had added it in both places.

This is a tension many teams are feeling, not just ours. We use coding agents for everything now, and the discipline that once slowed us down just enough to think is starting to disappear. When a coding agent is an @ away in Slack, you'll build anything without thinking twice. It's liberating at first, but the edges start to fray once you're reviewing PRs that don't feel aligned with what your customers want.

At Ranger, we don't believe in slowing agents down, adding heavier human review, or imposing bureaucratic approval processes. The problem isn't speed; it's that feedback loops haven't kept up with it. Manual review, intuition, and static tests all fail at this pace.

Instead, we’ve been building a layer of continuous verification. Coding agents do not act alone; they work with QA agents that test, probe, and evaluate the product continuously. Ranger is a dialogue: humans define the taste, but the agents extend it, allowing judgment to scale without collapsing under velocity. We need a continuously evolving source of truth that both humans and agents can reference.

In our case, we've now vetoed new buttons on the crowded dashboard multiple times in Slack. When our background agent takes the path of least resistance and tries to add yet another one, it should be reminded that an extra button on that screen isn't the customer experience we want, and pointed to our feature review database for examples of related user flows we think are better.

We think the next problem to solve is contextual: we want Ranger to review features for you (from Figma, Linear, etc.) and directly incorporate past feedback, letting creativity run fast without losing product coherence. We're creating Ranger so that everyone has a tool that allows their judgment to scale with the pace of building. Try it out here!