Context Engineering · Engineering Insights

The context problem moved from people to agents

Dennis Pilarinos · March 24, 2026

A developer whose code sails through review isn't necessarily better than one who gets walls of comments. They just know the “terrain”: which patterns the system expects, which constraints nobody wrote down, and why the previous team made the choices they did. That context is what separates code that compiles from code that makes sense for that system and that team.

Engineering orgs have understood this for years, and it's why we started Unblocked.

The original problem: reconstructing context

When we began, we were focused on developers. Every time someone picks up new work, they reconstruct context from scratch. They tap a teammate, send a Slack message to the last person who touched that code, and dig through old PRs, all to understand why things are the way they are. None of it is particularly hard, but it is tedious, frustratingly slow, and it breaks flow. It only gets worse as the team scales.

That was the original problem we set out to solve. However, the world has changed. A lot.

The shift: agents now have the same problem (but worse)

Humans are no longer writing most of the code. AI-generated code is how teams ship now, and the DORA State of AI-Assisted Software Development report says the vast majority of developers use coding agents daily, with adoption still accelerating. Those agents have a worse version of the context problem than any human developer.

When a new hire starts, they have context gaps, but they're in “the room”. They attend standups, read Slack, and absorb conventions through review. They have context debt, but they're paying it down from day one.

An agent starts from zero every time. It might be a competent programmer in the abstract, but it knows nothing about your codebase, your team's history, or why decisions were made the way they were. So it produces code that compiles, passes tests, and fails review.

The problem gets worse when agents try to compensate. They search broadly and stop as soon as they find something plausible, a pattern known as “satisfaction of search”. Because the agent doesn't know what it's missing, it can't tell a plausible answer from a correct one. It commits early, generates confidently, and produces something that looks right until someone who understands the system takes a closer look.

Expanding access gives agents more information, not better answers

Most teams recognize this behaviour and respond by expanding access: more MCP servers, more rules files, more documentation, bigger context windows. These help at the margins, but they don't fix the underlying problem. MCP servers expose slices of information without resolving conflicts between them. Rules capture what someone thought to write down, missing the implicit knowledge that lives in old threads and past decisions. Bigger context windows add noise, and model performance drops as irrelevant material accumulates. The strategy in every case is the same: give the agent more raw information and hope it figures out what matters.

That's a retrieval solution (incorrectly) applied to a reasoning problem.

A context engine resolves conflicts before the agent starts reasoning

A context engine works differently. Rather than waiting for the agent to ask the right question, it gives the agent a reconciled view of the system before it starts reasoning, and stays up-to-date and available as decisions get made. When the agent encounters something ambiguous, it does not need to arbitrate between conflicting sources or decide whether it has seen enough information.
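To make the distinction concrete, here is a minimal, hypothetical sketch of the two strategies. Everything in it (`Source`, `raw_retrieval`, `reconcile`, the "most recent claim wins" policy) is illustrative shorthand, not Unblocked's actual API; a real context engine would arbitrate on far richer signals than recency.

```python
# Hypothetical sketch: raw retrieval vs. reconciling before reasoning.
# All names and the recency-wins policy are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Source:
    origin: str   # e.g. "rules file", "old Slack thread", "ADR"
    claim: str    # "topic: what this source says about the system"
    updated: int  # recency, e.g. a year or timestamp

def raw_retrieval(sources: list[Source]) -> str:
    # The "expand access" strategy: dump everything into the prompt
    # and let the agent arbitrate. Conflicts survive intact.
    return "\n".join(f"[{s.origin}] {s.claim}" for s in sources)

def reconcile(sources: list[Source]) -> str:
    # A context engine resolves conflicts first. Here the policy is
    # simply "most recent claim on each topic wins", standing in for
    # whatever real arbitration (authority, provenance, review state)
    # a production system would apply.
    by_topic: dict[str, Source] = {}
    for s in sources:
        topic = s.claim.split(":")[0]
        if topic not in by_topic or s.updated > by_topic[topic].updated:
            by_topic[topic] = s
    return "\n".join(s.claim for s in by_topic.values())

sources = [
    Source("rules file", "auth: use the legacy session middleware", 2022),
    Source("ADR-41", "auth: use the new token service", 2025),
]

print(raw_retrieval(sources))  # both claims reach the agent, conflict intact
print(reconcile(sources))      # one reconciled claim the agent can act on
```

With raw retrieval, the agent sees both auth claims and must guess which one the team actually follows; with the reconciled view, the arbitration happened before reasoning began.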

We ran a controlled test: same model, same task. Without organizational context, the agent burned roughly double the tokens and needed hours of iteration to reach production-quality code. With context, it generated mergeable code in half the time, with half the tokens.

It's clear that the tools people use to write code will keep changing. We've already moved from IDE-centric to CLI-based to agent-driven workflows, and there's no reason to think that's the end of it. What remains constant is that whoever (or whatever) is modifying a system needs to understand how it works, regardless of the interface.

Why this matters now

We built Unblocked because organizational knowledge is scattered, contradictory, and expensive to reconstruct on demand. That problem hasn't changed. What's changed is how often it shows up and how fast things go wrong without it. The gap that used to slow down developers now shows up in every agent-driven workflow.

We're extending the Unblocked context engine into those environments so that agents get the same level of understanding that experienced engineers carry around in their heads.