Eduardo Arsand

The Stack You Inherited Was Not Designed for Your Problem


Most of the technical decisions shaping my day-to-day work were made before I arrived, by people who no longer work on the projects I maintain, for a context that may no longer exist.

That is not a complaint. It is a description of how software development works in practice — and why so much engineering effort disappears into friction that has nothing to do with the actual problem being solved.

The stack was not designed for my problem. It was designed for someone else's problem, at someone else's scale, under someone else's constraints. I inherited it.

Where Inherited Stacks Come From

They come from a few predictable sources.

The first is success: a company grows, the architecture that worked at ten people strains at a hundred, and by the time that strain is felt the system is too embedded to replace easily. The decisions that made sense early become load-bearing walls nobody remembers pouring.

The second is imitation: a small team adopts the architecture of a large company because the large company published blog posts about it, because it looked serious, because the engineers who joined came from places that used it. The scale mismatch is invisible at adoption and expensive later. A microservices architecture that Netflix needed to coordinate hundreds of teams introduces coordination overhead that a ten-person team pays without any of the corresponding benefit.

The third is inertia inside the stack itself: frameworks and platforms make decisions on your behalf — about how state is managed, how data flows, how side effects are handled — and those decisions were made for the median use case of the tool's target audience. Your use case may not be median. The framework does not care.

What It Feels Like From Inside

There is a specific texture to working inside an inherited stack that does not fit the problem.

Features that should be simple require navigating layers of abstraction that exist for reasons nobody on the current team can fully reconstruct. A data model built for a different product shape forces every new requirement into contortions. Deployment infrastructure sized for a different traffic profile costs money and attention that produces no user value.

The experienced engineers on the team develop workarounds — local knowledge about which parts of the system to avoid, which patterns are safe, where the bodies are buried. That knowledge is real and valuable. It is also a symptom. When navigating the stack requires an oral tradition, the stack is not serving the team. The team is serving the stack.

The Rewrite Instinct and Why It Usually Fails

The response to a bad architectural fit is often a rewrite, and rewrites fail at a high enough rate to be treated with skepticism by default. Not because the diagnosis is wrong — the fit often is genuinely poor — but because rewrites inherit a different set of problems. The new system is designed for the problem as it is understood today, which is a partial understanding. The edge cases, the institutional memory, the decade of bug fixes quietly embedded in the old system — these do not transfer automatically, and their absence shows up in production in ways that are hard to anticipate and expensive to recover from.

I have seen rewrites that took two years and delivered a system with different limitations than the one it replaced. The architectural mismatch was resolved. New mismatches were introduced. The team had spent two years not shipping product.

What Actually Helps

The more productive frame is not replacement but containment.

Identify where the mismatch is most costly — where the inherited architecture creates the most friction against current requirements — and isolate it. Build the new thing in a way that does not depend on the old thing's shape. Let the boundary between them be explicit and managed rather than implicit and everywhere.

  • New domains added as separate services rather than extensions of an ill-fitting core
  • Data models for new features that do not inherit the constraints of legacy schemas
  • Infrastructure decisions scoped to current scale rather than inherited from a different growth stage
  • Dependencies on framework behavior replaced incrementally at the edges, not ripped out from the center
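One way to make that boundary explicit is a small translation layer that owns all knowledge of the legacy shape, so the new feature's data model never inherits it. The sketch below is illustrative, not a prescription — the legacy field names (`usr_id`, `usr_flags`, `addr_blob`) and the `Customer` model are hypothetical stand-ins for whatever your inherited schema actually looks like.

```python
from dataclasses import dataclass

# Hypothetical legacy row shape: a wide dict from the inherited ORM,
# with overloaded, stringly-typed fields the new code must not touch.
LEGACY_ROW = {
    "usr_id": "42",
    "usr_flags": "premium|beta",
    "addr_blob": "123 Main St;;Springfield",
}

@dataclass(frozen=True)
class Customer:
    """Data model owned by the new feature -- no legacy constraints."""
    id: int
    is_premium: bool
    street: str
    city: str

def from_legacy(row: dict) -> Customer:
    """The explicit boundary: every assumption about the legacy
    shape lives in this one function, and nowhere else."""
    flags = set(row["usr_flags"].split("|"))
    street, _, city = row["addr_blob"].partition(";;")
    return Customer(
        id=int(row["usr_id"]),
        is_premium="premium" in flags,
        street=street,
        city=city,
    )

customer = from_legacy(LEGACY_ROW)
print(customer.is_premium)  # True
```

The point of the pattern is containment: if the legacy schema changes, or is eventually retired, only `from_legacy` has to change. Everything downstream depends on the model you chose, not the one you inherited.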

This is slower than a rewrite and less satisfying. It does not produce the clean slate feeling that makes rewrites attractive.

What it produces is a system that keeps working while it changes — which is the actual requirement, even when it does not feel like it. We do not have to be heroes who fix every problem in the world. We are human.

The Honest Accounting

The stack you inherited encoded a set of tradeoffs. Some of those tradeoffs were correct for the context they were made in. Some were mistakes even then. Most were neither — they were reasonable decisions whose costs only became visible when the context shifted.

Working inside that inheritance productively requires being honest about which category each piece falls into. Not every friction point is architectural debt worth addressing. Some of it is just the cost of operating at scale, or the cost of a domain that is genuinely complex, or the cost of requirements that changed faster than any architecture could have accommodated.

The work is in telling the difference — and in not mistaking the discomfort of inherited constraints for a problem that a better stack would have prevented.

It probably would not have. It would have had different constraints. Inherited by whoever comes next.
