Thinking in Systems
During my first years in tech, every time I heard the term Systems I frowned in confusion (except if it was referring to System of a Down, of course). I didn't know why we had so many "systems" around. Anyway, it wasn't until a few years later, when I read Donella Meadows' seminal book "Thinking in Systems", that I realized that, indeed, everything is a system. Stocks. Resources. Flows. Loops.
And here we are, indeed, talking about the SDLC as a system full of stocks, resources, balancing loops and, of course, feedback loops. In this context, there's a thought I keep coming back to because it feels more true every time I revisit it: the inner loop of software development was never really about typing code, and coding agents are making that impossible to ignore.
A few years ago, in A software development analogy nobody asked for, I argued that coding belongs to the design phase of software, not the construction phase. For example, in manufacturing, you design the car and then automate its construction in an assembly line that produces copies of it. In software, writing code is still part of the act of design. The construction phase comes later, when we build, deploy, distribute, and replicate the thing we have already shaped.
That distinction mattered then. It matters much more now.
Because once coding agents can generate a large amount of code on demand, the interesting question is no longer whether AI can code. The interesting question is what happens when automation starts eating into design work itself. Or, more precisely, what happens when the part being partially automated is the inner loop of software creation, especially if we consider that a design loop. And do you know why this matters? Because when you automate something, you are actually constraining it with automation. And constraining design always comes together with some interesting debates. That's what I call The Design Problem.
This is where Developer Experience becomes much more important than many people seem to think.
Good DevEx was never about typing faster
When people talk about AI tools for developers, the conversation often collapses into a weaker idea than Developer Experience deserves. The implicit benchmark becomes faster output. More lines changed. More pull requests opened. More tickets moved. More code, more quickly.
But good Developer Experience was never about typing speed.
The thing we actually cared about was always the quality of the loop: low cognitive load, fast feedback, and a state of flow. A good environment was one where the developer could stay in contact with the system, make a move, see what happened, adjust quickly, and build confidence without constantly loading their mental models.
That is why I keep finding this recent Thoughtworks framing so useful. In their paper, they describe a productivity/experience paradox: organizations can get more output from AI-assisted development workflows while the lived experience of building software gets worse. Developers may produce more, yet feel more interrupted, more supervisory, less certain, and less in flow.
That paradox deserves to sit much closer to the center of the discussion.
Because agentic workflows can absolutely increase throughput while degrading the medium of thought, more automation does not automatically mean better Developer Experience. More generated code can arrive together with more cognitive load. More parallelism can come with more fragmented attention. More apparent speed can coexist with a slower path to confidence. Imagine asking three agents to work in parallel on a feature: one updates the API, one changes the UI, and one adds tests. On paper, this looks like a clear throughput win, but in practice, the poor human developer now has to keep three problem contexts in their head, reconcile three interpretations of the same intent, inspect where they diverged from each other, and decide whether the combined result is actually coherent. More code arrived sooner, but confidence arrived later. The feedback loop is turning into a trust loop.
So if we keep the meaning of “good” constant, the question changes from “how do we generate code faster?” to “under what conditions do coding agents preserve or improve the quality of the inner loop so I can trust them?”
The inner loop is being rebuilt around delegation and trust
One reason this feels disorienting is that agentic development does not only remove work. It also creates a new category of work.
The old loop looked roughly like this:
- Make a change.
- Run it.
- Learn from the result.
- Adjust.
The new loop is starting to look more like this:
- Explain.
- Challenge.
- Delegate.
- Verify.
- Steer.
That is a very different rhythm. The developer is now spending more time framing the problem, decomposing the task, encoding constraints, inspecting output, and repairing mismatches between local correctness and global coherence. The mechanical act of writing code may shrink, but the act of software engineering now incorporates a trust calibration exercise.
Thoughtworks calls part of this the “middle loop,” and I think that name could be useful because it describes something real. They claim there is now a supervisory layer between direct implementation and the broader delivery process which can become mentally expensive very quickly. While the idea is quite powerful, I'd rather not follow this mental model of the middle loop.
What happens when you work with coding agents is that you describe the work, wait for the system, inspect the result, doubt some part of it, tighten the constraints, ask again, review again, steer again. This can work. But it is fragile. The loop has more idle time, more context switching, and more moments where confidence has to be reconstructed manually. That is why I do not think the future of software development can simply be “prompt harder.” If delegation is becoming a first-class activity, then The Design Problem I mentioned earlier is not only the agent. The Design Problem is how to make delegation feel as smooth and cognitively continuous as direct code manipulation once felt. The solution to the problem is to create a System of Delegation.
What is actually at risk is the medium
To me, this is the deeper issue. The thing being disrupted is not just output. It is not even “flow” in some vague emotional sense. What is being disrupted is the medium through which developers think. And this is not only true for coding. Most people have felt some version of it elsewhere. An idea is vague while it stays in your head, then becomes clearer the moment you start writing, sketching, diagramming, or moving something around on the page. Thought sharpens through contact with a medium. That is the real point here. What developers risk losing is not just speed, but that live back-and-forth between the mind and the material.
For decades, we improved the coding environment by progressively reducing the latency between intention and response. Syntax highlighting, compilers, REPLs, hot reload, language servers, static analysis, modern editors, test runners, live previews: each of these made software easier to think in because they tightened the loop between action and feedback, which means that typing code was more about cognition than transcription. For example, the code pushed back with type errors, which are signals. Structure and abstractions emerged through interaction. You did not think everything in advance and then translate it perfectly into syntax. The language, the runtime, and the tooling (all together) participated in the discovery process. That's why I firmly believe the coding part of software is an act of design.
That is what current chat-shaped agent workflows often break.
The loop becomes jerky: prompt, wait, read, evaluate, repeat. Cadence, control, and visibility weaken. The developer is no longer sculpting in direct contact with the material, but intermittently negotiating with a black box that returns chunks of plausible work. Amelia Wattenberger, in a sublime recent article, described this as trying to paint through a mail slot. I think that image captures the problem very well. You slide a description under the door, wait, and a painting comes back. Then you send another note asking for a bluer sky. More than delay, what this new feedback loop destroys is a smooth state of flow for developers.
This is also why I do not think the stable inner loop of the future will simply be “throw a prompt over the wall and tolerate the idle time until the answer comes back.” That feels too much like accepting the current chat interfaces as some sort of destiny. Maybe faster models help? I don't know, maybe local models help in some contexts. But I am not convinced latency alone is the real issue, because in many cases humans do not actually want to inspect every intermediate planning artifact either. Most people do not want to read the full internal scaffolding of the work any more than they want to stare at a spinner. Who reads the output of /plan modes anyway?
So I suspect the answer is not simply a shorter wait for the same loop, but a different kind of environment around the loop itself. One where trust is built less through manual inspection of every intermediate step and more through structured verification: explicit constraints, code reviews based on risk-tiers, policy checks, and other forms of software-mediated validation. The challenge for the next generation of DevEx tools is not to expose more internal scaffolding to the human, but to create better primitives for steering, checking, and understanding the work as it evolves.
This matters because speed changes the shape of the workflow. When a large portion of the implementation arrives quickly, the remaining work starts to feel disproportionately heavy: polish, edge cases, conceptual integrity, the slow discipline of making the system actually coherent. Those tasks can begin to look like drag, when in reality they are where judgment still earns its keep. That is why, like Amelia in her post, I do not think the future is a choice between returning to manual coding and resigning ourselves to the chat-shaped workflow we have today. Both responses assume the current interface is closer to the endpoint than it probably is. I doubt that. The real opportunity is to design a better medium altogether for this new kind of software work.
Rigor is moving, not disappearing
There is another shift here that matters a lot: as agents write more code, engineering rigor moves upstream rather than disappearing. Or in other words, when code generation becomes cheaper, the system needs other ways to remain coherent and trustworthy, so more of the rigor starts accumulating in places like:
- specifications
- tests
- invariants
- types
- policies
- rules (DOs and DONTs)
- risk tiers
- permission boundaries
- architectural constraints
From a traditional coding lens, this can look like overhead, but it's actually a redistribution. And there are some opinions out there about how this redistribution is happening.
A common misreading of this shift is to say that the constraint has moved to the right, into code review, because the queues are now piling up there. In The holiday when software engineering changed forever, I argued that this is not what Theory of Constraints actually predicts. When one constraint is elevated, downstream stages often experience stress before the system adapts. Queues form. Friction appears. But pressure is not the same thing as a constraint.
If code review were now the dominant constraint, then increasing review capacity would increase meaningful throughput. In practice, however, it often does not. You can review faster, test faster, and release faster, and still build the wrong thing at speed. What really happens is that implementation has become so cheap that ambiguity survives farther into the system and starts showing up as downstream pain. The real leverage has moved left, into intent formation, context shaping, constraint articulation, and deciding what should be built in the first place.
If coding is part of design (or better said, is an act of design), then the main design effort is no longer concentrated only in the source files and the code review. More of it starts living in the surrounding structure that shapes what can be safely generated with coding agents in the first place. For example, part of the work may no longer be “write the component,” but “define the constraints under which any acceptable component must be generated”: which patterns are allowed, where business logic may live, what tests must exist, what permissions cannot be bypassed, and which parts of the system are out of bounds.
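A minimal sketch of what that constraint articulation could look like when made operational. The rule names, the `Change` fields, and the checker are all illustrative assumptions, not an existing tool's API; a real setup would derive this data from diffs and wire the checks into CI or the agent harness:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical summary of a generated change; field names are invented
# for illustration.
@dataclass
class Change:
    touched_paths: list[str]
    adds_tests: bool
    crosses_service_boundary: bool

# Each constraint is a named predicate over a proposed change.
@dataclass
class Constraint:
    name: str
    check: Callable[[Change], bool]

CONSTRAINTS = [
    Constraint("business logic stays out of the UI layer",
               lambda c: not any(p.startswith("ui/") and p.endswith("_service.py")
                                 for p in c.touched_paths)),
    Constraint("every generated change ships with tests",
               lambda c: c.adds_tests),
    Constraint("no silent service-boundary crossings",
               lambda c: not c.crosses_service_boundary),
]

def violations(change: Change) -> list[str]:
    """Return the names of constraints the change breaks."""
    return [c.name for c in CONSTRAINTS if not c.check(change)]
```

The point is not this particular checker, but the shape of the work: the constraints become explicit, named artifacts that both humans and agents can read, instead of tacit knowledge in a reviewer's head.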
This is one reason I think coding agents are making the old argument about software construction much easier to understand. We used to apply automation mostly to the outer loop of software: delivery, deployment, packaging, infrastructure, replication. Now we are applying automation to the construction process itself, which means we are being forced to formalize parts of the inner loop that previously remained half-tacit.
And that reveals something important: as I said earlier, to automate something is to constrain it. Now I add: you can only automate the parts that are sufficiently understood. So what does it mean to automate design if we consider coding an "act of design"? Well, we should start by acknowledging that not all design is equally automatable:
- There is exploratory design, where the constraints are still moving.
- There is compositional design, where known concepts are assembled into a coherent shape.
- And there is translational design, where intent is rendered into code, tests, configs, interfaces, and implementation details.
Coding agents are stronger in the translational layer and increasingly capable in parts of the compositional layer. That is already powerful enough to change the center of gravity of the work, because if more of the translational work can be delegated, then the leverage moves toward the design of the conditions under which delegation happens. So, in a manner, we can say that defining those constraints, rules, and policies is effectively defining the space where coding will happen. Or in other words, designing the design space (if we accept coding as an act of design, I insist).
Get ready because this is getting very meta! :)
Designing the design space
Designing the artifact means deciding the answer. Designing the design space means deciding what counts as an acceptable answer. For example, the work may shift from “write this feature” to “define the rules any acceptable version of this feature must obey.”
That difference sounds abstract until you make it operational. In practice, it means defining the primitives, constraints, boundaries, policies, verification rules, extension patterns, and escape hatches inside which many good solutions can emerge without making the system incoherent. A senior engineer in the pre-agent world could often keep much of this in their head. Some architecture lived in diagrams, some in code, some in conventions, and a lot in social enforcement. The team knew, more or less, which moves were acceptable. The rules were partially written down and partially embodied in a few experienced people.
Companies were never especially good at this. Many got away with relaxed boundaries because the enforcement mechanism was interpersonal. Senior engineers, architects, code reviewers, ceremonies, tribal knowledge, and implicit norms were doing the work. That gets much harder now, because coding agents reward explicitness and because the people using these tools will not be limited to the traditional engineering team. Once product managers, designers, UX people, founders, analysts, and assorted idea generators can all ask a system to produce software-like change, the old model of “the constraints live in the heads of senior engineers” stops scaling. What was once an internal engineering coping mechanism becomes an organizational bottleneck. So yes, we need to get those constraints written down now that more people are building, and it's a very hard thing to do.
This, to me, is one of the most important implications of the whole shift. We now need to design the design space more deliberately for two reasons: agents need clearer boundaries, and people beyond the engineering team can now shape software directly through agents (it's leaking outward!). More people can now participate in design-through-generation, which means the rules of the space must become legible beyond the old inner circle.
That forces a new kind of explicitness. And explicitness is not automatically bureaucracy. Good constraints feel like leverage, while bad constraints feel like paperwork. The difference is whether they reduce cognitive load and help create better trusted feedback loops, or merely create ritual around it.
Where DevEx and architecture merge
Once you see the shift in those terms, the relationship between Developer Experience and architecture becomes much tighter. Good DevEx in an agentic environment depends less on how magical the code generator looks in a demo, and more on whether the environment makes intent easy to express, constraints easy to encode, verification easy to perform, and confidence easy to recover.
A strong design space improves DevEx in several ways:
- First, it lowers cognitive load. When boundaries, non-negotiables, and preferred patterns are already encoded in the environment as primitives and functionalities, the developer does not need to keep reloading them into working memory (their memory and the coding agent memory!) every time they delegate work.
- Second, it improves feedback quality. If the agent is operating inside explicit constraints, then the response to a mistake can become more objective and faster: wrong boundary, invalid state transition, policy violation, broken invariant, or mis-flagged risk assessment for manual code reviews and tests.
- Third, it reduces supervision overhead. The inner loop becomes lighter when the working environment is better instrumented. You are no longer doing as much forensic reading to determine whether a generated code change “feels wrong.” The environment itself helps narrow the attention to what matters.
- Fourth, it helps preserve a smooth state of flow. A state of flow is basically a productive hyperfocus: a mental state where you are doing challenging work with intensity, making progress, and feeling satisfaction while doing it. But flow needs the right system around it. The context has to make you feel inside the work, connected to it, motivated by it, and part of the thing you are shaping. That is the “vibe” many people talk about: not decoration, but the motivational tissue that makes deep work feel natural.
One of the real dangers of code abundance is cognitive debt. I would define this debt as the gap between how fast the system changes and how well humans still understand it. If code arrives faster than humans can meaningfully absorb, then one of the old ways engineers learned the system starts to erode. Documented, explicit shared rules, repeated patterns, visible boundaries, and explicit domain models become essential because they make the system legible under higher rates of change.
This is why I increasingly think the future of DevEx looks less like polishing the text editor and more like designing the full environment in which trustworthy software can be produced collaboratively by humans and agents.
The development environment is changing
This is also why I suspect the next generation of software tools will be organized around a different center of gravity.
As Addy Osmani explains in this article, for a long time, the default unit of software work was the file. Open it, edit it, run it, inspect the result, repeat. That made perfect sense when the most valuable engineering moves happened inside direct code manipulation. Once delegation becomes a first-class development activity, the environment has to optimize for a different unit of work: the bounded task. A task has a goal, a scope, a context, a set of constraints, and some visible path to verification. The editor remains essential, but it becomes one instrument inside a broader working surface rather than the unquestioned center of the whole experience. Even tools like Linear are being conceptually re-designed around tasks as the center of the agentic inner loop.
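The "bounded task" described above can be made concrete. A sketch, with invented field names, of what such a unit of work might carry; nothing here reflects an actual tool's data model:

```python
from dataclasses import dataclass

@dataclass
class BoundedTask:
    """Illustrative shape for a delegation-centered unit of work."""
    goal: str               # what outcome counts as done
    scope: list[str]        # paths/modules the agent may touch
    context: str            # pointers to docs, domain notes, prior art
    constraints: list[str]  # non-negotiables the output must obey
    verification: list[str] # how a human (or CI) confirms the result

    def is_in_scope(self, path: str) -> bool:
        # A simple drift check: flag files the delegated work
        # was never meant to touch.
        return any(path.startswith(prefix) for prefix in self.scope)
```

Once a task carries its own scope and verification path, the environment can answer "is this delegated work drifting?" mechanically instead of leaving it to forensic reading of the diff.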
Now, the development environment (IDE?) has to solve a different problem:
- Make intent legible
- Keep context coherent
- Delegate work and show where delegated work is drifting
- Surface where human judgment is actually needed, probably based on a risk-tiered model.
And it also has to do all this without turning the inner loop into a project-management ritual.
This changes what good visibility means. In a file-centered environment, visibility mostly meant seeing code, errors, stack traces, and test output. In a delegation-centered environment, visibility also has to include task state, divergence from intent, boundary crossings, confidence, risk level, and the local context within which an agent is operating. The practical question becomes broader too. It no longer stops at “what changed?” and has to include “does this still belong?”
That is where ideas like risk-tiered attention become so important. Not every generated change deserves the same interruption cost. Not every deviation requires synchronous human review. Not every task should compete equally for attention. When many semi-autonomous loops run in parallel, what becomes scarce is human judgment rather than execution capacity. For example, an agent renaming variables inside a bounded frontend component probably does not deserve the same interruption cost as an agent proposing a schema change, crossing a permission boundary, or touching a payment flow. One can be reviewed later as part of normal flow. The other may need immediate human attention because the blast radius is fundamentally different.
Viewed through that same Theory of Constraints lens I mentioned earlier, risk-tiered attention is a way to reserve scarce human judgment for the moments where the system is genuinely uncertain: ambiguous intent, high blast radius, weak grounding, architectural boundary crossings, policy-sensitive changes. The problem we need to solve is deciding where human judgment creates the most leverage. Seen this way, files, diffs, and terminals start to look more like high-precision fallback tools and escape hatches than first-class primitives. They remain crucial for inspection, debugging, and the moments when a human needs to intervene directly. But they no longer define the whole environment.
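One way to sketch risk-tiered attention, reusing the examples from the paragraphs above. The tier names and input signals are assumptions for illustration; a real system would derive them from diffs, ownership maps, and policy metadata:

```python
def attention_tier(change: dict) -> str:
    """Map a generated change to an attention level.
    Keys are illustrative signals, not an established schema."""
    # High blast radius: demand synchronous human judgment.
    if change.get("touches_payment_flow") or change.get("schema_change"):
        return "block: synchronous human review"
    if change.get("crosses_permission_boundary"):
        return "block: synchronous human review"
    # Weak grounding: pause and ask rather than guess.
    if change.get("ambiguous_intent"):
        return "pause: ask the human to clarify"
    # Everything else (e.g. a rename inside a bounded component)
    # can be reviewed later as part of normal flow.
    return "continue: async review in normal flow"
```

The triage itself is trivial; the leverage is in agreeing, as an organization, on which signals put a change in which tier.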
The new Inner Loop is still an emerging practice
One of the most remarkable things right now is how idiosyncratic everyone’s workflow still is.
You just need to spend a quiet evening on X to see how people are building their own skills, their own prompts, their own review rituals, their own delegation habits, their own intervention thresholds, their own task shapes. And yet, beneath all that surface variation, a pattern is starting to converge. Many of these workflows rhyme with the same loop I mentioned earlier: explain, challenge, delegate, verify, steer.
That tells me something useful. We do not have best practices yet. We barely have good practices. What we mostly have are emerging practices. That is not a criticism at all. On the contrary, it is an acknowledgement of what technological transition looks like in Wardley Mapping terms. The primitive changes first. Then people improvise around it. Then patterns start to stabilize. Then tools crystallize around the most useful patterns. Only later do those become boring defaults.
We are somewhere before that boring stage.
So I think one of the mistakes right now is expecting the current tool shape to already be the mature one. It almost certainly is not. The chat window plus terminal is probably a transitional interface, not the final form of the medium. The current generation of tools is useful partly because it reveals the problem so clearly. It shows us what breaks when cadence breaks, what hurts when visibility disappears, what kind of cognitive tax appears when multiple agents compete for one human mind, and where the next layer of abstraction needs to emerge.
Even the practical advice appearing from thought leaders points in this direction. Part of the craft now is finding your personal ceiling for parallel agents, tightening scopes, time-boxing sessions, and respecting the cost of human context switching. These are more than mere productivity tips; they reveal where the real bottleneck is now: the human capacity to supervise abundance.
The real work of the next decade
So where does all of this leave Developer Experience? I think it leaves it in a more strategic position than before.
If code is abundant, then the scarce resource is no longer code production. The scarce resource is human clarity. The ability to maintain coherence, confidence, taste, judgment, and momentum while collaborating with systems that can produce changes faster than humans can comfortably absorb. In an era of abundance like the one we are immersed in now, what becomes scarce is usually not supply, but human qualities like attention, judgement, and curation.
The best environments will lower ambiguity rather than amplify it. They will make intent easier to shape without turning every act of clarification into useless management ceremonies and rituals. They will make constraints easier to author, inspect, evolve, and reuse. They will shorten the distance between delegation and confidence. They will preserve system understanding as change accelerates. They will know when to stay out of the way and when to demand attention. And, crucially, they will help us “design the design space” so we can do our jobs in a live medium rather than in static documentation.
That may be the deepest requirement of all.
Because if the work is moving upward from code-writing toward intent-shaping and constraint-authoring, then the next-generation tool cannot ask the developer to trade one cognitive medium for bureaucracy. It needs to make this meta layer of work feel as immediate, responsive, and manipulable as code once felt. The developer still needs to remain in contact with the material, even if the material now includes policies, boundaries, task graphs, state models, domain concepts, and trust loops. This is why I suspect the big winners in DevEx will be defined less by raw code generation and more by their ability to make software creation feel calmer, clearer, and more self-validating.
And there is an organizational consequence here too. For a long time, many companies could avoid explicitly designing the design space because the social structure of the engineering team compensated for it (or because it's just hard to do, let's be honest). The constraints were enforced informally, and the architecture lived partly in the codebase and partly in the heads of a few people. That escape hatch is closing, though. When more people across the company can participate in software construction through coding agents, the design space itself becomes shared infrastructure. It has to be written down, instrumented, and made operational. Architecture stops being only an internal concern (or an implementation detail, as Product Managers like to say) and starts becoming an interface between human intent and machine execution. Architecture becomes a verifiable contract, like in the old MDA days!
Anyway, as I introduced above, working with coding agents introduces a much bigger shift than “AI writes code now.” It could be that the real work of the next decade may be turning tacit engineering judgment into explicit systems of delegation (aka "designing the design space"). But to automate design, we need to figure out which parts of design we want to make explicit so that delegation works without friction. That's how we can preserve a good Developer Experience in the agentic world. And if we get it right, the future of software development will not be a world where humans step aside and watch agents work. It will be a world where humans and agents collaborate inside better-designed development environments, with less ambiguity, better feedback, and a stronger sense of contact with the thing being built.
That, to me, is a much more interesting future than faster typing ever was.
