What happens when intelligence becomes a utility?
When I look at the current AI landscape, I keep seeing the same pattern: the cost of intelligence is collapsing. Or, more precisely, intelligence is being industrialized so fast that it is starting to look less like a differentiating product and more like a utility.
And when a capability becomes a utility, value does not disappear. It moves.
Note: In Wardley Mapping terms, the evolution of technology enables the coevolution of practice, which in turn enables new sources of value.
That is the question that interests me most here. Not whether intelligence is becoming cheaper. I think that part is already visible. The real question is where the value goes once the model itself stops being the only thing that matters.
What happens when the model stops being the moat
For the last couple of years, much of the AI conversation has been organized around the model itself: better benchmarks, better reasoning, better coding performance, better economics. That made sense in the first phase of the market because when a new technological capability appears, the first thing everyone looks at is the primitive.
In this case, the primitive was the model, and whoever had the best one appeared to hold the strongest strategic position.
To some extent, that is still true today. But I suspect that advantage is getting thinner much faster than many people expected. Open-weight models keep improving, commercial models keep competing one another into narrower margins, infrastructure keeps getting better, tooling keeps getting better, and the distance between frontier capability and widely available capability, while still real, keeps shrinking.
When that happens, the center of gravity moves away from raw intelligence and toward the application of intelligence. The question is no longer only who has the smartest model. The question becomes who can apply intelligence to a meaningful problem, inside a useful interface, with the right constraints, the right context, the right distribution, and the right trust model.
Applied intelligence has two forms
When I say applied intelligence, I do not mean only that software can now be generated on demand. I mean two things happening at once.
First, software itself becomes more generative. Applications become more conversational, more context-aware, and more capable of responding dynamically to user intent instead of forcing every interaction through rigid flows. Intelligence stops being an external assistant and starts becoming part of the product behavior.
Second, software also becomes more generated. Instead of shipping only fixed applications with fixed surfaces, we start moving toward capabilities that can be assembled, adapted, or created for a specific user, task, or moment.
Those are related industry shifts, but they are not the same. One changes how software behaves. The other changes how and when software gets created and distributed.
Intelligence gets cheap, application gets expensive
If intelligence keeps becoming cheaper and more available, then many big tech companies will discover something uncomfortable: having access to intelligence is not the same as creating value with it.
This should not surprise us. We have seen this pattern many times in technology. Compute became easier to access. Storage became easier to access. Networking became easier to access. Every time one of these primitives became more industrialized, the market did not stop creating value. It simply moved up the stack.
I think intelligence is starting to follow the same path. Once the answer itself becomes cheap, the scarce thing is no longer the ability to generate it. The scarce thing becomes knowing what to generate, when to generate it, why it matters, and how to embed it inside a product or service that people can actually trust, use, and pay for.
That is why I keep thinking that many big tech companies are still aiming one layer too low. They are competing at the intelligence layer when they should be thinking much harder about the application layer.
Software becomes more intelligent before it becomes fully generated
Before we get to fully dynamic software on demand, we will first live through a world where software becomes much more intelligent in use. Products will become less static and more adaptive. Interfaces will respond to context. Workflows will become more conversational. Functionality will not disappear, but it will become more fluid and easier to reshape around what the user is trying to do.
That matters because a lot of the value will be created right there. Not in the raw model, but in the product decisions around it: when to ask, when to infer, when to automate, when to stay silent, when to escalate to a human, and how to make the system feel trustworthy instead of merely impressive.
In other words, applied intelligence is not only about generating artifacts. It is also about embedding judgment-like behavior into the software experience itself.
And then software starts getting generated on demand
If a system becomes good enough at understanding context, reasoning over intent, and generating software artifacts, then at some point it will not only help us build software faster. It will start participating in the decision of when software should be created at all.
This connects with something I wrote in The holiday when software engineering changed forever. There, I argued that the main constraint in software delivery was moving left, away from implementation and toward intent, context, and deciding what actually matters. That still feels true to me. But hidden inside that observation is a more uncomfortable question: if AI keeps moving upstream, why would it stop at the old implementation bottleneck?
That possibility is one reason software is the first industry being transformed so deeply by AI. Software is made of digital artifacts. It can be generated, tested, copied, modified, distributed, and observed with unusually fast feedback loops. In a physical industry, the validation loop eventually hits the body, the factory, the street, or the hospital. In software, the loop often closes inside the machine.
Once that is true, the leap from “AI helps us build software” to “AI helps decide when software should be built” becomes much smaller than it first appears.
This is also what makes me think about services as software. Activities that previously required too much human effort to encode in software may suddenly become worth productizing because the cost of software creation itself is falling. What used to be a human activity supported by software increasingly becomes a software activity supervised by humans.
A simple example
Imagine a very ordinary consumer scenario. Your flight gets cancelled while you are already at the airport. Today, you would bounce between the airline app, your email, your calendar, your hotel booking, your maps app, your messaging app, and maybe your bank.
In a more mature AI world, two different things could happen.
The first is intelligent software. Your travel app understands the disruption, explains your options conversationally, notices your constraints, suggests the next best action, and coordinates the relevant workflow much better than a static interface would.
The second is generated software. The system recognizes that what you suddenly need is not seven disconnected apps but one temporary travel-recovery capability. It assembles a live interface to rebook the flight, extend the hotel if needed, compare train alternatives, notify the person waiting for you, rearrange your calendar, and surface the likely extra costs, all around that specific disruption and only for as long as the situation exists.
That is the distinction I care about. One is software becoming intelligent. The other is software becoming fluid (in terms of creation and distribution). What not long ago sounded too theoretical is already becoming real. Google Disco is a clear example of that direction.
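The generated-software side of that scenario can be sketched in a few lines. This is a minimal, purely illustrative sketch, not a real system: the capability names, the registry, and the `assemble` function are all hypothetical, chosen to show the shape of the idea (a temporary interface composed from registered capabilities, scoped to one disruption).

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    """One piece of the recovery workflow (hypothetical example)."""
    name: str
    handles: str  # the kind of need this capability addresses

@dataclass
class EphemeralApp:
    """A temporary, single-purpose interface assembled for one disruption."""
    purpose: str
    capabilities: list = field(default_factory=list)

    def surface(self):
        # The "interface" the user would see: just the selected capabilities.
        return [c.name for c in self.capabilities]

# Illustrative registry of capabilities the system could draw from.
REGISTRY = [
    Capability("rebook_flight", "transport"),
    Capability("extend_hotel", "lodging"),
    Capability("compare_trains", "transport"),
    Capability("notify_contact", "communication"),
    Capability("rearrange_calendar", "scheduling"),
]

def assemble(purpose, needs):
    """Select only the capabilities relevant to this disruption and wrap
    them in a temporary app that exists for the situation's lifetime."""
    selected = [c for c in REGISTRY if c.handles in needs]
    return EphemeralApp(purpose, selected)

# The cancelled-flight moment: the system infers the needs, then assembles
# one recovery capability instead of pointing at seven disconnected apps.
app = assemble("flight-cancellation recovery",
               needs={"transport", "lodging", "scheduling"})
```

The interesting part is not the selection logic, which is trivial here, but the lifecycle: the assembled app is created around one situation and can be discarded when the situation ends, which is exactly what distinguishes generated software from a fixed product surface.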
So where does the value go?
If the model becomes less differentiated over time, then what matters more is everything around it: distribution, product taste, trust, workflow design, proprietary context, constraints, feedback loops, and operational integration.
That is why I think the value of AI companies will increasingly move toward applied intelligence rather than intelligence in isolation. The future advantage is not merely having a model. It is knowing how to orchestrate intelligence around real jobs to be done, including where humans stay in control, what should be conversational, what should be automated, when software on demand makes sense, and what should never be generated at all.
The big tech companies that win may not be the ones with the most magical primitive. They may be the ones that know how to wrap intelligence inside a useful system.
The strategic implication
This has a strategic consequence for big tech companies. If intelligence is becoming a utility, then building a business around raw model access alone may become harder over time unless you also control something else that matters: distribution, hardware, data, workflow, domain context, or application surface.
And if you are not one of the companies building the models, that does not necessarily mean you are too late. It may mean the most interesting space is now opening up above the model layer.
That is where new tech companies, product teams, and startups can still create disproportionate value, not by trying to out-model the model companies, but by turning cheap intelligence into useful capability.
Applied intelligence means both: software that behaves more intelligently, and software that can be generated around the job at hand.
