
Chat Is Not Where It's At - Part II

The assumption, sometimes implied, sometimes stated outright, but foundational to the recent fervor in the tech world, was that large language models represented a short circuit. A warp zone. That we'd jumped from the era of painstaking, expensive, hand-built software directly into the era of machines that simply understand. That the decades of work that came before — schemas, databases, ontologies, all that grinding production and maintenance of software infrastructure — had been made quaint overnight by something that could read and write natural language.

It hasn't worked out that way. And the interesting question isn't why it hasn't worked; Part I covered that. The question is what it tells us about where we're actually headed.

The Technological Continuum

Here's something easy to miss when you're inside the hype cycle: nothing about computing has ever been a shortcut. Every apparent leap, when you look at it closely, turned out to be the next incremental step on a technological continuum.

Language named things. Clay tablets tallied them. Paper condensed data. Ledgers tracked it over time. The filing cabinet organized information by category. The electronic spreadsheet made those categories computational. The relational database made the relationships between categories queryable. The GUI gave non-specialists a way to interact with all of that without having to learn the underlying formalisms. Each step changed the interface. None of them changed the fundamental project. The project has always been the same: gain leverage over what exists and how things relate, and build systems that maintain and operate on that understanding.

Computers are merely the latest substrate for an impulse that's been running since somebody first scratched a tally mark into clay. Every generation of the technology is an attempt to represent more of the world, more precisely, more usably, than the last one could.

When you look at it this way, the LLM moment stops looking like a rupture and starts looking like a familiar beat. A new capability arrived — natural language cognition, genuinely powerful, genuinely unprecedented. And the first instinct was to treat it as a replacement for everything that came before rather than an extension of it. That instinct was wrong, and the industry is now in the process of discovering exactly how wrong. But the destination of the correction isn't mysterious: it lies in the same direction the whole arc has always pointed.

What Can Be Seen From Here

The contours of what comes next are already visible from here. It's time to make some direct assertions:

  • The complexity doesn't disappear. Whatever comes next is a complex computational system. The dream that a sufficiently large model could absorb the complexity — that you could collapse an entire application into a prompt — has not survived contact with production. The complexity is real. It requires real engineering. It continues to be managed, not dissolved.

  • AI sits at the user interface boundary. The genuine breakthrough of LLMs is natural language as an interaction layer. That's where they belong most: at the boundary between human intent and system capability, translating in both directions. Integrated throughout the system, but above all as the surface you touch it through. The thing that hides the complexity without pretending it isn't there.

  • The system operates over structured representation. Underneath the natural language interface, the system needs to actually know things in the empirical engineering sense. Persistent state, typed relationships, queryable structure. A representation of the domain that accumulates, updates, and can be reasoned over by both humans and machines. The features the bubble assumed unnecessary.

  • The structure is formatted toward AI interoperation. If the LLM is the interface layer and structured data is the substrate, then the structure has to be legible to the AI: something the model can navigate semantically. The representation has to be designed for the interaction, not retrofitted into it. Something beyond traditional database design: a structure that's built to be understood, not just queried.

  • There will be pressure to unify. The pressure toward unified data representation has always been there. What's held it back is practical: the technology was too rigid, and the people working with the data couldn't be expected to think in schemas. LLMs change both sides of that equation. When an AI can mediate between human intent and formal structure, the barriers to unification narrow. Shared vocabularies, shared ways of expressing what things are and how they relate, become feasible in ways they weren't before. The current fragmentation — every application, every department, every system, every pipeline its own island — is a transitional state on the continuum.

  • The next paradigm is about data representation more than AI computation. The discourse has been fixated on the AI: bigger models, better reasoning, more capable agents. But the actual bottleneck, the capability that determines whether any of it works in practice, is at the data layer: how information is represented, structured, and made available to computation — so-called 'context'. The model is, in essence, already powerful enough; what's missing is the material it's supposed to be powerful over.
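The architecture these assertions describe can be sketched in miniature: a typed, queryable representation underneath, and a natural-language boundary on top. Everything in the sketch below is hypothetical — the domain (customers and orders), the entity names, and the `answer` function — and a trivial keyword stub stands in where a real system would use an LLM to translate intent into a structured query.

```python
from dataclasses import dataclass

# The substrate: a typed representation of the domain with explicit,
# queryable relationships. (Hypothetical domain, for illustration only.)
@dataclass(frozen=True)
class Customer:
    id: str
    name: str

@dataclass(frozen=True)
class Order:
    id: str
    customer_id: str  # typed relationship: Order -> Customer
    total: float

CUSTOMERS = {c.id: c for c in [Customer("c1", "Acme"), Customer("c2", "Birch")]}
ORDERS = [Order("o1", "c1", 120.0), Order("o2", "c1", 80.0), Order("o3", "c2", 45.0)]

def orders_for(customer_name: str) -> list[Order]:
    """A structured query over the representation, usable by humans and machines."""
    ids = {c.id for c in CUSTOMERS.values() if c.name == customer_name}
    return [o for o in ORDERS if o.customer_id in ids]

def answer(request: str) -> str:
    """The interface boundary: translate intent into a structured query,
    then translate the result back into language. In a real system an LLM
    would do both translations; here a keyword stub stands in."""
    name = "Acme" if "acme" in request.lower() else "Birch"
    matched = orders_for(name)
    total = sum(o.total for o in matched)
    return f"{name} has {len(matched)} orders totaling {total:.2f}"

print(answer("How much has Acme ordered?"))
# -> Acme has 2 orders totaling 200.00
```

The point of the sketch is the division of labor: the stub (or the LLM that replaces it) never holds the facts, it only mediates; the facts live in the structured layer, where they persist, accumulate, and can be queried without any model in the loop.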

The Implications

We've been calling this the AI revolution. But AI might be the wrong focus for what's actually happening.

When we say AI, we usually mean the model — the transformer, the thing that generates and interprets language. That thing is real and important, but it's largely an interface technology. It's the part of the system that faces you. The mouth and ears, not the mind.

The real transformation, what changes how organizations operate, how knowledge accumulates, how humans interact with computational systems, is happening at the layer underneath. It's about representation. How we encode what we know, how we structure what exists and how things relate, and how we build systems that can hold, navigate, and reason over that structure. The LLM is one major participant in that process, but not the entirety of it.

The next paradigm isn't really a new kind of AI; it's a new kind of computer. One where natural language is the interface, structured meaning is the substrate, and the system maintains a persistent, evolving model of its domain that humans and machines navigate and manipulate together.

The essential vision has largely been articulated. The pieces now exist. What remains is a matter of assembly.