The Solution Will Come From the Field

The commercial AI labs say they're building toward artificial general intelligence, but they've left that term nebulously defined. When pressed, they retreat to abstractions — alien intelligence, unknowable capabilities, disruption beyond prediction. The vagueness isn't modesty. It's cover.

Meanwhile, when ordinary people imagine what AI should become, they picture something else entirely. Not a godlike oracle but a capable partner. A system that remembers, assists, participates in the texture of a life. They may not know they're drawing on sixty years of futurist vision, but they are. And the gap between what the labs gesture at in their promises and what people actually want isn't a communication problem. It's a structural one.

Even critics more sophisticated than the author see pieces of this. Recent conversations with leading researchers have surfaced real skepticism about scaling — acknowledgments that the current paradigm may be hitting limits. But limits toward what, exactly? And their proposed solution is telling: go back to the lab. Find the next breakthrough. The assumption is that whatever's missing will be discovered through more research. A better model.

It won't. Because what's missing isn't a capability. It's an orientation.

The last time the foundations of computing were this liquid, a different question drove the work. Engelbart, Kay, Licklider — they weren't trying to build superhuman intelligence; they were trying to amplify human capability. That orientation produced the half-step paradigms we now work through and the aspirational visions we still reach for: the Dynabook, Star Trek's shipboard computer, the interfaces that Hollywood designers (often friends and collaborators in the network of those researchers) rendered into cultural imagination. JARVIS didn't come from nowhere. It came from a lineage that knew what it was trying to build.

The irony is that artificial intelligence was part of that original ferment. The early AI work of the 60s and 70s — symbolic reasoning, knowledge representation, attempts to formalize thought — emerged alongside the augmentation vision, not opposed to it. They were parallel efforts to understand and extend mind. Later, in the 90s and 2000s, ontological approaches tried again: the Semantic Web, formal knowledge systems, attempts to give machines structured understanding of the world. Flawed, yes, but reaching toward something real.

By the time AI re-emerged as machine learning, all of this was unfashionable. Symbolic and ontological approaches — meaningful attempts to grapple with representation, grounding, reasoning — were discarded without ceremony. Subsequent generations of researchers grew up being told that these fields were not merely incomplete, but embarrassing. Dead. Unscalable. Losers.

The baby went out with the bathwater. And because the people now entering the labs are shaped by the intellectual fashions of this moment, they don't recognize what was thrown away. Their expressed conception of intelligence begins and ends with the model stack: gradients, datasets, scaling curves, eval suites. The model was always one component of a larger system. Somewhere along the way, the component became the whole project. They believe they're on the frontier, but they're often reenacting debates whose context they don't realize they've inherited.

The lineage has largely been forgotten.

The industrial structure makes this worse. Silicon Valley rewards technical prowess as though it were vision. Ambition substitutes for philosophical and historical grounding. And confidence — particularly the confidence that comes from being surrounded by money and adoration — gets mistaken for creativity.

Inside the labs, curiosity narrows rather than widens. Researchers optimize what can be optimized. They measure what can be measured. They work on what's fashionable. And they trust that someone above them is holding the ethics and the vision.

Often, no one is.

Responsibility is fragmented by design. Capabilities teams build capabilities. Safety teams build guardrails. Product teams build wrappers. But no one holds the broader, concrete picture of what this is actually for — because that question has been declared unanswerable. Product intelligence gets deferred onto customer feedback, and "AGI" is invoked like a destination, but the concept is kept conveniently vague. It's the perfect abstraction: impressive enough to signal ambition, amorphous enough to avoid scrutiny.

They say the goal is unknowable. This author suspects they simply haven't thought it through. Or worse.

Because the goal isn't mysterious: we've been sketching it since the 1960s. It's the system that helps a human think, act, remember, create — the cognitive partner. Not a disembodied oracle. A situated intelligence that participates in a life.

The labs aren't building that.

They're building a better cylinder. And a cylinder, no matter how powerful, is not a car.

A bigger model doesn't create memory, grounding, continuity, agency, or workflow integration. It doesn't situate intelligence in a human context. These aren't features to be added later. They're the architecture of what needs to be built, and the labs, for all their capital, have almost none of it. They have a scaling curve and a hope that generality will somehow deliver functionality.

It won't.

The people who will build the real thing are the ones who touch the real world. Not because they're smarter, but because they're forced to confront what the labs can ignore: the failures of continuity, grounding, memory, context, and integration that occur when intelligence meets life. These aren't research curiosities. They're blockers. And solving them requires building systems, not just models.

That work is happening in the field — among builders free from the myopia of convention, whose problems aren't benchmark scores but practical functionality. People who need AI to actually perform in the context of a workflow, an environment, a human life.

The labs will continue to produce better cylinders. Good. We need them if we're going to hit 200 mph on the Nürburgring.

But the assistant, the thing people actually imagine when they say "AI", will not come from the center. It will come from the periphery, where vision is not optional, and where intelligence is measured not by parameters but by practicalities.

The future won't be delivered by the people with the biggest models.

It will be delivered by the people who know what those models are for.