On the Arc of the Computer
or: what even is 'a computer'?
I'm sitting here typing this on a Pop!_OS Linux machine. Overlapping windows, nested folders, a 'desktop.' I could be on macOS or Windows and find essentially the same furniture. I couldn't really be on much else. What's up with that?
The interface paradigm I'm using right now was largely settled by the mid-1980s. Forty years on, in an industry that congratulates itself on relentless innovation, the fundamental grammar of interaction has been frozen for two generations. We've made it faster, shinier, more responsive—but the conceptual structure is the same: files, folders, applications, windows, mouse.
I.
Long before there were actual personal computers, there was an idea, an imagined capability: a machine that could help a person work with knowledge itself. A system that could hold more than any human memory, traverse ideas associatively rather than sequentially, preserve context, surface connections, and extend thought rather than merely execute instructions. Toward the middle of the last century this vision existed before any hardware capable of realizing it—before interfaces, before even the word 'computer' meant what it does now. The machine was conceived first as a cognitive prosthesis, a way of externalizing thought so that it could be examined, reorganized, and shared.
When interactive computing research took shape decades later, it wasn't inventing this ambition so much as trying to give it form. The goal was never just faster calculation. It was to create a medium for thought: a system humans could think with, not merely operate. This aspiration runs beneath everything that followed. The spreadsheet, the word processor, the database, the network, the simulation—each is an attempt to make some region of abstraction visible and manipulable. The question was never simply 'how do we compute?' It was always 'how do we work with complexity?'
II.
In the HCI research labs of the 1960s and 70s, nobody had a blueprint for what a computer interface should look like. Instead there was an abundance of imagination and ambition. Light pens for direct manipulation, constraint systems where you specified relationships rather than procedures, symbolic environments that made interaction feel conversational, visual languages that tried to bypass text entirely. These weren't stylistic variations so much as different attempts to answer the same underlying question: how might a human meaningfully engage with digital abstraction? The diversity of approaches matters because it shows that 'the computer' had not yet settled into a single public form. The grammar of interaction was still fluid; the project itself—how to incarnate an imagined cognitive capability—remained unresolved. What we now treat as inevitable was then, and remains, just one possibility among many.
III.
Some systems from this era treated the user not as an operator of fixed functions, but as a participant in the system's own structure. Smalltalk is the clearest example: an environment where every element—windows, menus, text, graphics—was made of the same substance as the programs running within it. To use the system was to have access to its foundations. The distance between using the system and altering its behavior was small. This wasn't about expert power so much as configurability; the surface and the depths were continuous, and you could reach down. The point isn't nostalgia for a particular system, but what it represented: an attempt to stay close to the original imagined object—a computer that could be inhabited, reshaped, and extended by its users as they thought.
As computing moved toward the threshold of mass adoption, the prominence of system-altering affordances receded. The vision wasn't rejected, but its full expression was rightly deemed too costly—in learning curve, in support burden, in the difficulty of maintaining stable systems at scale. The participatory dimension didn't vanish; it moved out of reach, became specialized. Co-opted by the marketing team. The imagined object remained, but at a distance.
IV.
The paradigm that ultimately scaled—windows, icons, menus, pointer—was not a destination; it was a compromise that held. It mapped cleanly onto established 20th-century office management norms and physical intuitions: objects on a surface, tools in a drawer, papers in a folder, a wastebasket. It was learnable, repeatable, and teachable, allowing millions of people who would never program to nonetheless perform complex tasks. It created a stable grammar that software could target and organizations could rely on. And it worked. Personal computing became a novelty, then ordinary, then ubiquitous.
But stabilization is not completion. The WIMP paradigm solved a social problem: how to make computing accessible to the computer-illiterate and commercially viable at scale. It did not and would not, by design, fully realize the deeper aspiration that preceded it. It was a partial incarnation of an imagined capability, sufficient to proceed, but incomplete and not final. What we live with now is a commercial extrapolation of a narrow tranche of a humanistic vision.
V.
While interface research stabilized around explicit structure and human legibility, another approach—rooted in the same underlying aspiration—continued along a different path. Learning-based systems pursued a strategy for managing complexity that did not require humans to specify structure in advance; instead, structure would be inferred from data. These approaches diverged early, sometimes contentiously. Symbolic systems privileged explicit meaning and control; learning-based systems emphasized emergence, scale, and statistical regularity. Each approach made different trade-offs about where interpretation should live and how much ambiguity could be tolerated. Past a certain point the paths became mutually exclusive: you were either modeling meaning explicitly or learning it implicitly.
For decades, they developed in parallel. Search engines learned to rank; recommendation systems learned to predict; but the surfaces users touched stayed fundamentally WIMP. The learning happened in the backend. The imagined object—the computer as a partner in thought—persisted across both branches, but neither could fully approach it alone.
VI.
Before machine learning could operate at human-relevant scales, the infrastructure to store and process the world digitally had to be built. The Big Data era was driven by the need to extract value from the massive accumulation of information that followed the initial success of the World Wide Web. In the process, this meant inventing the techniques required to manage vast, messy, continuous information: distributed storage, fault-tolerant computation, indexing systems capable of traversing billions of records, pipelines that could clean, version, and transform data at scale. These were infrastructural achievements—unromantic, but essential, and lucrative in several dimensions.
Contemporary learning systems are contingent on this groundwork. Training large models requires not just data, but techniques for organizing, retrieving, and processing that data reliably across time, machines, and institutions. What appears sudden is the visible payoff of decades of incremental, largely invisible work. The foundational vision didn't change. The technical substrate finally became sufficient to support more of it.
VII.
With accumulated data practices and computational capacity, learning-based systems have begun operating in domains once reserved for explicit symbolic manipulation. Language—previously too ambiguous and context-dependent for reliable machine processing—can now be modeled, generated, and transformed at useful levels of fidelity. This doesn't place us on a new arc. It bends the existing one closer to its long-held orientation. The branches that diverged decades ago are now on a trajectory of intersection; the conditions that forced their separation have loosened. What was fragmented by necessity can now, in principle, be recomposed.
The reason the LLM feels both uncanny and familiar is that we've seen it before—not in products, but in stories. The persistent depiction of expressive, conversational, participatory computers in fiction isn't prediction or unmoored musing. It's the cultural reverb of an intention that has never quite been realized. A continuum of envisioning that runs back to Vannevar Bush's original inspiration.
VIII.
We've inherited interface forms built around explicit command and legacy iconographic abstractions. Where we've moved on, we've scaffolded an incoherent, and sometimes malignant, consumption paradigm. Computing is now ubiquitous; the limitations imposed by a computer-illiterate or skeptical market no longer apply. In parallel, we've acquired systems that dramatically expand what can count as an instruction—absorbing context, intent, and approximation where rigid syntax historically placed the burden on the programmer or user. And we've come back within reach of an aspiration that predates both. Taken together: the makings of a next generation of computing systems. Nothing has re-solidified yet. The chat interface that dominates today is not a destination; it's an accommodation—the fastest way to surface new capability, much as WIMP once was. Adequate. Provisional.
The deeper questions remain open. How should meaning be structured? Where should adaptation live? How do humans and machines share the work of holding, manipulating, and leveraging abstraction toward further mastery over complexity? These are not new questions. They are the same ones that animated the earliest imaginings of the computer, re-posed under different conditions. The arc hasn't ended. It hasn't broken. It has become visible again.
This is where we are. Still on the arc of the computer.