Broken Succession

Photo: “Empty Chair” by Bhautik Patel on Unsplash

Jakob Nielsen and the Temptation to Declare the Craft Dead

Every field eventually reaches a moment when one of its founding figures begins speaking in a different register. Not necessarily a different idea. A different tone.

Recently, the usability pioneer Jakob Nielsen published a long reflection on AI and the future of UX work. On the surface, it reads like a technical forecast. AI coding is accelerating. Design tools are improving. Exponential scaling will smooth out today’s weaknesses. The familiar workflow of usability engineering (i.e., manual testing, heuristic evaluation, iterative design) may soon be automated away.

These observations aren’t implausible. Automation is clearly reshaping how technical work happens. But what is striking in Nielsen’s essay is not simply the prediction. It is the posture.

The piece reads less like a field update and more like a concession to a story that has become dominant in AI discourse: the paradigm has shifted, exponential growth is underway, elite practitioners are already acknowledging the transition, and professionals would be wise to prepare for a future in which the manual aspects of their work disappear.

That framing feels unusual coming from Nielsen, because the discipline he helped build was founded on a very different premise.


The Original Nielsen Thesis

For most of his career, Nielsen argued something both simple and subversive. Technology does not guarantee usable systems. Human judgment does. That’s what gave him a cult following.

Usability engineering emerged from a recurring failure in computing. Engineers built powerful systems that ordinary people could barely navigate. Software became more capable but also more confusing. Interfaces accumulated features while shedding clarity.

Nielsen’s response was not to promise that better technology would fix the problem. Instead, he introduced methods that forced designers to look outward, toward the people actually using their systems.

Observation. User testing. Iterative refinement. Heuristic evaluation.

These practices were not glamorous. They were empirical, repetitive, and sometimes tedious. But they trained practitioners to notice where users hesitated, where they misunderstood the system, where the design quietly imposed cognitive costs on the person interacting with it.

Usability engineering was never just a workflow. It was a discipline of attention. If you watched closely enough, human behavior would tell you where the system was wrong.

That message was fundamentally empowering. It suggested that thoughtful practitioners could shape technological environments by paying disciplined attention to human experience.


The Narrative of Technological Inevitability

In Nielsen’s recent essay, the tone shifts.

Rather than emphasizing the enduring need for human interpretation in design, the argument leans toward a narrative now common in Silicon Valley’s AI ecosystem. A new interaction paradigm has arrived. Exponential improvement will quickly smooth over current limitations. Coding has already begun transitioning to AI dominance. UX will follow. By the end of the decade, the manual processes that defined the field may largely vanish.

The argument unfolds through a structure that now appears almost ritualized in discussions of AI. First comes the declaration of a paradigm shift. Then the invocation of exponential scaling laws. Next, elite practitioners are cited as early adopters. Finally, professionals are advised to move “up the abstraction ladder” as machines assume the operational layers of their work.

This narrative is persuasive partly because parts of it are true. AI systems can already generate interface prototypes, summarize research sessions, and produce variations of a design at speeds that would have been unimaginable a few years ago.

But something subtle happens when the argument moves from these observations to a broader claim about the fate of professions.


Automation of Tasks Is Not the Replacement of Professions

Much of the evidence offered for AI’s transformative power concerns the automation of specific tasks: generating code, drafting interfaces, running heuristic checks, and iterating across design variations.

These developments matter. They will undoubtedly reshape workflows across design and engineering.

But professions are not merely bundles of tasks waiting to be automated.

UX practitioners do more than produce interface layouts or conduct usability tests. They interpret ambiguous signals from users. They translate between engineers, executives, and customers. They surface tensions between business incentives and human experience. They make judgments about what constitutes a meaningful improvement rather than a cosmetic change.

These activities involve context, interpretation, and responsibility in ways that do not map neatly onto scaling curves.

Technological history repeatedly shows that automation rarely eliminates professions outright. Instead, it changes where expertise resides. Computer-aided design did not eliminate architects. Spreadsheets did not eliminate accountants. Medical imaging did not eliminate radiologists. In each case, production became easier while judgment became more valuable.

The same dynamic may well unfold in design.


The Politics of Inevitability

This is where the story becomes larger than a single essay or discipline.

The narrative structure Nielsen adopts (paradigm shift, exponential growth, elite early adopters, inevitable displacement) has become a defining feature of AI discourse more broadly.

As the journalist Karen Hao documents in her reporting and in Empire of AI, the language of inevitability surrounding AI does more than describe technological change. It also performs political work.

The companies building frontier AI systems operate at extraordinary scale. They require vast computational infrastructure, enormous datasets, and capital flows that rival those of nation-states. In that context, the story told about technological progress becomes consequential. If the development of AI is framed as unstoppable, then questions about governance, restraint, or alternative trajectories begin to sound unrealistic. The debate shifts from whether certain developments should occur to how quickly everyone else must adapt.

In that environment, inevitability becomes a kind of intellectual gravity.

Experts repeat it. Industries reorganize around it. Professionals begin narrating their own displacement as though it were a natural law.

Seen in that light, Nielsen’s essay may be reflecting not just technological change but the power of a narrative that now saturates discussions of AI.


Where the Limits Still Show

One place where this tension becomes particularly visible is in scholarly writing.

AI systems today can produce remarkably fluent academic prose. They can summarize research themes, draft literature reviews, and assemble paragraphs that resemble a scholarly argument.

Yet anyone who spends time evaluating serious scholarship quickly recognizes the difference between text that merely looks academic and work that actually advances understanding.

A strong paper is not built from grammatical fluency alone. It requires a sense of what is genuinely at stake in the argument. It requires judgment about which sources illuminate the problem and which merely decorate the page with citations. It requires the ability to identify tensions within a field and push them forward in a meaningful way.

Above all, it requires the ability to recognize when a sentence is technically correct but intellectually empty.

Current AI systems remain far better at producing the appearance of scholarship than at generating its intellectual architecture. They can assist with drafting, summarizing, and structuring ideas. But the deeper work of synthesis (deciding what matters and why) still depends on human judgment.


Broken Succession

Which brings us back to Nielsen.

There is another way to read his essay, one that has less to do with technical prediction than with the difficulty of narrating one’s own legacy in the face of a paradigm shift.

Late in a career, integrating a disruptive new tool into one’s repertoire can be awkward. It requires experimentation, partial adoption, and public imperfection. For someone who has spent decades defining the standards of a discipline, occupying that uncertain middle ground can be uncomfortable.

There is a simpler narrative available. The craft belonged to another era. The machine is taking over. History has moved on.

That move preserves dignity. It allows the past to remain intact. But it also risks collapsing an important distinction between a workflow and a craft.

The visible techniques of usability engineering (manual tests, heuristic reviews, iterative prototypes) may indeed evolve dramatically under the influence of AI. But the craft Nielsen helped cultivate was never reducible to those techniques. It was a way of seeing technological systems through the lens of human experience.

When a senior figure narrates the transition as the end of the craft itself, what is lost is not only continuity but stewardship. A healthy succession says: the tools have changed, but the discipline still matters. Here is what must be preserved. Here is how the sensibility evolves.

Broken succession does something different. It treats the end of one generation’s repertoire as the end of the tradition.


The Seat He Helped Win

For years, Nielsen was one of the few steady voices in technology-forward industries who could cut through the false certainty of ostensibly objective data by insisting on something more basic: the voice of the customer, observable behavior, recurring patterns of confusion, and the common-sense friction that sophisticated systems so often impose on ordinary people. He helped make user experience legible inside environments dominated by engineering logic, executive simplification, and market pressure. He gave organizations a way to see what technical teams, left to themselves, routinely missed.

That mattered because many of the problems usability engineering addressed were never purely technical. They emerged in organizations where engineers, lacking either the incentive or the collaborative structure to work across disciplines, designed interfaces from the inside out. UX was not ornamental. It was corrective. It secured a seat at the table for the human consequences of technical decisions.

That is why the tone of Nielsen’s essay feels so consequential. If AI now allows engineers to produce design work that appears good enough to technology-enamored executives, the danger is not simply that workflows become faster. It is that organizations begin to persuade themselves that they no longer need the discipline Nielsen helped build. In that case, we would not be transcending the old mistakes. We would be repeating them in a more automated form.

Seen this way, Nielsen’s essay does more than predict a transition. It risks legitimizing the withdrawal of advocacy from a profession that still exists because technical systems do not naturally organize themselves around human needs. What unsettles the reader is the possibility that, having spent a career fighting to secure UX a place in technical decision-making, he now speaks as though that place can be surrendered just as the pressure to defend it is returning in a new form.


The Craft That Remains

Automation will continue to reshape design and engineering, just as it will many other disciplines. AI systems will generate artifacts faster than human teams ever could. The mechanics of production will change. But the deeper question facing technological systems has not changed at all.

Someone still has to decide what those systems should do, and for what purpose. Someone still has to interpret how they affect the people who depend on them. Someone still has to recognize when apparent efficiency masks a deeper form of harm. That is the difference between using machines to support judgment and reorganizing judgment around machines.

The discipline Nielsen helped build was never really about wireframes or usability checklists. It was about protecting a space for human judgment inside rapidly expanding technical systems.

That space has not disappeared.

If anything, the faster machines become at generating solutions, the more necessary human judgment becomes.

Published on LinkedIn and Substack


Hi, I’m Christine. 👋

In a world awash in buzzwords and borrowed courage, Dative.works is a pocket of resistance—where data pros, skeptics, and stray idealists come to build something less disposable.

If you’d rather ask better questions than echo easy answers, pull up a chair. The work starts here.
