Dignity By Design: Ep. 1 with Clara Hawking

I held off a long time before adding to the din of podcasts out there, where people in love with the sound of their own voice pick a name, generate a description, and hastily choose a guest to ride the social media slop-waves into the friendly ports where clients, gigs, and abundance await.


This is a first post-show note for Dignity by Design: not a recap of Clara’s essay, not a recap of the episode, and not a summary of my Manifesto for the 99%. These are reflections I had, and what they mean for where this series is headed.

The real topic isn’t AI. It’s institutional metabolism.

Every organization has a way of converting people into throughput. Some do it politely, with performance reviews and dashboards. Others do it with cages and deportation paperwork. The forms vary; the function is remarkably consistent: the system keeps moving by offloading its friction onto someone else’s body, time, reputation, or future.

AI accelerates that metabolism. It expands the system’s ability to act at scale without direct contact, and therefore without direct accountability. When the harm comes, it arrives as a paperwork artifact: a “match,” a “risk score,” a “process.”

The reason her essay lingers with people (one participant mentioned it “haunted” her) is that modern harm is rarely theatrical. It’s administrative. It’s denial wrapped in procedure. It’s violence in the language of efficiency.

“Teeth” is code for enforcement asymmetry.

The image that framed our episode (teeth removed so a worker is complaint-proof) isn’t just a historical horror. It’s shorthand for a structural pattern:

  • The system keeps its freedom.

  • The human becomes the constraint.

That’s the bargain in a labor story. It’s also the bargain in AI-mediated decision systems. When risk is flagged but the roadmap proceeds, the organization preserves optionality by making the human carry the consequence: the whistleblower becomes difficult, the governance team becomes overhead, and the harmed person becomes an edge case.

The teeth aren’t literal anymore. But the extraction pattern remains.

One of the things I’m watching (across tech, education, healthcare, immigration enforcement) is a widening gap between constraint knowledge and decision authority. Many organizations know how to name risk. Far fewer design the authority structures that make risk consequential.

When governance can describe constraints but can’t trigger review, pause, or veto, it doesn’t matter how good the policy is. The institution is structurally free to proceed.

The show’s real ethic is consent over extraction.

A small thing happened before we recorded that I’ve come to see as non-negotiable. Clara asked to read my research papers. Not as a formality, but to engage with my material and to confirm that she, as the guest, wasn’t going to be treated like raw material. I offered an outline she could edit, and we confirmed discussion boundaries up front.

That structure is not separate from the subject matter. It is the subject matter, scaled down.

We live inside systems designed for capture. Stories become content. Attention becomes inventory. Data becomes training material. Consent becomes implied because it slows things down.

This series is a deliberate refusal of that pattern. Not as politeness. As governance. Co-authorship, agency, and standing aren’t “nice to have” values. They are the conditions under which truth can be spoken without being harvested.

If I’m going to host a series about dignity, I can’t build it on the same extractive dynamics I’m critiquing. The container has to match the claim, or the whole project collapses into hypocrisy.

The “curation vs feed” detour was actually the episode in disguise.

Off camera and in the margins, we drifted into a conversation about physical media. We reminisced about Blockbuster, Netflix, iTunes, and vinyl. It would be easy to call it nostalgia, but it wasn’t. It was a proxy argument about judgment, taste, insight, and perspective.

A curated world has accountable choice. Someone can be questioned. Taste has a keeper. You can locate responsibility.

A feed-based world has plausible deniability. “The system recommended it.” “The algorithm surfaced it.” “We’re just meeting demand.”

That detour quietly reinforced one of the series’ central claims: the most dangerous systems aren’t the ones that decide. They’re the ones that make decisions feel inevitable. They replace discernment with defaults. They turn judgment into passive intake.

In that sense, the cultural argument and the governance argument are the same. One is simply easier to see.

The off-camera talk revealed the secondary economy of harm.

After the formal conversation ended, we touched on a topic I want to keep returning to in this series: what happens downstream when institutions normalize extraction. For instance, I shared how Microsoft used to work us until our eyes bled. Now, with AI, the pressure to be “always productive” is only amplified.

There is a whole secondary economy built on the damage this inflicts on human beings: therapy for emotional burnout, legal defense, divorce attorneys when home/work boundaries can’t be maintained, identity repair when performance reviews go south, medical care for stress-related collapse, and coaching programs designed to help people survive what they should never have been asked to endure.

This is not a sign of individual fragility. It’s evidence of system design. The business incentives are not aligned with individual or societal wellness; they are aligned with engagement and stock price.

Prevention work is hard to reward because its success condition is often a non-event. No scandal. No incident. No headline. In cultures that celebrate visible expansion, non-events are treated as…nothing, until the day they aren’t.

That is why accountability has to be designed as a viability layer rather than a values statement. You don’t build seatbelts because you enjoy talking about crashes. You build them because you don’t want people to become collateral. [That assumes a shared belief in the value of human life, which might be a rather large assumption in the U.S.]

The coalition I’m trying to build is temperamental, not ideological.

One of the most encouraging things about talking with Clara was the sense that we could touch on tense topics, even disagree at the edges, without losing each other.

That matters. Not because civility is the goal. But because the problems we’re dealing with are structurally complex and emotionally charged. The people who can work on them are not defined primarily by political identity; they’re defined by temperament under pressure.

  • Can you hold grief without collapsing into despair?

  • Can you hold anger without becoming reckless?

  • Can you hold complexity without laundering harm?

That’s the guest bar for this series. Not a résumé. A nervous system. A willingness to name consequences. A refusal to perform certainty.

Where this leaves me

If I had to name what this first conversation clarified, it would be this:

The future of work will not be determined by who publishes the best roadmap.

It will be determined by whether anyone has the authority—and the courage—to introduce friction when the system is about to harm someone, and to insist on repair when it does.

That is what “dignity by design” means to me: not a mood, not a mission statement, but a structural demand. It’s a refusal to let institutions purchase their legitimacy by making people absorb the cost of their automation.

Clara’s episode helped set that tone. Not by providing answers, but by refusing the first bargain: to speak about human lives as if they are abstract.

In the next episodes, I want to keep pressing the same question from different angles:

Where is the institution buying its freedom by taking someone else’s?

Because once you can see that bargain, you can start to design against it.

That is what I mean by governance with teeth.



Download the Episode 1 “What Next” handout

If you want to translate this conversation into action, download a one-page (printable) guide with Clara’s practical starting point: map how AI is actually being used, name the “why,” and align it to mission/values before anything scales.
