Stewardship for the 99%: A Manifesto
We are at a crossroads
In the fall of 2025, Microsoft’s President for Central Europe and Central Asia told a ballroom full of leadership scholars, most of whom had paid dearly to be there, that while preparing the night before she “hoped Copilot helped make her presentation more relevant.” This isn’t about one late night; it’s the doctrine: optimize the story, outsource the work, invoice the audience. She wore tech-optimism like a uniform: health sensors for her body, book summaries for her mind, copilots for on-the-fly relevance. Then the closer: “AI is like the camera; painting didn’t die, it evolved.” The metaphor was designed to soothe, but it is wrong. Cameras weren’t engineered to maximize compulsion or scale persuasion; they didn’t learn our vulnerabilities between frames or A/B-test our attention. The comparison launders a machinery of engagement, surveillance, and influence into a new paintbrush, and calls the result individual agency. The talk landed upbeat and generous with metaphors. “Skepticism is healthy,” she assured us, “but only as a waypoint,” a stop on the way to adoption. Then the moral pivot: she had no qualms about counseling the crowd that “responsibility begins with us.” Use tech wisely. Don’t waste compute. Be responsible. A fluent story that smooths pace, energy, data sourcing, and labor impact into décor. The smoothness is the point.
In the same program, a Senior Researcher at the Institute of Criminology, Faculty of Law, Ljubljana, Slovenia, began where keynotes rarely dwell: with the feelings that cross a room faster than facts, and with who benefits from that speed. She invoked Havel’s greengrocer, whose flag in the window costs less than dissent, to show how systems survive on performance, not belief. The regime doesn’t need conviction, she warned, only the small, complicit acts of display. Then she widened the lens. Platforms don’t just register anger; they stage it. Ignorance isn’t singular but threefold: what we avoid because it hurts; what is engineered so we can’t see it; and what cannot honestly be known on a quarterly timetable. Only the first yields to training; the other two require jurisdiction: the power to compel disclosure, to slow a rollout, to say “not yet.” In a world of synthetic images and confident models, provenance stops being a decorative watermark; it becomes the chain of custody that lets a claim hold up in court instead of merely traveling well online. Inside firms, as in the town square, fluency outruns proof; slides and sample sizes impersonate authority. Our contemporary party flags are dashboards, talking points, and now, increasingly, palliative news headlines: “AI augments people,” “quick wins,” “staying nimble,” “responsible use.” We hang them because dissent is pricey and display is cheap.
These are not merely two keynote styles: they are two power structures fighting over the terms of AI use and stewardship. The two talks serve here as illustrations of common narratives; the argument concerns systems and incentives, not individuals. On one side, the Big Tech scale-first camp (and its admirers) treats AI as the engine of extraction-led growth. Progress in this narrative means shipping models fast, pushing telemetry into every role, and celebrating access to the same tools that, in the next breath, are used to justify head-count cuts, normalize surveillance as productivity, saturate information spaces with synthetic media, and push energy and water costs onto the public. It is blitz-scale automation as civic virtue, asking people to be grateful that where managers once monitored, an automated, polished, fluent dashboard now does the watching, scoring, and judging.
In sharp contrast, the public-interest stewardship camp treats AI as a public-risk technology. Some safeguards are non-optional: contestability by design; provenance and a chain of custody, so evidence can stand up outside the demo; disclosed energy and water budgets; and slow, multi-party gates wherever identity, safety, pay, or due process are touched. Here, jurisdiction comes first: who determines what change is needed, how the change should occur, who can stop it, what counts as proof, and who pays, all settled before scale. This is the ground the Steward of Jurisdiction points to. They do not sentimentalize reflection. Interior work, they argue, licenses exterior limits. Limits set without reflection are theater; reflection that never sets limits is complicity. Wisdom is the hinge: the pause that makes a decision answerable to more than mere speed. In practice, it looks ordinary: timeboxed slack for doubt, the habit of naming unknowns aloud, and holding a release until the harms are fairly priced rather than paid in the labor of invisible hands.
Faced with these two visions, we stand at a real crossroads. To one side, glossy AI is overlaid onto institutions that never had to answer to the people they affect: work is shaved down to whatever a sensor can count, “certain” outputs float around with no way to prove who made them, attention is siphoned into endless prompts, and the bill drifts quietly downstream as layoffs, silent exits, and water pumped out of towns already on ration. The other road is more direct and harder: systems you can interrupt and win against; AI outputs that carry a chain of custody sturdy enough for a grievance, a court, or a journalist; deployments that show their energy and water costs before they scale; transitions that keep wages and dignity intact. This isn’t a quarrel over using AI. It’s a fight over jurisdiction: who names the terms, who can say stop, what counts as evidence, and who pays (and how) when the story breaks.
But what makes the choice pressing now is that there is almost no middle ground left. It didn’t vanish by accident. It was crowded out: by a growth doctrine that treats scale as moral proof and admits no “enough”; by an extractive data frame that recasts consent as obstruction; by a geopolitical race that badges every pause as falling behind; by regulators who, instead of saying “not yet,” swallowed exit threats and wrote “innovation-friendly” rules that promised to “watch”; and by the hollowing of the very institutions (unions, faculties, and public watchdogs) that could have made slowness legitimate. Platforms then make fluent optimism trend and measured caution disappear. Hence, Prague: an adoption story that feels effortless and upbeat, and a stewardship story that feels heavy before it speaks. The first is easy because the system is built for it. The second is hard because the system has been stripping out its brakes while driving downhill.
This is a manifesto for an alternate path, a course both necessary and feasible. An outright anti-AI stance has become unthinkable, not because AI is benign, but because the credibility of the people who could have drawn real lines has collapsed. The casualties include not only the centrist parties that sold innovation as social progress, but also their platform-corporate allies, whose techno-optimist sheen rubbed off sometime between the 2016 election and the 2018 platform hearings. That was the moment the Obama-era faith (that platforms would modernize democracy, widen participation, and make government smarter) ran straight into the fact that those same platforms could be used to distort elections faster than institutions could correct them. From that point on, responsible AI and stewardship coming from the same quarters read less like care and more like control. Now the question is no longer “To AI, or no AI?” That door is shut. The question is whether AI will be governed as an instrument of extraction and pace, or as a public-risk technology answerable to the people it observes, scores, and sometimes harms.
During the Obama era, a platform-progress consensus, echoed by European centrists, held that scale was a key to civic modernization. Government and the public largely missed what the platforms’ leaders understood: these weren’t utilities; they were ad-funded surveillance markets that turned users into inventory and attention into currency. By the 2016 election and the 2018 privacy hearings, that assumption had collapsed, exposing the asymmetry: influence could be manufactured faster than accountability could arrive. From then on, “responsible AI” from those quarters registered less as care than as corporate theater. The ensuing inaction ceded global regulatory influence to the EU, whose GDPR and Digital Markets Act now define the international conversation.
Meanwhile, despite growing concerns over algorithmic bias, misinformation, and data exploitation, the U.S. has produced no meaningful safeguards. The one exception, the Consumer Privacy Bill of Rights, was a promising 2012 framework that ultimately lacked teeth and died in Congress. The gap remains, and has arguably deepened as AI becomes more pervasive. Since then, U.S. policy has prioritized national tech dominance over global tech ethics, doubling down on innovation rhetoric while drifting further from regulatory leadership and from any role in shaping global governance norms.
This manifesto is an effort to chart another path. The aim is not to sketch a tech utopia, but to mark the road that must be taken if AI is to serve people rather than extract from them. The task is threefold: to show why builders and leaders must choose stewardship over adoption-on-autopilot; to link arms with those who already resist the inevitability narrative (labor organizers, public educators, privacy jurists, environmental scientists, content and ghost-worker advocates), because they are the ones paying for “frictionless” AI; and to insist that this alliance speak for the 99 percent who are being asked to display enthusiasm while losing discretion, wages, and trust. Platforms engineer the emotion that makes the story travel, and institutions reward the performance of going along; that is why the adoption story feels effortless and the stewardship story feels costly. Only by refusing acceleration dogma and the Big Nine’s corporate idioms can we rebuild a technology politics in which evidence has a chain of custody, decisions can be appealed, and speed is answerable to human consequence.
What makes this project possible now is the emergence of a different kind of stewardship, one fought for, not branded. We saw it when Google workers walked out over Project Maven and forced a company worth hundreds of billions to back away from weaponizing its code. We saw it again when staff protested the firing of Timnit Gebru and said, without euphemism, that ethical inquiry is not a cost center. We saw it in the Amazon and Apple employee campaigns against surveillance-heavy return-to-office tools, and in Hollywood’s 2023 writers’ and actors’ strikes, where AI clauses became a line in the sand about voice, likeness, and residuals. Those actions drew the red line the CSR era never did: human dignity, privacy, authorship, and security are not perks to be negotiated after launch; they are preconditions for participation. Unlike the purpose wave, which individualized vocation, and unlike corporate social responsibility, which diffused accountability upward into slogans, these worker-led, cross-disciplinary actions practiced stewardship for the 99 percent: people who build, teach, or perform inside systems that can erase them.
Originally published on Substack and LinkedIn