AI Is Not Just a Tool
From Tool to Institution
At every conference, someone reaches for the tranquilizer line: “AI is just a tool, like a camera.” It sounds sensible because cameras once calmed us: art didn’t die; it changed. But cameras point at the world and capture what’s there. Modern AI points at us and proposes what comes next: labels, scores, sentences that other systems treat as facts.
Two failures show the split. Press the shutter at the wrong moment and you get a blur. Everyone can see the error; the fix is to take another photo. Let a model flag a teacher as a cheater or a traveler as a risk, and there’s no blur to see, only the cold grammar of authority: “The system found a pattern.” Now the fix isn’t a retake. It’s a process: evidence, appeal, reversal. That is the move from tool to institution. It’s not aesthetic; it’s jurisdiction.
We cling to the camera line because it flatters us as operators. If AI is a tool, then we’re authors—responsible for taste, not power. But cameras are passive instruments; they don’t teach you or shape you. Generative models are active systems with trained objectives, guardrails, and sampling choices. Not a soul, but a policy. Not consciousness, but consequences.
Systems, Not Lenses
Where law and security intersect with the machine, the metaphor breaks down. Statutes and frameworks treat AI as a system that acts, not as an instrument that merely records. AI risk management isn’t a “how to use the tool” tip sheet; it’s cradle-to-grave stewardship, from design to retirement. Security treats models as attack surfaces: prompt injection, data poisoning, and insecure output handling. You don’t “poison” a Nikon.
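One of those attack surfaces is easy to make concrete. Below is a minimal, hypothetical sketch of the basic defense against insecure output handling: treat the model’s output as untrusted input and validate it against an allow-list before anything downstream acts on it. The function name, the JSON shape, and the allow-list are invented for illustration, not taken from any particular framework.

```python
import json

# Hypothetical allow-list: the only actions downstream code will ever run,
# no matter what the model's output asks for.
ALLOWED_ACTIONS = {"summarize", "translate", "tag"}

def handle_model_output(raw_output: str) -> dict:
    """Treat model output the way you'd treat a form field from the open
    internet: parse it defensively and refuse anything off the allow-list.
    Illustrative only, not a complete defense."""
    try:
        parsed = json.loads(raw_output)
    except json.JSONDecodeError:
        raise ValueError("Model output was not valid JSON; refusing to act on it")

    action = parsed.get("action")
    if action not in ALLOWED_ACTIONS:
        # A prompt-injected instruction ("delete all records") dies here.
        raise ValueError(f"Action {action!r} is not on the allow-list")

    return parsed
```

A lens never needs this kind of defense; a system that proposes actions does.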
Authorship doctrine draws a bright line: a photograph is protectable because a human author made it; purely AI-generated artifacts, absent sufficient human control, aren’t. The law refuses to pretend that a prompt is a picture. And regulators warn about manipulation risks that are baked in, not bolted on: systems that learn our vulnerabilities and operationalize persuasion at scale.
Regulate a model like a camera, and you’ll end up auditing the tripod while the system makes decisions in the background, decisions that bind people who never opted into your metaphor. The implications are institutional: provenance and chain-of-custody for claims; contestability by design; slow, multi-party gates wherever identity, safety, pay, or due process are at stake. That’s the ground this argument stakes out: the fight has moved to who can change the system, who can stop it, what counts as proof, and who pays when it’s wrong.
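What “contestability by design” can mean in practice is a record that travels with every machine-made claim. The sketch below is a hypothetical schema, not drawn from any statute or framework; every field name is an assumption chosen for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One machine-made claim, packaged so it can be contested.
    Hypothetical schema; field names are illustrative."""
    subject_id: str            # whom the decision binds
    claim: str                 # what the system asserted ("flagged as risk")
    model_version: str         # provenance: which model, which weights
    input_digest: str          # chain of custody: hash of the evidence used
    confidence: float          # the score, kept so it can be challenged
    appeal_route: str          # who reviews the claim, and how to reach them
    human_signoff: Optional[str] = None   # multi-party gate for high-stakes uses
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    def is_actionable(self) -> bool:
        # High-stakes decisions bind no one until a human has signed off.
        return self.human_signoff is not None
```

The particular fields matter less than the principle: provenance, proof, and the appeal route ship with the decision, instead of being reconstructed after the harm.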
The Jurisdiction Hinge
We get tangled up in the concept of “agency” because we imagine a mind. That is the wrong frame. Here, agency is the built-in policy: the training distribution, the objectives, and the sampling rules that determine which token comes next. The model is not a person, but it isn’t an empty pipe either. It embodies design choices that get enacted at scale, with the kind of confidence humans mistake for competence.
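What that policy looks like at the level of a single token fits in a few lines. This is an illustrative toy, not any particular vendor’s decoder; names like sample_next_token, temperature, and top_k are assumptions made for the sketch.

```python
import numpy as np

def sample_next_token(logits, temperature=0.7, top_k=50, rng=None):
    """One decoding step: the 'policy' that picks the next token.

    Nothing here is a mind. The logits (shaped by training data and
    objectives) plus temperature and top_k fully determine the
    distribution the next token is drawn from.
    """
    rng = rng or np.random.default_rng()

    # Guardrail-style intervention: a deployment can simply forbid tokens.
    # (Illustrative only; real systems use more elaborate filters.)
    # logits[banned_token_ids] = -np.inf

    # Temperature rescales confidence: lower means more deterministic.
    scaled = logits / max(temperature, 1e-6)

    # Top-k truncation: everything outside the k most likely tokens is dropped.
    top_ids = np.argsort(scaled)[-top_k:]
    probs = np.exp(scaled[top_ids] - scaled[top_ids].max())
    probs /= probs.sum()

    return int(rng.choice(top_ids, p=probs))
```

Change the temperature, the ban list, or the objectives that shaped the logits and you change what the system “decides.” That is why the interesting questions are about who controls those knobs.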
So stop asking, “Is AI art?” Ask, “Who controls the switch?” Who can change the model? Who can halt it? What counts as proof when it’s wrong? Who pays for correction, delay, and denial? If the remedy is “take a better picture,” you have a camera. If the remedy is “file an appeal,” you’re dealing with an institution. Institutions need rules before scale, not after harm.
The camera made images cheap. AI makes judgments cheap. History suggests that when judgment gets cheap, someone else pays the price. This is the messy middle: old bundles of trust are fracturing; new ones aren’t yet stable. The answer isn’t a comforting metaphor. It’s design discipline: naming decision points, assigning duty, costing risk, and building brakes, so fluency doesn’t outrun proof.
Summary
Tool → Institution: Cameras capture; models propose—and their mistakes require appeals, not retakes.
Systems, not lenses: Law, risk, and security already treat AI as acting systems (provenance, redress, attack surfaces). Regulate it like a camera and you end up auditing the tripod while the system decides.
Jurisdiction over fluency: Ask who can change or stop the system and what counts as proof. If outputs touch identity, safety, pay, or due process, govern before scale, not after harm.