AI Legislation Weather Report: This Week in “Please Stop Letting Robots Talk to Kids Like That”

If the bill numbers in Transparency Coalition’s weekly roundup make your eyes blur, that’s normal. It’s also the wrong way to read the roundup. This isn’t “AI regulation” in the abstract. It’s a rolling, state-by-state effort to write product-safety rules for the places AI is already touching everyday life: chatbots that impersonate humans, tools that can generate sexual content, systems that can imitate your face and voice, and, most urgently, interfaces designed to keep kids engaged.

Think of this update as an AI weather report:

  • What moved this week

  • What’s building pressure

  • What’s likely to become enforceable next

Because there won’t be a single national “AI law” that arrives all at once. There’s a patchwork forming in real time, and the pattern matters more than the bill numbers.

Landfall: Kansas enters “signature territory”

Kansas lawmakers approved an AI-related child exploitation bill (HB 2183) and sent it to the governor’s desk.

That’s a different category of event than “a bill was introduced.” One is basically a press release with ambition. The other is the policy equivalent of the plane leaving the gate.

Whatever your views on regulation, Kansas is not debating hypotheticals. It’s treating AI-modified or AI-generated child exploitation material as an immediate enforcement problem. Not “someday.” This week.

Atmospheric River: Illinois floods the zone

Illinois legislators introduced more than a dozen AI-related measures in a single week, including multiple chatbot safety bills.

When a legislature drops that much volume at once, the subtext is simple:
“We don’t know which of these will survive, but we want to shape the field.”

But the pile isn’t random. Two themes keep resurfacing across states:

  • chatbot safety, especially where minors are involved

  • provenance / disclosure, i.e., how AI-made content is labeled and traced

Illinois is signaling: Fine, if this is going to exist, it’s going to exist with rules.

Fast-Moving Front: Oregon + Washington hit short-session sprint mode

Meanwhile, in Oregon and Washington, chatbot safety bills moved through committee and are now awaiting floor votes.

Here is the key point: short sessions make policy brutal. Bills don’t evolve; they move or die lonely in a hallway while everyone pretends it was “always a study bill.”

So yes: last week had that specific short-session smell—Rules committee… floor calendar… now or never.

Washington isn’t just moving on one thing. The “Bills to Watch” list shows THREE lanes that tell you exactly what lawmakers think is happening in the real world:


Which weather system are you reacting to?

1) Kids + chatbots: “Stop treating minors like beta testers.”

This is the center of gravity right now.

Across multiple states, the premise is basically:

“If your product can talk like a person, it needs rules like a product that affects people.”

So we’re seeing recurring requirements like:

  • clear notice you’re interacting with AI

  • age verification / minor protections

  • restrictions on sexual content, especially with “companion” bots

  • crisis safeguards (self-harm, suicidal ideation)

Washington’s HB 2225 and SB 5984 (companion bills) are now awaiting floor votes in their respective chambers. Oregon’s SB 1546 could move to a floor vote early next week. California’s SB 300 aims to prevent companion chatbots from producing or facilitating sexually explicit material.

Translation: states are treating certain chatbots like youth-risk products, with guardrails first and innovation second.

And honestly? That’s not a crazy instinct. We regulate toys. We regulate apps. We regulate seatbelts. But until now, we’ve mostly let “emotionally persuasive software” wander around like a golden retriever at a toddler birthday party.

2) Deepfakes + digital likeness: “My face isn’t your training data’s side hustle”

Washington also has a bill moving that treats “forged digital likeness” as something people have a property right in.

That is a big rhetorical shift: it’s not just “this is creepy,” it’s “this is theft.”

It’s the state saying: your name/face/voice shouldn’t be free raw material for impersonation, profit, or coercion.

3) Training data transparency: “What did you build this thing out of?”

Washington’s HB 2503 concerns transparency of training data.

This is part of a growing push for baseline documentation. Not “show us every record,” but “stop acting like the inputs are a sacred mystery.” Because if a system is powerful enough to shape behavior, markets, or public trust, the public will eventually ask: what is it made of?

4) Liability: “If the bot harms someone, who’s holding the bag?”

Under everything else is the question lawmakers don’t want to say too loudly yet:

If a chatbot encourages harm, simulates intimacy with minors, or offers dangerous advice—who is responsible?

That’s why we’re seeing movement toward:

  • operator duties (you must build safety features)

  • limits on bots posing as therapists or medical providers

  • liability hooks (some states)

This is the state-level version of refusing to accept: “Well, the model did what it did.”
Because that sentence is just negligence in a lab coat.

Saving you a seat at the next hearing,

 

Also published on LinkedIn and Substack
