
Maine and Missouri Move to Ban AI Therapy Chatbots as States Race to Regulate Mental Health AI

Maine's LD 2082, which bans licensed mental health providers from using AI for independent therapeutic decisions, passed both legislative chambers on April 7–8 and heads to the governor. Missouri's HB 2372 passed the full House on April 2 with a $10,000 first-offense penalty clause, and now sits in the Senate. The twin developments signal an accelerating state-level movement to draw hard legal lines around AI in mental healthcare.


A Legislative Wave Takes Shape: States Are Drawing Hard Lines Around AI in Mental Health

Two state legislatures, working independently half a continent apart, arrived at near-identical conclusions in the span of a single week. Maine passed its AI-in-mental-health-therapy ban on April 7–8. Missouri’s House passed its own version five days earlier. Together, they represent the clearest signal yet that state lawmakers — frustrated with the pace of federal action and alarmed by the proliferation of AI companionship and therapy products — are moving to establish hard legal boundaries around artificial intelligence in mental healthcare.

The legislation arrives at a moment of genuine uncertainty about the safety of AI mental health products. Therapy chatbots have proliferated rapidly since the mid-2020s, filling a real access gap in a country where psychiatrists and therapists are in critically short supply and waitlists stretch for months. But several high-profile incidents — including deaths linked to prolonged engagement with companion AI products — have raised urgent questions about what these systems should and should not be permitted to do.

Maine LD 2082: A Unanimous Committee, a Clear Rule

Maine’s LD 2082, formally titled “An Act to Regulate the Use of Artificial Intelligence in Providing Certain Mental Health Services,” passed through the Joint Standing Committee on Health Coverage, Insurance and Financial Services with a unanimous “Ought to Pass” vote before clearing both the House and Senate on April 7–8. The bill is now headed to Governor Janet Mills, who must act before the legislature adjourns on April 15.

The bill was sponsored by Rep. Amy Kuhn (D-Falmouth) and drafted in partnership with Spurwink, a Maine-based behavioral health organization.

The core prohibition is straightforward: the bill bans licensed mental health providers from using AI to make independent therapeutic decisions, interact directly with clients, or generate therapeutic recommendations. AI is permitted for purely administrative functions — scheduling, documentation, billing — but cannot substitute for the clinical judgment of a licensed professional.

The bill does carve out a narrow exception for therapy chatbots under strict conditions. To qualify for the exception, a chatbot must: (1) display a prominent disclaimer at the beginning of every interaction identifying itself as an AI, not a licensed clinician; and (2) operate only as part of a comprehensive treatment plan prescribed and monitored by a licensed mental health professional, with that professional specifically assessing the minor patient’s suitability for chatbot interaction.
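
For products hoping to qualify, the exception reads like a per-session gating requirement. Here is a minimal sketch of what that gate might look like in code — the `TreatmentPlan` record, field names, and `start_session` entry point are all hypothetical illustrations, not anything drawn from the bill text:

```python
from dataclasses import dataclass
from datetime import date

# Condition (1): a prominent disclaimer at the start of every interaction.
AI_DISCLAIMER = (
    "You are talking with an AI program, not a licensed clinician. "
    "This tool is part of a treatment plan supervised by your provider."
)

@dataclass
class TreatmentPlan:
    """Hypothetical record of the clinician authorization the bill requires."""
    supervising_clinician_id: str
    chatbot_authorized: bool  # clinician assessed this minor's suitability
    expires: date

def start_session(plan: TreatmentPlan | None) -> str:
    """Open a chat session only if both statutory conditions are met."""
    # Condition (2): use must be part of a plan prescribed and monitored by
    # a licensed professional, with suitability specifically assessed.
    if plan is None or not plan.chatbot_authorized:
        raise PermissionError("No clinician-authorized treatment plan on file.")
    if plan.expires < date.today():
        raise PermissionError("Treatment plan authorization has lapsed.")
    return AI_DISCLAIMER  # the disclaimer opens every interaction
```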

That last phrase — “minor patient” — is important. The exception is specifically framed around minors, reflecting the legislature’s primary concern: young people forming dependent relationships with AI companions and therapy bots in the absence of clinical supervision.

“This is a common-sense solution that will preserve the human connection in mental health care,” a sponsor statement read. “AI may have a supportive role, but it cannot replace the judgment, empathy, and accountability of a licensed professional.”

Missouri HB 2372: $10,000 Penalties and Attorney General Enforcement

Missouri’s approach is different in important ways. HB 2372 is an omnibus healthcare bill, with the AI therapy chatbot provision embedded within a broader set of mental health regulations. The prohibition covers AI performing “therapy services, psychotherapy services, or a mental health diagnosis.”

The enforcement mechanism is notably aggressive. First violations carry a $10,000 penalty, enforced by the Attorney General’s office. That is a significant deterrent for startups and smaller telehealth providers that have been deploying AI-driven therapy tools to fill service gaps in underserved communities.

HB 2372 passed the full House on April 2 and now sits in the Senate Committee on Families, Seniors and Health. The legislative calendar suggests a vote before the session ends, though the bill’s omnibus nature means it may face amendment or negotiation in the Senate.

The Missouri bill is notable for what it does not address as much as what it does. Unlike Maine’s LD 2082, it does not establish a licensed-professional supervision exception for AI chatbot use in therapy contexts — effectively treating any AI system that performs therapeutic interaction as a prohibited practice, period. Advocates for rural mental health access have raised concerns that this bright-line rule could eliminate tools that currently serve patients who have no other option.

A Four-State Pattern Emerges

Maine and Missouri are not isolated. At least four states are now actively legislating AI use in mental health contexts in 2026:

  • New York enacted safeguards for AI companions earlier this year, requiring disclosure when a user is interacting with an AI and mandating crisis escalation protocols when a companion AI detects signs of acute distress (a sketch of such an escalation hook follows this list).
  • Utah passed legislation in Q1 2026 barring AI systems from issuing psychiatric medication prescriptions without licensed physician oversight.
  • Maine (pending governor signature) prohibits AI-based independent therapeutic decision-making by licensed providers.
  • Missouri (pending Senate vote) bans AI from performing therapy services or delivering mental health diagnoses.
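
New York’s mandate is the most protocol-like of the four: detection of acute distress must interrupt the companion and surface crisis resources. A minimal sketch of that escalation hook, with both the detector and the companion model as stand-in callables — the statute mandates the behavior, not any particular implementation:

```python
from typing import Callable

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "If you are in crisis in the US, call or text 988 (Suicide & Crisis Lifeline)."
)

def crisis_gate(
    user_message: str,
    detect_acute_distress: Callable[[str], bool],
    companion_reply: Callable[[str], str],
) -> str:
    """Wrap a companion model so acute-distress messages trigger escalation."""
    if detect_acute_distress(user_message):
        # Escalation path: break the companion persona, surface resources.
        return CRISIS_MESSAGE
    return companion_reply(user_message)

# Toy usage with a keyword detector; a real product would use a classifier.
reply = crisis_gate(
    "I can't keep going",
    detect_acute_distress=lambda m: "can't keep going" in m.lower(),
    companion_reply=lambda m: "I'm here with you. Tell me more.",
)
```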

The pattern reflects a broader dynamic in AI governance: Congress has moved slowly on federal AI legislation, and states are filling the vacuum with their own regulatory frameworks. The result is a patchwork of rules that differs by state — a pattern that the tech industry has consistently warned will create compliance complexity for companies operating nationally.
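
The compliance burden is easy to see when the four regimes are laid side by side. A hypothetical per-jurisdiction policy table — encoding only the rules described above, with capability names invented for illustration — shows how the same product must behave differently state by state:

```python
# Hypothetical capability gating built from the four state rules above.
STATE_POLICY: dict[str, dict[str, str]] = {
    "ME": {"therapeutic_chat": "clinician_supervised_minors_only"},
    "MO": {"therapeutic_chat": "prohibited"},  # no supervision carve-out
    "NY": {"companion_chat": "allowed_with_disclosure_and_crisis_escalation"},
    "UT": {"rx_issuance": "licensed_physician_oversight_required"},
}

def policy_for(state: str, capability: str) -> str:
    """Look up the rule for one capability in one state."""
    return STATE_POLICY.get(state, {}).get(capability, "unregulated")

assert policy_for("MO", "therapeutic_chat") == "prohibited"
assert policy_for("TX", "therapeutic_chat") == "unregulated"
```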

The Products Under Pressure

The legislation is primarily aimed at a specific category of AI products that has grown rapidly: conversational AI systems marketed directly to consumers as therapeutic, emotional support, or mental health tools. Products like Character.AI’s companion bots, Woebot, Wysa, and dozens of smaller applications have collectively accumulated tens of millions of users seeking support for anxiety, depression, loneliness, and relationship challenges.

The regulatory attention intensified after a 2025 lawsuit in which a family alleged that a minor’s suicide was connected to an intensive relationship with a Character.AI companion. That case — and several others that followed — became the catalyst for the current wave of state legislation. Lawmakers are not primarily targeting clinical software deployed within healthcare systems under professional supervision. They are targeting consumer-facing AI products that interact directly with vulnerable users outside of any clinical relationship.

This distinction matters for the scope of the legislation. Enterprise clinical decision support software — AI tools that help licensed clinicians with documentation, diagnosis coding, and treatment planning — is generally not affected by the current crop of state bills. The restrictions fall on autonomous AI-to-patient interaction.

The Access vs. Safety Tension

The central tension in this policy debate is real and unresolved. The United States has a severe shortage of mental health providers: even before these bills were introduced, the Health Resources and Services Administration estimated the country was short more than 8,000 mental health practitioners, with wait times in rural areas stretching to months. AI tools, whatever their risks, have been filling a gap the existing healthcare system cannot close.

Critics of the state bans argue that the legislative response is overcorrecting — banning technology that, used appropriately and transparently, could help millions of underserved patients who otherwise have no access to any mental health support. A poorly supervised human therapist can do harm too, they note, yet we do not ban human therapy.

Proponents counter that the risks of AI in this context are qualitatively different: AI systems cannot be held accountable, cannot recognize the limits of their competence, cannot form genuine therapeutic alliances, and can be deployed at scale by commercially motivated actors with minimal safety safeguards. The harm, they argue, is not equivalent, and the standard of proof for safety should not be the same.

Federal Action Still Absent

Notably absent from this landscape is any meaningful federal legislative response. The Trump administration’s 2025 executive order on AI preemption — which directed federal agencies to develop uniform AI standards — has not yet produced the kind of comprehensive mental health AI framework that would supersede state-by-state legislation.

The Federal Trade Commission has taken some action under existing unfair and deceptive practices authority, requiring certain AI companionship products to clarify their non-therapeutic nature to users. But the gap between FTC enforcement actions and the kind of comprehensive safe-harbor framework that the industry has sought remains wide.

As long as that gap persists, states will continue to act — and the patchwork will continue to expand.

What Comes Next

For the companies most directly affected — companion AI startups, telehealth platforms, and mental health app developers — the legislative calendar over the next 30 days is consequential. Maine’s governor must act before April 15. Missouri’s Senate must move by session close.

But the more important signal is the pattern itself: four states, legislating with real urgency, within a single quarter. Whether or not every bill passes in its current form, the message is clear. States are no longer willing to wait for federal leadership on AI in mental health. The window for industry self-regulation has narrowed sharply — and the legal landscape for AI therapy products is about to become considerably more complex.

