This year brought a quiet but significant milestone in AI governance. New York and California enacted laws targeting relational AI: systems that remember, adapt, and participate in sustained conversation. It’s a subtle but important evolution — moving from regulating models as abstractions to regulating AI as part of lived human experience, where real people are influenced, supported, and sometimes emotionally impacted by the systems they use. For those of us in the legal and product trenches, it signals that policymakers are beginning to meet AI at the level where it actually shows up in people’s lives, not just in technical documentation.
Regulators tend to step in when markets don’t set their own guardrails. BetterUp didn’t wait for that moment. Our AI Coach was built inside a safety and ethics architecture shaped by more than a decade of human coaching. Our operational discipline long predates relational AI’s appearance on any legislative agenda.
The new AI companion laws in New York and California
Both states now define AI companions as systems that retain context, personalize interactions, and engage with users on personal or emotional topics. AI coaching sits squarely within that scope.
New York’s law (effective November 5, 2025)
- Requires conspicuous, recurring AI disclosure
- Requires detection of suicidal ideation or self-harm
- Requires redirection to crisis resources and prevention of harmful outputs
California’s SB 243 (effective January 1, 2026)
- Requires disclosure where a reasonable user could mistake AI for a human
- Requires crisis-detection and crisis-prevention safeguards
- Requires public transparency materials by January 2026
- Requires annual crisis-referral reporting beginning July 1, 2027
Together, these laws mark the first U.S. regulatory framework specifically aimed at AI systems that engage users emotionally and relationally. It’s a meaningful step toward building structure in an area that has, until now, relied heavily on the ethics of individual companies.
The safety and ethics infrastructure BetterUp built long before these laws
BetterUp’s head start is not coincidental; it’s structural and by design.
Long before “AI companions” had a statutory definition, BetterUp was supporting human beings through personal and professional growth at scale. Professional coaching reaches into every stage of a person’s professional journey, including the parts that can be difficult. Doing that well required a formal Coaching Ethics and Safety practice: crisis detection protocols, escalation pathways, supervision, and clear boundaries for emotionally charged conversations. Those foundations weren’t created for AI; they were created to responsibly support millions of human coaching interactions.
So when we began building an AI Coach, we didn’t retrofit safety around it. We extended a mature, deeply practiced safety model into a new modality.
- Our ethicists were integrated from day one, translating our human-coaching standards into AI contexts.
- Our guardrail logic, crisis protocols, and escalation thresholds grew out of real-world human coaching — not compliance checklists.
- Our AI Safety & Guardrail Framework formalizes this into an operational system with:
- A proprietary multi-agent detection architecture
- A human-on-the-loop model led by our Coaching Ethics team
- Staff who can initiate outreach in crisis scenarios
- Escalation pathways refined over a decade of practice
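To make the shape of that framework a little more concrete, here is a deliberately simplified sketch of the general pattern: several independent detectors score each conversation turn, and an escalation step routes higher-risk signals toward a human reviewer. Every name, phrase list, and threshold in it is hypothetical and far cruder than a production system; it illustrates the multi-detector, human-on-the-loop pattern, not our proprietary architecture.

```python
# Illustrative only: a toy multi-detector, human-on-the-loop flow.
# Detector names, phrase lists, and routing labels are hypothetical.
from dataclasses import dataclass
from enum import IntEnum
from typing import Callable, List


class Risk(IntEnum):
    NONE = 0
    ELEVATED = 1
    CRISIS = 2


@dataclass
class Signal:
    risk: Risk
    source: str


def keyword_detector(text: str) -> Signal:
    """Toy stand-in for one detection agent."""
    if "hurt myself" in text.lower():
        return Signal(Risk.CRISIS, "keyword_detector")
    return Signal(Risk.NONE, "keyword_detector")


def distress_detector(text: str) -> Signal:
    """Toy stand-in for a second, independent detection agent."""
    if "hopeless" in text.lower():
        return Signal(Risk.ELEVATED, "distress_detector")
    return Signal(Risk.NONE, "distress_detector")


DETECTORS: List[Callable[[str], Signal]] = [keyword_detector, distress_detector]


def assess(message: str, history: List[str]) -> Risk:
    """Run every detector over the full conversation and keep the highest risk."""
    window = " ".join(history + [message])
    return max((d(window).risk for d in DETECTORS), default=Risk.NONE)


def route(risk: Risk) -> str:
    """Escalation: crisis notifies a human reviewer and surfaces crisis resources;
    elevated risk is queued for review; otherwise coaching continues."""
    if risk is Risk.CRISIS:
        print("ALERT: notifying on-call ethics reviewer")  # human-on-the-loop step
        return "share_crisis_resources"
    if risk is Risk.ELEVATED:
        print("Queued for ethics-team review")
        return "respond_within_supportive_boundaries"
    return "continue_coaching"


if __name__ == "__main__":
    history = ["I've been feeling hopeless lately."]
    print(route(assess("I'm not sure I can keep going.", history)))
    # Prints the review-queue notice, then "respond_within_supportive_boundaries"
```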
In short: the safety culture these laws envision is the one BetterUp already operates in.
And our system performs.
Our crisis-detection models are continuously monitored and refined, achieving industry-leading precision and recall. They detect patterns across turns, escalate appropriately, and maintain supportive boundaries without flattening the coaching experience. These protocols help create a safe space where our members can grow and thrive. For an AI system intended to support real human growth, that is the bar, and we continue to push it.
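For readers who want those metrics spelled out: precision is the share of flagged conversations that reviewers confirm involved genuine risk, and recall is the share of genuine-risk conversations the models actually flagged. The snippet below is a generic illustration of computing both from human-reviewed labels; the sample data and counts are invented for the example and do not reflect our reported figures.

```python
# Generic precision/recall calculation over human-reviewed crisis flags.
# The sample data below is invented purely for illustration.
from typing import List, Tuple


def precision_recall(predictions: List[bool], labels: List[bool]) -> Tuple[float, float]:
    """predictions: model flagged the conversation; labels: reviewers confirmed risk."""
    tp = sum(p and l for p, l in zip(predictions, labels))      # true positives
    fp = sum(p and not l for p, l in zip(predictions, labels))  # false positives
    fn = sum(l and not p for p, l in zip(predictions, labels))  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall


# Five reviewed conversations: the model flagged three, reviewers confirmed two.
flags = [True, True, True, False, False]
truth = [True, True, False, False, False]
print(precision_recall(flags, truth))  # (0.666..., 1.0)
```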
Other AI services are still finding their footing
Across the landscape, many AI coaching or relational tools reference “guardrails,” but few make their methodologies public or demonstrate operational depth. California’s transparency requirement will, for many, be the first time they have to articulate how safety actually works in their systems.
BetterUp enters that era with something rare:
- A decade of documented practice
- A published safety framework
- A functioning operational ethics infrastructure
This isn’t new work for us. It’s native work.
What this means for BetterUp members and partners
For members
Your AI Coach inherits the same ethical standards and safety protocols that support our human coaching experience — proven across millions of moments of personal growth.
For enterprise partners
BetterUp assumes operator obligations under these laws, sparing organizations the need to build parallel compliance structures while still ensuring their employees are supported by a safe, ethical system.
For the broader AI ecosystem
These laws signal the beginning of more nuanced AI oversight. Safety, transparency, and responsible relational design are becoming table stakes, not differentiators.
BetterUp welcomes this direction.