The challenge
A multi-city mental health platform with 180 therapists was losing 41% of new sign-ups before the first session. Intake forms were long, matching was a manual coordinator job run in batches every two days, and the platform had no automated escalation path for high-risk language in messages. The clinical team was rightly nervous about anything AI-driven touching this surface, so trust and a tight human handoff were non-negotiable.
How we deployed
- Built a conversational intake on web + WhatsApp that gathers concern, modality preference, language, schedule and budget over 8-12 messages.
- Trained a matching layer on past successful pairings — therapist specialisation, language, lived-experience tags, availability.
- Surfaced top-3 therapist matches with bios for the patient to confirm; coordinator only intervenes on edge cases.
- Deployed a crisis-flag classifier scanning every patient message for self-harm, suicidal ideation and acute distress language.
- Routed any positive crisis flag to an on-call clinician inside 3 minutes with a transcript and escalation script.
- Logged every flag, every override and every match outcome for the clinical governance committee.
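The matching step above can be sketched as a weighted scorer that ranks therapists and surfaces the top three. This is an illustrative assumption only: the field names, weights, and scoring rules below are invented for the sketch, whereas the deployed layer was trained on past successful pairings.

```python
from dataclasses import dataclass

@dataclass
class Therapist:
    name: str
    specialisations: set
    languages: set
    lived_experience_tags: set
    has_availability: bool

def score(t: Therapist, intake: dict) -> float:
    # Hand-picked weights for illustration; the real layer learned these
    # from historical therapist-patient pairings.
    s = 0.0
    if intake["concern"] in t.specialisations:
        s += 3.0
    if intake["language"] in t.languages:
        s += 2.0
    # One point per shared lived-experience tag.
    s += len(set(intake.get("tags", [])) & t.lived_experience_tags)
    if not t.has_availability:
        s = -1.0  # never surface a fully booked therapist
    return s

def top_matches(therapists, intake, k=3):
    # Rank by score, keep only positive matches, return the top k for the
    # patient to confirm (coordinator handles the edge cases).
    ranked = sorted(therapists, key=lambda t: score(t, intake), reverse=True)
    return [t for t in ranked if score(t, intake) > 0][:k]
```

In this sketch, unavailable therapists are filtered by forcing a negative score, which keeps the ranking and the filtering in one place.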
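The crisis path can be sketched as a flag-then-route handler: screen the message, hand the transcript to an on-call clinician with the escalation deadline attached, and append an audit record. The keyword screen below is a stand-in assumption; the deployed system used a trained classifier, and the `notify` callback here is a hypothetical hook for whatever paging channel is in use.

```python
import time

# Stand-in keyword screen for the sketch only; the production system
# used a trained crisis-flag classifier, not a term list.
CRISIS_TERMS = {"suicidal", "self-harm", "kill myself", "end it all"}

ESCALATION_SLA_SECONDS = 180  # "inside 3 minutes"

def is_crisis(text: str) -> bool:
    lowered = text.lower()
    return any(term in lowered for term in CRISIS_TERMS)

def handle_message(text: str, transcript: list, notify, audit_log: list) -> None:
    """On a positive flag, page the on-call clinician with the full
    transcript and the SLA deadline, and log the flag for governance."""
    if not is_crisis(text):
        return
    flagged_at = time.monotonic()
    notify(transcript=transcript + [text], deadline_s=ESCALATION_SLA_SECONDS)
    audit_log.append({"flagged_at": flagged_at, "message": text})
```

Logging on every flag (not just confirmed crises) is what makes the "zero misses on audit" claim checkable by the clinical governance committee.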
What changed
- Median therapist-match time fell from 6.4 days to 14 hours.
- Waitlist drop-off fell 58% within 60 days.
- Every crisis flag in 90 days was escalated to a clinician inside 3 minutes — zero misses on audit.
- Patient match-satisfaction score sat at 4.7/5 across 1,200+ sessions.
- Clinical coordinators were reallocated to case supervision and quality calibration.
— Clinical Director · Mental health platform

