Playing with Fire: Hope and Dread in Artificial Intelligence and Machine Learning
A call to action, and preliminary recommendations
AI in mental health holds real promise — expanded access, steadier follow-up, earlier risk detection — but the risks are immediate: safety lapses, harm (especially to naive users), privacy breaches, biased outcomes, workflow overload, and misplaced trust in confident-sounding tools. Safe deployment requires clear anchors: educate leaders and the public, regulate with nuance, address acute safety now, stay mindful of geopolitical stakes, and keep hype in check while preparing for discontinuities. Read to the end for bonus material.
Attend to acute safety — now
Current systems influence care; safety must be a living practice.
- Rapid escalation: clear pathways and time-bound SLAs for high-risk signals.
- “Never-alone” rules: AI alerts; human clinicians lead in crises; no autonomous crisis guidance.
- Fallbacks and rollback: manual modes, auto-rollback on thresholds, business continuity for outages (a monitoring sketch follows this list).
- Auditable links: trace AI outputs to actions; review incidents and near-misses without blame.
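To make time-bound SLAs and “auto-rollback on thresholds” concrete, here is a minimal Python sketch of the kind of weekly monitoring check a governance team might automate. The WeeklyMetrics fields, the 5% threshold, and the action strings are illustrative assumptions, not clinical recommendations.

```python
from dataclasses import dataclass

# Illustrative threshold only; real values must come from clinical governance.
ROLLBACK_ERROR_RATE = 0.05   # auto-rollback if the weekly error rate exceeds 5%

@dataclass
class WeeklyMetrics:
    high_risk_flags: int        # high-risk signals raised by the AI tool this week
    flags_reviewed_in_sla: int  # how many a clinician reviewed within the agreed time window
    errors: int                 # confirmed erroneous outputs
    predictions: int            # total outputs

def governance_actions(m: WeeklyMetrics) -> list[str]:
    """Return the actions this week's monitoring data implies."""
    actions = []
    if m.flags_reviewed_in_sla < m.high_risk_flags:
        actions.append("ESCALATE: some high-risk flags missed the clinician-review SLA")
    if m.errors / max(m.predictions, 1) > ROLLBACK_ERROR_RATE:
        actions.append("ROLLBACK: error rate above threshold; revert to manual workflow")
    return actions

# Example week: 2 of 3 high-risk flags reviewed in time, 12 errors in 150 outputs.
print(governance_actions(WeeklyMetrics(3, 2, 12, 150)))
```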
Educate leaders
Leaders shape culture and risk appetite — literacy prevents overtrust and reflexive rejection.
- Core literacy: How ML learns; failure modes (hallucinations, drift, brittleness); calibration vs. accuracy (see the sketch after this list); why de-identification isn’t privacy.
- Bias basics: Sources of bias; fairness metrics; intersectionality; handling missing demographic data; limits of “diverse data” alone.
- Safety and uncertainty: High-risk contexts (suicide/self-harm); error trade-offs; communicating uncertainty.
- Data governance: Minimization, retention/deletion, vendor risk, and auditability.
- Practical tools: Evaluation checklists; pre-deployment questions; thresholds for pause/retire.
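For leaders who want the calibration-vs.-accuracy distinction made concrete, below is a small Python sketch. The function names and the simple equal-width binning are my own illustrative choices, not any particular library’s API: accuracy asks how often the model is right at a cutoff; calibration asks whether its stated probabilities match observed frequencies.

```python
import numpy as np

def accuracy(y_true, p_pred, threshold=0.5):
    """How often the thresholded prediction matches the observed outcome."""
    y_true, p_pred = np.asarray(y_true), np.asarray(p_pred)
    return float(np.mean((p_pred >= threshold) == y_true))

def calibration_table(y_true, p_pred, n_bins=5):
    """Whether predicted probabilities match observed event rates, bin by bin.

    A model can be accurate overall yet badly calibrated: when it says
    "90% risk", the event should actually occur about 9 times in 10.
    """
    y_true, p_pred = np.asarray(y_true), np.asarray(p_pred)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        in_bin = (p_pred >= lo) & ((p_pred < hi) | (i == n_bins - 1))  # last bin includes 1.0
        if in_bin.any():
            rows.append((f"{lo:.1f}-{hi:.1f}",
                         float(p_pred[in_bin].mean()),   # mean predicted risk
                         float(y_true[in_bin].mean())))  # observed event rate
    return rows

# Toy example: predictions cluster near 0.8, but the event occurs only half the time.
print(accuracy([1, 0, 1, 0], [0.8, 0.7, 0.9, 0.8]))
print(calibration_table([1, 0, 1, 0], [0.8, 0.7, 0.9, 0.8]))
```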
Public education and dialogue
Beyond leaders, the public needs practical, trustworthy guidance to use AI safely and avoid overreliance.
- Trusted explainers: Point to accessible resources, including experts and clinician-educators who demystify capabilities, limits, and safe use.
- Plain-language primers: Short guides on what AI can/can’t do in mental health; how to spot hallucinations; when to seek human help.
- Safety norms for consumers: Don’t use AI for emergencies; call crisis lines or seek urgent care. Protect privacy; avoid sharing identifiers or detailed histories with general chatbots. Treat outputs as suggestions, not diagnoses; verify with clinicians.
- Community engagement: Town halls, library talks, and patient advocacy partnerships to co-create materials and address concerns.
- Media hygiene: Encourage skepticism about hype; promote checklists for evaluating claims and recognizing cherry-picked anecdotes.
- Identify vulnerable groups: Teenagers and younger children are at the leading edge of a vast, uncontrolled, and dangerous social experiment. Learn from past mistakes and get ahead of the curve: integrate family and parent education with school health programs and broader public health efforts, through coherent, organized campaigns with measurable goals.
Regulate and oversee — with nuance
Guardrails must be strong and proportionate, enabling responsible iteration.
- Risk-tiered oversight with independent audits and red-teaming.
- Post-market vigilance: continuous monitoring, incident reporting, requalification, public-facing summaries when feasible.
- Transparent change logs and empowered multidisciplinary governance to pause/retire tools.
Protect privacy and data integrity
Behavioral health data requires rigorous protection.
- Minimize and encrypt; role-based access.
- Vendor diligence: security posture, secure MLOps, SBOMs.
- Clear lifecycles: retention/deletion; patient access and contest mechanisms.
- Consider federated learning or differential privacy when feasible (a toy sketch follows this list).
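As a toy illustration of the core idea behind differential privacy, the sketch below adds Laplace noise to a simple patient count. Real deployments would use vetted libraries and formal privacy-budget accounting; the function name and epsilon value here are assumptions for illustration only.

```python
import numpy as np

def dp_count(records, epsilon=1.0):
    """Release a count with Laplace noise calibrated to privacy budget epsilon.

    A count has sensitivity 1 (adding or removing one person changes it by at
    most 1), so the noise scale is 1/epsilon. Smaller epsilon means stronger
    privacy and a noisier answer.
    """
    true_count = len(records)
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Toy example: report roughly how many of 137 patients met some criterion.
print(dp_count(range(137), epsilon=0.5))
```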
Build equity in, not on
Equity is ongoing, measurable work.
- Diverse, representative data plus external validation across sites.
- Fairness monitoring: calibration and error rates across demographics and intersections; procedures for missing data (see the sketch after this list).
- Community input to interpret disparities and guide changes.
- Digital literacy supports to reduce access gaps.
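Here is a minimal sketch of what routine fairness monitoring could look like in practice. The column names (“y_true”, “y_pred”), the grouping columns, and the choice of false-negative and false-positive rates are assumptions for illustration; a real analysis would add calibration checks, confidence intervals, and careful handling of small subgroups.

```python
import pandas as pd

def subgroup_error_rates(df: pd.DataFrame, group_cols: list) -> pd.DataFrame:
    """False-negative and false-positive rates per subgroup.

    Expects 0/1 columns 'y_true' (observed outcome) and 'y_pred' (model
    decision). Passing several group_cols (e.g. ['race', 'gender']) gives a
    crude intersectional view; small cells need careful statistical handling.
    """
    rows = []
    for key, g in df.groupby(group_cols):
        positives = int((g["y_true"] == 1).sum())
        negatives = int((g["y_true"] == 0).sum())
        rows.append({
            "group": key,
            "n": len(g),
            "false_negative_rate": ((g["y_pred"] == 0) & (g["y_true"] == 1)).sum() / max(positives, 1),
            "false_positive_rate": ((g["y_pred"] == 1) & (g["y_true"] == 0)).sum() / max(negatives, 1),
        })
    return pd.DataFrame(rows)

# Toy example with two sites and an existing outcome/decision table.
demo = pd.DataFrame({
    "site": ["A", "A", "B", "B"],
    "y_true": [1, 0, 1, 0],
    "y_pred": [1, 0, 0, 1],
})
print(subgroup_error_rates(demo, ["site"]))
```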
Implement gradually, fit the workflow
Small, careful steps beat grand promises.
- Start low-risk (admin support, triage aid); scale with evidence.
- Real-world validation with your population; measure outcomes you value.
- Human factors to prevent alert fatigue; co-design with clinicians and patients.
- Training that sticks: brief simulations, checklists, quick references.
Communicate with realism
Trust grows with clear, candid, two-way communication.
- Living informed consent in the EHR; opt-out where safe; patient-facing materials co-designed with patients.
- Show uncertainty: calibrated confidence, not just single scores.
- Share progress and setbacks; explain changes.
Hold geopolitical complexity and ethical tension
Balance individual rights with collective preparedness.
- Dual-use awareness: misuse, surveillance, supply-chain dependencies.
- Resilience: localization and redundancy for critical services.
- Balanced policy: protect individuals rigorously while maintaining readiness against malicious state actors.
- Recognize the stakes: whoever wins the AI arms race will be positioned to dominate.
Temper hype — and plan for discontinuities
Be measured about benefits; prepare for capability jumps.
- Set expectations: AI supports; clinicians decide; empathy isn’t automatable.
- Stress-test governance for rapid changes, including powerful but wisdom-lacking systems.
- Iterate responsibly: update safeguards as capabilities evolve.
Immediate steps checklist
- Educate broadly on AI/ML basics, bias, privacy, and uncertainty.
- Establish nuanced oversight: risk tiers, monitoring, audits, empowered governance.
- Implement acute safety: escalation SLAs, never-alone rules, rollback/fallback.
- Strengthen privacy: minimization, encryption, vendor vetting, retention/deletion clarity.
- Embed equity: diverse data plus fairness monitoring, intersectional analyses, community input.
- Pilot carefully: low-risk use cases, real-world validation, human factors testing, practical training.
- Communicate transparently: living consent, uncertainty displays, published updates; include patient voice.
- Plan for geopolitical and supply-chain risks; maintain resilience.
- Temper hype; prepare for discontinuities; keep people at the center.
Admittedly, it’s an impossible task to fully grasp what is happening with AI. In some ways we can see the shape of the landscape, and some of its vistas, valleys, and chasms. In other ways, we have to be especially humble about what we don’t know, and about the risks of human egocentrism. One is reminded of how humanity has responded to ego-shattering revelations over the centuries: the response to Copernicus, the rejection of truth when it imploded human narcissism.
We went from a geocentric to a heliocentric view; now we find ourselves on a planet at the edge of one among countless galaxies. Likewise, intelligence may be decentered and expanded, from an anthropocentric view to a broader understanding. Read my interview with Professor Michael Levin about new forms of intelligence here: Expanding Our Understanding of Life and Intelligence.
Daedalus warned Icarus not to fly too close to the sun, lest the heat melt the wax holding his wings together. What I didn’t know is that he also warned him not to fly too close to the sea, lest his wings become waterlogged and too heavy for flight.
* I am referencing psychoanalyst Stephen Mitchell’s book, Hope and Dread in Psychoanalysis. He drew attention to the full scope of Daedalus’ admonition, which is where I learned of it.
Thanks to Johns Hopkins AI in Healthcare certificate program for a transformative educational experience.
I’d love to hear what I missed, or got wrong. Please comment with civility, and share.
