In Brief
What if your medication wasn’t prescribed by a human? That’s the question at the core of a recently introduced bill in the United States House of Representatives.
The bill in question is H.R.238, the Healthy Technology Act of 2025, which represents a pivotal moment in the ongoing evolution of artificial intelligence (AI) in healthcare. If enacted, this legislation would establish AI and machine learning technologies as eligible prescribers of medication under certain conditions, fundamentally redefining the role of AI in clinical decision-making.
What You Should Know about the Healthy Technology Act
The Healthy Technology Act of 2025 (H.R.238) represents a significant shift in healthcare policy: it could allow AI to prescribe medication under certain circumstances, without direct human oversight. This controversial bill raises important questions about the future of AI in clinical decision-making.
From Clinician’s Assistant to Replacement?
While innovation in healthcare is both necessary and inevitable, the prospect of removing human clinicians from the loop in prescribing decisions raises significant clinical, ethical, and safety concerns.
Let’s start with the role of AI. For years, AI has demonstrated immense value as an augmentative tool in healthcare, helping clinicians:
- Reduce administrative workload
- Optimize workflows
- Provide real-time clinical insights that enhance, rather than replace, human expertise
This augmentation model ensures that clinicians remain central to patient care, using AI-generated data and recommendations to support - rather than supplant - their judgment. In short, AI serves as a source of help that enhances clinicians' decisions, rather than a hindrance to, or replacement for, the essential human connection at the center of care.
Clinical and Safety Concerns with AI Prescribers
H.R.238 proposes a fundamental move from augmentation toward automation, in which AI systems would make independent clinical decisions - including prescribing medications. This shift introduces serious risks. The effects of AI deployed in an automation capacity in clinical settings have been studied for years, and a growing body of research on automation bias highlights the dangers of over-reliance on AI - even when it is deployed in an augmentation capacity. Studies have shown that clinicians presented with AI-generated recommendations may inadvertently place undue trust in these systems, leading to errors of omission and commission. In the realm of prescribing, such errors could have dire consequences, including misdiagnosis, inappropriate medication selection, adverse drug interactions and diminished patient safety. Now imagine AI not only recommending a medication - but actually prescribing it.
Special Challenges in Behavioral Healthcare
The very nature of behavioral healthcare introduces additional complexities. Unlike other medical domains where objective biomarkers can guide AI-driven decisions, mental health assessments rely heavily on:
- Nuanced clinical judgment
- Patient narratives
- Contextual understanding
- Psychosocial history
- Cultural factors
- Social determinants of health
Can an AI truly grasp the lived experience of a patient struggling with depression, anxiety or trauma? Can it account for these intangible human factors the way a trained clinician can? These elements are difficult for current AI prescribing systems to fully comprehend and integrate into treatment decisions.
Legislative Impact of H.R.238
While proponents of H.R.238 may argue that AI-powered prescribing could increase access to care, particularly in underserved areas, this must not come at the cost of safety, efficacy and ethical responsibility. The push toward automation in clinical decision-making should not be driven solely by efficiency and cost reduction, but by a commitment to preserving the highest standards of patient care.
The Future of AI in Mental Health
As leaders in behavioral healthcare, we must advocate for responsible AI policies that prioritize clinician oversight and patient safety. AI should remain an indispensable assistant, not an autonomous prescriber. The future of AI in mental health must be one in which human expertise remains at the core, guiding technology to enhance care - never to replace the clinician's judgment.
As H.R.238 advances through legislative channels, it is imperative that clinicians, policymakers and AI developers engage in open dialogue to ensure that innovation in healthcare does not come at the expense of patient well-being. The Healthy Technology Act represents a pivotal moment in healthcare regulation that could fundamentally change how medications are prescribed. As this legislation develops, staying informed about its provisions and potential consequences for patient care becomes increasingly important for healthcare professionals and patients alike.