We live in interesting regulatory times. In January, a bill was introduced to the US Congress proposing that AI “can qualify as a practitioner eligible to prescribe drugs” if overseen by the states and the FDA. This is a bold and contentious move. Even proponents of AI’s swift integration into medicine must recognize the deep paradox: this proposal emerges even as the FDA’s world-leading infrastructure for AI oversight faces dismantling.
On January 7, 2025, Republican US Congress Representative David Schweikert introduced the Healthy Technology Act of 20251, a bill seeking to “amend the Federal Food, Drug, and Cosmetic Act [FFDCA] to clarify that artificial intelligence and machine learning technologies can qualify as a practitioner eligible to prescribe drugs”. The proposed legal mechanism is strikingly simple: a mere 330-character amendment to the FFDCA, stating, “In this subsection, the term ‘practitioner licensed by law to administer such drug’ includes artificial intelligence and machine learning technology”.

If enacted, this legislation would fundamentally redefine both the character of medical practice and the role of the physician. In the ensuing mainstream media and social media debate, critics argued that AI remains too error-prone and immature for such responsibilities2. Perhaps counter to prevailing opinion, we take the other side of this argument. At the same time, however, this legislative shift cannot be considered in isolation: to our knowledge, the US is the first country in the world to propose legislation that would allow AI to autonomously prescribe medications, a move that coincides with a period of regulatory upheaval. The simultaneous weakening of FDA oversight makes this concerning and places the US healthcare landscape, particularly in digital medicine, on precarious ground (Fig. 1).

Fig. 1: Autonomous AI with and without guardrails and effective regulatory oversight. Autonomously prescribing AI could bring advantages for the public and for health systems, if appropriately developed and implemented under the ‘green light’ of suitable regulatory guardrails and oversight.
However, it is critical that a ‘red stop light’ is shown to the introduction of speedily developed and inappropriately monitored autonomous prescribing systems, particularly if introduced alongside a disrupted and demotivated regulatory system.

Are autonomous medical AI systems a bad idea?

Some of the criticism of the bill may arise because neither the public nor much of the medical community is familiar with the scope of currently approved and adopted AI systems, particularly in diagnosis. Autonomous AI systems have been cleared for clinical use in both the US and the EU, beginning with the US Food and Drug Administration (FDA)’s 2018 authorization of IDx-DR (later rebranded as LumineticsCore), the first such system for diagnosing diabetic retinopathy3. More recently, near-autonomous AI-enabled skin cancer detection devices have been authorized4, and in the EU a fully autonomous AI-enabled skin cancer diagnosis device, DERM, has been CE-marked (i.e., effectively granted approval for clinical use)5,6. This system is designed to alleviate pressures on health systems by independently identifying clear-negative cases, helping ensure that patients who do not require further invasive diagnostic work-up are not subjected to unnecessary procedures. In essence, DERM makes independent life-or-death decisions—a role permitted under both EU and US law (although the device has not yet been approved for this purpose in the US).

It is important that AI systems entrusted with such high-stakes medical decisions are rigorously regulated and continuously monitored, just as human doctors are expected to undergo formal training, supervision, and ongoing professional development.
The DERM system underwent the EU’s highest-risk classification (Class III) under the EU Medical Device Regulation (MDR), requiring clinical studies7,8 and a rigorous benefit-risk assessment before approval.

This raises a critical question: are autonomous prescription decisions fundamentally different from autonomous diagnosis decisions? As of now, many autonomous diagnostic systems operate within a narrowly defined and highly constrained domain of medical activity (their ‘Intended Purpose’). However, when it comes to prescribing, there may be an assumption that AI would be granted immediate and broad authority. This need not be the case. It is highly likely that such authority would be restricted, at least in the near term, by both the US FDA and state legislators, limiting its application to specific circumstances, such as particular patient populations, disease types, urgent situations, or pre-defined medication categories. In many cases, prescribing a drug is less of a life-or-death decision than diagnosing cancer. Furthermore, prescriptions are generally reviewed by a pharmacist before being dispensed. That said, errors in prescribing can still have life-threatening consequences, much like a missed cancer diagnosis. Autonomous prescribing medical AI systems are not inherently problematic—provided they are effective, safe, and serve patient and societal interests.

The track record of human prescribing practices is far from flawless, with well-documented and systemic failures. The error rate of medical prescribing has been estimated to be between 5% and 9%, often with serious consequences9,10,11. Unnecessary prescribing of antibiotics—driven in part by patient pressure—remains a persistent and concerning issue12, alongside the broader problem of overprescribing11. Compounding these challenges, commercial influence on prescribing decisions13 and irresponsible or even criminal prescribing practices played a major role in fueling the opioid crisis14,15.
In response, laws have been enacted specifically to curb prescribing abuses14.

Against this backdrop, autonomous AI prescribing would not be categorically worse than human prescribing. If properly designed, implemented, and monitored, AI could help mitigate some of these longstanding issues. Just as prescription drug monitoring programs have been introduced to improve prescribing practice16, rigorous oversight of AI prescribing could help improve safety, accuracy, and accountability.

The new proposal is not deregulatory; it just proposes an update

The proposed Healthy Technology Act of 2025 would not necessarily bring about the fast introduction of autonomous prescribing AI if enacted, and it does not call for deregulation; rather, it seeks to clarify existing regulation while reinforcing the necessity of appropriate oversight. The bill explicitly states that autonomous AI prescription systems must be “authorized by the State involved and approved, cleared, or authorized by the Food and Drug Administration”. It is highly likely that states will want to exercise substantial control over autonomous prescribing AI.

The US FDA has established a world-leading regulatory framework for AI-enabled medical devices through a combination of existing laws, rules, and guidance, alongside evolving action plans and regulatory science initiatives that address emerging technologies, including large language models and generative AI17. While unresolved questions remain—such as medical liability for AI-related medical error18,19, and concerns over the 510(k) clearance system’s reliance on predicate devices and the issue of ‘predicate creep’20—the US remains better positioned than any other country to introduce legislation that responsibly expands the scope of medical AI.
The Healthy Technology Act of 2025 represents an effort to do just that.

What is the catch?

The increasing use of AI-enabled decision support systems in medicine, including autonomous AI systems, is, in our view, a positive development when it is accompanied by well-designed, balanced regulations suited to medical applications. This also requires appropriate digital monitoring technologies, ensuring at least system-level (or “helicopter level”) human oversight21,22,23,24,25,26,27,28,29. Achieving this goal demands strong, well-functioning regulatory agencies capable of developing and testing regulatory frameworks.

Weeks after the introduction of the Healthy Technology Act of 2025, the FDA was thrown into uncertainty, with staff encouraged to resign before February 6, followed by what has been described as “haphazard, poorly thought-out” job cuts across the agency30. These cuts disproportionately affected probationary employees31 and, critically, the regulation of AI-enabled devices. Given that AI regulation is still an emerging field—requiring new, highly skilled, and technologically adept experts—these layoffs fell heavily on staff working on AI-related regulatory initiatives at the FDA’s Center for Devices and Radiological Health (CDRH)32. Many were dismissed abruptly and reportedly received standard termination letters that disparaged the quality of their work32.

The true impact of these disruptions on the FDA’s ability to regulate existing and future AI-enabled medical technologies remains to be seen. However, in our view, the consequences are likely profound, potentially signaling a broader policy shift that undermines independent oversight of AI technologies.

The treatment of FDA staff has been deeply troubling, marked by dehumanizing and demoralizing actions. However, this article focuses on a far greater concern—the risk of a human-induced public health catastrophe—one that could surpass the scale of the opioid crisis in the US.
The risk becomes real if autonomous medical AI systems for diagnosis and treatment, including prescription, are deployed at scale without proper oversight, guardrails, or monitoring22,25 (Fig. 1).

The US could “move fast and break things”—but in this case, those “things” are human lives. Alternatively, it could move fast with intelligent safeguards, ensuring that rapid advancements in medical AI remain safe and effective. Our previous interactions with the FDA’s CDRH have shown that its AI-enabled medical device mission was committed to efficient, even fast, but fundamentally safe progress. The question now is: what will be the focus of a newly restructured FDA? The concern is not merely a change in the political landscape, but the nature of the changes, which has introduced a high degree of uncertainty. The manner in which they have been implemented has brought considerable disruption, and the large-scale staff cuts, even if many staff have since been reinstated33, appear to lack a clear overarching strategy or a concrete alternative framework for medical AI oversight. As such, uncertainty persists. Even if stability is restored in the future, the interim regulatory gap creates a period of heightened risk, particularly if groundbreaking legislation proceeds without the necessary oversight to ensure safety and accountability.

Congress should reject any legislation introducing AI-enabled autonomous prescribing unless it is accompanied by empowered, well-resourced regulatory oversight. The promise of AI in medicine is immense—but without strong safeguards, so are the risks.
Data availability
No datasets were generated or analysed during the current study.
References

1. Rep. Schweikert, D. [R-AZ-1]. Text—H.R.238—119th Congress (2025-2026): Healthy Technology Act of 2025. https://www.congress.gov/bill/119th-congress/house-bill/238/text (2025).
2. Proposed legislation paves the way for AI to prescribe drugs. MobiHealthNews. https://www.mobihealthnews.com/news/proposed-legislation-paves-way-ai-prescribe-drugs (2025).
3. Office of the Commissioner. FDA permits marketing of artificial intelligence-based device to detect certain diabetes-related eye problems. FDA. https://www.fda.gov/news-events/press-announcements/fda-permits-marketing-artificial-intelligence-based-device-detect-certain-diabetes-related-eye (2020).
4. Venkatesh, K. P., Kadakia, K. T. & Gilbert, S. Learnings from the first AI-enabled skin cancer device for primary care authorized by FDA. NPJ Digit. Med. 7, 1–4 (2024).
5. Salt, H. Europe Greenlights World’s First Autonomous AI for Skin Cancer Detection. FMAI Hub. https://www.fmai-hub.com/europe-greenlights-worlds-first-autonomous-ai-for-skin-cancer-detection/ (2025).
6. DERM makes medical history as world’s first autonomous skin cancer detection system is approved for clinical decisions in Europe. Skin Analytics. https://skin-analytics.com/news/regulatory-certification/derm-class-iii-ce-mark/ (2025).
7. Marsden, H. et al. Accuracy of an artificial intelligence as a medical device as part of a UK-based skin cancer teledermatology service. Front. Med. 11, 1302363 (2024).
8. Phillips, M. et al. Assessment of accuracy of an artificial intelligence algorithm to detect melanoma in images of skin lesions. JAMA Netw. Open 2, e1913436 (2019).
9. Pownall, M. Complex working environment, not poor training, blamed for drug errors. BMJ 339, b5328 (2009).
10. Torjesen, I. Poor monitoring and processes are responsible for errors in one in 20 GP prescriptions. BMJ 344, e3163 (2012).
11. Hoff, H. What about the sheer number of drugs prescribed? BMJ 344, e3561 (2012).
12. Limb, M. GP bashing not the answer to antibiotic overprescribing, professor tells summit. BMJ 349, g6718 (2014).
13. Van Zee, A. The promotion and marketing of OxyContin: commercial triumph, public health tragedy. Am. J. Public Health 99, 221–227 (2009).
14. Chang, H.-Y. et al. Impact of prescription drug monitoring programs and pill mill laws on high-risk opioid prescribers: a comparative interrupted time series analysis. Drug Alcohol Depend. 165, 1–8 (2016).
15. National Science and Technology Council. Preparing for the future of artificial intelligence. The White House. https://trumpwhitehouse.archives.gov/sites/whitehouse.gov/files/images/Final_Report_Draft_11-15-2017.pdf (2017).
16. Griggs, C. A., Weiner, S. G. & Feldman, J. A. Prescription drug monitoring programs: examining limitations and future approaches. West J. Emerg. Med. 16, 67–70 (2015).
17. Warraich, H. J., Tazbaz, T. & Califf, R. M. FDA perspective on the regulation of artificial intelligence in health care and biomedicine. JAMA 333, 241–247 (2025).
18. Price, W. N. II, Gerke, S. & Cohen, I. G. Potential liability for physicians using artificial intelligence. JAMA 322, 1765–1766 (2019).
19. Dai, T. & Singh, S. Artificial intelligence on call: the physician’s decision of whether to use AI in clinical practice. SSRN Scholarly Paper. https://doi.org/10.2139/ssrn.3987454 (2025).
20. Muehlematter, U. J., Bluethgen, C. & Vokinger, K. N. FDA-cleared artificial intelligence and machine learning-based medical devices and their 510(k) predicate networks. Lancet Digit. Health 5, e618–e626 (2023).
21. Mathias, R., McCulloch, P., Chalkidou, A. & Gilbert, S. Digital health technologies need regulation and reimbursement that enable flexible interactions and groupings. NPJ Digit. Med. 7, 1–4 (2024).
22. Gilbert, S. & Kather, J. N. Guardrails for the use of generalist AI in cancer care. Nat. Rev. Cancer 24, 357–358 (2024).
23. Mathias, R., McCulloch, P., Chalkidou, A. & Gilbert, S. How can regulation and reimbursement better accommodate flexible suites of digital health technologies? NPJ Digit. Med. 7, 1–3 (2024).
24. Gilbert, S., Harvey, H., Melvin, T., Vollebregt, E. & Wicks, P. Large language model AI chatbots require approval as medical devices. Nat. Med. 29, 2396–2398 (2023).
25. Mathias, R. et al. Safe AI-enabled digital health technologies need built-in open feedback. Nat. Med. 31, 370–375 (2025).
26. Riedemann, L., Labonne, M. & Gilbert, S. The path forward for large language models in medicine is open. NPJ Digit. Med. 7, 1–5 (2024).
27. Freyer, O., Wiest, I. C., Kather, J. N. & Gilbert, S. A future role for health applications of large language models depends on regulators enforcing safety standards. Lancet Digit. Health 6, e662–e672 (2024).
28. Freyer, O., Wiest, I. C. & Gilbert, S. Policing the boundary between responsible and irresponsible placing on the market of large language model health applications. Mayo Clin. Proc. Digit. Health 3, 100196 (2025).
29. Gilbert, S., Pimenta, A., Stratton-Powell, A., Welzel, C. & Melvin, T. Continuous improvement of digital health applications linked to real-world performance monitoring: safe moving targets? Mayo Clin. Proc. Digit. Health 1, 276–287 (2023).
30. Former FDA chief calls job cuts ‘haphazard, poorly thought-out’. Endpoints News. https://endpts.com/former-fda-chief-calls-job-cuts-haphazard-poorly-thought-out/
31. Trump administration cuts reach FDA employees in food safety, medical devices and tobacco products. AP News. https://apnews.com/article/fda-job-cuts-trump-hhs-kennedy-cdc-nih-76dee97eee8209b2605fadac34427aab (2025).
32. Whooley, S. FDA MedTech regulators are among latest Trump layoffs. MassDevice. https://www.massdevice.com/fda-medtech-regulators-latest-trump-layoffs/ (2025).
33. Jewett, C. FDA Reinstates Fired Medical Device, Food and Legal Staffers. New York Times. https://www.nytimes.com/2025/02/24/science/fda-safety-workers-reinstated.html

Acknowledgements
This work was supported by the European Commission under the Horizon Europe Program, as part of project ASSESS-DHT (101137347) via funding to S.G. and R.M.

Author information
Authors and Affiliations
Else Kröner Fresenius Center for Digital Health, TUD Dresden University of Technology, Dresden, Germany: Stephen Gilbert & Rebecca Mathias
Carey Business School, Johns Hopkins University, Baltimore, MD, USA: Tinglong Dai

Authors
Stephen Gilbert
Tinglong Dai
Rebecca Mathias

Contributions
S.G., T.D., and R.M. developed the concept of the manuscript. S.G. wrote the first draft of the manuscript. T.D. and R.M. contributed to the writing, interpretation of the content, and editing of the manuscript, revising it critically for important intellectual content. S.G., T.D., and R.M. have read and approved the completed version. S.G., T.D., and R.M. take accountability for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding author
Correspondence to
Stephen Gilbert.

Ethics declarations
Competing interests
T.D. declares no competing financial interests and a nonfinancial interest as a member of multiple study teams using LumineticsCore from Digital Diagnostics and as co-lead of Johns Hopkins University’s Bloomberg Distinguished Professorship Cluster on Global Advances in Medical Artificial Intelligence. T.D. is an Editor for npj Digital Medicine. T.D. played no role in the internal review or decision to publish this News and Views article. R.M. declares no nonfinancial interests and no competing financial interests. S.G. declares a nonfinancial interest as an Advisory Group member of the EY-coordinated "Study on Regulatory Governance and Innovation in the field of Medical Devices" conducted on behalf of the DG SANTE of the European Commission. S.G. declares the following competing financial interests: he has or has had consulting relationships with Una Health GmbH, Lindus Health Ltd., Flo Ltd, ICURA ApS, Rock Health Inc., Thymia Ltd., FORUM Institut für Management GmbH, High-Tech Gründerfonds Management GmbH, DG SANTE, Prova Health Ltd, Haleon plc and Ada Health GmbH and holds share options in Ada Health GmbH. S.G. is a News and Views Editor for npj Digital Medicine. S.G. played no role in the internal review or decision to publish this News and Views article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by-nc-nd/4.0/.
About this article

Cite this article
Gilbert, S., Dai, T. & Mathias, R. Consternation as Congress proposal for autonomous prescribing AI coincides with the haphazard cuts at the FDA.
npj Digit. Med. 8, 165 (2025). https://doi.org/10.1038/s41746-025-01540-2

Received: 21 February 2025
Accepted: 25 February 2025
Published: 18 March 2025
DOI: https://doi.org/10.1038/s41746-025-01540-2