Raising the Bar for Medical AI

Thought leaders, patient advocates develop guidelines for ethical use of AI in medicine

Key recommendations: Adopting AI in health care

  • Health systems, health plans, and physician groups should consider adopting AI. Done right, early benefits of adoption include enhanced doctor-patient interactions, optimized analysis of tests and imaging results, improved differential diagnosis, and more focused discussions of treatment options and treatment plans.
  • Financial models of reimbursement should be transparent. Regulators should identify and evaluate these models annually to ensure they pay for quality of care and better patient outcomes rather than incentivizing overuse.
  • Regulators and medical system leaders should establish guidance for clinicians, trainees, and patients on the opportunities and optimal use of AI, including widespread education for patients and staff on how to use AI in health care.
  • Regulators and medical system leaders should create clear outcome expectations to verify that the use of AI serves patient and provider interests rather than just the financial gain of private health systems and the budgetary constraints of government-funded health care systems.

Key recommendations: Liability and accountability

  • Clinicians should remain legally responsible for patient care and clinical decisions.
  • If health systems adopt AI widely, AI technology companies should accept a portion of the legal liability when the use of their tools leads to harm.
  • Tech companies should accept some responsibility for outcomes when patients use their AI products, as is the case with any other direct-to-consumer health tool or product. 

Key recommendations: Patient consent and data privacy

  • Prefer opt-out over opt-in models for patient consent.
  • Ensure AI models use plain, accessible language that is specific and tailored to each use case.
  • Develop ways to measure and prevent the privacy risks inherent when patient data are used to train AI models.
  • AI developers and vendors should provide guarantees that patient data are protected and that patients are not identified.
  • Consider incorporating existing international guidelines for the ethical and safe use of patient data, such as the U.K.’s STANDING Together program and the Five Safes framework.

Key recommendations: Transparency and patient involvement

  • Require AI developers to disclose the data sources their models were trained on in ways that are accessible to patients and regulators.
  • Clinicians should anticipate that patients will arrive in the clinic with information from AI and encourage them to share what they’re finding.
  • AI developers should bring patients, patient advocates, and clinicians into the process of designing the technology at every stage. 

Key recommendations: Payment and funding models

  • Favor subscription or up-front payment models over pay-per-use models.
  • Tie funding to outcomes and improvements in care.
  • Build an infrastructure to track over time whether AI delivers on the measures it was designed to improve, such as better patient outcomes and reduced administrative costs.