Report: Doctors increasingly use AI tools despite accuracy concerns

Healthcare providers across the United States are rapidly adopting AI tools for taking notes and drafting patient communications, but questions about accuracy and effectiveness remain. According to an investigation by Geoffrey A. Fowler at The Washington Post, millions of patients are now being treated by doctors using AI assistance. Epic Systems reports that its AI tools currently transcribe about 2.35 million patient visits and draft 175,000 messages monthly.

Studies have revealed concerning error rates in medical AI applications. One study found that ChatGPT gave inappropriate medical advice in 20% of test cases, while another found that AI responses posed a risk of “severe harm” for 7% of cancer-related questions. Research by Stanford professor Roxana Daneshjou has demonstrated how AI can include fabricated details in patient summaries and perpetuate biases.

AI scribes promise to reduce administrative burden and improve doctor-patient interactions by eliminating manual note-taking, but evidence of time savings remains inconclusive. A recent study found that AI scribes “did not make clinicians as a group more efficient,” though some reports suggest savings of 10 to 20 minutes per visit.

Medical institutions are implementing various safeguards. Dr. Christopher Sharp at Stanford Health Care emphasizes that doctors must carefully verify AI-generated content. The University of California, San Francisco monitors how much doctors edit AI-generated documents to assess potential over-reliance on the technology.

The FDA currently does not regulate most medical AI software because, technically, the software does not make independent medical decisions. This lack of oversight, combined with rapid adoption, has raised concerns among medical professionals about potential risks to patient care.
