Today's Clinical Lab - News, Editorial and Products for the Clinical Laboratory

Misleading Claims That AI Outperforms Clinicians Could Pose Patient Safety Risk

Researchers warn that many studies make exaggerated claims about the performance of AI compared to doctors based on questionable evidence

Today's Clinical Lab

Today’s Clinical Lab is a reader-centric publication that keeps clinical professionals up to date with today’s rapidly changing lab industry with in-depth and timely editorial content and resources, including clinical industry news and insights into the latest trends, technologies, and techniques in the clinical lab.

Published: Mar 26, 2020 | 1 min read

Many studies claiming that artificial intelligence is as good as (or better than) human experts at interpreting medical images are of poor quality and arguably exaggerated, posing a safety risk to patients, researchers warn in a paper published March 25, 2020, in the BMJ.

Researchers reviewed studies published over the past 10 years—two eligible randomised clinical trials and 81 non-randomised studies—that compared the performance of deep learning algorithms in medical imaging with that of expert clinicians. Of the non-randomised studies, only nine were prospective and just six were tested in a "real world" clinical setting.

The average number of human experts in the comparator group was just four, while access to raw data and code (to allow independent scrutiny of results) was severely limited. More than two thirds of the studies (58 of 81) were judged to be at high risk of bias, and adherence to recognised reporting standards was often poor. Three quarters (61 studies) stated that the performance of AI was at least comparable to (or better than) that of clinicians, and only 31 (38 percent) stated that further prospective studies or trials were needed.

The findings raise concerns about the quality of evidence underpinning many of these studies and highlight the need to improve their design and reporting standards. The researchers say that many of the studies made arguably exaggerated claims that AI performs better than clinicians, which could pose a risk to patient safety.