Today's Clinical Lab - News, Editorial and Products for the Clinical Laboratory

ADLM Advocates for the Federal Government to Support Equitable Healthcare AI

The statement calls on Congress and federal agencies to modernize existing laboratory regulations and to implement policies to ensure that AI clinical systems are safe and effective

Association for Diagnostics and Laboratory Medicine

The Association for Diagnostics and Laboratory Medicine (ADLM), formerly AACC, is a global scientific and medical professional organization dedicated to clinical laboratory science and its application to health care.

Published: Feb 10, 2026 | 2 min read

WASHINGTON — The Association for Diagnostics & Laboratory Medicine (ADLM) released a position statement today underscoring the risks that artificial intelligence (AI) in laboratory medicine could pose for patients, particularly those from historically marginalized demographic groups.

To mitigate these risks and to realize the full promise of AI in healthcare, the statement calls on Congress and federal agencies to modernize existing laboratory regulations and to implement policies to ensure that AI clinical systems are safe and effective.

Laboratory medicine plays a vital role in ensuring that patients get the right diagnoses and care—and AI has the potential to transform laboratory medicine by enhancing diagnostic accuracy, improving efficiency of laboratory workflows, and enabling more precise, data-driven clinical decision-making. 

However, AI models are only as accurate as the data they “learn” from, and several issues can arise if AI models are trained on limited, low-quality, or inconsistent data. 

One of the most widely recognized issues is that AI models can replicate societal biases and systematically underestimate risk or misclassify disease in historically marginalized populations. 

This is because AI health tools are often trained on historical datasets that underrepresent certain racial and ethnic groups, age ranges, and socioeconomic groups.

To address bias in laboratory AI and ensure the appropriate monitoring of tools that impact test interpretation, diagnosis, and treatment decisions, ADLM strongly recommends that the federal government take the following steps:

  • In collaboration with federal agencies, Congress should update existing laboratory laws and regulations, such as the Clinical Laboratory Improvement Amendments (CLIA), to explicitly encompass AI systems.
  • Federal health agencies, in partnership with professional societies, should convene laboratory medicine experts and informatics professionals to develop consensus guidelines for validating and verifying AI tools in laboratory medicine.
  • Federal agencies should expand and support initiatives to harmonize laboratory test results and standardize data reporting.

ADLM also urges AI developers, in coordination with regulators and healthcare organizations, to implement measures to promote data diversity and reduce bias in laboratory AI applications. Additionally, developers and vendors of AI tools should ensure that clinical laboratories have access to the data and technical resources necessary to independently verify and validate an algorithm’s performance.
 
 “Clinical laboratories are uniquely positioned to help develop and assess the integration of AI health tools into testing workflows and, most importantly, how they influence patient test results and health outcomes,” said ADLM President Dr. Paul J. Jannetto. “We therefore urge the federal government to draw on the expertise of laboratory medicine professionals in order to develop AI regulations that support innovation, as well as transparent, consistent performance monitoring of this potentially revolutionary technology.”