In research presented in Toronto, investigators report that although many artificial intelligence (AI) systems designed to diagnose and track Alzheimer's disease and other dementias have received FDA clearance, their development processes and effectiveness across patient demographics remain unclear. Of the 24 systems authorized since 2015, 14 provided no information about the participants in their training data, and 22 disclosed nothing about their validation sets, according to Krista Y. Chen, MPH, and colleagues at the Johns Hopkins University School of Medicine.
Supplementary information drawn from peer-reviewed studies, with the findings presented at the Alzheimer's Association International Conference and published in a JAMA research letter, yielded training data for only five systems and validation data for ten. Fewer than half of these clinical tools had basic participant characteristics analyzed in either their training or validation phases; race and ethnicity data were absent for 23 systems, and many also lacked age, sex, and disease-status information. The single system that reported race had a training set that was 90% white and provided no validation details.
This lack of transparency raises concerns about algorithmic bias and the risk of unequal care for underrepresented groups, and it complicates both care planning and compliance with earlier FDA guidance promoting demographic diversity in AI and machine learning medical devices. Because such disclosure remains largely voluntary, manufacturers' failure to report this information suggests the FDA's recommendations are going unheeded. Chen noted that women and Black and Hispanic populations face distinct hurdles in dementia care, including delayed diagnoses and limited access to treatment, and stressed that dataset transparency is essential for understanding performance differences and ensuring appropriate use.
The ainewsarticles.com article above is a brief synopsis; see the original article for full details.