STAT+: As problems emerge in medical AI models, research points to a new way to strip them of bias

After an explosion of excitement in the potential for machine learning in medicine, cracks in the foundation are emerging. Now some researchers advocate a framework for assessing bias.

A growing body of research focuses on the ways that medical models can introduce algorithmic bias into health care. But in a new paper, machine learning researchers caution that such self-reflection is often ad hoc and incomplete. They argue that to get “an unbiased judgment of AI bias,” there needs to be a more routine and robust way of analyzing how well algorithms perform. Without a standardized process, researchers will only find the bias they think to look for.
