AI obviously has huge potential to improve healthcare, but I know (most) radiologists have a real problem trusting a black box.
Does anyone have any insight into cases where an interpretable ML approach has been adopted to analyse images for disease diagnosis/prognosis? By interpretable, I guess I mean a non-deep-learning approach, where the output can be linearly traced back to the image features so the clinician can see why the prediction was made (see the sketch below for the kind of thing I have in mind).
Or if they don't, why don't they?
Just as some background: I'm a PhD student working on machine learning for radiology. In particular, we work on cardiac MRIs.
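To make "linearly traceable" concrete, here's a minimal sketch of the kind of model I mean: a logistic regression over hand-crafted imaging features, where every prediction decomposes into per-feature contributions a clinician can read off. The feature names and data below are made-up placeholders for illustration, not a real cardiac MRI dataset or pipeline.

```python
# Minimal sketch: an interpretable (linear) classifier over hand-crafted
# cardiac MRI features. Feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

feature_names = [
    "lv_ejection_fraction",    # left-ventricular EF from segmentation
    "lv_wall_thickness_mm",    # mean myocardial wall thickness
    "lv_end_diastolic_vol",    # end-diastolic volume
    "myocardial_mass_g",       # estimated myocardial mass
]

# Synthetic stand-in data (200 patients x 4 features) with toy labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(feature_names)))
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)) > 0

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# The model is fully transparent: each prediction is a weighted sum of
# standardized features, so the weights show exactly which measurements
# drove the score and in which direction.
coefs = model.named_steps["logisticregression"].coef_[0]
for name, w in sorted(zip(feature_names, coefs), key=lambda t: -abs(t[1])):
    print(f"{name:24s} weight = {w:+.3f}")
```

The point is that the "explanation" is the model itself, rather than a post-hoc saliency map bolted onto a deep network.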
source https://www.reddit.com/r/Radiology/comments/iv8rib/anyone_know_if_the_nhseu_use_ai_for_imagingbased/