INTERPRETABILITY FOR THE AUTOMATED DIAGNOSIS OF MUSCULOSKELETAL RADIOGRAPHS
by Nisha Balaji
Category: STEM
Abstract – Despite tremendous gains in performance, diagnostic machine learning algorithms are widely regarded as “black boxes.” To break down this perception, we propose an investigation of interpretable models: systems that substantiate their conclusions with “justifications.” We explore explainable models through the lens of diagnosing musculoskeletal disorders, which are among the most prevalent disorders worldwide. Two avenues are pursued for interpretability: saliency imaging techniques that visually localize the region of irregularity within a radiograph, and a clustering of radiographs classified as abnormal that isolates different causes of abnormality into separate clusters. As for the model itself, three DenseNet-169 models are fine-tuned to varying extents and evaluated on classification performance. The top-to-bottom (all-parameter) fine-tuned model achieves the best AUROC of 0.865, and its saliency maps effectively localize the cause of irregularity within bone X-rays. The clustering algorithm forms meaningful associations based on properties such as anatomical region, image orientation, and hardware type. The implications of such developments in interpretability for healthcare range from auditing models to garnering patient and physician trust.
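To make the pipeline described above concrete, the following is a minimal sketch of how the two core components might be set up in PyTorch: an ImageNet-pretrained DenseNet-169 with all parameters left trainable (the top-to-bottom fine-tuning configuration), and a saliency map for a given radiograph. The abstract does not specify which saliency technique is used, so a vanilla-gradient map is assumed here for illustration; the single-logit head, learning rate, and `saliency_map` helper are likewise illustrative choices, not the authors' exact implementation.

```python
# Illustrative sketch, not the paper's actual code: top-to-bottom
# fine-tuning of DenseNet-169 plus a vanilla-gradient saliency map.
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained DenseNet-169 and replace its classifier
# head with a single logit for normal-vs-abnormal classification.
model = models.densenet169(weights=models.DenseNet169_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 1)

# Top-to-bottom fine-tuning: every parameter stays trainable.
# (A partial variant would freeze model.features and train only the head.)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed lr
criterion = nn.BCEWithLogitsLoss()

def saliency_map(model: nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Gradient of the abnormality logit w.r.t. input pixels; large
    values mark regions that drive the model's prediction."""
    model.eval()
    image = image.clone().requires_grad_(True)   # shape (3, H, W)
    logit = model(image.unsqueeze(0)).squeeze()  # scalar abnormality score
    logit.backward()
    # Collapse channels to a single-channel heat map.
    return image.grad.abs().max(dim=0).values
```

In this setup, overlaying the returned heat map on the original radiograph is what yields the localization of the irregular region; the clustering of abnormally classified images would operate separately, on feature embeddings rather than on gradients.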