Type:
Educational Exhibit
Keywords:
Artificial Intelligence, Neural networks, Computer Applications-Detection, diagnosis, Education, Education and training
Authors:
C. Tang, J. C. Y. Seah, Q. Buchlak, C. Jones; Sydney/AU
DOI:
10.26044/ecr2021/C-13640
Background
In the past decade, AI-enabled tools, especially deep learning solutions, have exploded onto the radiological scene with the promise of revolutionising healthcare[1]. However, these data-driven models are often treated as numerical exercises and black boxes, offering little insight into the reasons for their behaviour.
Trust in novel technologies is often limited by a lack of understanding of the decision-making processes behind them. In medical AI, this problem is twofold: firstly, AI technologies are not widely taught in any medical curriculum, so understanding in practice is limited; secondly, AI technologies have previously been shown to produce incorrect predictions due to hidden biases in the training data[2][3]. In response to this “black-box” problem in medical AI, there have been growing calls for “explainable” or “interpretable” AI tools that offer more transparency into their decision-making[4][5][6][7][8][9].
Here we present our experiences during the development of the Annalise.ai CXR tool as a commercial product case study, exploring the key steps in creating an accurate, user-friendly, and interpretable AI diagnostic tool guided by these principles, with the added benefit of seamless workflow integration. This process requires an understanding of the end user's practical requirements, as well as of the software engineering challenges in model development. The onus is on any developer of such tools to organise the AI output in ways that radiologists and other medical practitioners can understand intuitively[10].