Purpose
Novel applications of artificial intelligence (AI) are increasingly entering the work of medical diagnosis. These AI algorithms are opaque, offering limited insight into how their outcomes are produced [1]. As a result, the professionals who work with them face unexpected, ambiguous situations in which they have limited capacity to grapple with the AI applications and act mindfully on their outputs [2]. Without deep, critical knowledge, professionals are therefore prone to errors: their attention becomes biased, their cognitive processing remains at the surface, and their actions tend...
Methods and materials
This exhibit presents an environment that enables a practice-based learning process, which we call the “AI Learning Lab”.
Our research methodology follows an iterative process through which the learning lab is designed and validated over multiple iterations, each round offering new insights into how to further develop and enrich it. This process consists of two main cycles, a design cycle and a validation cycle, which not only yielded the preliminary results presented in this exhibit but also enable the continuous improvement of the...
Results
[Fig 1]
The design cycle presented in the previous section yielded a design for the learning lab as depicted in Figure 1.
The learning lab consists of three inputs:
Working scenarios, representing various conditions under which medical professionals work with the AI tools
AI technologies, consisting of a wide range of AI applications with different operation modes
Medical use-cases
These inputs are integrated into the “learning environment”, where the participants of the learning lab experience working with various AI technologies under different working scenarios on...
Conclusion
The AI Learning Lab proposed in this exhibit creates a novel opportunity for integrating knowledge and research across:
medical practice and education
the development of AI technologies
work and organizational learning
The results from our pilot implementation of the lab yielded surprising findings that deliver value in three ways, providing:
medical professionals with a practice-based experience of the pitfalls of automation bias when working with medical AI
AI developers with insights into the efficacy of “AI explainability” in preventing over-reliance on their AI...
Personal information and conflict of interest
F. Mol:
Nothing to disclose
M. Rezazade Mehrizi:
Grant Recipient: Comenius Grant 2022
W. Grootjans:
Nothing to disclose
References
1. Burrell, Jenna. 2016. “How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms.” Big Data & Society 3 (1): 1-12.
2. Zhang, Zhewei, Youngjin Yoo, Kalle Lyytinen, and Aron Lindberg. 2021. “The Unknowability of Autonomous Tools and the Liminal Experience of Their Use.” Information Systems Research, August. https://doi.org/10.1287/isre.2021.1022.
3. Jarrahi, Mohammad Hossein, Gemma Newlands, Min Kyung Lee, Christine T. Wolf, Eliscia Kinder, and Will Sutherland. 2021. “Algorithmic Management in a Work Context.” Big Data & Society 8 (2): 1-18.
4. Newell, Sue, and Marco Marabelli. 2015. “Strategic Opportunities...