Type:
Educational Exhibit
Keywords:
Artificial Intelligence, CT, MR, Computer Applications-3D, Computer Applications-Detection, diagnosis, Segmentation, Education and training
Authors:
F. Buemi, C. Giardina, A. Perri, S. Caloggero, A. Celona, N. Sicilia, O. Ventura Spagnolo, F. Galletta, G. Mastroeni
DOI:
10.26044/ecr2024/C-21327
Background
One of the most significant challenges practicing radiologists face when adopting AI is how to integrate it into their daily practice and contribute to its development. Although numerous software packages, web-based platforms, and services are available, they are often costly and out of reach for financially constrained healthcare institutions. Conversely, open-source tools are frequently challenging to use because they require knowledge that extends beyond the typical expertise of radiologists. Furthermore, segmentation tasks frequently entail laborious and time-consuming processes. MONAI Label is an open-source, freely available tool for annotating radiology datasets. It offers the distinct advantage of integrating with graphical user interface (GUI) software such as 3D-Slicer [1] or the web-based OHIF (Open Health Imaging Foundation) viewer [2]. In this educational poster, we will show how to start using MONAI Label with 3D-Slicer.
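As a practical starting point, the minimal sketch below (written in Python, wrapping the documented monailabel command-line interface) downloads the radiology sample app and a public demo dataset, then starts the server that 3D-Slicer's MONAI Label extension connects to. The app name, the Task09_Spleen dataset, and the output paths are illustrative choices, and MONAI Label itself is assumed to be installed already (e.g. via pip install monailabel).

```python
import subprocess

# Download the "radiology" sample app shipped with MONAI Label.
subprocess.run(
    ["monailabel", "apps", "--download", "--name", "radiology", "--output", "apps"],
    check=True,
)

# Download a small public dataset to experiment with (illustrative choice).
subprocess.run(
    ["monailabel", "datasets", "--download", "--name", "Task09_Spleen", "--output", "datasets"],
    check=True,
)

# Start the MONAI Label server with the DeepEdit model enabled; the
# 3D-Slicer extension then connects to the server (default port 8000).
subprocess.run(
    [
        "monailabel", "start_server",
        "--app", "apps/radiology",
        "--studies", "datasets/Task09_Spleen/imagesTr",
        "--conf", "models", "deepedit",
    ],
    check=True,
)
```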
MONAI label applications
MONAI Label can be used with different applications, which are not restricted to radiology but extend to other fields. Currently, four distinct applications are available:
- Radiology
- Bundle
- Pathology
- Video
Only the radiology and bundle applications will be discussed here, as the pathology and video applications are useful to pathologists and endoscopists, respectively.
Radiology app
The radiology app offers two types of segmentation:
- Interactive
- Autosegmentation
Interactive segmentation
Interactive segmentation includes three annotation approaches:
- Deepgrow: This approach relies on positive and negative interactions. Positive clicks from the user expand the segmentation, incorporating the selected location into the segmentation label, whereas negative clicks exclude a particular region from the area of interest [2]. With Deepgrow 2D, users annotate images slice by slice, whereas Deepgrow 3D can annotate entire volumes (see the sketch after this list).
- Deepedit: Deepedit enhances Deepgrow's segmentation by adopting a two-stage process. In the initial non-interactive stage, the segmentation is generated automatically (i.e. by inference with a network such as U-Net) without any user clicks. Subsequently, in the interactive stage, the user provides clicks, as in Deepgrow [2, 3].
- Scribbles: The scribbles-based segmentation model enables interactive segmentation through free-hand drawings, specifically foreground (FG) or background (BG) scribbles [2].
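To illustrate the general principle behind click-based interaction, the minimal sketch below encodes positive and negative clicks as extra input channels concatenated with the image before inference, which is the idea underlying Deepgrow-style models. The tensor shapes and click coordinates are illustrative only, not MONAI Label's internal implementation.

```python
import torch

# Illustrative volume: (batch, channel, depth, height, width).
image = torch.rand(1, 1, 128, 128, 128)

# Encode user clicks as guidance channels (here simple binary maps;
# real implementations typically smooth them, e.g. with Gaussians).
pos_clicks = torch.zeros_like(image)
neg_clicks = torch.zeros_like(image)
pos_clicks[0, 0, 64, 64, 64] = 1.0   # a click inside the target structure
neg_clicks[0, 0, 30, 30, 30] = 1.0   # a click outside it

# The network receives image + guidance as a 3-channel input, so each
# new click simply changes the input and triggers a fresh forward pass.
net_input = torch.cat([image, pos_clicks, neg_clicks], dim=1)
print(net_input.shape)  # torch.Size([1, 3, 128, 128, 128])
```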
Autosegmentation
Autosegmentation is based on a standard convolutional neural network (CNN), such as U-Net, without any user interaction [2].
U-Net is a CNN architecture designed for image segmentation tasks [4].
MONAI Label also allows the user to test other network architectures, such as UNesT or DynUNet.
The autosegmentation module can be viewed as an easy way to run inference, allowing the accuracy of a model to be assessed either during or after training (a minimal example of building such a network follows).
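As an illustration of the kind of network involved, the sketch below builds a 3D U-Net with MONAI's monai.networks.nets.UNet class and runs a forward pass on a random volume. The channel, stride, and input-size settings are illustrative choices, not the exact configuration used by MONAI Label.

```python
import torch
from monai.networks.nets import UNet

# A 3D U-Net for single-channel volumes (e.g. CT) with two output
# classes (background + one organ); settings here are illustrative.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,
    channels=(16, 32, 64, 128, 256),  # feature maps per encoder level
    strides=(2, 2, 2, 2),             # downsampling between levels
    num_res_units=2,
)

# Inference on a random 96^3 patch: output shape (batch, classes, D, H, W).
with torch.no_grad():
    logits = model(torch.rand(1, 1, 96, 96, 96))
print(logits.shape)  # torch.Size([1, 2, 96, 96, 96])
```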
Bundle app
The Bundle app offers pre-trained models for tasks such as inference, training, and pre-/post-processing of diverse anatomical targets, through integration with the MONAI Model Zoo (https://monai.io/model-zoo.html). A comprehensive, updated list of the radiology models is available on the Model Zoo website.
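To give a sense of how these pre-trained bundles are consumed, the sketch below uses MONAI's monai.bundle API to download a Model Zoo bundle and load its network with pre-trained weights. The bundle name spleen_ct_segmentation is one published Model Zoo example, chosen here purely for illustration.

```python
from monai.bundle import download, load

# Fetch the bundle (configs + pre-trained weights) from the Model Zoo.
download(name="spleen_ct_segmentation", bundle_dir="./bundles")

# Instantiate the bundle's network and load its pre-trained weights.
model = load(name="spleen_ct_segmentation", bundle_dir="./bundles")
model.eval()  # ready for inference or further fine-tuning
```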