NA-MIC Project Weeks

New Radiology and Pathology Deep Learning Models into MHub.ai

Key Investigators

Presenter location: In-person

Project Description

The MHub.ai project at Harvard has developed methods to execute machine learning models on medical images in an easy-to-use, standardized way. A Slicer plugin for running MHub.ai-format models already exists. For this project, we propose to add two models of different types to the MHub library.

Objective

  1. Objective A: Test a MONAI-based deep learning model in MHub and validate the instructions that new developers will follow.

  2. Objective B: Evaluate how well the MHub approach supports pathology models in addition to radiology models.

Approach and Plan

Step 1. Port one of the pre-trained MONAIAuto3DSeg radiology models developed at Queen's (by Andras Lasso et al.) for execution as a Docker container under the MHub framework. Test the MHub I/O converters: read a DICOM image, reformat it as needed for the model input, and write out a DICOM Segmentation object as the result.
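The "reformat as needed" step in Step 1 typically means resampling the input volume to the grid the model expects. A minimal one-axis nearest-neighbor sketch (illustrative only; MHub's converters handle full 3-D geometry, spacing, and orientation):

```python
def resample_nearest(values, new_len):
    """Resample a 1-D sequence to new_len samples with nearest-neighbor lookup.

    Each output sample i maps to the input position at the center of its
    output cell, (i + 0.5) * scale, clamped to the valid index range.
    """
    scale = len(values) / new_len
    return [values[min(int((i + 0.5) * scale), len(values) - 1)]
            for i in range(new_len)]

# Downsampling picks every other sample; upsampling repeats neighbors.
print(resample_nearest([10, 20, 30, 40], 2))  # → [20, 40]
print(resample_nearest([1, 2], 4))            # → [1, 1, 2, 2]
```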

Step 2. Begin converting a published pathology DNN model (rhabdomyosarcoma segmentation) to the MHub framework. This will evaluate how well the MHub approach supports pathology models in addition to radiology models. For example, can the same base Docker image work for pathology?
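One way pathology differs from radiology here is input size: whole-slide images are far too large to feed a model at once, so inference runs over overlapping tiles. A stdlib sketch of the tile-coordinate generation (function name and parameters are ours, not the RMS model's actual code):

```python
def tile_grid(width, height, tile=512, overlap=64):
    """Return (x, y) top-left corners of overlapping tiles covering an image.

    Tiles advance by (tile - overlap); an extra edge tile is appended when
    the regular stride would leave the right or bottom border uncovered.
    """
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    # Ensure the right/bottom edges are covered even when the stride
    # does not land exactly on them.
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]
```

Predictions from overlapping tiles are then blended (for example, averaged) back into a slide-sized output before any downstream conversion.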

Progress and Next Steps

  1. We selected two of the MONAIAuto3DSeg models from the Slicer extension and wrapped them with the MHub.ai framework as an exercise in learning the MHub approach. As part of this process, we wrote a converter that takes the original model descriptions and produces the class descriptions MHub uses to describe model outputs. The same approach could be used to convert other models later.
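The converter in item 1 can be thought of as mapping the model's label table into per-segment descriptors. A minimal sketch, where both the input mapping and the output fields are illustrative stand-ins, not MHub's actual schema:

```python
import json

def labels_to_segments(label_map):
    """Convert a {label_value: name} mapping (the form many MONAI models ship)
    into a list of per-segment descriptor dicts.

    The output fields here are illustrative placeholders for the kind of
    class-description metadata MHub expects, not its real schema.
    """
    segments = []
    for value, name in sorted(label_map.items()):
        segments.append({
            "label": value,                        # integer voxel value in the model output
            "name": name,                          # human-readable structure name
            "id": name.upper().replace(" ", "_"),  # stable identifier derived from the name
        })
    return segments

# Example: a two-class thoracic label table.
print(json.dumps(labels_to_segments({1: "left lung", 2: "right lung"}), indent=2))
```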

  2. We started adapting a trained rhabdomyosarcoma pathology model for MHub. The first part of the MHub pipeline works in our prototype, but we aren't processing the model outputs correctly yet.

  3. We completed a prototype implementation of the RMS model inside MHub.ai. This demonstrated that the MHub approach can be used for pathology models as well as radiology models. Some cleanup is still needed, but this represents substantial progress for the week.

Illustrations

Below is a Slicer screenshot showing a segmentation created by an MHub.ai model. For this example, we took the low-resolution MONAIAuto3DSeg thoracic segmentation model from Andras' Slicer extension and ported it to execute inside an MHub.ai workflow. Other pre-trained MONAIAuto3DSeg models could also be ported with minimal effort. This model uses the SegResNet DNN from the MONAI project:

MONAIAutoSeg-in-MHub-result-thoracic

Here is a rendering of a fractional DICOM segmentation superimposed on the source image. The segmentation was created by a trained model executing inside the MHub.ai environment; the model was ported during the project week.
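A fractional DICOM segmentation stores per-pixel fractions (e.g., model probabilities) as integers scaled by the MaximumFractionalValue attribute, commonly 255, so that stored_value / MaximumFractionalValue recovers the fraction. A minimal sketch of that quantization (illustrative; not the MHub converter code):

```python
def encode_fractional(probabilities, max_fractional_value=255):
    """Quantize per-pixel probabilities in [0, 1] to the integer pixel
    values stored in a fractional DICOM segmentation."""
    return [round(p * max_fractional_value) for p in probabilities]

def decode_fractional(stored, max_fractional_value=255):
    """Invert the encoding back to approximate probabilities."""
    return [v / max_fractional_value for v in stored]

# Round-tripping loses at most half a quantization step (~0.002 for 255 levels).
print(encode_fractional([0.0, 0.5, 1.0]))  # → [0, 128, 255]
```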

fractional_mhub_1

Background and References

MONAI Auto3DSeg: https://github.com/Project-MONAI/tutorials/tree/main/auto3dseg

Slicer Extension: https://github.com/lassoan/SlicerMONAIAuto3DSeg

Pathology model (RMS segmentation): https://github.com/knowledgevis/rms-infer-code-standalone