NA-MIC Project Weeks

Automatic classification of MR scan sequence type

Key Investigators

Presenter location: In-person

Project Description

Data curation is a necessary step before using many AI or ML models, but it can be difficult and time-consuming to do manually. For instance, in prostate cancer, most tools use multiple types of MR sequences as input to develop models and perform tasks such as segmentation.

In this project, we will develop methods for automatic classification of MR sequences. We had some great discussions and made good headway at the last project week, and are continuing this work.

Since the last project week, we have developed several methods for classifying T2 axial, diffusion-weighted (DWI), apparent diffusion coefficient (ADC), and dynamic contrast-enhanced (DCE) images. Using combinations of image data and DICOM metadata as input, we developed a random forest classifier and two CNN-based classifiers – see our paper here and code here.
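To make the metadata-based approach concrete, here is a minimal sketch of a random forest trained on a few DICOM-derived features. The feature set (EchoTime, RepetitionTime, maximum b-value) and the toy values and labels are hypothetical placeholders for illustration, not the features or data from our paper.

```python
# Sketch: random-forest classification of MR series type from DICOM metadata.
# The features (EchoTime, RepetitionTime, b-value) and all values below are
# hypothetical placeholders, not the exact feature set from our paper.
from sklearn.ensemble import RandomForestClassifier

# Each row: [EchoTime (ms), RepetitionTime (ms), max b-value (s/mm^2)]
X_train = [
    [100.0, 4000.0,    0.0],  # T2 axial: long TE/TR, no diffusion weighting
    [ 90.0, 5000.0,    0.0],  # T2 axial
    [ 70.0, 3500.0, 1400.0],  # DWI: high b-value
    [ 80.0, 4200.0, 2000.0],  # DWI
    [  2.0,    5.0,    0.0],  # DCE: very short TE/TR (fast T1-weighted)
    [  1.8,    4.5,    0.0],  # DCE
]
y_train = ["T2", "T2", "DWI", "DWI", "DCE", "DCE"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_train, y_train)

# Classify an unseen series from its metadata.
prediction = clf.predict([[95.0, 4500.0, 0.0]])[0]
print(prediction)
```

In practice the paper combines such metadata features with image-based CNN classifiers; this sketch only shows the metadata branch.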

This project week, we’d like to talk to more people, address the limitations of our work, and hopefully develop a more robust method for classifying scans.

Objective

  1. Discuss the limitations of our previous work and brainstorm new ideas for automatic classification of the MR series type.
  2. Create an easy-to-use Colab notebook so people can try out the methods.
  3. Think about developing a more robust method.

Approach and Plan

  1. We will talk to people to discuss the limitations of our method. For instance: what types of metadata should we use for classification? Should we have a class for unknown scan types? Should we use a hierarchical classification method? How can we make the model agnostic to the area scanned?
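One of the ideas above, hierarchical classification with an "unknown" fallback class, could be structured as a two-stage decision: a first stage separating diffusion-family series from everything else, and a second stage refining the label within each branch. The sketch below illustrates only the structure; the metadata fields, thresholds, and rules are hypothetical and would need to be learned or validated, not hand-coded like this.

```python
# Sketch of a hierarchical (two-stage) series classifier with an "unknown"
# fallback class. All metadata fields, rules, and thresholds are hypothetical.

def classify_series(meta: dict) -> str:
    """Return a coarse series label from a dict of DICOM-derived metadata."""
    b_value = meta.get("b_value")            # diffusion b-value, if present
    image_type = meta.get("image_type", "")  # e.g. the DICOM ImageType string
    echo_time = meta.get("echo_time")

    # Stage 1: diffusion family vs everything else.
    if b_value is not None and b_value > 0:
        # Stage 2a: within the diffusion family, ADC maps are typically
        # flagged as derived images, while raw trace images are DWI.
        return "ADC" if "DERIVED" in image_type else "DWI"

    # Stage 2b: non-diffusion branch, with an "unknown" fallback.
    if echo_time is None:
        return "unknown"
    if echo_time > 60:
        return "T2"
    return "DCE"

print(classify_series({"b_value": 1400, "image_type": "ORIGINAL\\PRIMARY"}))  # DWI
```

A learned version of this structure would replace each hand-written rule with a trained classifier, which is one way to address the "unknown scan type" question above.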

Progress and Next Steps

  1. Colab notebook – downloads data from IDC and runs inference using the three pretrained models.
  2. Check out our HuggingFace space demo! It downloads data from IDC and runs inference using the three pretrained models, then displays the classification results alongside the image used for classification. Later, we want to allow users to upload their own images.
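Since both the notebook and the demo run the same series through three pretrained models, one natural way to report a single label is a majority vote over the three predictions. The helper below is a generic sketch, not code from our notebook; the tie-breaking rule (fall back to "unknown") is an assumption.

```python
# Sketch: combine the outputs of three classifiers by majority vote.
# Generic helper, not the actual demo code; falling back to "unknown"
# on a tie is an assumed design choice.
from collections import Counter

def majority_vote(predictions: list[str]) -> str:
    """Return the most common label, or "unknown" on a tie for first place."""
    counts = Counter(predictions).most_common()
    if len(counts) > 1 and counts[0][1] == counts[1][1]:
        return "unknown"
    return counts[0][0]

# Hypothetical outputs from the random forest and the two CNN classifiers:
print(majority_vote(["T2", "T2", "DWI"]))  # T2 wins 2-1
```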

Illustrations

HuggingFace space demo:

Here the user selects a specific collection → patient → study → series to classify, then runs inference using the three models we developed. (Screenshot: DICOMClassification_demo1)

Then the results of the classification are displayed, along with the image chosen for classification. The user can also download the output Colab notebook. (Screenshot: DICOMClassification_demo2)


Background and References

Progress from previous project week

Current work

Current code