NA-MIC Project Weeks

Back to Projects List

VolumeAXI - Volume Analysis, eXplainability and Interpretability on CBCT

Key Investigators

Presenter location: In-person

Project Description

This project aims to develop interpretable deep learning models for the automated classification of impacted maxillary canines and the assessment of dental root resorption in adjacent teeth using Cone-Beam Computed Tomography (CBCT). Impacted maxillary canines (IC) are a common clinical problem that can lead to complications if not diagnosed and treated early. We propose to develop a 3D Slicer module, called Volume Analysis, eXplainability and Interpretability (VolumeAXI), with the goal of providing users an explainable approach to the classification of bone and tooth structural defects in gray-level CBCT images. We are testing various deep learning models based on MONAI Convolutional Neural Network (CNN) architectures to classify impacted maxillary canine position and detect root resorption. Gradient-weighted Class Activation Mapping (Grad-CAM) has already been integrated to generate visual explanations of the CNN predictions, enhancing interpretability and trustworthiness for clinical adoption.
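As a rough illustration of the gray-level preprocessing such a pipeline needs before classification, the sketch below min-max normalizes a CBCT volume and center-crops or zero-pads it to a fixed input shape. The function name and the `64×64×64` target size are illustrative assumptions, not the project's actual configuration; in practice this would likely be done with MONAI transforms.

```python
import numpy as np

def normalize_cbct(volume, target_shape=(64, 64, 64)):
    """Min-max normalize gray levels to [0, 1] and center-crop/pad to a fixed shape.

    `target_shape` is an illustrative choice, not the project's actual input size.
    """
    v = volume.astype(np.float32)
    vmin, vmax = v.min(), v.max()
    v = (v - vmin) / (vmax - vmin) if vmax > vmin else np.zeros_like(v)

    out = np.zeros(target_shape, dtype=np.float32)
    slices_src, slices_dst = [], []
    # For each axis: crop symmetrically if too large, pad symmetrically if too small.
    for s, t in zip(v.shape, target_shape):
        if s >= t:
            start = (s - t) // 2
            slices_src.append(slice(start, start + t))
            slices_dst.append(slice(0, t))
        else:
            start = (t - s) // 2
            slices_src.append(slice(0, s))
            slices_dst.append(slice(start, start + s))
    out[tuple(slices_dst)] = v[tuple(slices_src)]
    return out
```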

Objective

  1. Classify tooth position within the bone using MONAI DenseNet-121 and DenseNet-201.
  2. Enhance explainability and interpretability of the classification by generating saliency maps with MONAI Grad-CAM.
  3. Create the VolumeAXI 3D Slicer module and deploy the model as part of the Slicer Automated Dental Tools extension.
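To make objective 1 concrete, here is a minimal stand-in for the volume classifier: a tiny 3D CNN mapping a single-channel CBCT volume to position-class logits. The class name and the number of classes are hypothetical; the project itself uses MONAI's DenseNet, which would be instantiated along the lines of `monai.networks.nets.DenseNet121(spatial_dims=3, in_channels=1, out_channels=num_classes)`.

```python
import torch
import torch.nn as nn

class Tiny3DClassifier(nn.Module):
    """Illustrative 3D CNN classifier (not the project's DenseNet-121/201).

    Maps a (batch, 1, D, H, W) CBCT volume to per-class logits.
    """
    def __init__(self, num_classes=3):  # number of position classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),  # global pooling -> one value per channel
        )
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = Tiny3DClassifier()
logits = model(torch.randn(2, 1, 32, 32, 32))  # batch of 2 volumes
```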

Approach and Plan

  1. Data Preparation and Pre-processing
  2. Model Development and Training: Explore and select appropriate neural network architectures (e.g., ResNet, SENets, DenseNet) for image classification and feature visualization.
  3. Explainability and Visualization Techniques: Implement methods, such as Grad-CAM, that make the AI's decisions transparent and understandable.
  4. Validation and Testing
  5. Documentation and Training: Create comprehensive documentation and user guides explaining the functionality and benefits of the AI tools.
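Step 3 above can be sketched end to end. The code below implements Grad-CAM by hand on a small hypothetical network (`SmallNet` and its layer names are illustrative): the target layer's feature maps are weighted by the spatially averaged gradient of the class score, summed, and passed through ReLU. In the project itself, MONAI's built-in `monai.visualize.GradCAM` would be applied to the DenseNet's last dense block instead.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Hypothetical 3D CNN; stands in for the project's MONAI DenseNet."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.conv1 = nn.Conv3d(1, 8, 3, padding=1)
        self.conv2 = nn.Conv3d(8, 16, 3, padding=1)
        self.fc = nn.Linear(16, num_classes)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        x.retain_grad()      # keep gradients of the target feature maps
        self.feat = x
        return self.fc(x.mean(dim=(2, 3, 4)))

def grad_cam(model, volume, class_idx):
    """Grad-CAM: channel weights = spatial mean of d(score)/d(feature map)."""
    model.zero_grad()
    logits = model(volume)
    logits[0, class_idx].backward()
    acts = model.feat.detach()[0]            # (C, D, H, W) activations
    grads = model.feat.grad[0]               # (C, D, H, W) gradients
    weights = grads.mean(dim=(1, 2, 3))      # one importance weight per channel
    cam = torch.relu((weights[:, None, None, None] * acts).sum(0))
    return cam / (cam.max() + 1e-8)          # normalize to [0, 1]
```

The normalized map can then be overlaid on the input volume as a heatmap, which is what the Slicer module displays to the user.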

Progress and Next Steps

  1. Trained models with DenseNet architecture to classify the buccolingual position of the impacted maxillary canine.
  2. Implemented Grad-CAM with MONAI for visualization of model attention.

Project Week Update:

  1. Evaluated various root resorption assessment techniques, concluding that alternative methods are required for optimal results.
  2. Conducted additional experiments on position classification to enhance current performance metrics.
  3. Initiated deployment of the VolumeAXI module.

Next Steps:

  1. Redirect the pipeline to classify root resorption.
  2. Find the best hyper-parameters for the given applications to improve the results.
  3. Finish the module by implementing the interpretability features.
  4. Clean and organise the code.
  5. Write the documentation and provide examples of how to use the code.
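For step 2, a simple grid search is one way such a hyper-parameter hunt could look. Everything here is a placeholder: the grid values, the stand-in linear model, and the random data are assumptions for illustration only, not the VolumeAXI search space or training set.

```python
import itertools
import torch
import torch.nn as nn

# Hypothetical search grid; the project's actual ranges are not specified here.
grid = {"lr": [1e-2, 1e-3], "weight_decay": [0.0, 1e-4]}

def val_loss_for(lr, weight_decay, steps=20):
    """Train a toy model briefly and report its loss for one hyper-parameter combo."""
    torch.manual_seed(0)
    x = torch.randn(64, 10)            # stand-in features, not CBCT data
    y = torch.randint(0, 3, (64,))
    model = nn.Linear(10, 3)
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        return float(loss_fn(model(x), y))

results = {combo: val_loss_for(*combo)
           for combo in itertools.product(grid["lr"], grid["weight_decay"])}
best = min(results, key=results.get)   # combo with the lowest loss
```

In practice the selection would be driven by validation metrics on held-out CBCT scans rather than training loss on synthetic data.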

Illustrations

3D Slicer Interface of VolumeAXI

Well predicted case

Background and References

VolumeAXI repository