
Methods for self-supervised depth estimation and motion estimation in colonoscopy under deformation

Key Investigators

Presenter location: In-person

Project Description

Estimating depth and localizing the endoscope in the surgical environment are critical for many tasks, such as intra-operative registration, augmented reality, and surgical automation. Monocular self-supervised depth and pose estimation methods can estimate depth and camera pose without requiring labels. However, it is not known how these methods perform in the presence of deformation as the endoscope moves through the lumen. Through this project, we therefore want to evaluate the effect of adding two modules, TransUNet and an optical flow module, on depth and pose estimation accuracy. Optical flow can capture image intensity changes in the scene caused by deformation, and TransUNet can potentially capture temporal correlations between image frames to yield better pose and depth predictions. The project will use open-source datasets and GitHub code.
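As background for the approach: Monodepth2-style self-supervision warps a neighboring frame into the target view using the predicted depth and pose, then penalizes the photometric difference between the warped and real target frames. Below is a minimal PyTorch sketch of that photometric term (weighted SSIM + L1 with alpha = 0.85, as in Monodepth2; the SSIM window handling is simplified at the borders).

```python
import torch
import torch.nn.functional as F

def photometric_loss(pred, target, alpha=0.85):
    """Monodepth2-style photometric error: weighted SSIM + L1.

    pred, target: (B, 3, H, W) images; pred is the neighbor frame
    warped into the target view using predicted depth and pose.
    """
    # L1 term, averaged over channels
    l1 = (pred - target).abs().mean(1, keepdim=True)

    # Simplified SSIM over 3x3 windows (Monodepth2 uses reflection
    # padding here; plain zero padding is used for brevity)
    mu_x = F.avg_pool2d(pred, 3, 1, 1)
    mu_y = F.avg_pool2d(target, 3, 1, 1)
    sigma_x = F.avg_pool2d(pred ** 2, 3, 1, 1) - mu_x ** 2
    sigma_y = F.avg_pool2d(target ** 2, 3, 1, 1) - mu_y ** 2
    sigma_xy = F.avg_pool2d(pred * target, 3, 1, 1) - mu_x * mu_y
    C1, C2 = 0.01 ** 2, 0.03 ** 2
    ssim = ((2 * mu_x * mu_y + C1) * (2 * sigma_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (sigma_x + sigma_y + C2))
    ssim_err = ((1 - ssim) / 2).clamp(0, 1).mean(1, keepdim=True)

    # Monodepth2 weights SSIM vs. L1 with alpha = 0.85
    return alpha * ssim_err + (1 - alpha) * l1
```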

Objective

  1. Objective A. To build, train, and run FlowNet on a colonoscopy dataset.
  2. Objective B. To integrate the FlowNet module into the Monodepth2 framework (a sketch of this integration point follows the list).
  3. Objective C. To integrate and evaluate TransUNet blocks in the Monodepth2 framework.
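One plausible integration point for Objective B, sketched under our own naming (a design sketch, not Monodepth2's or FlowNet's actual API): the rigid sampling grid derived from depth and pose is refined with a FlowNet-predicted residual flow before sampling, so intensity changes caused by deformation can be absorbed by the residual.

```python
import torch
import torch.nn.functional as F

def warp_with_residual_flow(src, rigid_grid, residual_flow):
    """Hypothetical integration point: refine the rigid warp with
    a FlowNet-predicted residual to absorb tissue deformation.

    src:           (B, 3, H, W) source frame
    rigid_grid:    (B, H, W, 2) sampling grid in [-1, 1] coords,
                   computed from predicted depth, pose and intrinsics
                   (as Monodepth2's BackprojectDepth/Project3D do)
    residual_flow: (B, 2, H, W) FlowNet output, in pixels
    """
    H, W = src.shape[-2:]
    # Convert the pixel-space residual to normalized grid offsets
    norm = torch.tensor([2.0 / max(W - 1, 1), 2.0 / max(H - 1, 1)],
                        device=src.device).view(1, 2, 1, 1)
    offset = (residual_flow * norm).permute(0, 2, 3, 1)  # (B, H, W, 2)
    # Deformation-aware sampling grid
    grid = rigid_grid + offset
    return F.grid_sample(src, grid, padding_mode="border",
                         align_corners=True)
```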

Approach and Plan

  1. Run Monodepth2 on the colonoscopy dataset.
  2. Train the optical flow network on the colonoscopy dataset (a joint training-step sketch follows this list).
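A hypothetical joint training step showing how the pieces could fit together. Here `build_rigid_grid` is our compact re-implementation of the Monodepth2 backproject/project pipeline, the pose network is assumed to return a 4x4 matrix for brevity, and the network and batch-key names are placeholders; `photometric_loss` and `warp_with_residual_flow` come from the sketches above.

```python
import torch

def build_rigid_grid(disp, T, K, K_inv):
    """Rigid warp grid from inverse depth and relative pose, mirroring
    Monodepth2's BackprojectDepth + Project3D (hypothetical compact
    re-implementation, not the repo's code). K, K_inv: (B, 4, 4)."""
    B, _, H, W = disp.shape
    depth = 1.0 / disp.clamp(min=1e-6)
    ys, xs = torch.meshgrid(torch.arange(H, device=disp.device),
                            torch.arange(W, device=disp.device),
                            indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).float().view(1, 3, -1)
    cam = (K_inv[:, :3, :3] @ pix) * depth.view(B, 1, -1)      # back-project
    cam = torch.cat([cam, torch.ones(B, 1, H * W, device=disp.device)], 1)
    uv = (K @ T)[:, :3, :] @ cam                               # re-project
    uv = uv[:, :2] / uv[:, 2:3].clamp(min=1e-6)
    uv = torch.stack([uv[:, 0] / (W - 1) * 2 - 1,              # normalize
                      uv[:, 1] / (H - 1) * 2 - 1], 1)
    return uv.view(B, 2, H, W).permute(0, 2, 3, 1)

def train_step(batch, depth_net, pose_net, flow_net, optimizer):
    """One hypothetical joint optimization step (names are ours)."""
    tgt, src = batch["target"], batch["source"]       # adjacent frames
    disp = depth_net(tgt)                             # inverse depth
    T = pose_net(torch.cat([tgt, src], 1))            # 4x4 relative pose
    flow = flow_net(torch.cat([tgt, src], 1))         # residual flow

    grid = build_rigid_grid(disp, T, batch["K"], batch["K_inv"])
    warped = warp_with_residual_flow(src, grid, flow)
    loss = photometric_loss(warped, tgt).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```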

Progress and Next Steps

  1. Ran the model on the colonoscopy dataset.
  2. Performed self-supervised training with supervision from a scale-invariant depth loss (see the sketch after this list).
  3. Hosted the model on Hugging Face.
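The scale-invariant depth loss referenced in step 2 is, we assume, the Eigen et al. (2014) log-depth formulation; a minimal sketch:

```python
import torch

def scale_invariant_loss(pred_depth, gt_depth, lam=0.5, eps=1e-6):
    """Scale-invariant log-depth loss (Eigen et al., 2014):
    mean(d^2) - lam * mean(d)^2, with d = log(pred) - log(gt).
    Fully invariant to a global scale factor when lam = 1."""
    d = (torch.log(pred_depth.clamp(min=eps))
         - torch.log(gt_depth.clamp(min=eps)))
    return (d ** 2).mean() - lam * d.mean() ** 2
```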

Next steps: create a 3D mesh from the generated depth values; a back-projection sketch follows.
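A minimal sketch for that next step, assuming a pinhole camera model (fx, fy, cx, cy are placeholders for the actual colonoscope calibration): each pixel is lifted to 3D using its depth value, and neighboring pixels are triangulated into mesh faces.

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy):
    """Back-project a depth map into a 3D point grid and triangulate
    neighboring pixels into a mesh.

    depth: (H, W) array of depth values.
    Returns (vertices, faces): (H*W, 3) points and (M, 3) index triples.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Pinhole back-projection: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Two triangles per 2x2 pixel block, indexing the flattened grid
    idx = np.arange(H * W).reshape(H, W)
    a, b = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    c, d = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    faces = np.concatenate([np.stack([a, b, c], 1),
                            np.stack([b, d, c], 1)])
    return vertices, faces
```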

Illustrations

Left: ground truth; right: 3D depth prediction (purple = farther, yellow = closer).

Hugging Face link: https://huggingface.co/spaces/mkalia/DepthPoseEstimation

Simple Upload and Predict

(Screenshot: model upload interface)

(Screenshot: predicted depth image on Hugging Face)
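The Space follows a simple upload-and-predict pattern; a minimal Gradio sketch of that flow (`run_model` is a stand-in for the actual trained network, not the Space's real inference code):

```python
import gradio as gr
import numpy as np

def run_model(image: np.ndarray) -> np.ndarray:
    """Placeholder for the trained depth network; the actual Space
    runs the trained model's inference here."""
    return image.mean(axis=-1)  # stand-in "depth" so the demo runs

def predict_depth(image: np.ndarray) -> np.ndarray:
    depth = run_model(image)
    # Normalize to [0, 255] so Gradio can display it as an image
    norm = (depth - depth.min()) / (np.ptp(depth) + 1e-6)
    return (norm * 255).astype(np.uint8)

demo = gr.Interface(
    fn=predict_depth,
    inputs=gr.Image(label="Colonoscopy frame"),
    outputs=gr.Image(label="Predicted depth"),
    title="DepthPoseEstimation",
)

if __name__ == "__main__":
    demo.launch()
```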
