Weakly supervised learning in medical imaging (various projects)

Data is often only weakly annotated: for example, for a medical image we might know the patient's overall diagnosis, but not where the abnormalities are located, because obtaining ground-truth annotations is very time-consuming. Multiple instance learning (MIL) is an extension of supervised machine learning aimed at dealing with such weakly labeled data. For example, a MIL classifier trained on healthy and abnormal images would be able to label both a previously unseen image AND the local patches in that image.
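As a concrete illustration, below is a minimal sketch of the naive "single-instance" MIL baseline: every patch inherits its image's label during training, and an unseen image is scored by max-pooling over its patch predictions. The toy data, feature dimensions, and choice of logistic regression are illustrative assumptions, not any specific project's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: 20 images ("bags"), each with 50 patches ("instances")
# described by 16 features; half the images are abnormal.
n_bags, n_patches, n_feat = 20, 50, 16
bag_labels = np.array([0] * 10 + [1] * 10)
bags = [rng.normal(size=(n_patches, n_feat)) for _ in range(n_bags)]
for bag, label in zip(bags, bag_labels):
    if label == 1:        # plant a few "abnormal" patches in abnormal images
        bag[:5] += 2.0

# Training: propagate each image-level label to all of its patches.
X = np.vstack(bags)
y = np.repeat(bag_labels, n_patches)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Inference: patch-level probabilities, then max-pool to the image level.
for bag, label in zip(bags[:3], bag_labels[:3]):
    patch_probs = clf.predict_proba(bag)[:, 1]   # one score per patch
    image_prob = patch_probs.max()               # MIL max-pooling rule
    print(f"true={label}  predicted image prob={image_prob:.2f}")
```

Note that the same classifier produces both the image-level and the patch-level predictions, which is exactly the weakly supervised setting described above.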

Figure 1: Supervised learning and multiple instance learning, shown for the task of detecting abnormalities in chest CT images. Images from Cheplygina, V., de Bruijne, M., & Pluim, J. P. W. (2018). Not-so-supervised: a survey of semi-supervised, multi-instance, and transfer learning in medical image analysis. arXiv preprint arXiv:1804.06353.

There are still a number of open research directions. For example,

  • How can we evaluate the patch-level predictions without ground-truth labels?
  • Could we improve MIL algorithms by asking experts only a few questions, where they verify the algorithm’s decisions?
  • What can we learn about MIL in medical imaging from other applications where it has been applied?

As an MSc student, you would choose one or more medical imaging applications you are interested in, using an open dataset or a dataset available through collaborators, and work with us to formulate your own research question. Participating in a machine learning competition, creating open-source tools, and/or writing a paper for a scientific conference are also encouraged.

Some experience with machine learning is required (for example, 8DC00 if you are a TU/e student). Experience with Python is preferred, but experience with another programming language combined with a willingness to learn Python is also sufficient.

Supervisor TU/e: Dr. Veronika Cheplygina (v.cheplygina at tue.nl)


Liver Cancer Recurrence Prediction

The only potentially curative option for patients with colorectal liver metastases (CRLM) or hepatocellular carcinoma (HCC) is surgical resection. However, 80–85% of these patients are not eligible for liver surgery because of extensive intrahepatic metastatic lesions or the presence of extrahepatic disease. Neoadjuvant chemotherapy (NAC) is increasingly applied with the aim of downsizing tumors in patients with initially unresectable disease so that a resectable situation is attained.

Accurate imaging of the liver following neoadjuvant chemotherapy is crucial for optimal selection of patients eligible for surgical resection and preparation of a surgical plan. MRI is the most appropriate imaging modality for preoperative assessment of patients with CRLM or HCC.

However, NAC may impair lesion detection and lead to underestimation of lesion size. As a result, tumors considered resectable on preoperative imaging may turn out to be unresectable during surgery, or the underestimation may lead to insufficient resection, resulting in positive margins and re-excisions.

The incidence of recurrence after liver resection is very high: in different series, between 43% and 65% of patients had recurrences within 2 years of removal of the first tumor, and up to 85% within 5 years. Without any form of treatment, most patients with recurrent cancer die within one year.

Following surgical treatment, doctors frequently use MRI to check for residual tumors and assess the risk that the cancer will come back (recur), in order to decide whether the patient should be offered additional treatment (adjuvant therapy) or a repeat hepatectomy.

Project goal

The aim of this study is to design and develop a deep learning-based algorithm to predict five-year liver cancer recurrence from serial liver MRI exams. Patients undergo serial liver MRI: a pre-treatment baseline exam, follow-up exams during the course of therapy or surgery, and a final exam after completing the therapy protocol.
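One plausible, purely illustrative architecture for this task is sketched below: a shared 3D CNN encodes each MRI exam into a feature vector, a GRU aggregates the exam sequence, and a linear head outputs a recurrence logit. All layer sizes, the GRU choice, and the input shapes are assumptions, not the project's prescribed model.

```python
import torch
import torch.nn as nn

class SerialMRIRecurrenceNet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        # Shared per-exam encoder: single-channel 3D volume -> feature vector.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        # Temporal aggregation over baseline / follow-up / final exams.
        self.gru = nn.GRU(feat_dim, 32, batch_first=True)
        self.head = nn.Linear(32, 1)     # logit for five-year recurrence

    def forward(self, exams):            # exams: (batch, time, 1, D, H, W)
        b, t = exams.shape[:2]
        feats = self.encoder(exams.flatten(0, 1)).view(b, t, -1)
        _, h = self.gru(feats)           # h: (1, batch, 32)
        return self.head(h[-1])          # one recurrence logit per patient

# Toy forward pass: 2 patients, 3 exams each, 32^3 voxel volumes.
model = SerialMRIRecurrenceNet()
print(model(torch.randn(2, 3, 1, 32, 32, 32)).shape)  # torch.Size([2, 1])
```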

Prerequisites

  • Enthusiastic Master's student in electrical engineering, biomedical engineering, computer science, or a related field
  • Interest in machine learning and deep learning
  • Understanding of basic machine learning concepts and image analysis
  • Programming experience in MATLAB and Python
  • A good team player with excellent communication skills
  • A creative solution-finder

Duration: 9 months (BME or ME or MWT)

Start date: a.s.a.p.

Collaboration: Netherlands Cancer Institute (NKI)

Location: TU/e (Eindhoven) and NKI (Amsterdam)

Contact: For project details, please contact Dr. Behdad Dasht Bozorg, email: B.Dasht.Bozorg@nki.nl

Real-time Multimodal Image Registration

Multimodal imaging is increasingly used within healthcare for diagnosis, treatment planning, treatment guidance, biopsy, surgical navigation, and monitoring of disease progression.

Multimodality imaging takes advantage of the strengths of different imaging modalities to provide a more complete picture of the anatomy under investigation. The goal of this study is to develop real-time registration of MRI and ultrasound images.

MRI is used widely for both diagnostic and therapeutic planning applications because of its multi-planar imaging capability, high signal-to-noise ratio, and sensitivity to subtle changes in soft-tissue morphology and function. Ultrasound imaging, on the other hand, has important advantages including high temporal resolution, high sensitivity to acoustic scatterers such as calcifications and gas bubbles, excellent visualization and measurement of blood flow, low cost, and portability. The strengths of these modalities are complementary, and the two are combined regularly (though separately) in clinical practice. The benefits of combining these modalities through image registration have been shown for intra-operative surgical applications and breast/prostate biopsy guidance.

Image registration is the process of transforming images from different modalities into the same reference frame, so that the most comprehensive information possible about the underlying structure can be obtained. While MRI is typically a pre-operative imaging technique, ultrasound can easily be performed live during surgery.
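A classic similarity measure for multimodal registration is mutual information, which rewards consistent intensity co-occurrence without assuming the two modalities look alike. The histogram-based estimate below is a minimal sketch, not a production implementation.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two aligned images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p = joint / joint.sum()                 # joint intensity distribution
    px, py = p.sum(axis=1), p.sum(axis=0)   # marginal distributions
    nz = p > 0                              # avoid log(0)
    return np.sum(p[nz] * np.log(p[nz] / np.outer(px, py)[nz]))

# MI of an image with itself exceeds MI with unrelated noise.
a = np.random.rand(64, 64)
print(mutual_information(a, a), mutual_information(a, np.random.rand(64, 64)))
```

A classical registration loop would repeatedly transform the ultrasound image and keep the transform that maximizes this score; deep learning methods instead learn to predict the transform directly.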

Project goal

The aim of this study is to design and develop a deep learning-based method for the registration of multimodal images (MRI and ultrasound).

1st phase: Using the built-in multi-modality image fusion feature of ultrasound machines on phantoms

2nd phase: Error estimation for multimodal registration using a CNN (a hedged sketch follows below)
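As a sketch of the second phase, the small network below takes a co-registered MRI/US patch pair stacked as two channels and regresses the residual registration error in millimetres; training pairs could be generated on phantoms (first phase) by applying known synthetic offsets. The architecture and patch size are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RegistrationErrorCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),            # predicted misalignment (mm)
        )

    def forward(self, mri_us_pair):      # (batch, 2, H, W): MRI + US channels
        return self.net(mri_us_pair)

model = RegistrationErrorCNN()
print(model(torch.randn(4, 2, 64, 64)).shape)  # torch.Size([4, 1])
```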

Prerequisites

  • Enthusiastic Master's student in electrical engineering, biomedical engineering, computer science, or a related field
  • Interest in machine learning and deep learning
  • Understanding of basic machine learning concepts, image analysis, and signal processing
  • Programming experience in MATLAB and Python
  • A good team player with excellent communication skills
  • A creative solution-finder

Duration: 9 months (BME or ME or MWT)

Start date: a.s.a.p.

Collaboration: Netherlands Cancer Institute (NKI)

Location: TU/e (Eindhoven) and NKI (Amsterdam)

Contact: For project details, please contact Dr. Behdad Dasht Bozorg, email: B.Dasht.Bozorg@nki.nl

Surgical Workflow Analysis

Minimally invasive surgery, using cameras to observe the internal anatomy, is the preferred approach for many surgical procedures. Furthermore, other surgical disciplines rely on microscopic images. As a result, endoscopic and microscopic image processing, as well as surgical vision, are evolving as techniques needed to facilitate computer-assisted interventions (CAI). Algorithms that have been reported for such images include 3D surface reconstruction, salient feature motion tracking, instrument detection, and activity recognition.

Analyzing the surgical workflow is a prerequisite for many applications in computer-assisted surgery (CAS), such as context-aware visualization of navigation information, predicting the tool the surgeon is most likely to need next, or estimating the remaining duration of surgery. Since laparoscopic surgeries are performed using an endoscopic camera, a video stream is always available during surgery, making it the obvious choice as input sensor data for workflow analysis. Furthermore, integrated operating rooms are becoming more prevalent in hospitals, making it possible to access data streams from surgical devices such as cameras, insufflators, and lights during surgery.

This project focuses on the online workflow analysis of laparoscopic surgeries. The main goal is to segment surgeries into surgical phases based on the video.
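One common recipe for online phase recognition, sketched below under illustrative assumptions (phase count, layer sizes), pairs a per-frame CNN feature extractor with a unidirectional GRU; because the GRU only sees past frames, the model remains causal and therefore usable during live surgery.

```python
import torch
import torch.nn as nn

N_PHASES = 7   # illustrative; e.g. cholecystectomy benchmarks use 7 phases

class OnlinePhaseNet(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.cnn = nn.Sequential(        # per-frame feature extractor
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # A unidirectional GRU keeps the model causal, i.e. usable online.
        self.gru = nn.GRU(feat_dim, 64, batch_first=True)
        self.head = nn.Linear(64, N_PHASES)

    def forward(self, video):            # video: (batch, time, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1)).view(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out)            # per-frame phase logits

model = OnlinePhaseNet()
print(model(torch.randn(1, 8, 3, 64, 64)).shape)  # (1, 8, 7): one phase per frame
```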


Project Phases

  • Designing and developing deep architectures for surgical tool detection and for segmenting colorectal surgeries into surgical phases based on the video input (public dataset)
  • Aiming to match or exceed the performance of the winners of the Endoscopic Vision Challenge at MICCAI 2018
  • Applying the developed technique to prostatectomy (in-house dataset)
  • Detecting deviations from normal patterns during surgery
  • Participating in the Endoscopic Vision Challenge at MICCAI 2019

Prerequisites

  • Enthusiastic Master's student in electrical engineering, biomedical engineering, computer science, or a related field
  • Interest in machine learning and deep learning
  • Understanding of basic machine learning concepts, image analysis, and signal processing
  • Programming experience in MATLAB and Python
  • A good team player with excellent communication skills
  • A creative solution-finder

Duration: 9 months (BME or ME or MWT)

Start date: a.s.a.p.

Collaboration: Netherlands Cancer Institute (NKI)

Location: TU/e (Eindhoven) and NKI (Amsterdam)

Contact: For project details, please contact Dr. Behdad Dasht Bozorg, email: B.Dasht.Bozorg@nki.nl

Combining Optics and Acoustics For Real-time Guidance during Cancer Surgery

Surgery forms the mainstay of treatment for solid tumors. However, in up to 30% of cases surgery is inadequate, either because tumor tissue is erroneously left behind or because the resection is too extensive and compromises vital structures such as nerves. In cancer surgery, surgeons therefore operate at a delicate balance between achieving radical tumor resection and preventing morbidity from overly extensive resection. Within this context, there is a long-standing but still unmet need for a precise surgical tool that informs the surgeon of the tissue type at the tip of the instrument and can thereby guide the surgical procedure.

To tackle these shortcomings, we propose an innovative approach to image-guided surgery that allows real-time intra-operative tissue recognition. To this end, we will combine the unique characteristics of ultrasound (US) imaging with the excellent tissue-sensing characteristics of diffuse reflectance spectroscopy (DRS). Both techniques have proven track records in the field of cancer diagnosis, but both also have critical limitations. DRS performs excellently at tissue diagnosis, distinguishing cancer from healthy tissue; however, because it is a point measurement that samples small tissue volumes close to the probe, its depth sensitivity is limited, and so is its ability to look deeper into the surgical resection plane. Ultrasound, on the other hand, has more than sufficient sampling depth and resolution, but cannot resolve cancer directly from the imaged tissue architecture. Our approach is to strategically combine these two techniques, using the best of both worlds within one smart device.
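As an illustration of the simplest possible fusion strategy, the sketch below concatenates DRS spectral features with US-derived features (e.g. elasticity summaries) and feeds them to a single classifier. The feature dimensions, random data, and random-forest choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_sites = 200                                  # measurement sites
drs = rng.normal(size=(n_sites, 100))          # 100-band reflectance spectra
us = rng.normal(size=(n_sites, 10))            # 10 US/elasticity features
y = rng.integers(0, 2, size=n_sites)           # 0 = healthy, 1 = tumor

X = np.hstack([drs, us])                       # simple feature-level fusion
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean()) # ~0.5 on this random data
```

In practice, a learned fusion (e.g. separate encoders per modality whose embeddings are merged) is one natural extension of plain concatenation.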

Project goals

  • Analyzing raw ultrasound data and developing new algorithms for beamforming, image reconstruction, layer segmentation, and elastography (a minimal beamforming sketch follows this list)
  • Designing and developing a multimodal machine learning/deep learning technique for discriminating cancer from healthy tissue, using the diffuse reflectance spectrum, US data (raw or processed), and elasticity data as input features
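The delay-and-sum sketch referenced in the first goal is given below: for each image point, element signals are delayed by their round-trip travel time and summed. The plane-wave transmit geometry, array spacing, and sampling values are illustrative assumptions.

```python
import numpy as np

c, fs = 1540.0, 40e6                  # speed of sound (m/s), sampling rate (Hz)
n_elem = 64
elem_x = (np.arange(n_elem) - n_elem / 2) * 0.3e-3   # element x-positions (m)
rf = np.random.randn(n_elem, 2048)    # toy RF data: (elements, time samples)

def das_point(px, pz):
    """Delay-and-sum value at lateral position px, depth pz (metres),
    assuming an unsteered plane-wave transmit."""
    rx_dist = np.sqrt((elem_x - px) ** 2 + pz ** 2)   # echo path per element
    delays = (pz + rx_dist) / c       # transmit to depth + receive back
    idx = np.clip(np.round(delays * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(n_elem), idx].sum()

print(das_point(0.0, 20e-3))          # beamformed sample at 20 mm depth
```

Looping this function over a grid of (px, pz) points produces a B-mode image; real beamformers add apodization and interpolation between samples.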

Prerequisites

  • Enthusiastic Master's student in electrical engineering, biomedical engineering, computer science, or a related field
  • Interest in machine learning and deep learning
  • Understanding of basic machine learning concepts, image analysis, and signal processing
  • Programming experience in MATLAB and Python
  • A good team player with excellent communication skills
  • A creative solution-finder

Duration: 9 months (BME or ME or MWT)

Start date: a.s.a.p.

Collaboration: Netherlands Cancer Institute (NKI)

Location: TU/e (Eindhoven) and NKI (Amsterdam)

Contact: For project details, please contact Dr. Behdad Dasht Bozorg, email: B.Dasht.Bozorg@nki.nl