Anatomically-Informed Self-Supervised Pre-training in Cardiac Imaging
Ahmed Alaa, Professor
Electrical Engineering and Computer Science
Closed. This professor is continuing with Fall 2023 apprentices on this project; no new apprentices needed for Spring 2024.
In self-supervised learning (SSL), a model is pre-trained on a pretext task that involves only unlabeled data; the pre-trained representation is then fine-tuned on downstream tasks of interest, where only a small number of labeled examples may be available. In this project, we will explore novel pre-training methods that use experimentally acquired 3D anatomic models of the heart to create synthetic pretext tasks for pre-training visual representations of cardiac imaging.
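The two-stage SSL workflow described above can be sketched in miniature. This is an illustrative toy only, not the project's method: it uses a linear denoising pretext in place of a real pretext task, random vectors in place of cardiac images, and a closed-form ridge head in place of fine-tuning; all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: self-supervised pre-training on unlabeled data ---
# Toy pretext task: denoising — recover clean vectors from noisy
# inputs with a linear encoder/decoder. No labels are used here.
X_unlabeled = rng.normal(size=(500, 16))        # stand-in for unlabeled images
W_enc = rng.normal(scale=0.1, size=(16, 8))     # encoder weights
W_dec = rng.normal(scale=0.1, size=(8, 16))     # decoder weights

lr = 0.01
for _ in range(200):
    noisy = X_unlabeled + rng.normal(scale=0.1, size=X_unlabeled.shape)
    Z = noisy @ W_enc                           # learned representation
    err = Z @ W_dec - X_unlabeled               # reconstruction error
    # gradient steps on the mean-squared reconstruction loss
    W_dec -= lr * Z.T @ err / len(X_unlabeled)
    W_enc -= lr * noisy.T @ (err @ W_dec.T) / len(X_unlabeled)

# --- Stage 2: adapt to a small labeled downstream set ---
X_labeled = rng.normal(size=(20, 16))
y = (X_labeled.sum(axis=1) > 0).astype(float)   # toy binary labels
feats = X_labeled @ W_enc                       # frozen pre-trained features
# closed-form ridge-regression head on the learned representation
head = np.linalg.solve(feats.T @ feats + 1e-3 * np.eye(8), feats.T @ y)
preds = (feats @ head > 0.5).astype(float)
accuracy = (preds == y).mean()
```

The point of the sketch is the division of labor: the encoder is shaped entirely by unlabeled data, and only the small head sees the 20 labeled examples.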
Qualifications: Successful applicants will work with publicly available anatomic models of the heart to create synthetic 2D images of cardiac anatomy using CycleGAN models. These synthetic images will then be used to pre-train a Vision Transformer (ViT) model on view-to-view translation tasks. Successful applicants should be proficient in Python and have a strong background in, and interest in, machine learning and computer vision. Students are expected to meet with the faculty mentor twice a week and will be trained to conduct literature reviews, formulate research problems, and engage in scientific writing.
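The view-to-view translation idea can be illustrated with a toy geometric sketch. This is an assumption-laden stand-in, not the project's pipeline: a random 3D point cloud plays the role of an anatomic heart model, orthographic projections at two azimuths play the role of two imaging views, and a least-squares linear map plays the role of the Vision Transformer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "anatomic model": a 3D point cloud standing in for a heart mesh.
points = rng.normal(size=(200, 3))

def project(points, azimuth):
    """Orthographic 2D projection after rotating about the z-axis —
    a stand-in for rendering one cardiac view from the 3D anatomy."""
    c, s = np.cos(azimuth), np.sin(azimuth)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points @ R.T)[:, :2]

# Two synthetic "views" of the same underlying anatomy.
view_a = project(points, azimuth=0.0)
view_b = project(points, azimuth=np.pi / 4)

# View-to-view translation pretext: learn a map taking view A to
# view B (here a least-squares linear map; the project would train
# a ViT on image pairs instead).
A, *_ = np.linalg.lstsq(view_a, view_b, rcond=None)
recon_error = np.linalg.norm(view_a @ A - view_b) / np.linalg.norm(view_b)
```

Because both views derive from the same 3D model, the translation is learnable; that shared-anatomy structure is what makes view-to-view translation a sensible pretext task for representation learning.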
Hours: 12 or more
Related research areas: Digital Humanities and Data Science; Engineering, Design & Technologies