Stability-driven interpretation and compression of neural networks
Reza Abbasi-Asl, Professor
Neuroscience
Applications for Fall 2024 are closed for this project.
Deep neural networks achieve state-of-the-art performance on many tasks in computer vision and natural language processing. Interpreting these networks is essential for applying them in scientific domains such as healthcare. Two prominent approaches to interpreting neural networks are saliency methods and network compression; however, both can yield unstable or unreliable interpretations. In this project, we study the stability of neural network interpretations. In particular, we use statistical techniques to build more interpretable neural networks and evaluate our methods on computer vision tasks.
Role: The successful candidate will start by reading relevant papers, implementing basic interpretability frameworks for off-the-shelf convolutional neural networks, and quantifying the stability of these methods; a rough sketch of this first phase appears below. They will then use statistical principles to improve the stability of the interpretations. Finally, the results will be documented and presented in the form of a research publication. Note that the successful candidate will work remotely on this project.
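To make that first phase concrete, here is a minimal sketch, not the lab's actual codebase, of one common interpretability baseline: a vanilla gradient saliency map for an off-the-shelf PyTorch CNN, with stability probed by recomputing the map under a small input perturbation. The choice of ResNet-18, the random stand-in image, the 0.01 noise scale, and the correlation score are all illustrative assumptions.

import torch
import torchvision.models as models

# Off-the-shelf pretrained CNN (ResNet-18 is an arbitrary illustrative choice).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def gradient_saliency(model, x, target_class):
    # Vanilla gradient saliency: |d(class score)/d(input)|, max over color channels.
    x = x.clone().requires_grad_(True)
    model(x)[0, target_class].backward()
    return x.grad.abs().max(dim=1).values.squeeze(0)  # (H, W) saliency map

# Random stand-in for a preprocessed 224x224 RGB image batch.
x = torch.randn(1, 3, 224, 224)
target = model(x).argmax(dim=1).item()

# Stability probe: recompute the map under small Gaussian input noise and
# score the agreement between the two maps with a Pearson correlation.
s1 = gradient_saliency(model, x, target)
s2 = gradient_saliency(model, x + 0.01 * torch.randn_like(x), target)
corr = torch.corrcoef(torch.stack([s1.flatten(), s2.flatten()]))[0, 1]
print(f"saliency stability under perturbation: {corr.item():.3f}")

A more stable interpretation method would keep such an agreement score high across input perturbations, random seeds, and retrained models.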
Qualifications: Candidates should have experience programming in Python and implementing neural networks in frameworks such as PyTorch, TensorFlow, Keras, or Caffe. Familiarity with convolutional neural networks, image processing, and medical image analysis is desirable but not essential.
Hours: 12 or more
Off-Campus Research Site: remote
Related website: http://abbasilab.org
Categories: Digital Humanities and Data Science; Biological & Health Sciences