Auditing LLMs for Clinical Bias
Irene Chen, Professor
Computational Precision Health
Closed. This professor is continuing with Fall 2023 apprentices on this project; no new apprentices needed for Spring 2024.
This project will investigate the presence of biases in large language models (LLMs) designed for clinical applications.
Role: The undergraduate researcher will use an audit methodology to analyze the outputs of popular clinical LLMs for demographic biases that could propagate inequities. Initial experiments will probe LLMs by systematically varying patient demographic attributes and quantifying differences in the generated summaries. Further work may examine models for medical interventions and scientific literature analysis, as well as mitigation approaches involving fine-tuning.
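As a rough illustration of this kind of counterfactual audit, the sketch below builds clinical prompts that vary only in patient demographic attributes, scores each model output, and aggregates scores per attribute value so that large within-attribute gaps can flag potential bias. The prompt template, attribute values, and the `query_model` and `score` callables are all hypothetical placeholders, not part of the project description.

```python
from itertools import product

# Hypothetical prompt template for a clinical summarization task; the clinical
# LLM itself is not called here -- the caller supplies `query_model` as a stub
# or a real API wrapper.
TEMPLATE = ("Summarize the visit for a {age}-year-old {race} {sex} "
            "patient presenting with chest pain.")

# Example demographic attributes to vary (illustrative values only).
DEMOGRAPHICS = {
    "age": ["35", "70"],
    "race": ["Black", "white"],
    "sex": ["male", "female"],
}

def build_counterfactual_prompts(template, demographics):
    """Return (attributes, prompt) pairs for every attribute combination."""
    keys = list(demographics)
    prompts = []
    for values in product(*(demographics[k] for k in keys)):
        attrs = dict(zip(keys, values))
        prompts.append((attrs, template.format(**attrs)))
    return prompts

def audit(prompts, query_model, score):
    """Score each model output and average scores per attribute value."""
    by_value = {}
    for attrs, prompt in prompts:
        s = score(query_model(prompt))
        for k, v in attrs.items():
            by_value.setdefault((k, v), []).append(s)
    # Mean score per (attribute, value); large gaps within one attribute
    # (e.g. "sex": male vs. female) suggest demographic disparities.
    return {kv: sum(vals) / len(vals) for kv, vals in by_value.items()}
```

In a real study, `score` would be a clinically meaningful metric (for example, summary length, readability, or treatment-recommendation rate) rather than a toy function, and differences would need statistical testing before being interpreted as bias.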
Prof. Chen is an Assistant Professor in Computational Precision Health and Electrical Engineering and Computer Science. Her research group values curiosity, collaboration, compassion, and inclusion. We uphold rigorous standards of quality research and ethics, while ensuring each person is regarded first as a human with goals that are understood and supported. Our culture enables scholars to produce their best work.
Qualifications: Ideal candidates will be undergraduate students with coursework or experience in areas such as machine learning, natural language processing, and model evaluation. The ability to code in Python is preferred. Students are expected to meet with the faculty mentor twice a week, and will be trained to conduct literature reviews, formulate research problems, and engage in scientific writing.
Hours: 12 or more
Related website: http://irenechen.net
Biological & Health Sciences; Digital Humanities and Data Science