Empirical Analysis & Data Visualization -- Behavioral Evidence & Algorithmic Bias
Diag Davenport, Professor
School of Information
Applications for Spring 2026 are closed for this project.
This project examines how institutions shape behavior not only through incentives but also through responsibility, timing, and attention, and how individuals and algorithms adapt to those institutional environments.
Across a set of empirical and experimental papers, we study situations where people are placed into short-run institutional “states” (e.g., waiting for a court date, interacting with an algorithmic system) that alter what problems they are effectively solving. Rather than changing information, prices, or formal incentives, these institutional contexts can raise or lower psychological and practical frictions, shift salience, and reallocate responsibility, producing meaningful changes in behavior without any change in underlying preferences or outcomes.
One line of work uses large administrative and observational data to study how the timing of routine legal or bureaucratic procedures affects downstream behavior, such as short-term increases in healthcare utilization that dissipate quickly and do not reflect worsening health or provider behavior. Another line uses controlled laboratory experiments with computational components to study how algorithmic decision systems can amplify or diffuse bias across groups, including intersectional patterns that are not visible in aggregate statistics.
A core goal of the project is to make these mechanisms visible and interpretable through careful data visualization, robustness analysis, and clear empirical diagnostics. The work emphasizes reproducibility, transparency, and the translation of complex empirical results into figures that reveal how institutional design subtly but powerfully shapes human and algorithmic behavior.
Role: Tasks
The undergraduate researcher will support ongoing empirical and experimental research by focusing on data visualization, figure development, and robustness exploration. Typical tasks include:
Creating clear, publication-quality figures (e.g., event-study plots, decay curves, heterogeneity panels, diagnostic plots) from existing analysis code and datasets.
Improving figure design and interpretability by refining labels, annotations, scales, and layout to better communicate mechanisms and empirical patterns.
Running structured robustness and sensitivity checks (e.g., alternative time windows, subsamples, functional forms) and summarizing results visually.
Producing diagnostic plots (pre-trend checks, placebo analyses, distributional checks) to help evaluate identifying assumptions.
Organizing figure pipelines so outputs are reproducible and easy to update as analysis evolves.
Documenting changes and interpretations so results can be incorporated cleanly into paper drafts or appendices.
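As a flavor of the figure work above, the following is a minimal sketch of an event-study plot built from a long panel, using synthetic data. All names here (the columns unit, event_time, y, and the helper event_study_estimates) are hypothetical illustrations, not the project's actual code; the real pipelines adapt existing analysis scripts and datasets.

```python
# Minimal sketch: an event-study figure from a long panel (synthetic data).
# Column names and the helper function are hypothetical examples.
import numpy as np
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt

def event_study_estimates(df, outcome="y", time_col="event_time"):
    """Mean outcome and 95% CI at each event time, normalized to t = -1."""
    g = df.groupby(time_col)[outcome]
    est = g.mean()
    se = g.sem()
    baseline = est.loc[-1]  # difference everything out relative to the pre-period
    return pd.DataFrame({
        "estimate": est - baseline,
        "lo": est - baseline - 1.96 * se,
        "hi": est - baseline + 1.96 * se,
    })

# Synthetic panel: a short-lived post-event bump that decays toward zero
rng = np.random.default_rng(0)
rows = []
for unit in range(500):
    for t in range(-4, 7):
        bump = 0.8 * np.exp(-0.7 * t) if t >= 0 else 0.0
        rows.append({"unit": unit, "event_time": t, "y": bump + rng.normal(0, 1)})
df = pd.DataFrame(rows)

est = event_study_estimates(df)

fig, ax = plt.subplots(figsize=(6, 3.5))
ax.axhline(0, color="gray", lw=0.8)
ax.axvline(-0.5, color="gray", lw=0.8, ls="--")
ax.errorbar(est.index, est["estimate"],
            yerr=[est["estimate"] - est["lo"], est["hi"] - est["estimate"]],
            fmt="o", capsize=3)
ax.set_xlabel("Periods relative to event")
ax.set_ylabel("Change in outcome (vs. t = -1)")
ax.set_title("Event-study sketch: short-run effect that decays")
fig.tight_layout()
fig.savefig("event_study_sketch.png", dpi=200)
```

The same decomposition (point estimates plus interval bounds in a tidy frame) extends naturally to the robustness tasks: rerun event_study_estimates on alternative windows or subsamples and overlay the resulting series in one panel.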
Learning Outcomes
Through this role, the undergraduate will:
Learn how empirical social science research moves from analysis to evidence, and how figures are used to test assumptions, not just present results.
Gain hands-on experience with data visualization and exploratory robustness analysis in Python or R.
Develop an understanding of how institutions, algorithms, and behavioral frictions are studied empirically.
Build research craftsmanship skills, including reproducibility, documentation, and attention to detail.
Gain exposure to the standards and workflows of publishable academic research in economics and computational social science.
Qualifications: Required
Enrollment as an undergraduate student.
Comfort working with data in Python or R (e.g., pandas and matplotlib in Python, or ggplot2 in R), including loading datasets, modifying scripts, and generating plots.
Ability to read and follow existing analysis code and adapt it for new figures or robustness checks.
Strong attention to detail and willingness to iterate on figures and outputs.
Reliability, organization, and clear communication about progress and questions.
Preferred (but not required)
Prior coursework or experience in economics, statistics, data science, computer science, or a related field.
Experience with data visualization beyond defaults (e.g., thoughtful use of scales, annotations, multi-panel figures).
Familiarity with version control (e.g., Git) or reproducible research workflows.
Interest in behavioral economics, institutions, public policy, or algorithmic fairness.
Experience working on longer-term projects where polish and documentation matter.
Good fit if you enjoy
Turning messy outputs into clean, interpretable figures.
Stress-testing results to see what holds up and what doesn’t.
Working on research that sits at the intersection of social science, policy, and computation.
Hours: 12 or more
Education, Cognition & Psychology; Social Sciences