Improving Human-AI Interactions in the Future of Work (Data Science/Econometrics/Algorithms)
Park Sinchaisri, Professor
Business, Haas School
Closed. This professor is continuing with Fall 2025 apprentices on this project; no new apprentices are needed for Spring 2026.
Our lab works on human-AI interfaces for applications in the future of work. The overarching goals are to (i) understand how humans learn and make decisions in complex environments, (ii) design interpretable algorithms that improve human decision-making, and (iii) apply our frameworks in real-world settings, accounting for interpretability/explainability, human biases, and the conditions under which people resist or ignore algorithmic advice.
Ongoing project themes include (non-exhaustive):
1. Impact of algorithmic precision on human sequential decision-making
2. Improving worker learning in the future of work
3. Optimal design of incentives for the future of work
4. Modeling human-AI interactions in various settings
5. Improving teachers’ productivity with generative AI
6. Improving how humans learn to use new tools (e.g., AI)
7. Human learning across tasks and contexts
8. Algorithmic management and its impact on human behavior
9. Recommendation systems for a network of agents (multi-agent recommendations/coordination)
10. Designing performance feedback with AI (how feedback format, timing, and explanations change behavior)
11. Improving team collaboration with AI (coordination, division of labor, shared context)
Role: This track is for students who want to do serious research with real-world data at the intersection of human-AI interfaces, behavioral operations, and the future of work. You will work directly with datasets from global companies across multiple industries, including ride-hailing, on-demand delivery, crowdsourcing, freelancing platforms, and consumer product manufacturing.
The goal is to use data to answer high-impact research questions about how humans learn, make decisions, and respond to algorithms, incentives, and information. Depending on the project, we will either:
- draw credible causal conclusions using modern statistics/econometrics, or
- recover decision policies and behavioral mechanisms using applied ML (for example, inverse reinforcement learning and related methods).
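To make the first route concrete, here is a minimal, illustrative difference-in-differences sketch on fully synthetic data. Everything in it (the effect size, the noise, the variable names) is invented for illustration; real projects would use panel regressions with fixed effects and clustered standard errors rather than raw cell means.

```python
import random
from statistics import mean

random.seed(0)

# Synthetic panel: workers observed before/after a hypothetical policy change.
# Treated workers receive a true effect of +2.0 on the outcome; all numbers
# here are fabricated purely for illustration.
TRUE_EFFECT = 2.0

def outcome(treated, post):
    # Group and time baselines plus the treatment effect, with Gaussian noise.
    base = 10.0 + (1.5 if treated else 0.0) + (0.5 if post else 0.0)
    effect = TRUE_EFFECT if (treated and post) else 0.0
    return base + effect + random.gauss(0, 1)

data = [
    {"treated": t, "post": p, "y": outcome(t, p)}
    for t in (True, False)
    for p in (True, False)
    for _ in range(2000)
]

def cell_mean(treated, post):
    return mean(d["y"] for d in data if d["treated"] == treated and d["post"] == post)

# Difference-in-differences: change among treated minus change among controls.
did = (cell_mean(True, True) - cell_mean(True, False)) - (
    cell_mean(False, True) - cell_mean(False, False)
)
print(f"DiD estimate: {did:.2f} (true effect {TRUE_EFFECT})")
```

With enough observations per cell, the estimate recovers the planted effect; the identifying assumption (parallel trends) holds here by construction, which is exactly what must be argued, not assumed, with real data.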
Typical tasks:
- Build reproducible pipelines for large-scale behavioral/operational data (cleaning, merging, validation checks, documentation)
- Design and implement empirical strategies:
  - Econometrics: fixed effects, event studies, difference-in-differences, IV/RD when appropriate, clustering choices, robustness checks
  - ML for behavior/policy: discrete choice models, representation learning as needed, inverse RL / imitation learning, off-policy evaluation, counterfactual prediction
- Create “research-grade” outputs: publication-quality figures and tables; clear result summaries with assumptions, diagnostics, and limitations; and well-documented code and analysis that another RA can reproduce
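As an illustration of the second strategy above, here is a minimal off-policy evaluation sketch using ordinary importance sampling on synthetic bandit-style logs. The logging policy, reward values, and field names are all assumptions made up for this example, not part of any actual project dataset.

```python
import random

random.seed(1)

# Synthetic logs: a logging policy chose between two recommendations
# (a=0 or a=1) uniformly at random; action 1 has a higher expected reward.
def true_reward(a):
    return (0.7 if a == 1 else 0.3) + random.gauss(0, 0.1)

logs = []
for _ in range(5000):
    a = random.randint(0, 1)  # logging policy: uniform, probability 0.5 each
    logs.append({"a": a, "p_log": 0.5, "r": true_reward(a)})

# Target policy to evaluate offline: always recommend action 1.
def pi_target(a):
    return 1.0 if a == 1 else 0.0

# Ordinary importance-sampling estimate of the target policy's value:
# V_hat = (1/n) * sum over logs of (pi_target(a) / p_log(a)) * r
v_hat = sum(pi_target(d["a"]) / d["p_log"] * d["r"] for d in logs) / len(logs)
print(f"Estimated value of target policy: {v_hat:.2f} (true value ~0.70)")
```

The reweighting corrects for the mismatch between the logging and target policies; in practice one would also examine the weight distribution and consider variance-reduction variants before trusting such an estimate.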
Qualifications:
- Strong coding ability in Python (preferred) and/or R, with comfort handling large datasets
- Evidence of strength in econometrics/causal inference and/or applied ML (coursework or substantial projects)
- Ability to reason carefully about assumptions, validity threats, and robustness
- High standards for documentation and reproducibility (clean code, clear notes, version control)
Hours: 9-11 hrs
Off-Campus Research Site: Remote is possible.
Related website: https://parksinchaisri.github.io
Related website: https://parksinchaisri.github.io/files/paper-tips.pdf