Fairness of Machine-Assisted Decisions


Using algorithms to support, rather than determine, a decision raises its own fairness challenges, as Jann Spiess discussed in a recent CHDS seminar. Spiess is an Assistant Professor of Operations, Information, and Technology at Stanford University who works on integrating techniques and insights from machine learning into the econometric toolbox. His recent research focuses on (1) building robust tools for causal inference, (2) modeling data-driven decisions with conflicts of interest, including the design of pre-analysis plans, algorithmic fairness, and AI regulation, and (3) understanding human–AI interaction.

In the seminar, Spiess discussed the difference between automation, where algorithms decide the course of action, and assistance, where algorithms help humans reach a decision. He noted that when machine-learning algorithms are deployed in high-stakes decisions, we want to ensure that their deployment leads to fair outcomes. This concern has motivated a fast-growing literature that focuses on disparities in machine predictions.

However, many algorithms are deployed to assist rather than replace human decision-makers, and bias remains an important concern in this setting. Spiess described how a biased human decision-maker can interact with an algorithm in ways that reverse common relationships between the structure of the algorithm and the quality of the resulting decisions. His team developed and implemented a formal model for assessing the fairness of algorithm-assisted human decisions with respect to any relevant characteristic, comparing decisions based on predictions that are "blind" to group membership with decisions based on predictions that are "aware" of it. Using a stylized example as well as experimental data, he showed that excluding information about such characteristics from the prediction may fail to reduce, and may even increase, disparities in the resulting decisions.
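
The following is a minimal, self-contained Python sketch of the kind of comparison this framework makes. It is not the model from the paper, and every number in it is hypothetical: it simply illustrates how, when a measured feature understates one group's quality and the human decision-maker is biased, a group-"blind" prediction can leave a larger decision disparity than a group-"aware" prediction that corrects the feature's group shift.

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Stylized population: two groups with identical latent quality, but a
# measured feature (e.g., a test score) that understates group 1's quality.
g = rng.integers(0, 2, n)                     # group membership (0 or 1)
quality = rng.normal(0.0, 1.0, n)             # true latent quality
feature = quality - 0.8 * g + rng.normal(0.0, 0.5, n)

# A group-"aware" prediction can correct the feature's group shift;
# a group-"blind" prediction cannot.
pred_aware = feature + 0.8 * g
pred_blind = feature

# Biased human decision-maker: combines the machine prediction with a
# hypothetical penalty against group 1, then applies a cutoff.
human_bias = -0.4 * g

for name, pred in [("aware", pred_aware), ("blind", pred_blind)]:
    decision = (pred + human_bias) > 0.0      # e.g., approve / treat / hire
    disparity = decision[g == 0].mean() - decision[g == 1].mean()
    print(f"{name:5s} prediction: decision-rate disparity = {disparity:+.3f}")

In this stylized setup, blinding the prediction does not blind the decision: the human still observes group membership, and the uncorrected feature compounds the human's bias, so the "blind" run prints a larger disparity than the "aware" run.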

His work highlights the intricate dynamics of human–machine interaction. It has many implications for health care, where patients and clinicians must weigh a large number of factors in their decisions.

Learn more: Read the publication, On the Fairness of Machine-Assisted Human Decisions
Learn more: Read the publication, Algorithmic Assistance with Recommendation-Dependent Preferences

Related news: Deep Data Analysis and Intelligent Policy Design
Related news: Learning Optimal Treatment Rules