If you want decisions that are less biased . . . use algorithms
This Harvard Business Review article is highly relevant to mediation, where parties and lawyers must make decisions amid uncertainty. In litigated disputes, lawyers advise clients and stake out bargaining positions based on their predictions of what judges and juries will or won’t do. If machines can make better predictions in other areas of life, why not in litigation?
The article reports a “steady increase in the automation” of decisions made by people. Despite flashy news headlines, “it is fairly conventional machine learning and statistical techniques — ordinary least squares, logistic regression, decision trees — that are adding real value to the bottom line of many organizations. Real-world applications range from medical diagnoses and judicial sentencing to professional recruiting and resource allocation in public agencies.” Other areas where algorithms outperform human decision makers include credit applications, job screenings, and corporate governance.
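To make “fairly conventional” concrete, here is a minimal sketch of one of the techniques the article names, a logistic regression, applied to an invented litigation-flavored prediction task. The feature names, data, and weights below are hypothetical illustrations, not anything drawn from the article or a real case dataset:

```python
# A minimal sketch of a "fairly conventional" technique: logistic
# regression predicting a binary outcome (say, plaintiff win/loss)
# from a few hypothetical case features. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per case: e.g., claim size (standardized),
# years in dispute, and a flag for contested liability.
X = rng.normal(size=(200, 3))

# Synthetic outcomes generated from a simple linear rule plus noise,
# so the model has a real signal to recover.
true_weights = np.array([1.5, -0.8, 0.5])
y = (X @ true_weights + rng.normal(size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)
print("Learned coefficients:", model.coef_)              # one weight per feature
print("Predicted win probability:", model.predict_proba(X[:1])[0, 1])
```

The point is not this particular model but how little machinery is involved: a few lines of ordinary statistics, not exotic AI, are the kind of systems the article says are adding real value.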
Although many critics “have done a good job in disabusing us of the notion that algorithms are purely objective . . . critics . . . rarely ask how well the systems . . . would operate without algorithms. And that is the most relevant question for practitioners and policy makers: How do the bias and performance of algorithms compare with the status quo? Rather than simply asking whether algorithms are flawed, we should be asking how these flaws compare with those of human beings.”
Although algorithms are biased to some degree (because they are trained with data biased by people), “the humans they are replacing are significantly more biased.” “Decades of psychological research in judgment and decision making has demonstrated time and time again that humans are remarkably bad judges of quality in a wide range of contexts. . . . and we have known since at least the 1950s that very simple mathematical models outperform supposed experts at predicting important outcomes in clinical settings.”
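The “very simple mathematical models” from that clinical-prediction literature can be as plain as an equal-weights sum: standardize each predictor and add them up. The sketch below illustrates that idea with invented numbers; the predictors and cases are hypothetical, chosen only to show how little the method requires:

```python
# An equal-weights ("improper") linear model, in the spirit of the
# classic clinical-prediction research: z-score each predictor, weight
# every one equally, and sum. No fitting step is needed at all.
import numpy as np

def unit_weighted_score(X):
    """Z-score each column, then sum across predictors.
    Higher composite scores predict the outcome of interest."""
    z = (X - X.mean(axis=0)) / X.std(axis=0)
    return z.sum(axis=1)

# Three hypothetical predictors for five cases (one row per case).
X = np.array([
    [4.0, 2.0, 1.0],
    [2.5, 5.0, 0.0],
    [3.8, 1.0, 1.0],
    [1.2, 4.5, 0.0],
    [4.5, 3.0, 1.0],
])
print(np.round(unit_weighted_score(X), 2))  # rank cases by composite score
```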
The article does not argue in favor of “algorithmic absolutism or blind faith in the power of statistics. If we find in some instances that algorithms have an unacceptably high degree of bias in comparison with current decision-making processes, then there is no harm done by following the evidence and maintaining the existing paradigm. But a commitment to following the evidence cuts both ways, and we should be willing to accept that — in some instances — algorithms will be part of the solution for reducing institutional biases. So the next time you read a headline about the perils of algorithmic bias, remember to look in the mirror and recall that the perils of human bias are likely even worse.”