Applied together, machine learning and behavioral science have the potential to significantly magnify social impact. Yet, as machine learning algorithms become more prevalent in the systems people use to make important decisions, there is deep, and not unfounded, concern that algorithms – even those designed with social impact in mind – will perpetuate bias. As we continue to integrate machine learning into our work at ideas42, we are approaching it with the same commitment to mitigating bias and intention to positively impact people’s lives that we use in all of our work.
The myriad ways in which machine learning and algorithmic decision aids can create or reinforce unfair outcomes have been well documented in the academic literature. Problematic applications of machine learning continue to make headlines, and multiple nonprofits, such as the AI Now Institute at New York University, have released in-depth reports on the ethical challenges that arise when using machine learning or artificial intelligence. In both the literature and the media, we see examples of bias across each stage of algorithm development and deployment.
In many (but not all) cases, biased outcomes originate from biased inputs. Like humans, machines learn from patterns in data, so when data is incomplete, unrepresentative, or reflects existing inequities, an algorithm will replicate those patterns in its predictions. But the fact that algorithms can pick up on historical biases does not mean that we are powerless to design models we would consider fair. Rather, it means that as researchers, we are responsible for a series of important decisions – beginning with which data we use. Throughout projects, there are opportunities to identify and combat bias.
When designing and evaluating algorithms, we must:
- Incorporate ethical principles and commitments into partner agreements to ensure accountability
- Conduct “data ethnographies” to understand where our data originates and who it represents; make sure datasets are as complete and representative of the full population as possible
- Include key stakeholders and affected communities throughout model development, including selecting a label (the variable a model will predict as its outcome), defining accuracy criteria, and determining what constitutes “fair” outcomes in a particular context
- Pursue computational solutions (e.g. packages for R and Python) that make machine learning models easier for humans to interpret
- Carefully evaluate whether outcomes meet our clearly articulated social impact goals and fairness standards; where appropriate, use tools like Microsoft’s Fairlearn to assess how specific groups would be impacted by a model
- Establish long-term plans for routine validation of machine learning models and make the process for appealing or correcting decisions clear
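To make the group-level fairness assessment concrete, a check of the kind that tools like Fairlearn automate can be sketched in a few lines of plain Python. The predictions, group labels, and metric below are hypothetical placeholders; a real audit would use Fairlearn’s own metrics and the fairness definition agreed upon with stakeholders.

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Share of positive predictions (e.g. approvals) for each group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_difference(predictions, groups):
    """Gap between the highest and lowest group selection rates (0 = parity)."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = positive decision) and group membership
preds = [1, 0, 1, 1, 0, 1, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(selection_rates(preds, group))                # {'a': 0.75, 'b': 0.25}
print(demographic_parity_difference(preds, group))  # 0.5
```

Demographic parity is only one of several competing fairness definitions; which one is appropriate is exactly the kind of question we resolve with community stakeholders rather than by default.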
As behavioral scientists, we are also keenly aware of the problems that can arise beyond the development phase, when an algorithm is implemented in the real world. Algorithms are often used as “decision aids,” meaning a human consults an algorithmic tool for advice or information, and then makes a final decision themselves. If an algorithm produces recommendations that we consider fair, but a human decision-maker either ignores or systematically counters those recommendations, we might see null or biased results. Knowing this, our responsibility extends beyond the creation of unbiased algorithms: we must also design decision-aid systems that people will use, and then empirically test whether adoption matches our expectations. This is exactly how we approach the design and testing of all of our behavioral work: ensuring that things work for how people actually behave in the real world.
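One way to test empirically whether adoption matches expectations is to measure how often decision-makers actually follow the algorithm’s recommendations, overall and broken down by affected group. The sketch below uses hypothetical logs; in a real deployment the recommendations, final decisions, and group definitions would come from the system itself.

```python
def adherence_rate(recommendations, decisions):
    """Fraction of cases where the human decision matched the recommendation."""
    matches = sum(r == d for r, d in zip(recommendations, decisions))
    return matches / len(recommendations)

def adherence_by_group(recommendations, decisions, groups):
    """Adherence per group, to surface systematic overrides for some groups."""
    rates = {}
    for g in sorted(set(groups)):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = adherence_rate([recommendations[i] for i in idx],
                                  [decisions[i] for i in idx])
    return rates

# Hypothetical logs: 1 = recommend/decide in favor, 0 = against
recs      = [1, 1, 0, 1, 1, 0]
decisions = [1, 1, 0, 0, 0, 0]
group     = ["a", "a", "a", "b", "b", "b"]

print(adherence_rate(recs, decisions))             # ≈ 0.67 overall
print(adherence_by_group(recs, decisions, group))  # {'a': 1.0, 'b': 0.33...}
```

In this toy example, decision-makers follow every recommendation for group “a” but override most of them for group “b” – the kind of pattern that can make a fair algorithm produce biased outcomes in practice.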
Underlying our approach to machine learning is an understanding that human decision-makers are also far from neutral: in some cases, it may be even harder to correct for human bias than for algorithmic bias. We must contend with biased decision-making, ambiguous definitions of fairness, and a lack of transparency with or without algorithms – and we’re committed to tackling these problems in all of the work that we do.
Our five guiding principles for ethical machine learning at ideas42 are engagement, rigorous review, accountability, privacy, and transparency.
1) Engagement
- Since we know the definition of “fair” is subjective, we make decisions in tandem with the people who know the context best and the people who will be affected by the pilot or project. We include these stakeholders throughout our process, and define “fair” outcomes in a way that is agreeable to community members.
2) Rigorous review
- As researchers, our decision-making throughout a project matters a lot – from the partners we choose, to the data we collect, to the methods we use, to the way we evaluate outcomes. At each stage of the process, we are developing resources to help our teams think critically about these decisions.
- We continue to stay informed about the latest updates in this evolving field. This means keeping track of academic papers, news articles, and the most up-to-date best practices for applied machine learning.
3) Accountability
- We evaluate (or ensure that our partners evaluate) outcomes using rigorous testing methods to assess both social welfare outcomes and whether outcomes meet the standard of fairness agreed upon by community stakeholders.
- We continuously evaluate (or ensure that systems are in place to continuously evaluate) the results of our model implementations and assess how impacts change over time, both for social welfare outcomes and for fairness of those outcomes. We create systems for ideas42 team members and stakeholders to raise concerns about or dispute decisions.
4) Privacy
- We ensure that data is collected with consent and that all parties are practicing good data stewardship, including data storage, management, security, access, and disposal.
5) Transparency
- We keep careful documentation of decision-making throughout the process of creating and deploying algorithms. We share that documentation with community stakeholders as we gather their input. We opt for models that are more interpretable, where possible.
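The continuous-evaluation commitment above can be operationalized as a routine check that compares a model’s performance in each new time window against its performance at deployment, flagging windows where it has degraded. The windows, baseline, and tolerance below are hypothetical placeholders; the same check can be run on fairness metrics as well as accuracy.

```python
def accuracy(y_true, y_pred):
    """Share of predictions that matched the observed outcome."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def flag_drift(windows, baseline_acc, tolerance=0.05):
    """Flag any time window whose accuracy falls more than `tolerance`
    below the accuracy measured at deployment (baseline_acc)."""
    alerts = []
    for name, (y_true, y_pred) in windows.items():
        acc = accuracy(y_true, y_pred)
        if baseline_acc - acc > tolerance:
            alerts.append((name, acc))
    return alerts

# Hypothetical quarterly outcome data gathered after deployment
windows = {
    "2021-Q1": ([1, 0, 1, 1], [1, 0, 1, 1]),  # accuracy 1.0 - fine
    "2021-Q2": ([1, 0, 1, 1], [1, 1, 0, 1]),  # accuracy 0.5 - degraded
}
print(flag_drift(windows, baseline_acc=0.9))  # [('2021-Q2', 0.5)]
```

A flagged window would trigger the review and dispute processes described above rather than an automatic model change.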
Best practices for fair, accountable and transparent machine learning are rapidly evolving, as are efforts to consolidate these principles. As the field continues to develop, we are committed to iterating on our approach and learning from others. We encourage you to reach out to firstname.lastname@example.org with any questions or thoughts.
See, for example:
Gillis and Spiess (2018). Big Data and Discrimination. Harvard Law School. Discussion Paper Series No. 84.
Kleinberg, J., Ludwig, J., Mullainathan, S., and Rambachan, A. (2018). Algorithmic Fairness. AEA Papers and Proceedings. 108: 22-27.
For example:
Metz, C. (2019, Nov 11). We Teach A.I. Systems Everything, Including Our Biases. The New York Times.
Lohr, S. (2018, Feb 9). Facial Recognition Is Accurate, if You’re a White Guy. The New York Times.
For an example of label selection gone wrong in a healthcare algorithm, see: Obermeyer, Z., Powers, B., Vogeli, C., and Mullainathan, S. (2019). Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science. 366(6464): 447-453.
To learn more about different ways to define “fairness,” see this interactive webpage from Google Research.
Sendak, M.P., et al. (2020). A Path for Translation of Machine Learning Products into Healthcare Delivery. EMJ Innovations.
Mullainathan, S. (2019, Dec 6). Biased Algorithms Are Easier to Fix Than Biased People. The New York Times.
For example, this 2020 report from Harvard University analyzed 36 “principles documents” from across the world, and identified key themes and areas where these approaches converge.