By Merage Ghane and Ted Robertson

As a leader in creating strategies and validated solutions for bias evaluation and mitigation in artificial intelligence (AI), ideas42 welcomes the Biden Administration’s recent actions to promote guidelines for the responsible use of AI in healthcare. Advances in behavioral science offer powerful tools to guide the management and use of algorithms across their entire lifecycle.

The announcement of the October 30 Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, including its efforts to mitigate bias in AI and its specific support for a Health and Human Services office to oversee those efforts, is an encouraging step in line with ideas42’s vision. We envision a future where AI and machine learning (ML) tools are well designed and well managed, with the primary goal to first “do no harm” while also improving quality of care and quality of life for those who need it most.

The new Executive Order joins other recent advances in ML guidelines, such as the White House Blueprint for an AI Bill of Rights, the National Institute of Standards and Technology (NIST) AI management framework, the FDA Guidance for AI, and the Coalition for Health AI (CHAI) Blueprint for Trustworthy AI.

AI/ML tools offer tremendous potential in healthcare, including improved diagnosis, precision medicine, and value-based care. But they also risk perpetuating existing biases and systemic disparities, or introducing new ones: evidence of unintended racial bias, for example, was already demonstrated in a 2019 Science study.

Drawing on 15 years of experience as an applied behavioral science leader, ideas42 conceptualizes AI/ML tools not solely as a computer-driven process but as a series of human decisions. The primary risk of bias therefore arises from human choices made in designing, implementing, and using ML models: problem definition, data collection and selection, adjustments for imperfect data, choice of prediction and performance metrics, deployment, end use, and monitoring.

We have already created initial solutions for these human-driven sources of bias and are now discovering additional ways to mitigate bias beyond data proxies in the algorithm, such as addressing gaps in race and ethnicity data, improving documentation and transparency around the use and risks of algorithms, and better evaluating end-user behavior during the testing and deployment of health AI tools.

We are leveraging our expertise and collaborating with other healthcare organizations to develop industry and regulatory standards for mitigating bias in health AI, and to create a roadmap for the next 15 years for ongoing standards administration, innovation, and adoption.

ideas42 is honored to be part of CHAI, alongside Duke Health, the Mayo Clinic, and Change Healthcare, working to translate broad AI guidelines and insights into specific regulatory and industry standards, and supporting the Biden White House, the federal Department of Health and Human Services, and the FDA.

The focus for the field now is shifting from principles to specific standards and practices to combat bias in algorithms. Specifically, for ideas42 this entails:

  • Collaborating with CHAI and other stakeholders on the development of a technical standard for fair and trustworthy health AI.
  • Sharing proposed standards with health systems, governments, and community organizations for feedback.
  • Advocating for transparency, inclusive design, and the role and voice of smaller health organizations and disadvantaged communities.
  • Once a standard is refined, helping to create a strategy for a full compliance ecosystem through the creation of a Health AI Standards Administrative Organization.
  • Continuing to advise HHS and other regulatory bodies on operationalizing equitable regulations in health AI, through public comment and private consultations.
  • Creating a long-term roadmap for innovation in bias mitigation, focusing on more equitable processes around data collection, data quality, documentation, performance metrics, end-user integration of algorithms, and organization operations and governance. 

President Biden’s Executive Order is an encouraging and necessary step to keep momentum going on this important, collaborative work to ensure that the rapid growth of AI and ML models doesn’t come at the cost of responsible, de-biased design. We would also welcome further leadership from the executive branch, such as folding AI bias mitigation under HHS’s Section 1557 nondiscrimination rule and requiring compliance with the standards that the Coalition for Health AI establishes.

Interested in learning more about ideas42’s work to develop ethical AI and ML standards and practices through a behavioral science lens? Get in touch at info@ideas42.org.