Bias in AI GitHub Open Source Toolkits

November 18, 2020


Toolkits are intended to supplement, not replace, the data scientist accountability and governance processes required to identify, define, detect, and correct potential biases. This list of bias-in-AI / ML-fairness GitHub toolkits only provides link references to toolkits in use today by companies, research departments, and organizations; it does not endorse or vouch for any toolkit’s ability to deliver fairness, robustness, or explainability in either data models or ML outcomes.

Aequitas

Aequitas is Latin for justice, equality, or fairness. It is an open-source bias audit toolkit for machine learning developers, analysts, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive risk-assessment tools.
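
For a rough sense of the workflow, the sketch below runs Aequitas’s group crosstabs and disparity calculations on a toy DataFrame. It follows the documented `score` / `label_value` column convention; the toy data and the `race` attribute are assumptions for illustration, not part of the toolkit.

```python
# Minimal Aequitas sketch: per-group crosstabs, then disparities vs. a
# reference group. The toy data and the "race" attribute are assumptions.
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias

df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],   # binary model decisions
    "label_value": [1, 0, 1, 0, 0, 1, 1, 0],   # ground-truth outcomes
    "race":        ["a", "a", "a", "a", "b", "b", "b", "b"],
})

g = Group()
xtab, _ = g.get_crosstabs(df)   # per-group counts and metrics (FPR, FNR, ...)

b = Bias()
bdf = b.get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"race": "a"}
)
print(bdf[["attribute_name", "attribute_value", "fpr_disparity"]])
```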

AI Fairness 360 (IBM)

Helps developers examine, report, and mitigate bias and discrimination within their machine learning models and throughout the AI application lifecycle. The toolkit contains more than 70 fairness metrics and 11 unique bias mitigation algorithms developed within the research community, designed to translate algorithmic research from the lab into real-life practice across industries including finance, human capital management, healthcare, and education.
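
A minimal sketch of that workflow, assuming a toy pandas DataFrame with a single protected attribute (`sex`), might look like this:

```python
# Minimal AIF360 sketch: measure group fairness on a dataset, then apply one
# of the pre-processing mitigations. The toy DataFrame, its columns, and the
# group encodings below are illustrative assumptions, not part of AIF360.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],
    "score": [0.9, 0.7, 0.6, 0.2, 0.8, 0.4, 0.5, 0.3],
    "label": [1, 1, 1, 0, 1, 0, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"]
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())

# Reweighing balances group/label combinations before training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_reweighted = rw.fit_transform(dataset)
```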

Audit AI (Pymetrics)

Designed to measure and mitigate the effects of discriminatory patterns in training data and in the predictions made by machine learning algorithms used for socially sensitive decision processes.
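
The snippet below is not the audit-AI API itself; it is a plain pandas illustration of the kind of adverse-impact (four-fifths rule) check the library automates, with column names assumed for the example.

```python
# Concept sketch (plain pandas, not the audit-AI API): compare selection
# rates between groups and apply the "four-fifths" adverse-impact rule.
import pandas as pd

df = pd.DataFrame({
    "group":    ["a", "a", "a", "a", "b", "b", "b", "b"],
    "selected": [1, 1, 1, 0, 1, 0, 0, 0],
})

rates = df.groupby("group")["selected"].mean()
impact_ratio = rates.min() / rates.max()
print(rates)
print("adverse impact ratio:", impact_ratio)
print("passes 4/5ths rule:", impact_ratio >= 0.8)
```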

Deon

A command line tool that allows you to easily add an ethics checklist to your data science projects.

FairML

A Python toolbox for auditing machine learning models for bias. Analysts can more easily audit cumbersome, black-box predictive models that are difficult to interpret, along with their corresponding input data.
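
A minimal sketch, assuming a scikit-learn classifier and a toy DataFrame: FairML’s `audit_model` perturbs each input column and ranks how strongly the model’s predictions depend on each attribute.

```python
# Minimal FairML sketch: audit a trained model's dependence on each input.
# The sklearn model and the toy DataFrame below are illustrative assumptions.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from fairml import audit_model

X = pd.DataFrame({
    "income": [30, 45, 60, 25, 80, 50, 40, 70],
    "age":    [22, 35, 50, 28, 60, 41, 33, 55],
    "gender": [0, 1, 0, 1, 0, 1, 0, 1],
})
y = [0, 1, 1, 0, 1, 1, 0, 1]

clf = LogisticRegression().fit(X, y)

# Perturb each column and measure the change in predictions, giving a rough
# ranking of how much the model depends on each attribute (e.g. "gender").
importances, _ = audit_model(clf.predict, X)
print(importances)
```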

FairTest

FairTest enables developers or auditing entities to discover and test for unwarranted associations between an algorithm’s outputs and certain user subpopulations identified by protected features.

Fairness Gym 

A set of components for building simple simulations that explore the potential long-run impacts of deploying machine learning-based decision systems in social environments. It aims to explain the implications of different decisions made when training algorithms and make it easier to see intended and unintended consequences of those decisions.
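
The loop below is not the Fairness Gym API; it is only a toy, pure-Python illustration of the feedback-loop idea the library is built to study, where one fixed decision threshold gradually shifts each group’s score distribution over repeated decisions. All numbers are assumptions.

```python
# Concept sketch only (not the Fairness Gym API): a toy feedback loop in
# which lending decisions under one fixed threshold gradually shift each
# group's average credit score over time.
import random

random.seed(0)
threshold = 0.5                      # same decision threshold for both groups
means = {"a": 0.60, "b": 0.45}       # assumed starting score distributions

for _ in range(500):
    for group in means:
        score = random.gauss(means[group], 0.1)
        if score >= threshold:                   # loan granted
            repaid = random.random() < score     # higher scores repay more often
            delta = 0.005 if repaid else -0.010
            means[group] = min(1.0, max(0.0, means[group] + delta))
        # rejected applicants keep their current score

print(means)  # long-run gap between groups under a single fixed threshold
```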

LinkedIn Fairness Toolkit (LiFT)

A Scala/Spark library that enables the measurement of fairness in large scale machine learning workflows. The library can be deployed in training and scoring workflows to measure biases in training data, evaluate fairness metrics for ML models, and detect statistically significant differences in their performance across different subgroups. It can also be used for ad-hoc fairness analysis.

Fairlearn (Microsoft)

Fairlearn is a Python package that empowers developers of artificial intelligence (AI) systems to assess their system’s fairness and mitigate any observed unfairness issues. Fairlearn contains mitigation algorithms as well as a Jupyter widget for model assessment.
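
A minimal assessment sketch, using toy predictions and an assumed `sex` grouping, might look like this:

```python
# Minimal Fairlearn sketch: per-group metrics with MetricFrame.
# The toy arrays and the "sex" grouping below are illustrative assumptions.
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

y_true = [1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 0, 1, 0, 1, 1]
sex    = ["F", "F", "F", "F", "M", "M", "M", "M"]

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sex,
)
print(mf.by_group)      # per-group accuracy and selection rate
print(mf.difference())  # largest gap between groups for each metric
```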

FAT Forensics

A Python toolkit for evaluating Fairness, Accountability and Transparency of Artificial Intelligence systems.

Responsibly AI

Built with the goal of 1) being a one-stop shop for auditing bias and fairness of machine learning systems, and 2) mitigating bias and adjusting fairness through algorithmic interventions, with a specialized focus on NLP models.

Themis

Named for the Greek Titaness associated with good counsel and justice, Themis™ is a testing-based approach for measuring discrimination in a software system. The Themis™ implementation measures two kinds of discrimination: group discrimination and causal discrimination.
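
To make the distinction concrete: group discrimination compares outcome rates across groups, while causal discrimination asks whether changing only the protected attribute flips an individual’s outcome. The sketch below is a plain-Python illustration of a causal test against a stand-in `predict` function, not the Themis implementation itself.

```python
# Concept sketch (not the Themis implementation): estimate causal
# discrimination by flipping only the protected attribute and counting
# how often the system's decision changes.
import copy

def causal_discrimination(predict, individuals, protected_key, values):
    """Fraction of individuals whose outcome changes when only the
    protected attribute is altered."""
    changed = 0
    for person in individuals:
        outcomes = set()
        for v in values:
            variant = copy.deepcopy(person)
            variant[protected_key] = v
            outcomes.add(predict(variant))
        if len(outcomes) > 1:
            changed += 1
    return changed / len(individuals)

# `predict` is a hypothetical stand-in for the system under test.
predict = lambda p: int(p["income"] > 40 or p["gender"] == "m")
people = [{"income": 30, "gender": "m"}, {"income": 50, "gender": "f"},
          {"income": 35, "gender": "f"}, {"income": 60, "gender": "m"}]
print(causal_discrimination(predict, people, "gender", ["m", "f"]))  # 0.5
```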

What If Tool (WIT) (Google)

An easy-to-use interface for expanding understanding of a black-box classification or regression ML model. With the plugin, you can perform inference on a large set of examples and immediately visualize the results in a variety of ways. Additionally, examples can be edited manually or programmatically and re-run through the model in order to see the results of the changes. It contains tooling for investigating model performance and fairness over subsets of a dataset.
