Who should be responsible for reducing bias in AI on tech product development teams?

November 24, 2020


In a utopian world, the answer would be everyone.

But in reality, ownership often defaults to the data science team, where numerous AI projects sit in the R&D, beta, or pilot phase.

Beyond data science and engineering stakeholders, however, there are growing opportunities for project managers and QA to help root out bias in the development-design process and begin operationalizing ethics into role-based tasks that roll up into quarterly product roadmaps.

While these aren’t intended as end-all, prescriptive solutions to the universal problem of biased or unethical AI, they are a starting place for both team discussion and experimentation.


DATA SCIENCE & ENGINEERING

“There are at least 21 mathematical definitions of fairness. These are not just theoretical differences in how to measure fairness, but different definitions that produce entirely different outcomes.” – Trisha Mahoney, IBM Senior Tech Evangelist for ML and AI

  • Use ethics checklists like Deon, ODSC Actionable Ethics, or The Trustworthy AI Assessment List (European Commission Independent Expert Group).
  • Bring transparency to dataset documentation for responsible AI systems with the Data Cards Playbook.
  • Download the Ethical Decision app designed by the Markkula Center for Applied Ethics to help you navigate decision making (available on the App Store and Google Play). Preview a demo.
  • Leverage open-source AI bias/ML fairness GitHub toolkits (see the metric sketch after this list).
  • Submit your structured data file to Synthesized’s new Community Edition bias mitigation tool, which is designed to understand a wide range of legal and regulatory definitions of contextual bias that can lead to inaccuracies within data, across attributes such as gender, age, race, religion, sexual orientation, and more. Developers can upload up to three datasets for free.
  • Build an AI red team. OpenAI researcher Paul Christiano explains, “After a preliminary version of a ML system is trained, the red team gets access to it and can run it in simulation. The red team’s goal is to produce a catastrophic transcript — that is, to create a plausible hypothetical scenario on which the current system would behave catastrophically. If the red team cannot succeed, then that provides evidence that the system won’t fail catastrophically in the real world either. If the red team does succeed, then we incorporate the catastrophic transcript into the training process and continue training.” A skeleton of this loop is sketched below.
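
By way of illustration of the toolkit route, here is a minimal sketch using Fairlearn, one of the open-source fairness toolkits in this space. The toy labels, predictions, and sensitive attribute are hypothetical, and demographic parity is just one of the many fairness definitions Mahoney alludes to:

```python
# Minimal Fairlearn sketch (pip install fairlearn scikit-learn).
# All data below is a hypothetical toy example.
import numpy as np
from fairlearn.metrics import MetricFrame, demographic_parity_difference
from sklearn.metrics import accuracy_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # ground-truth labels
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])   # model predictions
gender = np.array(["F", "F", "M", "M", "F", "M", "M", "F"])  # sensitive attribute

# Accuracy broken down per group: large gaps suggest the model
# performs unevenly across subpopulations.
frame = MetricFrame(metrics=accuracy_score,
                    y_true=y_true, y_pred=y_pred,
                    sensitive_features=gender)
print(frame.by_group)

# Demographic parity difference: the gap in selection rates between
# groups (0.0 means parity under this one definition of fairness).
print(demographic_parity_difference(y_true, y_pred,
                                    sensitive_features=gender))
```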
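
Christiano’s description also reduces to a simple train/attack/retrain cycle. A minimal skeleton of that loop, where train, red_team_search, and the candidate failure set are hypothetical stand-ins rather than any real tooling:

```python
# Hypothetical skeleton of a red-team loop; nothing here is real tooling.

def train(transcripts):
    """Stand-in training step: returns a 'model' that remembers which
    catastrophic transcripts it has already been trained against."""
    return {"seen_failures": set(transcripts)}

def red_team_search(model):
    """Stand-in adversarial search: return a failing transcript the model
    has not yet been trained on, or None if the red team cannot succeed."""
    candidate_failures = {"sensor_spoof_at_dusk", "lens_glare_edge_case"}
    unseen = candidate_failures - model["seen_failures"]
    return next(iter(unseen), None)

transcripts = set()
while True:
    model = train(transcripts)
    catastrophe = red_team_search(model)
    if catastrophe is None:
        break  # red team failed: weak evidence of real-world robustness
    transcripts.add(catastrophe)  # fold the transcript back into training

print(f"Retrained against {len(transcripts)} red-team transcripts")
```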

PROJECT MANAGERS

“We must shift from an engineering disposition, building solutions to ‘obvious’ problems, to a design disposition — one that relentlessly considers if we’ve correctly articulated the problem we’re solving for.” – Joy Buolamwini

  • Hold a development-design brainstorm to identify project risks (addiction, disinformation, algorithmic bias and exclusion) using the Ethical Explorer Pack or the Ethical OS Toolkit.

  • Incorporate ethics by design into user story creation.

AGILE METHODOLOGY

“Within the agile method, it is especially the scrum process that appears to be able to provide inspiration to safeguard ethics in the design process, for instance because scrum applies so-called ‘user stories’, the advantage of which is that they focus on people and on the different factors that play a role in determining the relative weight of values.”

Example:

A user story is constructed as follows:

As … (stakeholder), I want … (values), in order to … (interests), given … (context).

When we translate this to a specific application domain – like the self-driving car – and place it in a specific context – like a collision between two autonomous vehicles – the value ‘transparency’ leads to the following user stories (a small code rendering of the template follows this list):

  • As manufacturer, I want to increase traceability, in order to be able to track the system error and avoid collisions. I want to increase the value and integrity of the data, in order to help avoid inaccuracies, errors and mistakes in case of a collision.
  • As user/driver, I want to increase communication, in order to be informed about actions and further steps to be taken in the case of a collision. I want to increase privacy and data protection, in order to guarantee that my personal information is protected in case of a collision.
  • As legislator, I want to increase explainability, in order to impose even stricter requirements on the system in case of a collision. I want to control access to data, in order to be able to create protocols and manage access to data in case of a collision.
  • As insurer, I want to increase explainability, in order to be able to determine the guilty party in case of a collision. I want to control access to data, in order to get clarity about who can access data under what circumstances in case of a collision.
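
Teams that keep backlog items in code or configuration can make the template concrete. Here is a small sketch of the template above; the dataclass and its field names are our own illustration, not part of van Belkom’s report:

```python
# Our own rendering of the ethics-by-design user-story template;
# the class and field names are illustrative, not an established schema.
from dataclasses import dataclass

@dataclass
class EthicalUserStory:
    stakeholder: str  # As ... (stakeholder)
    value: str        # I want ... (values)
    interest: str     # in order to ... (interests)
    context: str      # given ... (context)

    def render(self) -> str:
        return (f"As {self.stakeholder}, I want to increase {self.value}, "
                f"in order to {self.interest}, given {self.context}.")

story = EthicalUserStory(
    stakeholder="insurer",
    value="explainability",
    interest="be able to determine the guilty party",
    context="a collision between two autonomous vehicles",
)
print(story.render())
```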

WATERFALL METHODOLOGY

Apply Value Sensitive Design (VSD), which starts by defining values as early in the design process as possible. VSD offers toolkits that enable designers, engineers, technologists and researchers to weigh human values throughout the planning and design phases.

(Source: For detailed agile and waterfall AI ethics tactics, read more in Rudy van Belkom’s AI No Longer Has a Plug: About Ethics in the Design Process, 2020.)


QA & TRUST + SAFETY TEAMS

“Capital as such is not evil; it is its wrong use that is evil. Capital in some form or other will always be needed.” – Gandhi

  • Hold ‘bias and safety bounties’ with financial rewards attached (like software bug bounties).

(Source: VentureBeat article, “AI Researchers Propose Bias Bounties to Put Ethics Principles into Practice,” April 2020.)


MORE READING

“The individualization [of ethical responsibility] is certainly happening in the entire industry, rather than locating it at the level where accountability for a company lies, which is the level of C-suite.” – Emanuel Moss

Ethics in Tech: Are Regular Employees Responsible?

Ethics Owners: A New Model of Organizational Responsibility in Data Driven Technology Companies
