3.1.3 Ethics and Bias

Course subject(s): Module 3. Connect your mind

How should we deal with bias in our AI systems?

When humans make decisions, we often do so in subjective, inconsistent, and (implicitly) biased ways, which can lead to discrimination and favoritism. On the surface, the AI systems we use to automate such decisions do not appear to share these flaws. It seems you can hardly get more objective than a system that starts with no beliefs, learns by simple mathematical rules that make no reference to humans (e.g. gradient descent), and applies the learned model consistently and impartially to any person or situation it must judge. Yet we see many complaints about sexist, racist, and homophobic algorithms.
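To make the point concrete, here is a minimal sketch (in Python, with toy data and variable names invented for illustration) of the kind of "simple mathematical rule" involved: a single gradient descent step that updates model weights purely from numbers, with no reference to the people those numbers describe.

```python
import numpy as np

def gradient_descent_step(w, X, y, lr=0.01):
    """One gradient descent step for linear least squares.

    The update rule w <- w - lr * dL/dw only manipulates numbers;
    it has no notion of who the rows of X describe. Any bias in the
    resulting predictions must therefore enter through the data
    (X, y) or through choices made by the system's designers.
    """
    preds = X @ w                           # model predictions
    grad = 2 * X.T @ (preds - y) / len(y)   # gradient of the mean squared error
    return w - lr * grad                    # move weights against the gradient

# Hypothetical usage: 100 examples with 3 features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = rng.normal(size=100)
w = np.zeros(3)
for _ in range(1000):
    w = gradient_descent_step(w, X, y)
```

The impartiality of the update rule itself is real, but it says nothing about the data the rule is fed, which is where the questions below begin.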

How should we deal with this apparent bias in our AI systems? Is it actually bias, or do these systems merely learn uncomfortable truths about the world? If the latter, is that desirable, or should we correct it anyway? If these systems are biased, what is the source of that bias? Is it the algorithm's inductive bias, which makes the system favor certain kinds of solutions over others (e.g. simple over complex)? Or is it the training data that are biased? Does the bias come from society or from the AI's developers? Would a more diverse workforce help? How can governments enforce anti-discrimination laws when the decisions are made by (often inscrutable) algorithms? Should citizens have a "right to explanation" for decisions that concern them? How can we ensure that AI systems help shape the future in a positive way, rather than perpetuating societal inequality and injustice?

Mind of the Universe: Robots in Society - Blessing or Curse by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/mind-of-the-universe-robots-in-society-blessing-or-curse/.