6.2.6 AI and data governance: some tough questions


Trustworthy data and AI

Now that we know some important aspects of data and governance, let's analyze some of the tougher aspects of this relationship.

AI will greatly impact cities and is a driving factor for smart and sustainable cities (SSC). That means that AI and the underlying data should be trustworthy.

Basically, the following ‘universal’ requirements should be met:

  1. Human agency and oversight
  2. Technical robustness and safety
  3. Privacy and data governance
  4. Transparency
  5. Diversity, non-discrimination, and fairness
  6. Societal and environmental well-being
  7. Accountability

Source: EU Ethics Guidelines for Trustworthy AI, https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/

Challenges

A fact of life is, however, that we almost always have to make trade-offs when it comes to these principles. AI and algorithms are never fully transparent, explainable, or free from bias.

Some reasons are:

  • ‘Garbage in, garbage out.’
    This means that AI reproduces human flaws and misconceptions, which are carried over through the training of the algorithms and into their operational use in society. It starts already at the stage of who gets to frame the problem and how the problem is defined. What are the criteria for facilitating a certain AI development? How are the data collected, and what criteria are used for selecting and preparing them? A minimal sketch of how such bias can already be spotted in the data appears after this list.
  • Data: we only measure what we can measure.
  • Autonomy
    In the future, even more so than now, AI and robotics will be involved in life or death decisions — whether in the medical domain, self-driving cars or advanced warfare. To what degree should we allow our machines to make autonomous decisions?
  • The Responsibility Gap
    Given that AI enables society to automate more tasks and automate to a larger extent than before, who or what is responsible for the benefits and harms of using this technology?
    A concrete example: there is an accident with a self-driving vehicle because of a technical malfunction of the GPS. Who is to blame? The car driver? The car manufacturer? The software engineers? The company that launched the satellite that did not function properly? This is also referred to as 'the problem of many hands', and it makes it easy to blame others.
  • Transparency
    It is notoriously difficult to translate algorithms into human-understandable concepts; without a technical background they are hard to follow.
  • Privacy and cyber-security, as we have seen in module 1.
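
To make the 'garbage in, garbage out' point concrete, the sketch below (not part of the original course material) shows one simple check a city team could run before training a model: comparing recorded outcome rates across groups in the historical data. The records, group labels, and threshold are purely illustrative assumptions.

    # A minimal sketch of surfacing bias in historical decision data before it is
    # used to train a model. The records below are hypothetical.
    records = [
        # (group, positive_outcome) -- e.g. whether a permit or service was granted
        ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
        ("B", 0), ("B", 1), ("B", 0), ("B", 0), ("B", 0),
    ]

    def positive_rate(group):
        """Share of positive outcomes recorded for one group."""
        outcomes = [y for g, y in records if g == group]
        return sum(outcomes) / len(outcomes)

    gap = abs(positive_rate("A") - positive_rate("B"))
    print(f"Outcome gap between groups: {gap:.2f}")

    # A large gap does not prove discrimination, but it flags data that a trained
    # model would reproduce in operation.
    if gap > 0.2:  # illustrative threshold, to be agreed per use case
        print("Warning: the training data itself encodes a large outcome gap.")

Such a check does not resolve bias, but it forces the framing, collection, and selection questions above onto the table before a system is deployed.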

What are ways to address some of these challenges?

  • Simply by being aware that bias and limited explainability are persistent problems that cannot be fully eradicated.
  • By diversifying the design teams according to many different criteria, for instance age, gender, occupation, and ethnicity. The idea is that the more, and more diverse, entry points we have to identify and mitigate bias, the more resilient and accountable we can make these systems.
  • Another strategy is to go beyond the diversity of the design team and also include the final users in the design process, for instance through provocative or adversarial design practices.
  • By maintaining an ethical mindset and putting public values forward as guiding principles.
  • By being resilient in terms of capacity and countervailing power when negotiating with vendors.

And last but not least: monitoring, monitoring, and monitoring! See module 1 again.
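
As a concrete, deliberately simplified illustration of what such monitoring could look like, the sketch below re-computes a single indicator for each new batch of automated decisions and raises a flag when it drifts from an agreed reference value. The reference rate, threshold, and decision batch are assumptions for illustration, not a method prescribed by the course.

    # A minimal monitoring sketch: compare the share of positive automated decisions
    # in a new batch against the rate observed when the system was accepted.
    REFERENCE_POSITIVE_RATE = 0.45   # illustrative acceptance-time baseline
    ALERT_THRESHOLD = 0.10           # illustrative tolerance for drift

    def check_drift(decisions):
        """Return True if the observed positive rate drifts too far from the baseline."""
        observed = sum(decisions) / len(decisions)
        drift = abs(observed - REFERENCE_POSITIVE_RATE)
        print(f"observed rate {observed:.2f}, drift {drift:.2f}")
        return drift > ALERT_THRESHOLD

    # Hypothetical batch of this month's automated decisions (1 = positive outcome).
    monthly_decisions = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0]

    if check_drift(monthly_decisions):
        print("Alert: system behaviour has shifted -- trigger a human review.")

The point is not the specific metric but the routine: agree on indicators up front, re-check them on every batch, and make a human review the default response to drift.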

Creative Commons License
Smart and Sustainable Cities: New Ways of Digitalization & Governance by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/smart-and-sustainable-cities-new-ways-of-digitalization-and-governance/