0.4.2 Innovation: some uncomfortable questions…
So far we have discussed the importance of RRI in addressing societal needs, in other words, in having a positive impact. The other side of the coin is that new technologies come with uncomfortable questions. RRI implies that we have to be aware of this!
Let us give a few examples:
- Nanotechnology raises concerns about health and environmental risks, i.e. risks that are still largely uncertain or contested and that may only become less uncertain in the long term. It also raises broader concerns, such as dual use and issues of privacy and civil liberties. At the same time, nanotechnology may provide benefits in terms of sustainability (e.g., environmental remediation), human health (nanomedicine), and human enhancement.
- Synthetic biology also raises issues of risk and safety, for example, in the case of an unintended release of modified organisms into the environment. Another main concern is biosecurity, namely the intentional misuse of synthetic biology to create, for example, a deadly virus; an issue quite specific to synthetic biology is its possible effect on biodiversity. In addition, it raises larger issues such as “playing god” or tinkering with nature, which, according to some, is unwarranted, and it has been criticized for widening the gap between rich and poor countries, for example through issues of intellectual property rights. Unlike some of the other transformative technologies described here, synthetic biology is already the topic of a highly polarized debate, which poses a challenge of its own.
- Whereas nanotechnology and synthetic biology raise different but overlapping RRI issues, drones and self-driving cars raise other specific issues. One issue specific to these technologies is human control and responsibility, fueled by a fear that they will diminish human control and lead to undesirable consequences for which nobody is responsible. Issues relating to privacy, data ownership, surveillance, and spatial and city planning also arise here. For both drones and self-driving cars, the fear of diminished human control has led to discussions about keeping humans “in the loop” or “on the loop” and to the proposal of notions like “meaningful human control”. In the case of self-driving cars, it has led to discussions on how these cars should be programmed to behave in the event of an accident.
- The Internet of Things (IoT) directly raises issues of privacy, surveillance, and civil liberties. Obviously, it also raises issues of security and reliability. Insofar as it is connected with artificial intelligence (AI) and decision algorithms, accountability, transparency, and democracy are also at stake.
We cannot escape these difficult questions! But how do we deal with them as a company?
That is what we will discuss in this course.
Below are a few suggested videos about the challenges and uncomfortable questions that come with new technologies.
As you watch the videos: Do you recognize the value conflicts?
- Design/innovation challenges (with autonomous vehicles and the famous ethical dilemma known as the trolley problem as a starting point).
- The many ethical concerns related to Artificial Intelligence (AI).
- Genomics: ethical concerns
- Autonomous weapons: who pulls the trigger
You will be able to find many more of such videos on the Internet.
Self-driving vehicles: the trolley problem
Trolley problem, part 2
Responsible Innovation: Building Tomorrow’s Responsible Firms by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/responsible-innovation-building-tomorrows-responsible-firms/.