0.2.1 Meet the course team of AI in Practice: Preparing for AI Systems
Course subject(s)
Module 0: Getting started: AI in Practice: Preparing for AI
Meet the program’s support team
In this course, AI in Practice: Preparing for AI, we would like to introduce you to a wide variety of angles, ideas and research in the field of AI. We want to show you what different public and private organizations do with AI and which innovations are being realized within the various ICAI labs. As a result of this variety, many people are involved in the program.
Hennie Huijgens – Course founder
Hennie Huijgens works as an independent software analytics expert at Goverdson. In 2018 he received a PhD from Delft University of Technology in the Netherlands on the subject of Evidence-Based Software Portfolio Management. He co-developed and led AI for FinTech Research, an ICAI lab collaboration between ING and Delft University of Technology. He developed the AI in Practice program, supported by specialists from the Extension School and the NewMedia Center of Delft University of Technology.
“Learn how to integrate AI into your organization. Start by recognizing AI’s multiple benefits and implications and end by making a plan for its application.”
At the heart of the AI in Practice program are the professors, researchers, PhD candidates, and practitioners who work together daily in the ICAI labs that form the basis of our program. They bring you great content about the innovative research programs that they carry out in collaboration between universities and companies. Below we introduce them to you in alphabetical order.
Emma Beauxis-Aussalet – Vrije Universiteit Amsterdam
Emma Beauxis-Aussalet is Assistant Professor of Ethical Computing at Vrije Universiteit Amsterdam (VU Amsterdam). In her prior research at CWI she developed statistics and visualizations that make bias and error more understandable for the general public, and thus contribute to making AI more fair, transparent and accountable. She holds a PhD from Utrecht University and Master's degrees in Computer Science and Communication, and has worked as an R&D engineer and a designer.
“On the long road towards regulating AI, we have not even walked the first steps towards standard AI assessment.”
Ron Boelsma – Police Lab AI
Ron Boelsma works as an innovation and knowledge broker for the Dutch National Police. In this course he explains the importance of AI and AI research for the police organization, as well as the organizational background and practical relevance of such research.
“AI will contribute to the development of the police of tomorrow. In a rapidly changing and complex society, AI becomes increasingly important for the Netherlands in terms of safety. The police is gaining competence in the responsible use of AI and has to gain knowledge regarding its use by criminals.”
Michael Cochez – Discovery Lab and Vrije Universiteit Amsterdam
Michael Cochez is an Assistant Professor in the Knowledge Representation and Reasoning group at the Vrije Universiteit Amsterdam and in the Discovery Lab, a collaboration of the Vrije Universiteit and Elsevier.
“Computers have no idea what you are talking about. When we as humans communicate we have certain concepts or thoughts in mind which we want to convey to others.”
Daniel Daza – Discovery Lab and Vrije Universiteit Amsterdam
Daniel Daza is a PhD candidate working on representation learning for graphs and knowledge graph extraction from text at the Vrije Universiteit Amsterdam and in the Discovery Lab of the Vrije Universiteit and Elsevier.
“In general, any useful system for query answering should be able to cope with queries ranging from simple to complex, so that the knowledge in a graph can be employed effectively.”
Arie van Deursen – AI for FinTech Research and Delft University of Technology
Arie van Deursen is scientific director of AI for FinTech Research, professor of software engineering, and head of the Department of Software Technology at Delft University of Technology.
“AI in FinTech poses a substantial challenge: regulations typically require clear audit trails, calling for explainable AI instead of black-box machine learning models. On the other hand, AI also offers unique possibilities: many regulations are highly documentation-centric. Regulatory processes such as Know Your Customer require that banks collect all sorts of documentation about their customers, in order to prevent, for example, illegal money laundering activities.”
Cristina González Gonzalo – Thira Lab and RadboudUMC
Cristina González Gonzalo works as a PhD candidate at the University of Amsterdam and the RadboudUMC. In this course she presents a video lesson on ethics in healthcare research.
“In the A-Eye Research Group we work with retinal imaging, so we focused on two related tasks: automated grading in color fundus images of diabetic retinopathy or DR and age-related macular degeneration or AMD, which are two leading causes of blindness worldwide.”
Sadaf Gulshad – Delta Lab and University of Amsterdam
Sadaf Gulshad is a PhD candidate at the University of Amsterdam and in the Delta Lab with Bosch. In this course she presents a use case on ‘Adversarial and Natural Perturbations for General Robustness’.
“Our research teaches us that although general robustness is hard to prove, natural perturbation training always shows better robustness than adversarial training.”
Ella Hafermalz – Vrije Universiteit Amsterdam
Ella Hafermalz is Assistant Professor on the topic of Digital Innovation at the School of Business and Economics, Knowledge, Information and Innovation (KIN) at the Vrije Universiteit Amsterdam.
“As a manager or decision maker in a company, it’s important to remember that AI requires constant oversight. Stay aware, stay informed, and stay involved to ensure you’re using AI ethically in your organization.”
Frank van Harmelen – Elsevier AI Lab and Vrije Universiteit Amsterdam
Frank van Harmelen is professor of knowledge representation and reasoning at the Vrije Universiteit Amsterdam. He is scientific director of the Discovery Lab, an ICAI collaboration between Elsevier and the VU, and lead researcher of the Hybrid Intelligence Center, a 20 million euro, 10-year collaboration between researchers from six Dutch universities on AI systems that work with people instead of replacing them.
“When I ask you to think of three new technologies in AI, I bet that you don’t think of AI in science. Still, that is exactly what our Discovery Lab is about.”
Rinke Hoekstra – Elsevier AI Lab
Rinke Hoekstra is lead architect in technology at Elsevier. In this course he presents Elsevier’s challenges on AI.
“How can we maintain a sustainable, reliable and timely flow of information from scientific insights into our products? And how can we use this rich, structured information in more ways to drive scientific discovery and accelerate research?”
Maartje ter Hoeve – Police Lab AI (Amsterdam) and University of Amsterdam
Maartje ter Hoeve is a PhD candidate Natural Language Processing & Information Retrieval at the Police Lab AI (Amsterdam) and the University of Amsterdam. In this course she presents a use case on the topic of Question answering and conversational AI.
“Question answering is very useful and powerful in many scenarios, but it is still hard to have a proper conversation with a voice assistant, where it remembers exactly what you already asked before.”
Herke van Hoof – Delta Lab and University of Amsterdam
Herke van Hoof is an assistant professor at the University of Amsterdam in the Netherlands. He is part of the Amlab headed by Professor Max Welling as well as the UvA-Bosch Delta lab.
“Deep learning works extremely well with large data sets. When little data is available, making good use of expert knowledge is essential to retaining good performance. This makes methods that combine expert insights with data driven techniques very promising.”
Sarah Ibrahimi – Police Lab AI (Amsterdam) and University of Amsterdam
Sarah Ibrahimi is a PhD candidate Computer Vision at the Police Lab AI (Amsterdam) and the University of Amsterdam. In this course she presents a use case on the topic of fake image detection.
“For almost ten years, many deep learning models have been developed for image-related tasks. These models are called Convolutional Neural Networks or CNNs. Over the past years, CNN-generated images have become of such high visual quality that humans have trouble distinguishing them from real images.”
Mert Imre – AIRLab Delft and Delft University of Technology
Mert Imre is a PhD candidate in the department of Cognitive Robotics (COR) at Delft University of Technology.
“Object manipulation is the result of robot actions. Actions are the way a robot changes its environment by means of physical interaction, just like humans. Consider yourself trying to hold a cup. Where you hold the cup depends on its properties. Does it have a handle or not? If so, where is the handle? … Humans can solve this kind of problem very fast, thanks to their advanced brains, hands, skin and years of interaction experience with their surroundings.”
Pinar Kahraman – ING
Pinar Kahraman works as a Senior Product Owner and Data Scientist at ING, where she specializes in time series forecasting, operations research, mathematical modeling, and advanced probability & statistics.
“I address five main IT-related challenges in ING: service levels and monitoring of operations, incident prevention and management, IT risk, topology and network behavior, and infrastructure investment decision management.”
Asterios Katsifodimos – AI for FinTech Research and Delft University of Technology
Asterios Katsifodimos is an Assistant professor and Delft Technology Fellow at the Web Information Systems group of the Faculty of Engineering, Mathematics and Computer Science (EEMCS/EWI) at Delft University of Technology.
“To get things straight: data integration cannot be fully automated. Whoever tells you the opposite is either clueless or lying. Instead, what we believe we should be doing is to focus on methods that use AI. Not to replace humans, but to help them by learning from human-provided examples, and by reducing the effort of discovering data relationships!”
Maximilian Kronmüller – AIRLab Delft and Delft University of Technology
Maximilian Kronmüller is a PhD candidate working on on-demand last-mile delivery at Delft University of Technology.
“Last-mile delivery has gained importance over the last years and continues to grow. Extreme circumstances like the COVID-19 pandemic also show the importance and need for last-mile delivery processes. Same-day delivery is a sub-part, in which goods are delivered the same day as they are ordered.”
Elvan Kula – AI for FinTech Research, ING and Delft University of Technology
Elvan Kula is a PhD candidate working at the intersection of AI, data analytics, and software analytics at ING and in AI for FinTech Research.
“Currently we are building a deep learning model for story point estimation with the ability of adapting to team dynamics. We envision this model as support, instead of a replacement, for human judgement in project planning sessions. Our model is fully end-to-end trainable from raw input data (the user story description) and various automatically derived team features (i.e., features characterizing a team) to a story-point estimate for a given user story.”
Madelaine Ley – AIRLab Delft and Delft University of Technology
Madelaine Ley is a PhD Candidate integrating care ethics into tech design at Delft University of Technology.
“Providing a bleak list of potential problems isn’t very helpful; that’s because this is only the first step in the ethical design process. The next step requires a collaborative effort between ethicists and engineers, to see the ways technical decisions and design choices might prevent infringement on workers’ wellbeing and freedoms, and even come to promote a sense of wellbeing as robots join the workforce.”
Merlin Majoor – ING
Merlin Majoor is a philosopher and lawyer and works as Senior Expert Culture & Ethics at ING.
“It appears there are several advantages to the use of AI in assessing mortgage applications. It is time and cost efficient and, more importantly, AI models seem much better at making complex decisions, like assessing a mortgage application, than humans.”
Annibale Panichella – AI for FinTech Research and Delft University of Technology
Annibale Panichella is an Assistant Professor in the Software Engineering Research Group (SERG) at Delft University of Technology (TU Delft) in the Netherlands. He is also a research fellow in the Interdisciplinary Centre for Security, Reliability and Trust (SnT) at the University of Luxembourg.
“As a case study, we will focus on self-driving cars, which is one of the very exciting examples of cyber-physical systems with safety-critical requirements. A self-driving car is equipped with various sensors that aim to understand what is going on in the surrounding environment. Different sensors are in place to analyze different elements in the environment. Some of these elements include the distance to other cars, traffic signs, pedestrians, the lane, and so on.”
Elise van der Pol – Delta Lab and University of Amsterdam
Elise van der Pol is a PhD candidate at the Amsterdam Machine Learning Lab, supervised by Prof. Dr. Max Welling (UvA). Her research focuses on reinforcement learning and machine learning.
“We have shown our approach to work for a video game, but we believe this is a promising direction for other domains as well, such as traffic light control, autonomous driving, robotics, but also other reinforcement learning applications that exhibit symmetries.”
Ayushi Rastogi – Delft University of Technology
Ayushi Rastogi is a postdoctoral researcher in the Software Engineering Research Group at Delft University of Technology. In this course she presents a bonus track on Questions for Data Scientists in Software Engineering, a study performed in close collaboration with ING.
“Five years ago, Microsoft conducted a study to investigate just which questions answers should be sought for. From surveys conducted among engineers, researchers distilled 145 questions “for data scientists in software engineering”. Fast forward to today: are these questions still relevant? Do they apply to ING?”
Ivan Sosnovik – Delta Lab and University of Amsterdam
Ivan Sosnovik is a PhD candidate at the Delta Lab and the University of Amsterdam. In this course he presents a use case on the topic of Scale Equivariance for Computer Vision.
“By making convolutional neural networks additionally scale-equivariant, we simplify their learning task. Now all scale changes are not required to be learned from the training data as they are built-in. It allows for more robust object classification both when the data is limited and when it is not.”
Peter-Paul Verbeek – University of Twente
Peter-Paul Verbeek is distinguished professor of Philosophy of Technology and co-director of the DesignLab of the University of Twente, The Netherlands. His research focuses on the philosophy of human-technology relations, and aims to contribute to philosophical theory, ethical reflection, and practices of design and innovation. Verbeek is, among other roles, chairperson of UNESCO’s COMEST (World Commission on the Ethics of Science and Technology) and a member of the Dutch Council for the Humanities.
“As you might have noticed, a lot of ethical codes have been developed. And three elements occur in all these codes; we call them the FAT principles: fairness, accountability and transparency.”
Bart Voorn – Ahold Delhaize
Bart Voorn is Director Data, AI & Robotics (DAR) at Ahold Delhaize.
Martijn Wisse – AIRLab Delft and Delft University of Technology
Martijn Wisse is a professor at Delft University of Technology. His current research interests include underactuated grasping, open-loop stable manipulator control, design of robot arms and robotic systems, agile manufacturing, and the creation of startup companies in the same field.
“The key challenge in robotics is to deal with variability. For robots, their work becomes difficult if there is variability in objects, such as the endless products to handle in a store.”
Marcel Worring – Police Lab AI (Amsterdam) and University of Amsterdam
Marcel Worring is co-director of the Police Lab AI (Amsterdam), professor in data science for business analytics at the Amsterdam Business School, and director of the Informatics Institute at the University of Amsterdam. In this course he gives an introduction to the research performed within the Police Lab AI and explains the learning sequence Multimodal Machine Learning and Image Analysis in terms of ‘the big picture’ and the social relevance of the research performed in the lab.
“Deep learning is a technique which has already been shown to be very effective, and there are still a lot of new avenues to explore.”
AI in Practice: Preparing for AI by TU Delft OpenCourseWare is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
Based on a work at https://online-learning.tudelft.nl/courses/ai-in-practice-preparing-for-ai/