What Could Possibly Go Wrong? Speculative Practice Towards Anticipating the Negative Consequences of Humanitarian AI. June 20, 2022 (EST) / June 21, 2022 (KST)

Workshop Description

As in numerous other domains, the adoption of artificial intelligence (AI) technologies in disaster risk management (DRM) and humanitarian response has grown in recent years. While implementation is still at an early stage, numerous applications across several functions of disaster research and practice are currently being explored. In domains where AI tools have already been deployed, their introduction has raised a number of concerns, ranging from privacy and bias to reduced transparency, explainability, and accountability. Some of these potential harms stand in direct opposition to humanitarian principles. Given the negative effects of AI tools documented in criminal justice, healthcare, finance, and human resources, there is a need to critically and thoroughly evaluate the risks of deploying AI tools in disasters. Moreover, several characteristics of AI for disasters set it apart from analogous tools in other domains that have been more widely studied by the FAccT community, making it additionally challenging to anticipate negative impacts.

To address these challenges, our session uses speculative design practice to envision potential consequences of AI in disasters and then explore them through social and ethical lenses. In this workshop, participants will explore the humanitarian AI problem space by developing futuristic or alternative scenarios. Building these scenarios around a case study in humanitarian AI, the Collective Crisis Intelligence project, participants will attempt to uncover hidden values and unintended consequences. The workshop's activities will enable us to learn more about the role of speculative design in anticipating the negative consequences of technology design and deployment, develop a more thorough understanding of the potential impacts of adopting AI in humanitarian work, and grow the community of researchers and practitioners working in this space.

Tentative Schedule

Time          Activity
10 minutes    Welcome + Opening Remarks
20 minutes    Ice-Breakers

Experiential Futures: Setting
20 minutes    Collective Crisis Intelligence case study
15 minutes    Participant Q&A about the project

30 minutes    Break

Experiential Futures: Scenario
25 minutes    Futures wheel
10 minutes    Group reflection

Experiential Futures: Storyworld
25 minutes    Story development
10 minutes    Group reflection

30 minutes    Group Presentations
10 minutes    Closing Reflections

To Participate

To RSVP to participate in this workshop, or if you have questions, please email manveer.kalirai@mail.utoronto.ca.

Workshop Organizers

Robert Soden, Assistant Professor, University of Toronto

Aleks Berditchevskaia, Principal Researcher, Nesta

Isabel Stewart, Data Scientist, Nesta

Erin Coughlan de Perez, Dignitas Associate Professor, Tufts University

Saurav Poudel, Data Science Fellow, Collective Crisis Intelligence Project

Sakun Joshi, Community Engagement and Accountability, Nepal Red Cross (TBC)

Shreyasha Paudel, Graduate Student, University of Toronto

Manveer Kalirai, Graduate Student, University of Toronto