Understanding the Limits of Human Cognition When Using Machine Learning Systems – UROP Spring Symposium 2022

Matthew Conrad

Pronouns: he/him

Research Mentor(s): Eric Gilbert
Co-Presenter: Davis Rule
Research Mentor School/College/Department: Information
Presentation Date: April 20
Presentation Type: Poster
Session: Session 2, 11:00am–11:50am
Room: League Ballroom
Authors: Davis Rule, Matthew Conrad, Harman Kaur
Presenter: 54

Abstract

With the increased prevalence of machine learning technology in many of the systems core to our society, the need to monitor these systems is more crucial than ever. In a previous study, we found that many data scientists misuse interpretability tools and fail to make accurate inferences about their data. In this work, our goal is to determine what causes this ineffective use of interpretability tools. We hypothesize that satisficing (choosing a “good enough” option instead of the optimal one) leads data scientists to draw hasty and potentially inaccurate conclusions about their data. Further, we believe that many of the interactivity features built into interpretability tools to aid exploration may ultimately hinder data scientists’ ability to understand and analyze data. We first conducted pilot studies to formalize our hypotheses and refine our experimental design. This was followed by a large-scale experiment that exposed data scientists to various machine learning pipelines, both with and without interpretability tools, and with and without interactive features. Based on these findings, we propose a framework for how data scientists can best interact with interpretability tools. We conclude with the implications of these findings for the broader machine learning community.

Engineering, Interdisciplinary, Social Sciences
