"When two wrongs don't make a right" - examining confirmation bias and the role of time pressure during human-aI collaboration in computational pathology (2025)
Type
Talk
Authors
Rosbach, Emily
Ammeling, Jonas
Krügel, Sebastian
Kießig, Angelika
Fritz, Alexis
Ganz, Jonathan
Puget, Chloé (WE 12)
Donovan, Taryn
Klang, Andrea
Köller, Maximilian C.
Bolfa, Pompei
Tecilla, Marco
Denk, Daniela
Kiupel, Matti
Paraschou, Georgios
Kok, Mun Keong
Haake, Alexander (WE 12)
de Krijger, Ronald R.
Sonnen, Andreas F.-P.
Kasantikul, Tanit
Dorrestein, Gerry M.
Smedley, Rebecca C.
Stathonikos, Nikolas
Uhl, Matthias
Bertram, Christof A.
Riener, Andreas
Aubreville, Marc
Conference
CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems
Yokohama, Japan, 26 April – 1 May 2025
Source
CHI '25: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems — Naomi Yamashita, Vanessa Evers, Koji Yatani, Xianghua (Sharon) Ding, Bongshin Lee, Marshini Chetty, Phoebe Toups-Dugas (Eds.)
New York, NY, United States: Association for Computing Machinery, 2025 — pp. 1–18
Abstract
Artificial intelligence (AI)-based decision support systems hold promise for enhancing diagnostic accuracy and efficiency in computational pathology. However, human-AI collaboration can introduce and amplify cognitive biases, such as confirmation bias arising from false confirmation, when erroneous human opinions are reinforced by inaccurate AI output. This bias may increase under time pressure, a ubiquitous factor in routine pathology, as it strains practitioners' cognitive resources. We quantified confirmation bias triggered by AI-induced false confirmation and examined the role of time constraints in a web-based experiment in which trained pathology experts (n=28) estimated tumor cell percentages. Our results suggest that AI integration fuels confirmation bias: a statistically significant positive coefficient in a linear mixed-effects model links AI recommendations that mirror flawed human judgment to alignment with the system's advice. Conversely, time pressure appeared to weaken this relationship. These findings highlight potential risks of AI in healthcare and aim to support the safe integration of clinical decision support systems.
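For illustration only, the sketch below shows how a linear mixed-effects model of the kind mentioned in the abstract might be specified in Python with statsmodels. It is not the authors' analysis code: the variable names (ai_confirms_error, time_pressure, aligned_with_ai), the simulated data, and the effect sizes are all hypothetical, chosen only to mirror the reported relationship (a random intercept per pathologist, plus fixed effects for false confirmation, time pressure, and their interaction).

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Simulate trial-level data: 28 raters, 20 cases each (all values hypothetical).
n_raters, n_cases = 28, 20
rows = []
for rater in range(n_raters):
    rater_bias = rng.normal(0, 0.5)               # random intercept per rater
    for _ in range(n_cases):
        ai_confirms_error = rng.integers(0, 2)    # AI mirrors the flawed first estimate
        time_pressure = rng.integers(0, 2)        # case rated under a time limit
        # Outcome: how strongly the final estimate aligned with the AI advice.
        aligned_with_ai = (
            0.3 * ai_confirms_error
            - 0.1 * ai_confirms_error * time_pressure
            + rater_bias
            + rng.normal(0, 1)
        )
        rows.append((rater, ai_confirms_error, time_pressure, aligned_with_ai))

df = pd.DataFrame(
    rows, columns=["rater", "ai_confirms_error", "time_pressure", "aligned_with_ai"]
)

# Random intercept per rater; fixed effects for false confirmation, time pressure,
# and their interaction.
model = smf.mixedlm(
    "aligned_with_ai ~ ai_confirms_error * time_pressure",
    data=df,
    groups=df["rater"],
)
print(model.fit().summary())

In this reading, a positive coefficient on ai_confirms_error would correspond to the confirmation-bias effect described in the abstract, while a negative interaction with time_pressure would correspond to time pressure weakening that relationship.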