When AI Stays Silent: Hidden Agreement May Undermine Trust Building in Adaptive Decision Support and Training

Ramon Alaman, Jonay (ORCID: 0000-0002-8642-0422), Lafond, Daniel (ORCID: 0000-0002-1669-353X), Marois, Alexandre (ORCID: 0000-0002-4127-4134) and Tremblay, Sébastien (ORCID: 0000-0002-7030-5534) (2026) When AI Stays Silent: Hidden Agreement May Undermine Trust Building in Adaptive Decision Support and Training. In: HCI International 2025 – Late Breaking Papers: 27th International Conference on Human-Computer Interaction, HCII 2025, Gothenburg, Sweden, June 22–27, 2025, Proceedings, Part XIV. Lecture Notes in Computer Science. Springer, pp. 253-263. ISBN 978-3-032-13174-4

Full text not available from this repository.

Official URL: https://doi.org/10.1007/978-3-032-13174-4_17

Abstract

Integrating artificial intelligence (AI) into decision-support systems (DSS) for aviation offers real-time guidance but complicates trust calibration between human operators and AI. This study examined how the feedback style of such a DSS, the Cognitive Shadow, influences trust during a simulated weather-avoidance task. Forty-four participants completed 150 knowledge-elicitation trials, followed by 20 test trials in which the DSS generated predictions. When a participant's decision diverged from the DSS suggestion, the system issued an explicit recommendation; matching human-DSS decisions prompted no feedback, representing implicit agreement. Trust was measured using the 12-item Checklist for Trust between People and Automation. Rejection of explicit recommendations, as a proportion of all such explicit cues, was negatively correlated with trust (r(41) = −0.62, p < 0.001), while acceptance was positively correlated (r(41) = 0.47, p = 0.001). The proportion of silent agreements showed no association with trust (r(41) = −0.02, p = 0.895). These results suggest that explicit feedback, whether confirming or corrective, acts as a key cue for calibrating trust, while implicit agreement carries little weight. Trust appears more sensitive to how the system communicates than to whether its decisions align with those of the user. This aligns with recent findings that transparency, not just accuracy, drives trust in AI. Designing DSS that strategically balance explicit feedback with minimal intrusiveness may enhance operator trust and performance. Future research will manipulate feedback valence and visibility in a between-group design to further disentangle how communication style shapes trust in high-stakes human–AI collaboration.
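
To make the reported analysis concrete, the sketch below illustrates the correlation of per-participant proportions (rejected explicit cues, accepted explicit cues, silent agreements) with trust scores. It is purely illustrative: the counts, the trust scale, and all variable names are hypothetical assumptions, not data or code from the paper.

    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(0)
    n_participants = 43  # df = 41 in the reported r(41) statistics implies 43 complete cases

    # Hypothetical per-participant data: of the 20 test trials, how many drew an
    # explicit recommendation (disagreement), how many of those were rejected,
    # and a trust score from the 12-item checklist (scale assumed, e.g. 1-7).
    explicit_cues = rng.integers(5, 16, size=n_participants)
    rejected = rng.binomial(explicit_cues, 0.3)
    trust = rng.uniform(3, 7, size=n_participants)

    rejection_rate = rejected / explicit_cues      # proportion of explicit cues rejected
    acceptance_rate = 1 - rejection_rate           # proportion of explicit cues accepted
    silent_rate = (20 - explicit_cues) / 20        # proportion of trials with implicit agreement

    for label, x in [("rejection", rejection_rate),
                     ("acceptance", acceptance_rate),
                     ("silent agreement", silent_rate)]:
        r, p = pearsonr(x, trust)
        print(f"{label}: r({n_participants - 2}) = {r:.2f}, p = {p:.3f}")

The degrees of freedom printed here (n − 2 = 41) match the form of the statistics quoted in the abstract; with the synthetic data above, the correlation values themselves are of course arbitrary.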

