Posted on 2024-11-01, 15:58. Authored by Michael Gerlich.
This study analyses the dimensions of trust in artificial intelligence (AI), focusing on why a significant portion of the UK population demonstrates a higher level of trust in AI than in humans. Conducted through a mixed-methods approach, the study gathered 894 responses, of which 451 met the criteria for analysis. It utilised a combination of a six-point Likert-scale survey and open-ended questions to explore the psychological, sociocultural, and technological facets of trust, with the analysis underpinned by structural equation modelling (SEM) and correlation techniques. The results reveal a strong predilection for trusting AI, mainly due to its perceived impartiality and accuracy, which participants likened to those of conventional computing systems. This preference contrasts starkly with scepticism towards human reliability, driven by perceptions of inherent self-interest and dishonesty in humans and further exacerbated by a general distrust of media narratives. Additionally, the study highlights a significant correlation between distrust in AI and unwavering confidence in human judgement, illustrating a dichotomy in trust orientations. This investigation illuminates the complex dynamics of trust in the era of digital technology, contributing to the ongoing discourse on AI’s societal integration and underscoring vital considerations for future AI development and policymaking.