My research focuses on how to calibrate trust between human users and artificially intelligent systems. Trust is optimally calibrated when a user's trust accurately reflects the system's performance, so that the user neither over-trusts a faulty system nor distrusts a well-functioning one. While users primarily base their trust on the perceived performance of these autonomous systems, calibration can be aided by presenting cues that give the user a more nuanced understanding of the system's decision-making.
My research therefore sought to explore these factors. In our experiments, participants worked with autonomous image classifiers: systems that independently identify the contents of image data. Within these experiments, we examined how classifier performance, interface transparency, and user biases each contribute to trust. The first of these experiments was recently published, and we are currently working on a follow-up.