Current Projects


Dr. Gang Li | Funding: ERC

AR/VR/XR technology

In Europe, people travel an average of 12,000 km per year on private and public transport: in cars, buses, planes and trains. These journeys are often repetitive, and much of that time is wasted. The total will rise with the arrival of fully autonomous cars, which free drivers to become passengers. We believe AR/VR/XR technology could radically improve all passenger journeys; however, one challenging issue is motion sickness, specifically the mixed state of traditional car sickness and AR/VR/XR-induced cybersickness. The aim of this project is therefore to explore a new way to mitigate this mixed form of motion sickness.

Angie Ng | Funding: ESRC

Face Matching Within Human-AI Teams

Face matching is used in a variety of settings that involve verifying a person’s identity. Despite being widely used, it is a surprisingly error-prone task. There have been major gains in the accuracy of automated facial recognition algorithms, and research has found that fusing algorithm scores with ratings made by humans can provide almost perfect accuracy on a challenging face matching task. However, further research is required to examine how best to form these human-machine teams to carry out face matching tasks. The current project aims to identify benchmark levels of human and algorithm face matching performance on a dataset of image pairs and to explore variations in the types of errors made by humans and AI. We will also investigate how trust between human and AI can be calibrated to facilitate optimal human-machine team performance. Understanding the effect on trust can help people make quicker, more effective and more accurate decisions with the help of AI, and can help minimise the risk of errors and misidentification in applied settings.
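To make the fusion idea above concrete, the minimal sketch below combines a normalised algorithm similarity score with a human rating for an image pair and applies a decision threshold. The function names, equal weighting and 0.5 threshold are illustrative assumptions only, not the fusion method used in this project.

# Minimal sketch (assumptions: equal weighting, both scores already scaled to the
# 0-1 range, and a 0.5 decision threshold; none of these reflect the project's
# actual fusion method).

def fuse_scores(algorithm_score: float, human_rating: float, w: float = 0.5) -> float:
    """Weighted average of an algorithm similarity score and a human rating, both in [0, 1]."""
    return w * algorithm_score + (1 - w) * human_rating

def same_identity(algorithm_score: float, human_rating: float, threshold: float = 0.5) -> bool:
    """Decide whether an image pair shows the same person, based on the fused score."""
    return fuse_scores(algorithm_score, human_rating) >= threshold

# Example: the algorithm is fairly confident the pair matches, but the human is unsure.
print(same_identity(algorithm_score=0.9, human_rating=0.4))  # True (fused score = 0.65)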

Morgan Bailey | Funding: UKRI SOCIAL CDT

Social Intelligence towards Human-AI Team building

Visions of the workplace of the future include applications of machine learning and artificial intelligence embedded in nearly every aspect (Brynjolfsson & Mitchell, 2017). This “digital transformation” holds promise to broadly increase effectiveness and efficiency. A challenge to realising this transformation is that the workplace is substantially a human social environment, and machines are not intrinsically social. Imbuing machines with social intelligence holds promise to help build human-AI teams, and current approaches to teaming one human with one machine appear reasonably straightforward to design. However, when more than one human and more than one system work together, the complexity of social interactions increases, and we need to understand the resulting society of human-AI teams. This research proposes to take a first step in this direction by considering the interaction of triads containing humans and machines.

Ogechi Onuoha | Funding: Qumodo/Innovate UK

How Can Computer Vision be Used to Geolocate Images of Indoor Spaces?

The project involves developing a robust and interpretable deep learning technique for analysing indoor images and other data related to human trafficking. The methods and tools will be created in collaboration with Qumodo Ltd and evaluated with their end users.

Martin Ingram | Funding: ESRC/SGSSS

How Is Trust Towards Technology Characterised by Users?

My research focuses on how to calibrate trust between human users and artificially intelligent systems. Trust is optimally calibrated when the trust a user places in a system accurately reflects that system’s performance, so that the user neither over-trusts a faulty system nor distrusts a well-functioning one. While users primarily base their trust on the perceived performance of these autonomous systems, calibration can be aided by presenting cues that give the user a more nuanced understanding of the system’s decision making. My research therefore explores these factors: in our experiments, participants worked with autonomous image classifiers, technologies that can independently identify the contents of image data. Within these experiments, we explored how classifier performance, interface transparency and user biases all contribute to trust. The first of these experiments has recently been published, and we are currently working on a follow-up.
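As a rough illustration of what calibrated trust means, the sketch below compares a participant’s reported trust with the classifier’s observed accuracy on a common 0-1 scale. The function name, the shared scale and the simple difference-based measure are illustrative assumptions, not the measure used in the experiments described above.

# Minimal sketch (assumption: self-reported trust and observed classifier accuracy
# are both expressed on a 0-1 scale, and miscalibration is simply their difference;
# an illustration only, not the measure used in the published experiments).

def trust_calibration_gap(reported_trust: float, system_accuracy: float) -> float:
    """Positive values indicate over-trust; negative values indicate under-trust."""
    return reported_trust - system_accuracy

print(trust_calibration_gap(reported_trust=0.75, system_accuracy=0.50))  # 0.25: over-trust
print(trust_calibration_gap(reported_trust=0.50, system_accuracy=0.75))  # -0.25: under-trust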


Yingying Huang | Funding: CSC

How do we perceive the environment around us?

How do we perceive and understand ambiguous stimuli? Why do things look as they do? William James once said, “whilst part of what we perceive comes through our senses from the object before us, another part (and it may be the larger part) always comes out of our own head.” Visual experience can be triggered externally, by events in the outside world (i.e. visual perception), or internally, by extracting information from memory through a mental process known as mental imagery. However, we have a limited grasp of how the brain’s internal model operates in visual perception and mental imagery via feedback information, and of the similarities and differences in how the two processes are represented in the early visual cortex. My project uses brain imaging techniques to investigate the underlying neural mechanisms of visual perception and mental imagery and to address these questions.


Thomas Goodge | Funding: UKRI SOCIAL CDT

Human-Car interactions in the context of autonomous vehicles

My PhD research looks at Human-Car interactions in the context of autonomous vehicles, with a focus on the handover of control between the driver and the car. Autonomous cars are sophisticated agents that can handle many driving tasks. However, they may have to hand control back to the human driver in certain circumstances, for example if sensors fail or weather conditions are bad. This is potentially difficult for the driver, who may not have been driving the car for a long period and must quickly take control. It is also an important issue for car companies, who want to add more automation to vehicles in a safe manner. Key to this problem is whether handover interfaces would benefit from conceptualising the exchange between human and car as a social interaction.

Lee Seul Shim

Multisensory processing of prosody and gesture

Multisensory processing of prosody and gesture in individuals with low and high levels of autistic traits