Copyright © Roger K. Moore




Whilst there is considerable interest in the possibility of people interacting with ‘intelligent’ agents for a wide range of applications, as yet there is no underpinning science as to how to create such artefacts so that they are capable of providing an effective and sustainable interaction.  This is especially true of human-robot interaction, where the look, sound and behaviour of a robot may not be consistent - a mismatch that can trigger feelings of eeriness and repulsion in users.

I have developed the first mathematical model of this so-called ‘uncanny valley’ effect (published in Nature’s open-access journal Scientific Reports), and I am currently investigating the implications for designing autonomous social agents (such as robots) whose visual, vocal, cognitive and behavioural ‘affordances’ are appropriate to the role for which they are designed.

Visual Expressivity

The consequences of emotional states for visual (and vocal) expression were first reported by Charles Darwin in his book The Expression of the Emotions in Man and Animals (published in 1872).  Since then, there has been a considerable amount of research in this area, particularly thanks to HUMAINE - the EU-funded Human-Machine Interaction Network on Emotion.  Much of this research has been conducted using on-screen avatars; however, it is likely that people will interact quite differently with synthetic characters portrayed on a computer screen than with real physical artefacts (such as people, animals or robots).  These issues are being investigated using Zeno (shown here) - the RoboKind humanoid expressive robot made by Hanson Robotics in the US.

Modelling the ‘Uncanny Valley’

The term ‘uncanny valley’ was coined by Masahiro Mori in 1970 to describe the observation that near-human artefacts can engender strong negative emotions in an observer.  Mori also proposed that the uncanny valley effect can be stronger when near-human artefacts are moving rather than still.  Although such phenomena are well documented, until recently there was no quantitative explanation for the findings.

Based on a Bayesian model of categorical perception, I have shown that differential perceptual distortion arising from stimuli containing conflicting cues can give rise to a perceptual tension at category boundaries.  This not only accounts for the uncanny valley phenomenon, but may also provide a mathematical explanation for a range of social situations in which conflicting cues give rise to negative, fearful or even violent reactions.

Masahiro Mori’s famous diagram illustrating the ‘Uncanny Valley’ effect.
Output of the mathematical model.

Mathematical explanation

The model is based on the hypothesis that the uncanny valley effect is a manifestation of the ‘perceptual magnet effect’, in which perception is distorted such that pairs of stimuli close to a category boundary are judged to be more dissimilar than equally spaced stimuli further from the boundary.
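The perceptual magnet effect can be illustrated with a simple Bayesian estimator: an observer who assumes stimuli come from one of two Gaussian categories perceives each stimulus shrunk toward the nearest category mean, so perceived differences are expanded near the boundary and compressed near the centres.  The sketch below uses hypothetical parameters (category means, variances) for illustration only; it is not the exact formulation in the published paper.

```python
import math

def perceived(s, mus=(0.0, 1.0), sigma_c=0.2, sigma_n=0.1):
    """Bayesian percept E[T|S] of stimulus s under two Gaussian
    categories with means `mus` (illustrative parameters)."""
    var = sigma_c**2 + sigma_n**2
    # Posterior probability of each category given the stimulus.
    w = [math.exp(-(s - mu)**2 / (2 * var)) for mu in mus]
    z = sum(w)
    post = [wi / z for wi in w]
    # Within each category, the estimate shrinks s toward the mean.
    k = sigma_c**2 / var
    est = [k * s + (1 - k) * mu for mu in mus]
    return sum(p * e for p, e in zip(post, est))

def distortion(s, eps=1e-3, **kw):
    """Local distortion: perceived separation of two nearby stimuli."""
    return (perceived(s + eps, **kw) - perceived(s - eps, **kw)) / (2 * eps)

# Expansion near the category boundary (0.5), compression near a centre (0.0).
print(distortion(0.5))  # > 1: nearby stimuli judged MORE dissimilar
print(distortion(0.0))  # < 1: nearby stimuli drawn toward the category mean
```

With these parameters the distortion at the boundary is roughly double that near a category centre, which is exactly the warping the magnet-effect hypothesis requires.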

In the situation where there are multiple perceptual cues to category membership, there is the possibility that the multidimensional perceptual distortions induced at category boundaries could be misaligned.

As a result, conflicting perceptual cues can give rise to differential distortion in the region of a category boundary, and such distortion would be manifest as a form of perceptual ‘tension’.  This, in turn, may be experienced as physical or emotional discomfort (such as eeriness or creepiness).
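One simple way to operationalise this ‘tension’ is to compute the local distortion separately for two cue dimensions (say, how human-like a robot looks versus how human-like it moves) whose category boundaries are misaligned, and take the difference.  The boundary offsets and parameters below are hypothetical, chosen purely to illustrate the mechanism rather than reproduce the published equations.

```python
import math

def distortion(s, mus, sigma_c=0.2, sigma_n=0.1, eps=1e-3):
    """Local perceptual distortion along one cue dimension with
    Gaussian categories at `mus` (Bayesian percept, as before)."""
    var = sigma_c**2 + sigma_n**2
    k = sigma_c**2 / var
    def percept(x):
        w = [math.exp(-(x - mu)**2 / (2 * var)) for mu in mus]
        z = sum(w)
        return sum((wi / z) * (k * x + (1 - k) * mu)
                   for wi, mu in zip(w, mus))
    return (percept(s + eps) - percept(s - eps)) / (2 * eps)

def tension(s):
    """Differential distortion between two misaligned cue dimensions."""
    looks = distortion(s, mus=(0.0, 1.0))  # boundary near 0.5
    moves = distortion(s, mus=(0.2, 1.2))  # boundary shifted to ~0.7
    return abs(looks - moves)

# Tension is negligible where both cue dimensions agree (far from any
# boundary) and large in the region where the misaligned boundaries
# make the cues conflict - the region of the 'valley'.
print(tension(0.0))  # near zero: cues agree
print(tension(0.5))  # large: conflicting cues, differential distortion
```

Where the two dimensions have identical category structure the difference vanishes everywhere; the tension appears only when the boundaries are misaligned, which is the model's account of why conflicting cues feel uncomfortable.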

The results were published in Nature’s open-access journal Scientific Reports: Moore, R. K. (2012). A Bayesian explanation of the ‘Uncanny Valley’ effect and related psychological phenomena. Scientific Reports, 2, 864.