In pursuing PRESENCE-based approaches to modelling spoken language, I've become increasingly drawn into studying vocalisation in general, whether it is performed by human beings, animals or robots. I'm currently developing synthesisers for mammalian, insect and dolphin vocalisations, and embedding them in behavioural simulations implemented in Pure Data and in real-time embodiments using e-puck, Create and RoboKind robots. My aim is to demonstrate that many of the little-understood paralinguistic features exhibited in human speech (including prosody and emotion) are derived from characteristics that are shared by living systems in general. Modelling such behaviours in this wider (situated and embodied) context should eventually enable us to implement usable and effective interaction with artificial intentional agents such as robots.
This video depicts three e-puck robots, each of which has been configured to navigate its environment without colliding with objects it encounters (e.g. walls and other e-pucks). If a (near) collision occurs, the robot not only performs an avoidance manoeuvre but also vocalises to express its emotional state. Each robot has its own personalised sound (based on vocal tract size), but all three are derived from a simulation of the squeak of a small rat.
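The behaviour described above can be sketched in a few lines of Python. This is a minimal illustration, not the lab's actual implementation (which runs in Pure Data with Player on the e-pucks' on-board Linux): the sensor layout, threshold, base frequency, and the inverse-proportional vocal-tract scaling rule are all assumptions made for the sake of the example.

```python
def squeak_frequency(base_hz: float, tract_scale: float) -> float:
    """Illustrative scaling rule (an assumption, not the lab's model):
    a shorter vocal tract yields a higher-frequency squeak."""
    return base_hz / tract_scale

def avoidance_step(proximity, threshold=0.8):
    """Given a ring of IR proximity readings (0..1, higher = closer),
    decide whether a near-collision has occurred and which way to turn.
    Returns None when no obstacle is close enough to react to."""
    closest = max(range(len(proximity)), key=lambda i: proximity[i])
    if proximity[closest] < threshold:
        return None  # keep cruising
    # Turn away from the sensor that fired: front-left sensors -> turn right.
    return "right" if closest < len(proximity) // 2 else "left"

# Three robots, each with a hypothetical vocal-tract scale factor.
robots = {"epuck1": 0.9, "epuck2": 1.0, "epuck3": 1.15}
base_squeak_hz = 3500.0  # rat squeaks sit in the kHz range

# One simulated sensor frame: sensor 1 (front-left) reports a near-collision.
readings = [0.1, 0.95, 0.2, 0.1, 0.0, 0.0, 0.1, 0.0]
turn = avoidance_step(readings)
if turn is not None:
    for name, scale in robots.items():
        hz = squeak_frequency(base_squeak_hz, scale)
        print(f"{name}: turn {turn}, squeak at {hz:.0f} Hz")
```

The point of the sketch is the coupling: the same event (a near-collision) triggers both the motor response and the vocalisation, while a single per-robot parameter gives each agent its individual voice.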
Thanks to the Swarm Robotics Group at Bristol Robotics Laboratory, these e-pucks have been enhanced with an on-board Linux system with wireless connectivity. The robots are controlled using Pure Data operating with Player.
The Vocal Interactivity Laboratory (VILab) is concerned with fundamental and applied research in all areas relating to vocal interactivity in and between humans, animals and robots.
Research topics range from speech-based human-robot interaction (HRI) to the analysis, modelling and simulation of vocal communication systems in animals.
The overall objective is to understand and exploit common features of vocal communication and interactivity, including expressions of individuality and emotion.
The VILab logo is inspired by the pictograms representing autopoietic systems that appear in Maturana, H. R., & Varela, F. J. (1987). The Tree of Knowledge: The Biological Roots of Human Understanding. Boston, MA: New Science Library/Shambhala Publications. Prof. Moore has extended the pictograms to encompass human beings (top), animals (bottom-left) and robots (bottom-right).