Muhammad Chishty Undergraduate Dissertation 2017/18
A computer-generated talking head for use in early diagnosis of dementia
Supervised by S. Maddock
Abstract
The number of people living with dementia at the time of writing is higher than ever before and is estimated only to increase in the coming years. There are many symptoms associated with dementia, including but not limited to memory loss, confusion, and tiredness. Diagnosis involves conversation analysis of patients' answers to questions posed by specialists in interviews. This process is time-consuming and expensive, as it requires the involvement of many trained experts. There has been research into using an Embodied Conversational Agent (ECA), rather than trained experts, to converse with people who may have dementia as part of the diagnostic process.
Past applications of ECAs show limitations, most notably poor synchronisation of the speech output with the ECA's mouth movements. This can be off-putting for patients and may ultimately contribute to a false diagnosis. This project investigates the suitability of the FaceFX software, developed by OC3 Entertainment, for implementing an ECA with synchronised speech and mouth output by modifying the default phoneme configuration provided by FaceFX. Animations are generated by supplying FaceFX with a sound file and a matching text file; FaceFX analyses these, breaks the speech down into phonemes, and assigns animation curves to them, which are then interpolated to produce the ECA's animation. This project examines editing these curves and the phoneme mapping in an attempt to improve the ECA's speech-to-mouth synchronisation.
The results of the implementation are positive: in most cases, participants in a controlled experiment preferred the developed ECA over the default ECA when the two were presented as a pair saying the same sentence, regardless of whether a single rule or multiple rules had been modified.