Acoustic impact on decoding of semantic emotions
Umeå University, Faculty of Arts, Philosophy and Linguistics.
Umeå University, Faculty of Arts, Department of language studies.
2007 (English). In: Speaker classification II: selected projects / [ed] Christian Müller. Berlin: Springer, 2007, 57-69 p. Chapter in book (Other academic)
Abstract [en]

This paper examines the interaction between the emotion indicated by the content of an utterance and the emotion indicated by the acoustics of an utterance, and considers whether a speaker can hide their emotional state by acting an emotion while remaining semantically honest. Three female and two male speakers of Swedish were recorded saying the sentences “Jag har vunnit en miljon på lotto” (I have won a million on the lottery), “Det finns böcker i bokhyllan” (There are books on the bookshelf) and “Min mamma har just dött” (My mother has just died) as if they were happy, neutral (indifferent), angry or sad. Thirty-nine experimental participants (19 female and 20 male) heard 60 randomly selected stimuli, each randomly coupled with the question “Do you consider this speaker to be emotionally X?”, where X could be angry, happy, neutral or sad. They were asked to respond yes or no; the listeners’ responses and reaction times were collected. The results show that semantic cues to emotion play little role in the decoding process: only when there are few specific acoustic cues to an emotion do semantic cues come into play. However, longer reaction times for the stimuli containing mismatched acoustic and semantic cues indicate that the semantic cues to emotion are processed even if they have little impact on the perceived emotion.

Place, publisher, year, edition, pages
Berlin: Springer, 2007. 57-69 p. Lecture Notes in Computer Science, ISSN 0302-9743; 4441
Keyword [en]
Emotion identification, acoustic emotion, semantic emotion, perception, Swedish
URN: urn:nbn:se:umu:diva-2278
DOI: 10.1007/978-3-540-74122-0
ISBN: 978-3-540-74121-3
OAI: diva2:140209
Available from: 2007-05-03 Created: 2007-05-03 Last updated: 2013-04-09
In thesis
1. That voice sounds familiar: factors in speaker recognition
2007 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Humans have the ability to recognize other humans by voice alone. This is important both socially and for the robustness of speech perception. This thesis contains a set of eight studies that investigate how different factors impact speaker recognition and how these factors can help explain how listeners perceive and evaluate speaker identity. The first study is a review paper giving an overview of emotion decoding and encoding research. The second study compares the relative importance of the emotional tone of the voice and the emotional content of the message; a mismatch between these was shown to slow decoding. The third study investigates the factor of dialect in speaker recognition and shows, using a bidialectal speaker as the target voice to control all other variables, that the dominance of dialect cannot be overcome. The fourth paper investigates whether imitated stage dialects are as perceptually dominant as natural dialects. It was found that a professional actor could disguise his voice successfully by imitating a dialect, yet that a listener's proficiency in a language or accent can reduce susceptibility to a dialect imitation. Papers five to seven focus on automatic techniques for speaker separation. Paper five shows that a method developed for Australian English diphthongs produced comparable results for a Swedish glide + vowel transition. The sixth and seventh papers investigate a speaker separation technique developed for American English. It was found that the technique could be used to separate Swedish speakers and that it is robust against professional imitation. Paper eight investigates how age and hearing impact earwitness reliability. This study shows that a senior citizen with corrected hearing can be as reliable an earwitness as a younger adult with no hearing problems, but suggests that a witness's general cognitive decline needs to be considered when assessing a senior citizen's earwitness evidence.
On the basis of these studies, a model of speaker recognition is presented, based on the face recognition model by V. Bruce and Young (1986; British Journal of Psychology, 77, pp. 305-327) and the voice recognition model by Belin, Fecteau and Bédard (2004; Trends in Cognitive Sciences, 8, pp. 129-134). The merged and modified model handles both familiar and unfamiliar voices. The findings presented in this thesis, in particular the findings of the individual papers in Part II, have implications for criminal cases in which speaker recognition plays a part. The findings feed directly into the growing body of forensic phonetic and forensic linguistic research.

Place, publisher, year, edition, pages
Umeå: Filosofi och lingvistik, 2007. 160 p.
Keyword [en]
speaker recognition, accent, emotions, hearing, spectral moments, formant transitions, dialect
National Category
Human Computer Interaction
urn:nbn:se:umu:diva-1106 (URN)
978-91-7264-311-6 (ISBN)
Public defence
2007-05-24, Hörsal F, Humanisthuset, Umeå, 10:00
Available from: 2007-05-03 Created: 2007-05-03 Last updated: 2013-04-09. Bibliographically approved

Open Access in DiVA

No full text

Other links

Publisher's full text

Search in DiVA

By author/editor
Eriksson, Erik J.; Sullivan, Kirk P. H.
By organisation
Philosophy and Linguistics; Department of language studies

