

Covert digital manipulation of vocal emotion alter speakers’ emotional states in a congruent direction

Overview of attention for article published in Proceedings of the National Academy of Sciences of the United States of America, January 2016

About this Attention Score

  • In the top 5% of all research outputs scored by Altmetric
  • High Attention Score compared to outputs of the same age (99th percentile)
  • High Attention Score compared to outputs of the same age and source (96th percentile)

Mentioned by

  • News: 36 news outlets
  • Blogs: 7 blogs
  • X (Twitter): 91 X users
  • Patents: 1 patent
  • Facebook: 9 Facebook pages
  • Google+: 2 Google+ users

Citations

  • 46 citations (Dimensions)

Readers on

  • 224 Mendeley readers
Title: Covert digital manipulation of vocal emotion alter speakers’ emotional states in a congruent direction
Published in: Proceedings of the National Academy of Sciences of the United States of America, January 2016
DOI: 10.1073/pnas.1506552113
Pubmed ID:
Authors: Jean-Julien Aucouturier, Petter Johansson, Lars Hall, Rodrigo Segnini, Lolita Mercadié, Katsumi Watanabe

Abstract

Research has shown that people often exert control over their emotions. By modulating expressions, reappraising feelings, and redirecting attention, they can regulate their emotional experience. These findings have contributed to a blurring of the traditional boundaries between cognitive and emotional processes, and it has been suggested that emotional signals are produced in a goal-directed way and monitored for errors like other intentional actions. However, this interesting possibility has never been experimentally tested. To this end, we created a digital audio platform to covertly modify the emotional tone of participants' voices while they talked in the direction of happiness, sadness, or fear. The result showed that the audio transformations were being perceived as natural examples of the intended emotions, but the great majority of the participants, nevertheless, remained unaware that their own voices were being manipulated. This finding indicates that people are not continuously monitoring their own voice to make sure that it meets a predetermined emotional target. Instead, as a consequence of listening to their altered voices, the emotional state of the participants changed in congruence with the emotion portrayed, which was measured by both self-report and skin conductance level. This change is the first evidence, to our knowledge, of peripheral feedback effects on emotional experience in the auditory domain. As such, our result reinforces the wider framework of self-perception theory: that we often use the same inferential strategies to understand ourselves as those that we use to understand others.
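
The abstract describes real-time manipulations of the emotional tone of participants' voices as they spoke. As a rough, offline illustration of that kind of transformation (not the authors' actual platform; the file name, library choice, and the +0.5 semitone shift are illustrative assumptions), a small upward pitch shift can be applied to a recording with librosa:

```python
# Minimal offline sketch, not the authors' real-time system: nudge a recorded
# voice slightly upward in pitch, a crude stand-in for the "happy" vocal-emotion
# transformation described in the abstract. "voice.wav" is a placeholder path.
import librosa
import soundfile as sf

y, sr = librosa.load("voice.wav", sr=None, mono=True)   # original recording

# A small upward shift (+0.5 semitone, an illustrative value) tends to make
# speech sound brighter; a downward shift would lean toward a sadder rendering.
y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=0.5)

sf.write("voice_shifted.wav", y_shifted, sr)             # manipulated version
```

In the study itself, participants heard the altered signal while they were still talking, a real-time feedback loop that this offline sketch does not attempt to reproduce.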

X Demographics

The data shown below were collected from the profiles of the 91 X users who shared this research output.
Mendeley readers

The data shown below were compiled from readership statistics for the 224 Mendeley readers of this research output.

Geographical breakdown

Country          Count   As %
Japan                3     1%
France               1    <1%
Austria              1    <1%
Singapore            1    <1%
Germany              1    <1%
Russia               1    <1%
United States        1    <1%
Luxembourg           1    <1%
Poland               1    <1%
Other                0     0%
Unknown            213    95%

Demographic breakdown

Readers by professional status          Count   As %
Student > Ph.D. Student                    52    23%
Researcher                                 34    15%
Student > Master                           33    15%
Student > Bachelor                         32    14%
Student > Doctoral Student                 10     4%
Other                                      28    13%
Unknown                                    35    16%

Readers by discipline                   Count   As %
Psychology                                 80    36%
Neuroscience                               21     9%
Computer Science                           19     8%
Engineering                                12     5%
Agricultural and Biological Sciences        9     4%
Other                                      41    18%
Unknown                                    42    19%
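
The "As %" columns in the tables above appear to be each count divided by the 224 Mendeley readers, rounded to the nearest whole percent. The sketch below is a quick check under that assumption (it is not Altmetric's or Mendeley's documented method):

```python
# Quick check (assumed arithmetic, not a documented method): express each reader
# count as a share of the 224 total Mendeley readers, rounded to a whole percent.
total_readers = 224

counts = {
    "Psychology": 80,
    "Student > Ph.D. Student": 52,
    "Unknown (geography)": 213,
    "Japan": 3,
}

for label, count in counts.items():
    share = round(100 * count / total_readers)
    print(f"{label}: {count} of {total_readers} = {share}%")
# Psychology: 80 of 224 = 36%
# Student > Ph.D. Student: 52 of 224 = 23%
# Unknown (geography): 213 of 224 = 95%
# Japan: 3 of 224 = 1%
```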
Attention Score in Context

This research output has an Altmetric Attention Score of 388. This is our high-level measure of the quality and quantity of online attention that it has received. This Attention Score, as well as the ranking and number of research outputs shown below, was calculated when the research output was last mentioned on 17 January 2022.
  • All research outputs: #82,546 of 26,194,269 outputs
  • Outputs from Proceedings of the National Academy of Sciences of the United States of America: #1,932 of 104,532 outputs
  • Outputs of similar age: #1,296 of 404,133 outputs
  • Outputs of similar age from Proceedings of the National Academy of Sciences of the United States of America: #30 of 825 outputs
Altmetric has tracked 26,194,269 research outputs across all sources so far. Compared to these, this one has done particularly well and is in the 99th percentile: it's in the top 5% of all research outputs ever tracked by Altmetric.
So far Altmetric has tracked 104,532 research outputs from this source. They typically receive a lot more attention than average, with a mean Attention Score of 39.9. This one has done particularly well, scoring higher than 98% of its peers.
Older research outputs will score higher simply because they've had more time to accumulate mentions. To account for age, we can compare this Altmetric Attention Score to the 404,133 tracked outputs that were published within six weeks on either side of this one in any source. This one has done particularly well, scoring higher than 99% of its contemporaries.
We're also able to compare this research output to 825 others from the same source and published within six weeks on either side of this one. This one has done particularly well, scoring higher than 96% of its contemporaries.
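
Each percentile above follows from the rank and the size of the comparison pool. The sketch below reproduces the stated figures under the simple assumption that the percentile is floor((1 - rank / pool size) * 100); Altmetric's exact computation may differ:

```python
# Back-of-envelope check (assumed formula, not Altmetric's published method):
# percentile ~ floor((1 - rank / pool_size) * 100) for each comparison pool.
import math

comparisons = {
    "All research outputs":             (82_546, 26_194_269),
    "All outputs from PNAS":            (1_932, 104_532),
    "Outputs of similar age":           (1_296, 404_133),
    "Outputs of similar age from PNAS": (30, 825),
}

for label, (rank, total) in comparisons.items():
    percentile = math.floor((1 - rank / total) * 100)
    print(f"{label}: #{rank:,} of {total:,} -> ~{percentile}th percentile")
# All research outputs: #82,546 of 26,194,269 -> ~99th percentile
# All outputs from PNAS: #1,932 of 104,532 -> ~98th percentile
# Outputs of similar age: #1,296 of 404,133 -> ~99th percentile
# Outputs of similar age from PNAS: #30 of 825 -> ~96th percentile
```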