
What do the emotions of animals, humans and machines sound like?


Humans express their emotions in various ways. For example, fear can be expressed by a loud scream and sadness by crying. However, emotions also play an important role in our music and in the sound of our voice. In this way, the sound of a song or the voice of a narrator can make us feel a certain emotion. For this reason, it has been hypothesized that the expression of emotion may have played a crucial role in the evolution of language and music.

Listen to how different human emotions can sound! What could the person feel and which emotions do you feel when hearing these recordings?


Animals produce a multitude of different signals, which are strongly influenced by their emotional states. In our research, we investigate how emotions shape the communication system of meerkats. Depending on the type of predator and the urgency of the threat, meerkats use different alarm calls. A meerkat does not even have to see the predator itself: hearing an alarm call is enough to trigger the appropriate response.

In this situation, a meerkat has spotted a predator on land. However, the predator is still far away.

Listen to how the call changes as the predator gets closer and the animal becomes more agitated:


Nowadays, computers can imitate human emotions increasingly well, making it harder to distinguish AI-generated sound from human-made music and poetry. This has become possible thanks to machine learning, which lets computers build up knowledge from previous examples, much as humans learn from experience.
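The idea of "learning from previous examples" can be illustrated with a deliberately tiny sketch: a bigram Markov model that reads a toy text, records which word tends to follow which, and then generates new text from those learned transitions. This is an assumption-laden miniature for illustration only, far simpler than the deep-learning systems behind the audio files below; the corpus and function names are invented for this example.

```python
import random
from collections import defaultdict

# Toy training corpus: the "previous experience" the model learns from.
corpus = "the night is dark and the night is long and the song is sad".split()

# Learn word-to-word transitions: for each word, which words followed it.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def generate(start, length, seed=0):
    """Generate text by repeatedly sampling a learned follower word."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:  # dead end: no learned continuation
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("the", 8))
```

The generator never invents words; everything it produces is recombined from what it has "experienced", which is, in spirit, how statistical text and music generators work.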

The following audio files were generated exclusively by artificial intelligence.



Araya-Salas, M., & Wilkins, M. R. (2020). dynaSpec: Dynamic spectrogram visualizations in R. R package version 1.0.0.

Briefer, E. F. (2018). Vocal contagion of emotions in non-human animals. Proceedings of the Royal Society B: Biological Sciences, 285(1873), 20172783.

Briefer, E. F. (2012). Vocal expression of emotions in mammals: mechanisms of production and evidence. Journal of Zoology, 288(1), 1-20.

Briot, J. P., Hadjeres, G., & Pachet, F. D. (2017). Deep learning techniques for music generation–a survey. arXiv preprint arXiv:1709.01620.

Bryant, G. A. (2013). Animal signals and emotion in music: Coordinating affect across groups. Frontiers in Psychology, 4, 990.

Chatterjee, A., Gupta, U., Chinnakotla, M. K., Srikanth, R., Galley, M., & Agrawal, P. (2019). Understanding emotions in text using deep learning and big data. Computers in Human Behavior, 93, 309-317.

Darwin, C., & Prodger, P. (1998). The expression of the emotions in man and animals. Oxford University Press, USA.

Devlin, E. (2021). POEMPORTRAITS. [online] Available at: <> [Accessed 15 January 2021].

Ellis, R. J., & Simons, R. F. (2005). The impact of music on subjective and physiological indices of emotion while viewing films. Psychomusicology: A Journal of Research in Music Cognition, 19(1), 15.

Filippi, P., Congdon, J. V., Hoang, J., Bowling, D. L., Reber, S. A., Pašukonis, A., … & Güntürkün, O. (2017). Humans recognize emotional arousal in vocalizations across all classes of terrestrial vertebrates: evidence for acoustic universals. Proceedings of the Royal Society B: Biological Sciences, 284(1859), 20170990.

Filippi, P., Hoeschele, M., Spierings, M., & Bowling, D. L. (2019). Temporal modulation in speech, music, and animal vocal communication: evidence of conserved function. Annals of the New York Academy of Sciences, 1453(1), 99-113.

Finnegan, R. (2018). Oral poetry: its nature, significance and social context. Wipf and Stock Publishers.

Free Text-To-Speech and Text-to-MP3 for German. (2021). Ttsmp3. [Accessed 03 September 2021]

Kim, Y., Soyata, T., & Behnagh, R. F. (2018). Towards emotionally aware AI smart classroom: Current issues and directions for engineering and education. IEEE Access, 6, 5308-5331.

Köbis, N., & Mossink, L. D. (2021). Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry. Computers in Human Behavior, 114, 106553.

LeDoux, J. (1998). The emotional brain: The mysterious underpinnings of emotional life. Simon and Schuster.

Lau, J. H., Cohn, T., Baldwin, T., Brooke, J., & Hammond, A. (2018). Deep-speare: A joint neural model of poetic language, meter and rhyme. arXiv preprint arXiv:1807.03491.

Liu, Z. (2016). Impact of soundtrack in animated movie on audience: A case study of "Let It Go" in "Frozen".

Misztal, J., & Indurkhya, B. (2014). Poetry generation system with an emotional personality. In ICCC (pp. 72-81).

Mohn, C., Argstatter, H., & Wilker, F. W. (2011). Perception of six basic emotions in music. Psychology of music, 39(4), 503-517.

Oliveira, H. G. (2017, September). A survey on intelligent poetry generation: Languages, features, techniques, reutilisation and evaluation. In Proceedings of the 10th International Conference on Natural Language Generation (pp. 11-20).

Oliveira, H. G., & Cardoso, A. (2015). Poetry generation with PoeTryMe. In Computational Creativity Research: Towards Creative Machines (pp. 243-266). Atlantis Press, Paris.

Perlovsky, L. (2010). Musical emotions: Functions, origins, evolution. Physics of life reviews, 7(1), 2-27.

Poetron. (2021). Poetron. [Accessed 03 September 2021]

Schubert, E., Canazza, S., De Poli, G., & Rodà, A. (2017). Algorithms can mimic human piano performance: the deep blues of music. Journal of New Music Research, 46(2), 175-186.

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.