My work pictured by AI – Sarah Saneei
"Computations supporting language functions and dysfunctions in artificial and biological neural networks." - By Sarah Saneei
What is this work about? This research aims to find the best stimuli (inputs) that can be presented to the brain to produce the same brain-signal results (optimal activation of neurons), using deep learning approaches. We will use fMRI and ECoG to prepare the data for the model, and as inputs we plan to use text and audio.
The first word that came to mind when seeing the AI-generated picture? /
My work pictured by AI – Nikhil Phaniraj

"Mathematical modelling of marmoset vocal learning suggests a dynamic template matching mechanism." - By Nikhil Phaniraj
What is this work about? Vocal learning plays an important role during speech development in human infants and is vital for language. However, the complex structure of language creates a colossal challenge in quantifying and tracking vocal changes in humans. Consequently, animals with simpler vocal communication systems are powerful tools for understanding the mechanisms underlying vocal learning. While human infants show the most drastic vocal changes, many adult animals, including humans, continue to show vocal learning in the form of a much-understudied phenomenon called vocal accommodation. Vocal accommodation is often seen when people use words, pronunciations and speech rates similar to those of their conversation partner. This phenomenon is also seen in common marmosets, a highly voluble Brazilian monkey species with a simpler communication system than humans. In this project, I developed a mathematical model that explains the basic principles and rules underlying marmoset vocal accommodation. The model provides crucial insights into the mechanisms underlying vocal learning in adult animals and how they might differ from vocal learning in infant animals and humans.
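To make the idea of template-based vocal accommodation concrete, here is a minimal toy sketch, not the author's actual model: assume each animal keeps a call "template" (a point in acoustic feature space) and nudges it toward its partner's calls after each exchange. The feature values, learning rate, and update rule are all illustrative assumptions.

```python
# Toy model of vocal accommodation via dynamic template matching
# (illustrative only; not the published marmoset model).
import numpy as np

def accommodate(template_a, template_b, rate=0.1, steps=50):
    """Simultaneously move two call templates toward each other.

    Each step, every animal shifts its template a fraction `rate`
    of the way toward its partner's current template.
    """
    a = np.asarray(template_a, dtype=float)
    b = np.asarray(template_b, dtype=float)
    for _ in range(steps):
        # Tuple assignment: both updates use the pre-step values.
        a, b = a + rate * (b - a), b + rate * (a - b)
    return a, b

# Two animals start with different call templates (2-D acoustic features).
a, b = accommodate([0.0, 1.0], [2.0, 3.0])
print(np.linalg.norm(a - b))  # distance shrinks toward zero
```

Under this rule the between-template distance shrinks geometrically, which is the qualitative signature of accommodation: the two animals' calls converge over repeated interactions.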
The first word that came to mind when seeing the AI-generated picture? Monkey-learning.
My work pictured by AI – Alejandra Hüsser

In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Adrian Bangerter

In the style of Aubrey Beardsley. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Aris Xanthos

In the style of Pop-art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alexandra Bosshard

"Sequencing in common marmoset call structures." - By Alexandra Bosshard
What is this work about? Over the last twenty years, researchers have become increasingly interested in the way non-human animals communicate, in order to explore what such findings could tell us about the development of our own language. By applying methods borrowed from computational linguistics, we were able to show that the highly social common marmoset monkey strings calls together to form larger sequences of up to nine calls in length. In a way superficially similar to how we combine meaningful units, such as words, into phrases or sentences, marmosets seem to follow a set of rules when stringing their calls together to form larger structures. We conclude that the vocal systems of non-human animals may be built up in more complex ways than previously thought.
The first word that came to mind when seeing the AI-generated picture? Complexity.
My work pictured by AI – Daniel Friedrichs

In the style of Edward Hopper. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Abigail Licata
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Elisa Pellegrino
"Human vocal flexibility between accommodation and individualization: the effect of group size." - By Elisa Pellegrino
What is this work about? Results revealed that vocal similarity between speakers increased with larger group size, indicating more cooperative vocal behavior, with a negative impact on individual vocal recognizability. The results of this study inform our understanding of cross-species accommodative behavior and of human variability in cooperation in the absence of visual cues, and have implications for voice processing and forensic voice comparison.
The first word that came to mind when seeing the AI-generated picture? Interaction.
My work pictured by AI – Monica Lancheros
In the style of cubism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Nikhil Phaniraj

In a futuristic style. ©With Midjourney – AI & Nikhil Phaniraj.
My work pictured by AI – Yaqing Su
In the style of cubism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo
"Mothers reveal more of their vocal identity when talking to babies." - By Volker Dellwo
What is this work about? Voice timbre – the unique acoustic information in a voice by which its speaker can be recognized – is particularly critical in mother-infant interaction. Vocal timbre is necessary for infants to recognize their mothers as familiar both before and after birth, providing a basis for social bonding between infant and mother. The exact mechanisms underlying infant voice recognition are unknown. Here, we show – for the first time – that mothers’ vocalizations contain more detail of their vocal timbre through adjustments to their voices known as infant-directed speech (IDS) or baby talk, resulting in utterances in which individual recognition is more robust. Using acoustic modelling (k-means clustering of Mel Frequency Cepstral Coefficients) of IDS in comparison with adult-directed speech (ADS), we found across a variety of languages from different cultures that voice timbre clusters in IDS are significantly larger than comparable clusters in ADS. This effect leads to a more detailed representation of timbre in IDS, with subsequent benefits for recognition. Critically, an automatic speaker identification Gaussian-mixture model based on Mel Frequency Cepstral Coefficients showed significantly better performance when trained with IDS as opposed to ADS. We argue that IDS has evolved as part of a set of adaptive evolutionary strategies that serve to promote indexical signalling by caregivers to their offspring, thereby promoting social bonding and language acquisition via the voice.
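The analysis pipeline described above can be sketched in miniature. This is a hedged illustration, not the study's code: real MFCC frames would be extracted from recordings (e.g., with an audio library), whereas here synthetic feature vectors stand in for them, with IDS simulated at higher variance than ADS to mirror the reported finding. Cluster counts, dimensions, and the spread measure are all assumptions for demonstration.

```python
# Sketch: k-means "cluster size" of timbre features (IDS vs. ADS)
# plus a Gaussian-mixture speaker model, as in the pipeline above.
# Synthetic stand-ins for MFCC frames; numbers are illustrative only.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Stand-ins for 13-dimensional MFCC frames from one speaker.
# IDS is simulated with larger variance than ADS.
ads_frames = rng.normal(loc=0.0, scale=1.0, size=(500, 13))
ids_frames = rng.normal(loc=0.0, scale=1.5, size=(500, 13))

def mean_cluster_spread(frames, k=4):
    """Cluster frames with k-means and return the mean distance of
    each frame to its assigned centroid (a simple cluster-size proxy)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)
    dists = np.linalg.norm(frames - km.cluster_centers_[km.labels_], axis=1)
    return float(dists.mean())

ads_spread = mean_cluster_spread(ads_frames)
ids_spread = mean_cluster_spread(ids_frames)
print(f"ADS spread: {ads_spread:.2f}, IDS spread: {ids_spread:.2f}")

# A GMM speaker model trained on one register and scored on held-out
# frames from the same simulated speaker (mean log-likelihood).
gmm_ids = GaussianMixture(n_components=4, random_state=0).fit(ids_frames)
held_out = rng.normal(loc=0.0, scale=1.5, size=(100, 13))
print(f"Held-out mean log-likelihood: {gmm_ids.score(held_out):.2f}")
```

By construction the IDS frames show a larger mean cluster spread than the ADS frames, matching the qualitative result that IDS timbre clusters are larger than their ADS counterparts.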
Comment about the picture from the author? The study is about ‘voice recognition’ and the advantage that infant-directed speech offers in learning a voice. I am not sure someone would conclude this from looking at the pictures.
My work pictured by AI – Alexandra Bosshard

In the style of a coloring book. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – EduGame Team
In the style of fantasy and Sci-fi. ©With Midjourney – AI & NCCR Evolving Language.