My work pictured by AI – Kinkini Bhadra
"Think to speak: What if a computer could decode what you want to say?" - By Kinkini Bhadra
What is this work about? For people affected by neurological conditions like aphasia, who have intact thoughts but disrupted speech, a computer that decodes speech directly from neural signals and converts it into audible speech could be life-changing. Recent research has demonstrated that imagined speech can, to some extent, be decoded from brain signals. While much of this research has focused on developing machine learning tools, the human brain itself can also be trained to improve BCI control. Our study used a Brain-Computer Interface (BCI) to decode covertly spoken syllables and showed improved BCI control in 11 out of 15 healthy participants after just 5 days of training. This indicates the brain’s ability to adapt and learn a new skill like speech imagery, and it opens up new possibilities for speech prostheses and rehabilitation.
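As a generic illustration of the kind of decoding involved (not the study's actual decoder; the features, classifier, sampling rate and band limits below are placeholder choices), a minimal imagined-syllable classifier could use EEG band-power features and linear discriminant analysis, assuming numpy, scipy and scikit-learn:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

FS = 250  # sampling rate in Hz (assumed for this toy example)

def bandpower_features(epochs, bands=((4, 8), (8, 13), (13, 30)), fs=FS):
    """Per-channel average spectral power in each band.
    epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)"""
    f, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = [psd[..., (f >= lo) & (f < hi)].mean(axis=-1) for lo, hi in bands]
    return np.concatenate(feats, axis=-1)

# Synthetic data standing in for imagined-syllable EEG: 2 classes, 60 trials each.
rng = np.random.default_rng(0)
X_raw = rng.normal(0, 1, (120, 8, 2 * FS))   # 8 channels, 2-second trials
y = np.repeat([0, 1], 60)                    # syllable labels
# Give class 1 extra 10 Hz power on the first 4 channels so there is signal to find.
X_raw[y == 1, :4] += 0.5 * np.sin(2 * np.pi * 10 * np.arange(2 * FS) / FS)

X = bandpower_features(X_raw)
clf = LinearDiscriminantAnalysis()
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```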
The first word that came to mind when seeing the AI-generated picture? Communication.
My work pictured by AI – Paola Merlo

"Blackbird's language matrices (BLMs): a new task to investigate disentangled generalization in neural networks." - By Paola Merlo
What is this work about? Current successes of machine learning architectures are based on computationally expensive algorithms and prohibitively large amounts of data. We need to develop tasks and data that train networks to reach more complex and more compositional skills. In this paper, we illustrate Blackbird’s Language Matrices (BLMs), a novel grammatical task modelled on intelligence tests that are usually based on visual stimuli. The dataset is generatively constructed to support investigations of current models’ linguistic mastery and their ability to generalize it. We present the logic of the task, the method to automatically construct data on a large scale, and the architecture to learn from them. Through error analysis and several experiments on variations of the dataset, we demonstrate that this language task and the data that instantiate it provide a new and challenging testbed for understanding generalization and abstraction.
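To make the matrix format concrete, here is a toy sketch of what an intelligence-test-style language item could look like: a context of sentences that follow a systematic grammatical progression, and a set of candidates of which exactly one completes the pattern. The sentences and distractor types below are invented for illustration and are not drawn from the actual BLM datasets:

```python
# Invented, toy illustration of a matrix-style language item (not a real BLM
# instance): the context varies number agreement systematically, and exactly
# one candidate continues the progression.
toy_item = {
    "context": [
        "The cat sleeps.",
        "The cats sleep.",
        "The cat near the tree sleeps.",
        "The cats near the tree sleep.",
        "The cat near the trees sleeps.",
        "The cats near the trees sleep.",
        "The cat near the tree with the birds sleeps.",
    ],
    "candidates": [
        "The cats near the tree with the birds sleep.",   # completes the pattern
        "The cats near the tree with the birds sleeps.",  # distractor: agreement error
        "The cat near the trees with the bird sleeps.",   # distractor: wrong progression
    ],
    "answer": 0,
}

def solved(prediction: int, item: dict) -> bool:
    """A model solves the item if it picks the pattern-completing candidate."""
    return prediction == item["answer"]
```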
The first word that came to mind when seeing the AI-generated picture? Goofy.
My work pictured by AI – Piermatteo Morucci

In the style of computational art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alexandra Bosshard

In the style of a coloring book. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Adrian Bangerter

In the style of Aubrey Beardsley. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Sebastian Sauppe

"Neural signatures of syntactic variation in speech planning." - By Sebastian Sauppe
What is this work about? Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat the two uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees” alike (i.e., align them with each other). A minority keeps them formally distinct by adding special marking in one case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking and found that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly aspects of production flexibility and fluency, or sentence comprehension. This challenges current theories of how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.
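The theta synchronization and alpha desynchronization reported here are standard event-related spectral measures. As a rough sketch of the general technique (not the study's actual analysis pipeline; sampling rate and band limits are assumed), band power around an event can be compared against a pre-event baseline, here on synthetic data:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 500  # sampling rate in Hz (assumed for this toy example)

def band_power(signal, low, high, fs=FS):
    """Instantaneous power in a frequency band via band-pass + Hilbert envelope."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

def event_related_power_change(epoch, low, high, baseline_samples, fs=FS):
    """Percent power change relative to a pre-event baseline:
    positive values indicate synchronization, negative desynchronization."""
    power = band_power(epoch, low, high, fs)
    baseline = power[:baseline_samples].mean()
    return 100 * (power - baseline) / baseline

# Synthetic one-channel epoch: noise with a theta-range (6 Hz) burst after "event onset".
rng = np.random.default_rng(0)
t = np.arange(2 * FS) / FS                                   # 2-second epoch, event at 0.5 s
epoch = rng.normal(0, 1, t.size)
epoch[FS // 2:] += 2 * np.sin(2 * np.pi * 6 * t[FS // 2:])   # 6 Hz burst

theta = event_related_power_change(epoch, 4, 8, baseline_samples=FS // 2)
alpha = event_related_power_change(epoch, 8, 12, baseline_samples=FS // 2)
print(f"mean post-event theta change: {theta[FS // 2:].mean():+.1f}%")
print(f"mean post-event alpha change: {alpha[FS // 2:].mean():+.1f}%")
```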
The first word that came to mind when seeing the AI-generated picture? Seeing into the mind.
My work pictured by AI – Chantal Oderbolz

"Tracking the prosodic hierarchy in the brain." - By Chantal Oderbolz
What is this work about? The speech signal carries hierarchically organized acoustic and linguistic information. Recent research suggests that the brain uses brain waves, called cortical oscillations, to process this information. Oscillations in the theta frequency range (4–8 Hz) have been found to be especially important: theta oscillations process the acoustic energy in the speech signal associated with the timing of syllables. However, the speech signal also contains slower information that corresponds to stress and intonation patterns, which are part of the prosody – the rhythm and melody – of a language.
To better understand how the brain processes these different levels at the same time, we conducted an experiment with 30 participants who listened to German sentences with manipulated stress and intonation patterns. We found that the brain is able to simultaneously process the syllable, stress and intonation patterns of speech. However, changes in stress patterns disrupted the brain’s ability to track syllables with theta oscillations. Conversely, the brain was able to compensate for changes in intonation patterns by using linguistic knowledge. Additionally, we found that individuals varied in their ability to process the prosodic structure of the speech signal, with some participants better able to compensate for acoustic changes than others. Overall, our results support the idea that the brain uses a hierarchical organization of cortical oscillations to process the speech signal.
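"Tracking" in this context is commonly quantified as the correspondence between cortical oscillations and the slow amplitude envelope of speech. The sketch below is a generic illustration of that idea on synthetic signals (not the study's analysis; the sampling rate and rhythm rate are made up), assuming numpy and scipy:

```python
import numpy as np
from scipy.signal import coherence, hilbert

FS = 200  # sampling rate in Hz (assumed for this toy example)

def amplitude_envelope(audio):
    """Slow amplitude envelope of a signal via the Hilbert transform."""
    return np.abs(hilbert(audio))

rng = np.random.default_rng(1)
t = np.arange(30 * FS) / FS

# Synthetic "speech": a carrier modulated at a syllable-like 5 Hz rate.
syllable_rhythm = 0.5 * (1 + np.sin(2 * np.pi * 5 * t))
speech = syllable_rhythm * np.sin(2 * np.pi * 40 * t)

# Synthetic "EEG": partially follows the speech envelope, plus noise.
eeg = 0.6 * syllable_rhythm + rng.normal(0, 1, t.size)

# Coherence between speech envelope and "EEG", averaged over the theta band.
f, cxy = coherence(amplitude_envelope(speech), eeg, fs=FS, nperseg=4 * FS)
theta_band = (f >= 4) & (f <= 8)
print(f"mean theta-band (4-8 Hz) coherence: {cxy[theta_band].mean():.2f}")
```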
The first word that came to mind when seeing the AI-generated picture? Nostalgia.
My work pictured by AI – Jessie C. Adriaense
In the style of William Blake. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Théophane Piette
In the style of Henri Rousseau. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Fabio J. Fehr
In the style of comics and superheroes. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alejandra Hüsser
"Fetal precursors of vocal learning: articulatory responses to speech stimuli in utero." - By Alejandra Hüsser
What is this work about? Newborn infants’ first cries carry the pitch accent of the language that dominated their environment while they were in the womb. The fact that their very first communicative acts bear traces of their linguistic environment indicates that the developing brain encodes articulatory patterns already in utero. This precocious learning has been proposed as a significant precursor of linguistic development. We aim to investigate fetal brain responses to speech stimuli in the womb, to illuminate the prenatal developmental trajectory of the brain’s expressive language network. Women in the last trimester of pregnancy will undergo functional magnetic resonance imaging (fMRI) during which the fetus in utero is exposed to a variety of simple speech sounds.
The first word that came to mind when seeing the AI-generated picture? Universe.
My work pictured by AI – Aris Xanthos
In the style of Pop-art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Monica Lancheros
In the style of cubism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Elisa Pellegrino
In the style of Joan Miró. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Elisa Pellegrino
"Human vocal flexibility between accommodation and individualization: the effect of group size. " - By Elisa Pellegrino
What is this work about? Results revealed that vocal similarity between speakers increased with a larger group size which indicates a higher cooperative vocal behavior, with a negative impact on individual vocal recognizability. The results of this study inform about cross-species accommodative behavior, and human variability in cooperation in the lack of visual cues and have implications for voice processing and forensic voice comparison.
The first word that came to mind when seeing the AI-generated picture? Interaction.
My work pictured by AI – EduGame Team
In the style of fantasy and Sci-fi. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo
In the style of Surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Sebastian Sauppe

In the style of Hieronymus Bosch. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo
"Mothers reveal more of their vocal identity when talking to babies." - By Volker Dellwo
What is this work about? Voice timbre – the unique acoustic information in a voice by which its speaker can be recognized – is particularly critical in mother-infant interaction. Vocal timbre is necessary for infants to recognize their mothers as familiar both before and after birth, providing a basis for social bonding between infant and mother. The exact mechanisms underlying infant voice recognition are unknown. Here, we show – for the first time – that mothers’ vocalizations convey more detail of their vocal timbre through the adjustments to their voices known as infant-directed speech (IDS) or baby talk, resulting in utterances in which individual recognition is more robust. Using acoustic modelling (k-means clustering of Mel Frequency Cepstral Coefficients) of IDS in comparison with adult-directed speech (ADS), we found across a variety of languages from different cultures that voice timbre clusters in IDS are significantly larger than comparable clusters in ADS. This effect leads to a more detailed representation of timbre in IDS, with subsequent benefits for recognition. Critically, an automatic speaker identification Gaussian-mixture model based on Mel Frequency Cepstral Coefficients performed significantly better when trained on IDS than on ADS. We argue that IDS has evolved as part of a set of adaptive evolutionary strategies that promote indexical signalling by caregivers to their offspring, thereby supporting social bonding via voice and language acquisition.
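As a rough sketch of the kind of pipeline described above (not the authors' actual implementation; the file paths, cluster count and mixture size are placeholders), one could extract MFCCs with librosa, measure cluster spread with k-means, and train per-speaker Gaussian mixture models with scikit-learn:

```python
# Minimal MFCC + k-means + GMM speaker-ID sketch; parameters are illustrative only.
import numpy as np
import librosa
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000, n_mfcc=13):
    """Return an (n_frames, n_mfcc) matrix of MFCC vectors for one recording."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def mean_cluster_spread(features, k=8):
    """Average distance of MFCC frames to their k-means centroid:
    a simple proxy for how 'large' the timbre clusters are."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(features)
    return np.linalg.norm(features - km.cluster_centers_[km.labels_], axis=1).mean()

def train_speaker_models(train_files, n_components=16):
    """One diagonal-covariance GMM per speaker, fit on that speaker's MFCC frames.
    train_files: dict mapping speaker name -> list of audio file paths."""
    return {
        spk: GaussianMixture(n_components=n_components, covariance_type="diag",
                             random_state=0).fit(np.vstack([mfcc_frames(f) for f in files]))
        for spk, files in train_files.items()
    }

def identify(test_file, models):
    """Pick the speaker whose GMM gives the test MFCCs the highest likelihood."""
    feats = mfcc_frames(test_file)
    return max(models, key=lambda spk: models[spk].score(feats))

# Hypothetical usage with placeholder file names: compare IDS vs ADS cluster
# spread for one mother, then run identification on a held-out recording.
# print(mean_cluster_spread(mfcc_frames("mother1_ids.wav")))
# print(mean_cluster_spread(mfcc_frames("mother1_ads.wav")))
# models = train_speaker_models({"mother1": ["m1_a.wav"], "mother2": ["m2_a.wav"]})
# print(identify("unknown.wav", models))
```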
Comment about the picture from the author? The study is about ‘voice recognition’ and the advantage that infant-directed speech offers in learning a voice. I am not sure someone would conclude this from looking at the pictures.
My work pictured by AI – Paola Merlo

In the style of Pixar animations. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Nikhil Phaniraj
"Mathematical modelling of marmoset vocal learning suggests a dynamic template matching mechanism." - By Nikhil Phaniraj
What is this work about? Vocal learning plays an important role during speech development in human infants and is vital for language. However, the complex structure of language makes it enormously challenging to quantify and track vocal changes in humans. Consequently, animals with simpler vocal communication systems are powerful tools for understanding the mechanisms underlying vocal learning. While human infants show the most drastic vocal changes, many adult animals, including humans, continue to show vocal learning in the form of a much-understudied phenomenon called vocal accommodation. Vocal accommodation is often seen when people adapt their words, pronunciation and speech rate to those of their conversation partner. The same phenomenon is seen in common marmosets, a highly voluble Brazilian monkey species with a simpler communication system than humans. In this project, I developed a mathematical model that explains the basic principles and rules underlying marmoset vocal accommodation. The model provides crucial insights into the mechanisms underlying vocal learning in adult animals and how they might differ from vocal learning in infant animals and humans.
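The general idea behind template matching can be caricatured in a few lines: each animal holds an internal template of a call, compares what it hears against that template, and nudges the template (and hence its own calls) toward sufficiently similar input. The toy simulation below is an invented illustration of this general idea with made-up parameters; it is not the model published in the paper:

```python
import numpy as np

# Toy dynamic-template-matching illustration (invented parameters, not the
# published model): two agents each hold a 1-D acoustic "template" for a call.
rng = np.random.default_rng(42)

templates = np.array([7.0, 9.0])   # e.g., call pitch in arbitrary units
ALPHA = 0.05                       # learning rate: how far a template shifts per call
TOLERANCE = 3.0                    # only calls this close to the template cause updates

for step in range(200):
    speaker = step % 2             # agents alternate calls
    listener = 1 - speaker
    call = templates[speaker] + rng.normal(0, 0.3)   # noisy production around template
    # Template matching: the listener updates only if the call matches well enough.
    if abs(call - templates[listener]) < TOLERANCE:
        templates[listener] += ALPHA * (call - templates[listener])

print(f"final templates: {templates.round(2)}")  # the templates drift toward each other
```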
The first word that came to mind when seeing the AI-generated picture? Monkey-learning.
My work pictured by AI – Kinkini Bhadra
In the style of Pablo Picasso. ©With Midjourney – AI & Kinkini Bhadra.
My work pictured by AI – Huw Swanborough
In the style of Bauhaus. ©With Midjourney – AI & NCCR Evolving Language.