My work pictured by AI – Sarah Saneei
"Computations supporting language functions and dysfunctions in artificial and biological neural networks." - By Sarah Saneei
What is this work about? This research aims to use deep learning approaches to find the stimuli (inputs) that, when presented to the brain, produce a target pattern of brain signals – that is, the stimuli that best activate the neurons of interest. We will use fMRI and ECoG recordings to prepare the data for the model, and we plan to use text and audio as inputs.
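To make this concrete, here is a minimal sketch of one way such a search could look (an illustrative assumption on our part, not the project’s actual code): given a differentiable model that predicts brain responses from a stimulus representation, one can ascend the gradient of the predicted activation with respect to the input.

```python
import torch

# Hypothetical response model: maps a 128-d stimulus embedding to predicted
# brain activity (e.g., fMRI voxels or ECoG channels). A stand-in only;
# in practice it would be trained on real recordings.
response_model = torch.nn.Sequential(
    torch.nn.Linear(128, 256), torch.nn.ReLU(), torch.nn.Linear(256, 64)
)

# Start from a random stimulus embedding and optimize it so that the
# model's predicted activation is maximized (activation maximization).
stimulus = torch.randn(1, 128, requires_grad=True)
optimizer = torch.optim.Adam([stimulus], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    loss = -response_model(stimulus).mean()  # ascend on predicted activation
    loss.backward()
    optimizer.step()

print("predicted mean activation:", response_model(stimulus).mean().item())
```

In a real pipeline the optimized embedding would still have to be mapped back to an actual text or audio stimulus; the sketch only shows the optimization step.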
The first word that came to mind when seeing the AI-generated picture? /
My work pictured by AI – Fabio J. Fehr

In the style of comics and superheroes. ©With Midjourney – AI & NCCR Evolving Language.
"A variational auto-encoder for Transformers with Nonparametric Variational Information Bottleneck." - By Fabio J. Fehr
What is this work about? Transformer language models dominate today’s natural language processing. In our work, we introduce a new perspective on these models, which in turn provides new emerging capabilities!
The first word that came to mind when seeing the AI-generated picture? Superhero!
My work pictured by AI – Adrian Bangerter

In the style of Aubrey Beardsley. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Moritz M. Daum Group

In the style of Andy Warhol. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Kinkini Bhadra

In the style of Pablo Picasso. ©With Midjourney – AI & Kinkini Bhadra.
My work pictured by AI – Daniel Friedrichs

"Speaking Fast and Slow: Evidence for Anatomical Influence on Temporal Dynamics of Speech” - By Daniel Friedrichs.
What is this work about? We explored the connection between mandible length and the temporal dynamics of speech. We tested speakers with different mandible sizes and observed how their speech timing was affected. We found that mandible length can indeed influence the time it takes to open and close the mouth, which in turn can affect the length of syllables in speech.

This finding is particularly important for language evolution, as the human jaw has undergone significant changes throughout human history. For example, the jaw has decreased in size due to softer diets since the transition from hunter-gatherer to agricultural societies. If we think of the movements of the mandible as similar to those of a pendulum, it becomes apparent that the duration of an oscillation, or period, should depend on its length. This analogy suggests that humans in the distant past might have spoken more slowly due to slower mouth opening and closing movements, resulting in slower transmission of information.

If this were true, it could also have had an impact on the evolution of the human brain, as humans would have had to process linguistic information at lower frequencies (previous studies have shown, for example, that the brain tracks the speech signal at frequencies that correspond to the lengths of syllables). It seems possible that, over time, the human brain has adapted to changes in jaw anatomy, resulting in the speech and language patterns we observe today. Our research sheds light on the fascinating relationship between anatomy and speech, and on how changes in our physical makeup can influence the way we communicate.
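As a back-of-the-envelope illustration of the pendulum analogy (a deliberate simplification – the mandible is not a free pendulum, and the lengths below are hypothetical, not measured data), the small-angle period of a pendulum grows with the square root of its length, so a longer "jaw pendulum" swings more slowly:

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Hypothetical effective lengths in metres, chosen only for illustration.
for length in (0.09, 0.10, 0.11):
    print(f"L = {length:.2f} m -> T = {pendulum_period(length) * 1000:.0f} ms")
```

The absolute numbers are not meant to match real articulation rates; the point is only the qualitative scaling: period, and hence syllable duration, increases with length.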
The first word that came to mind when seeing the AI-generated picture? Adaptation.
My work pictured by AI – Abigail Licata

In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Nikhil Phaniraj
In a futuristic style. ©With Midjourney – AI & Nikhil Phaniraj.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Chantal Oderbolz
"Tracking the prosodic hierarchy in the brain." - By Chantal Oderbolz
What is this work about? The speech signal carries hierarchically organized acoustic and linguistic information. Recent research suggests that the brain uses brain waves, called cortical oscillations, to process this information. Oscillations in the theta frequency range (4-8 Hz) have been found to be especially important: theta oscillations process the acoustic energy in the speech signal associated with the timing of syllables. However, the speech signal also contains slower information that corresponds to stress and intonation patterns and is part of the prosody – the rhythm and melody – of a language.
To better understand how the brain processes these different levels at the same time, we conducted an experiment with 30 participants who listened to German sentences with manipulated stress and intonation patterns. We found that the brain is able to simultaneously process the syllable, stress and intonation patterns of speech. However, changes in stress patterns disrupted the brain’s ability to track syllables with theta oscillations. Conversely, the brain was able to compensate for changes in intonation patterns by using linguistic knowledge. Additionally, we found that individuals varied in their ability to process the prosodic structure of the speech signal, with some participants better able to compensate for acoustic changes than others. Overall, our results support the idea that the brain uses a hierarchical organization of cortical oscillations to process the speech signal.
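For the technically curious, here is a minimal sketch of what "tracking with theta oscillations" can look like computationally (synthetic data and an assumed sampling rate, not the study’s actual analysis pipeline): band-pass a neural signal in the theta range and measure its coherence with the speech envelope.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, coherence

fs = 250  # Hz, assumed sampling rate of the neural recording

def theta_bandpass(x, fs, low=4.0, high=8.0, order=4):
    """Zero-phase band-pass filter in the theta range (4-8 Hz)."""
    sos = butter(order, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Toy stand-ins for real data: a ~5 Hz "syllable rhythm" envelope and a
# noisy neural channel that partially follows it.
t = np.arange(0, 60, 1 / fs)
envelope = 1 + np.sin(2 * np.pi * 5 * t)
neural = np.sin(2 * np.pi * 5 * t + 0.5) + np.random.randn(t.size)

neural_theta = theta_bandpass(neural, fs)
f, Cxy = coherence(envelope, neural_theta, fs=fs, nperseg=4 * fs)
theta_band = (f >= 4) & (f <= 8)
print(f"mean speech-brain coherence in theta band: {Cxy[theta_band].mean():.2f}")
```

In this picture, a disrupted stress pattern would show up as lowered theta-band coherence, which is the kind of effect the experiment reports.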
The first word that came to mind when seeing the AI-generated picture? Nostalgia.
My work pictured by AI – Yaqing Su
In the style of cubism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alejandra Hüsser
"Fetal precursors of vocal learning: articulatory responses to speech stimuli in utero." - By Alejandra Hüsser
What is this work about? Newborn infants’ first cries carry the pitch accent of the language that dominated their environment while they were in the womb. The fact that their very first communicative acts bear traces of their linguistic environment indicates that the developing brain encodes articulatory patterns already in utero. This precocious learning has been proposed as a significant precursor of linguistic development. We aim to investigate fetal brain responses to speech stimuli in the womb, to illuminate the prenatal developmental trajectory of the brain’s expressive language network. Women in the last trimester of pregnancy will undergo functional magnetic resonance imaging (fMRI), during which the fetus in utero is exposed to a variety of simple speech sounds.
The first word that came to mind when seeing the AI-generated picture? Universe.
My work pictured by AI – Jamil Zaghir
In a futuristic style. ©With Midjourney – AI & NCCR Evolving Language.