My work pictured by AI – Sebastian Sauppe
"Neural signatures of syntactic variation in speech planning." - By Sebastian Sauppe
What is this work about? Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet, little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the two uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees.” A minority keeps these formally distinct by adding special marking in one case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking and found that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.
The first word that came to mind when seeing the AI-generated picture? Seeing into the mind.
My work pictured by AI – Diana Mazzarella

"Speaker trustworthiness: Shall confidence match evidence?" - By Diana Mazzarella
What is this work about? Speakers can convey information with varying degrees of confidence, and this typically impacts the extent to which their messages are accepted as true. Confident speakers are more likely to be believed than unconfident ones. Crucially, though, this benefit comes with additional risks. Confident speakers put their reputation at stake: if their message turns out to be false, they are more likely to suffer a reputational loss than unconfident speakers. In this paper, we investigate the extent to which perceived speaker trustworthiness is affected by evidence. Our experiments show that the reputation of confident speakers is not damaged when their false claims are supported by strong evidence, but it is damaged when their true claims are based on weak evidence.
The first word that came to mind when seeing the AI-generated picture? Trust me.
My work pictured by AI – Jamil Zaghir

"Human-Machine Interactions, a battle of language acquisition." - By Jamil Zaghir
What is this work about? Human-machine interactions have an impact on language acquisition for both actors. On the one hand, technologies are able to “learn” a language from text written by humans through Machine Learning, whether to perform a specific task or to chat with humans. On the other hand, humans tend to learn a pseudo-language to improve the efficiency of their interactions with the technology.
The first word that came to mind when seeing the AI-generated picture? Interactiveness.
My work pictured by AI – Aris Xanthos

In the style of Pop-art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Moritz M. Daum Group
In the style of Andy Warhol. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Elisa Pellegrino
In the style of Joan Miro. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Daniel Friedrichs
"Speaking Fast and Slow: Evidence for Anatomical Influence on Temporal Dynamics of Speech” - By Daniel Friedrichs.
What is this work about? We explored the connection between mandible length and the temporal dynamics of speech. Our study involved testing speakers with different mandible sizes and observing how their speech timing was affected. We found that mandible length can indeed influence the time it takes to open and close the mouth, which in turn can affect the length of syllables in speech. This finding is particularly important for language evolution, as the human jaw has undergone significant changes throughout human history. For example, the jaw has decreased in size due to softer diets since the transition from hunter-gatherer to agricultural societies. By considering the movements of the mandible as similar to those of a pendulum, it becomes apparent that the duration of an oscillation, or period, should depend on its length. This analogy suggests that humans in the distant past might have spoken more slowly due to slower mouth opening and closing movements, resulting in slower transmission of information. If this were true, it could also have had an impact on the evolution of the human brain, as humans would have had to process linguistic information at lower frequencies (for example, previous studies have shown that the brain tracks the speech signal at frequencies that correspond to the lengths of syllables). It seems possible that, over time, the human brain has adapted to changes in human jaw anatomy, resulting in the speech and language patterns we observe today. Our research sheds light on the fascinating relationship between anatomy and speech, and how changes in our physical makeup can influence the way we communicate.
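To make the pendulum analogy concrete, here is a back-of-the-envelope sketch using the small-angle period of a simple pendulum, T = 2π√(L/g). The jaw lengths below are illustrative placeholders, not measurements from the study, and a real mandible is a muscle-driven oscillator rather than a freely swinging pendulum, so this only shows the direction of the effect the analogy predicts.

```python
import math

def pendulum_period(length_m: float, g: float = 9.81) -> float:
    """Small-angle period (in seconds) of a simple pendulum of the given length."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Illustrative effective lengths (not data from the study):
# a longer "ancestral" jaw vs. a shorter "modern" one.
for label, length in [("longer jaw", 0.11), ("shorter jaw", 0.09)]:
    print(f"{label}: period ≈ {pendulum_period(length):.3f} s")
```

Because the period scales with the square root of length, an effective length about 20% shorter gives open-close cycles roughly 10% shorter, i.e. slightly faster syllable-sized movements, which is the qualitative point the analogy makes about smaller modern jaws.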
The first word that came to mind when seeing the AI-generated picture? Adaptation.
My work pictured by AI – Nikhil Phaniraj
In a futuristic style. ©With Midjourney – AI & Nikhil Phaniraj.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Yaqing Su
In the style of cubism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Nikhil Phaniraj
"Mathematical modelling of marmoset vocal learning suggests a dynamic template matching mechanism." - By Nikhil Phaniraj
What is this work about? Vocal learning plays an important role during speech development in human infants and is vital for language. However, the complex structure of language creates a colossal challenge in quantifying and tracking vocal changes in humans. Consequently, animals with simpler vocal communication systems are powerful tools for understanding the mechanisms underlying vocal learning. While human infants show the most drastic vocal changes, many adult animals, including humans, continue to show vocal learning in the form of a much-understudied phenomenon called vocal accommodation. Vocal accommodation is often seen when people match their words, pronunciation, and speech rate to those of their conversation partner. Such a phenomenon is also seen in common marmosets, a highly voluble Brazilian monkey species with a simpler communication system than humans. In this project, I developed a mathematical model that explains the basic principles and rules underlying marmoset vocal accommodation. The model provides crucial insights into the mechanisms underlying vocal learning in adult animals and how they might differ from vocal learning in infant animals and humans.
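The study's actual model is mathematical and fitted to marmoset call data; the snippet below is only a toy illustration of the general idea of vocal accommodation, in which two callers gradually nudge their call "templates" toward each other. The feature values, learning rates, and update rule are assumptions made for illustration, not the dynamic template matching mechanism proposed in the paper.

```python
import numpy as np

def accommodate(template_a, template_b, rate_a=0.1, rate_b=0.1, steps=50):
    """Toy accommodation dynamic: each caller nudges its template toward the partner's.

    template_a, template_b: 1-D arrays of call features (e.g. pitch in kHz, duration in s).
    rate_a, rate_b: how strongly each caller accommodates per exchange (0 = no change).
    Returns the list of (template_a, template_b) pairs over the exchanges.
    """
    a, b = np.asarray(template_a, float), np.asarray(template_b, float)
    history = [(a.copy(), b.copy())]
    for _ in range(steps):
        # Simultaneous update: both sides move toward the other's current template.
        a, b = a + rate_a * (b - a), b + rate_b * (a - b)
        history.append((a.copy(), b.copy()))
    return history

# Two callers starting with different call features converge over repeated exchanges.
trajectory = accommodate([7.0, 0.30], [6.0, 0.45])
print("start:", trajectory[0])
print("end:  ", trajectory[-1])
```

Running this shows both templates converging to an intermediate value, which is the qualitative signature of accommodation; in the study, the underlying rules are inferred from recorded marmoset vocalizations rather than stipulated as here.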
The first word that came to mind when seeing the AI-generated picture? Monkey-learning.
My work pictured by AI – Théophane Piette
In the style of Henri Rousseau. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Kinkini Bhadra
In the style of Pablo Picasso. ©With Midjourney – AI & Kinkini Bhadra.