My work pictured by AI – Sebastian Sauppe
"Neural signatures of syntactic variation in speech planning." - By Sebastian Sauppe
What is this work about? Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet, little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the two uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees.” A minority keeps these formally distinct by adding special marking in one case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking; the results suggest that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.
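The theta and alpha effects refer to changes in band-limited EEG power. As a minimal, hypothetical sketch (not the study's analysis pipeline), the snippet below estimates theta-band (roughly 4–7 Hz) and alpha-band (roughly 8–12 Hz) power for a single simulated EEG channel with Welch's method; the sampling rate, the simulated signal, and the band limits are illustrative assumptions.

```python
# Illustrative sketch only: band-power estimation for one simulated EEG channel.
import numpy as np
from scipy.signal import welch

fs = 250                       # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)   # 10 s of simulated data
# Simulated signal: a 6 Hz (theta) and a 10 Hz (alpha) component plus noise.
eeg = (np.sin(2 * np.pi * 6 * t)
       + 0.5 * np.sin(2 * np.pi * 10 * t)
       + 0.3 * np.random.randn(t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # power spectral density

def band_power(freqs, psd, lo, hi):
    """Sum the PSD over a frequency band, scaled by the frequency resolution."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta = band_power(freqs, psd, 4, 7)    # higher power ~ 'synchronization'
alpha = band_power(freqs, psd, 8, 12)   # lower power ~ 'desynchronization'
print(f"theta power: {theta:.3f}  alpha power: {alpha:.3f}")
```

In the study's terms, 'larger synchronization' in a band corresponds to higher power in that band relative to a baseline, and 'desynchronization' to lower power.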
The first word that came to mind when seeing the AI-generated picture? Seeing into the mind.
My work pictured by AI – Alejandra Hüsser

"Fetal precursors of vocal learning: articulatory responses to speech stimuli in utero." - By Alejandra Hüsser
What is this work about? Newborn infants’ first cries carry the pitch accent of the language that dominated their environment while they were in the womb. The fact that their very first communicative acts bear traces of their linguistic environment indicates that the developing brain encodes articulatory patterns already in utero. This precocious learning has been proposed as a significant precursor for linguistic development. We aim to investigate fetal brain responses to speech stimuli in the womb, to illuminate the prenatal developmental trajectory of the brain’s expressive language network. Women in the last trimester of pregnancy will undergo functional magnetic resonance imaging (fMRI), during which the fetus in utero is exposed to a variety of simple speech sounds.
The first word that came to mind when seeing the AI-generated picture? Universe.
My work pictured by AI – Chantal Oderbolz

In the style of William Eggleston. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo

In the style of Surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Jessie C. Adriaense

In the style of William Blake. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Aris Xanthos

"Learning Phonological Categories." - By Aris Xanthos (co-authored with John Goldsmith)
What is this work about? The paper explains how computers can be taught to recognize speech sounds in any language. In human language, there are sound categories that distinguish meaning, and these categories are called phonemes. The paper shows how a computer can learn to recognize these phonemes from raw speech data, without being told explicitly what the phonemes are. We use several mathematical techniques belonging to a family of methods called “unsupervised learning” to analyze the speech data and group similar sounds together. The resulting groups correspond to phonemes, which are the basic building blocks of language. This research helps us better understand how aspects of natural languages can be learnt by machines or by humans.
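The paper's specific algorithms are not detailed here, so purely as a hypothetical illustration of the "group similar sounds together" idea, the sketch below clusters synthetic two-dimensional acoustic feature vectors with k-means from scikit-learn; the feature values, the number of categories, and the vowel labels in the comments are assumptions, not the authors' data or method.

```python
# Illustrative sketch only: grouping sound tokens by acoustic similarity with
# k-means, one unsupervised learning method (not the method used in the paper).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic 2-D "acoustic features" (formant-like values, in Hz) for tokens
# drawn from three hypothetical sound categories.
category_means = np.array([[300.0, 2300.0],   # an /i/-like vowel (assumed)
                           [700.0, 1200.0],   # an /a/-like vowel (assumed)
                           [400.0,  800.0]])  # an /u/-like vowel (assumed)
tokens = np.vstack([m + rng.normal(scale=50.0, size=(100, 2))
                    for m in category_means])

# The learner is given no category labels; it only groups tokens whose
# feature values are similar to one another.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(tokens)
print(kmeans.labels_[:10])        # cluster index assigned to the first tokens
print(kmeans.cluster_centers_)    # learned category centres in feature space
```

Real distributional learning of phonological categories operates over much richer representations, but the core idea is the same: the learner receives no labels and discovers the categories from the structure of the data alone.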
The first word that came to mind when seeing the AI-generated picture? Language and computer.
My work pictured by AI – Yaqing Su

In the style of cubism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Nikhil Phaniraj
In a futuristic style. ©With Midjourney – AI & Nikhil Phaniraj.
My work pictured by AI – Adrian Bangerter
"Every product needs a process: unpacking joint commitment as a process across species." - By Adrian Bangerter
What is this work about? Joint commitment, which arises from a gradual process of signal exchange and varies in strength, is more complex than simple promising, as it involves prior joint actions, coordination problems, and specific commitments that persist over time. This perspective offers new opportunities for studying joint commitment across different species.
The first word that came to mind when seeing the AI-generated picture? Self-abasement.
My work pictured by AI – Piermatteo Morucci
In the style of computational art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Diana Mazzarella
In the style of René Magritte. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Paola Merlo
In the style of Pixar animations. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Diana Mazzarella
"Speaker trustworthiness: Shall confidence match evidence?" - By Diana Mazzarella
What is this work about? Speakers can convey information with varying degrees of confidence, and this typically impacts the extent to which their messages are accepted as true. Confident speakers are more likely to be believed than unconfident ones. Crucially, though, this benefit comes with additional risks. Confident speakers put their reputation at stake: if their message turns out to be false, they are more likely to suffer a reputational loss than unconfident speakers. In this paper, we investigate the extent to which perceived speaker trustworthiness is affected by evidence. Our experiments show that the reputation of confident speakers is not damaged when their false claims are supported by strong evidence, but it is damaged when their true claims are based on weak evidence.
The first word that came to mind when seeing the AI-generated picture? Trust me.