My work pictured by AI – Sebastian Sauppe
"Neural signatures of syntactic variation in speech planning." - By Sebastian Sauppe
What is this work about? Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet little is known about how neural processes respond to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the two uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees.” A minority keeps these formally distinct by adding special marking in one case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking to show that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and draws more visual attention to agents than planning nonaligned sentences does, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting greater engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.
The first word that came to mind when seeing the AI-generated picture? Seeing into the mind.
Explore more illustrations!
My work pictured by AI – Moritz M. Daum Group
"Differences between monolingual and bilingual children's communicative behaviour." - By Moritz M. Daum group
What is this work about? This paper talks about a new way of thinking about how children learn to communicate. The idea is that when kids have different kinds of experiences talking with others, it affects how they communicate in the future. If they have lots of experiences where talking doesn’t work well, they will learn to use more ways to communicate and be more flexible when they talk. The authors use bilingual children as an example to explain this idea. They talk about how growing up with two languages affects how kids learn to communicate. Children who speak only one language and those who speak two or more languages communicate differently. Children who speak two languages are better at understanding what their communication partner is trying to say. They also adapt more easily to what the other person needs and use gestures to explain things more often. They are better at fixing misunderstandings and responding in a way that makes sense. The general idea is, however, not limited to bilingual communication but can also be applied to other challenging communicative situations.
The first word that came to mind when seeing the AI-generated picture? Confused.
My work pictured by AI – Alexandra Bosshard
In the style of a coloring book. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Paola Merlo
In the style of Pixar animations. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo
In the style of Surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alejandra Hüsser
"Fetal precursors of vocal learning: articulatory responses to speech stimuli in utero." - By Alejandra Hüsser
What is this work about? Newborn infants’ first cries carry the pitch accent of the language that dominated their environment while they were in the womb. The fact that their very first communicative acts bear traces of their linguistic environment indicates that the developing brain encodes articulatory patterns already in utero. This precocious learning has been proposed as a significant precursor for linguistic development. We aim to investigate fetal brain responses to speech stimuli in the womb, to illuminate the prenatal developmental trajectory of the brain’s expressive language network. Women in the last trimester of pregnancy will undergo functional magnetic resonance imaging (fMRI) during which the fetus in utero is exposed to a variety of simple speech sounds.
The first word that came to mind when seeing the AI-generated picture? Universe.
My work pictured by AI – Sebastian Sauppe
In the style of Hieronymus Bosch. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Fabio J. Fehr
In the style of comics and superheroes. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Nikhil Phaniraj
In a futuristic style. ©With Midjourney – AI & Nikhil Phaniraj.
My work pictured by AI – Volker Dellwo
"Mothers reveal more of their vocal identity when talking to babies." - By Volker Dellwo
What is this work about? Voice timbre – the unique acoustic information in a voice by which its speaker can be recognized – is particularly critical in mother–infant interaction. Vocal timbre is necessary for infants to recognize their mothers as familiar both before and after birth, providing a basis for social bonding between infant and mother. The exact mechanisms underlying infant voice recognition are unknown. Here, we show – for the first time – that mothers’ vocalizations contain more detail of their vocal timbre through adjustments to their voices known as infant-directed speech (IDS) or baby talk, resulting in utterances in which individual recognition is more robust. Using acoustic modelling (k-means clustering of Mel Frequency Cepstral Coefficients) of IDS in comparison with adult-directed speech (ADS), we found across a variety of languages from different cultures that voice timbre clusters in IDS are significantly larger than comparable clusters in ADS. This effect leads to a more detailed representation of timbre in IDS with subsequent benefits for recognition. Critically, an automatic speaker identification Gaussian mixture model based on Mel Frequency Cepstral Coefficients showed significantly better performance when trained with IDS as opposed to ADS. We argue that IDS has evolved as part of a set of adaptive evolutionary strategies that serve to promote indexical signalling by caregivers to their offspring, thereby promoting social bonding via voice and supporting language acquisition.
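The analysis pipeline described above – MFCC features, k-means clustering to gauge the size of timbre clusters, and a Gaussian mixture speaker-identification model – can be sketched in outline. This is a minimal illustrative sketch using scikit-learn, not the study’s actual code: synthetic random features stand in for extracted MFCC frames, and all sizes, spreads, and model settings here are hypothetical.

```python
# Illustrative sketch only: synthetic "MFCC-like" frames stand in for real
# audio features; the larger spread for IDS mirrors the paper's premise.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
N_COEFF = 13    # typical MFCC dimensionality
N_FRAMES = 500  # frames per speaker per register (hypothetical)

def synth_features(center, spread):
    """Stand-in for the MFCC frames of one speaker in one register."""
    return center + spread * rng.standard_normal((N_FRAMES, N_COEFF))

# Two hypothetical speakers with well-separated timbre "centers".
centers = [rng.standard_normal(N_COEFF) * 5 for _ in range(2)]
ads = {s: synth_features(c, spread=1.0) for s, c in enumerate(centers)}
ids_ = {s: synth_features(c, spread=2.0) for s, c in enumerate(centers)}

def mean_cluster_radius(frames, k=4):
    """K-means cluster size: mean distance of frames to their centroid."""
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(frames)
    dists = np.linalg.norm(frames - km.cluster_centers_[km.labels_], axis=1)
    return dists.mean()

r_ads = np.mean([mean_cluster_radius(f) for f in ads.values()])
r_ids = np.mean([mean_cluster_radius(f) for f in ids_.values()])
print(f"mean cluster radius  ADS={r_ads:.2f}  IDS={r_ids:.2f}")

# Speaker identification: one GMM per speaker, choose the best-scoring model.
gmms = {s: GaussianMixture(n_components=4, random_state=0).fit(f)
        for s, f in ids_.items()}

def identify(gmms, frames):
    return max(gmms, key=lambda s: gmms[s].score(frames))

test = {s: synth_features(c, spread=2.0) for s, c in enumerate(centers)}
acc = np.mean([identify(gmms, f) == s for s, f in test.items()])
print(f"speaker-ID accuracy on synthetic test data: {acc:.2f}")
```

With the wider IDS spread, the k-means clusters come out larger for IDS than for ADS by construction; the point of the sketch is only to show where cluster size and GMM scoring enter the pipeline, not to reproduce the study’s result.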
Comment about the picture from the author? The study is about ‘voice recognition’ and the advantage that infant-directed speech offers in learning a voice. I am not sure someone would conclude this from looking at the pictures.
My work pictured by AI – Abigail Licata
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Aris Xanthos
In the style of Pop-art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Abigail Licata
"The impact of semantic similarity on neurocognitive mechanisms underlying conceptual representation in healthy bilinguals." - By Abigail Licata
What is this work about? The neural underpinnings of semantic representations involve a distributed network of cortical regions that integrate multimodal information relating to concepts. These semantic representations are formed dynamically through novel experience and information, including linguistic information. Most models of semantic knowledge and its structure in the brain have been based on Western monolingual populations and fail to capture the rich and diverse multilingual experience that is the reality for the majority of the global population. In multilingual speakers, a given concept is represented by multiple labels, with each label comprising its own phonological and lexico-semantic connections between and within languages. Moreover, evidence from linguistics and cognitive science suggests behavioral and physiological cross-linguistic differences in several conceptual domains and their respective boundaries, including colors, household containers, motion events, and odors. Therefore, in the multilingual speaker, increased inter-language connections at the phonological, lexico-semantic, and conceptual levels may interact with language-specific properties inherent to word meaning and subsequent categorization (i.e., lexico-semantic features), altering the relevance of certain properties of the concept itself and its relational association to other concepts. Whether this alteration leads to differences in the quantity and quality of semantic representations and their associations in multilinguals of typologically distinct versus typologically similar languages is unclear and forms the central question of this thesis; implications of these findings may extend to patients with semantic dementia, a language-related neurodegenerative disease that destroys conceptual knowledge over time.
The first word that came to mind when seeing the AI-generated picture? Eclecticism.
My work pictured by AI – Adrian Bangerter
In the style of Aubrey Beardsley. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Jamil Zaghir
In a futuristic style. ©With Midjourney – AI & NCCR Evolving Language.
