My work pictured by AI – Sebastian Sauppe
"Neural signatures of syntactic variation in speech planning." - By Sebastian Sauppe
What is this work about? Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet, little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the two uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees.” A minority keeps these formally distinct by adding special marking in one case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking to show that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.
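For readers wondering what “theta” and “alpha” refer to here: these are conventional EEG frequency bands, and (de)synchronization effects are built on measures of power in those bands. Below is a minimal Python sketch of band-power computation over a synthetic single-channel signal; the sampling rate, band edges, and signal are assumptions for illustration, not the study’s analysis pipeline.

```python
# Illustrative sketch: estimate power in the theta and alpha bands,
# the raw ingredient behind EEG (de)synchronization measures.
import numpy as np
from scipy.signal import welch

fs = 500                               # assumed sampling rate in Hz
rng = np.random.default_rng(0)
eeg = rng.standard_normal(10 * fs)     # stand-in for one EEG channel (10 s)

# Power spectral density via Welch's method (2-second windows).
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo, hi):
    """Approximate power in [lo, hi] Hz by summing the PSD over that band."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta = band_power(4, 7)    # theta band, ~4-7 Hz
alpha = band_power(8, 12)   # alpha band, ~8-12 Hz
print(f"theta power: {theta:.4f}, alpha power: {alpha:.4f}")
```

In an event-related design like the one described above, such band-power estimates would be compared against a pre-stimulus baseline to quantify synchronization or desynchronization.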
The first word that came to mind when seeing the AI-generated picture? Seeing into the mind.
My work pictured by AI – Diana Mazzarella
"Speaker trustworthiness: Shall confidence match evidence?" - By Diana Mazzarella
What is this work about? Speakers can convey information with varying degrees of confidence, and this typically impacts the extent to which their messages are accepted as true. Confident speakers are more likely to be believed than unconfident ones. Crucially, though, this benefit comes with additional risks. Confident speakers put their reputation at stake: if their message turns out to be false, they are more likely to suffer a reputational loss than unconfident speakers. In this paper, we investigate the extent to which perceived speaker trustworthiness is affected by evidence. Our experiments show that the reputation of confident speakers is not damaged when their false claims are supported by strong evidence, but it is damaged when their true claims are based on weak evidence.
The first word that came to mind when seeing the AI-generated picture? Trust me.
My work pictured by AI – Yaqing Su
"A deep hierarchy of predictions enables on-line meaning extraction in a computational model of human speech comprehension." - By Yaqing Su
What is this work about? Real-time speech comprehension poses great challenges for both the brain and language models. We show that hierarchically organized predictions integrating nonlinguistic and linguistic knowledge provide a more comprehensive account of behavioral and neurophysiological responses to speech than next-word predictions as generated by GPT-2.
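To make the baseline concrete: “next-word predictions as generated by GPT-2” are typically quantified as the surprisal a language model assigns to each upcoming word. Here is a minimal sketch using the Hugging Face transformers library; the example sentence is an assumption for illustration, and this is not the authors’ model or evaluation code.

```python
# Illustrative sketch: next-word surprisal from GPT-2.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

context = "The coffee was too hot to"          # hypothetical example
inputs = tokenizer(context, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits            # (1, seq_len, vocab_size)

# Probability distribution over the next token given the context.
probs = torch.softmax(logits[0, -1], dim=-1)
token_id = tokenizer.encode(" drink")[0]
surprisal = -torch.log2(probs[token_id]).item()
print(f"surprisal of ' drink': {surprisal:.2f} bits")
```

Such word-by-word surprisal values are what flat next-word-prediction accounts correlate with brain responses; the paper argues that predictions organized in a deeper hierarchy of linguistic and nonlinguistic knowledge explain those responses better.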
The first word that came to mind when seeing the AI-generated picture? Embedded.
My work pictured by AI – Fabio J. Fehr
In the style of comics and superheroes. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – EduGame Team
In the style of fantasy and sci-fi. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Moritz M. Daum Group
In the style of Andy Warhol. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Paola Merlo
"Blackbird's language matrices (BLMs): a new task to investigate disentangled generalization in neural networks." - By Paola Merlo
What is this work about? Current successes of machine learning architectures are based on computationally expensive algorithms and prohibitively large amounts of data. We need to develop tasks and data to train networks to reach more complex and more compositional skills. In this paper, we illustrate Blackbird’s language matrices (BLMs), a novel grammatical task modelled on intelligence tests usually based on visual stimuli. The dataset is generatively constructed to support investigations of current models’ linguistic mastery and their ability to generalize. We present the logic of the task, the method to automatically construct data on a large scale, and the architecture to learn them. Through error analysis and several experiments on variations of the dataset, we demonstrate that this language task and the data that instantiate it provide a new and challenging testbed for understanding generalization and abstraction.
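To give a flavor of the matrix idea, the toy sketch below generates a miniature matrix-style item from sentence templates: a context sequence varies systematically along grammatical dimensions, and the task is to supply the sentence that correctly completes the pattern. The templates, features, and pattern here are simplified assumptions for illustration, not the published BLM dataset.

```python
# Toy sketch: generatively construct a matrix-style grammatical item.
from itertools import product

SUBJECTS = {"sg": "The cat", "pl": "The cats"}
ATTRACTORS = {"sg": "near the dog", "pl": "near the dogs"}
VERBS = {"sg": "sleeps", "pl": "sleep"}

def sentence(subj_num, attr_num):
    """Build one sentence; the verb agrees with the subject, not the attractor."""
    return f"{SUBJECTS[subj_num]} {ATTRACTORS[attr_num]} {VERBS[subj_num]}."

# Walk through the grid of feature combinations so that the final cell
# is predictable from the regularity of the sequence.
grid = list(product(["sg", "pl"], repeat=2))
context = [sentence(s, a) for s, a in grid[:-1]]
answer = sentence(*grid[-1])

print("Context:")
for line in context:
    print(" ", line)
print("Correct continuation:", answer)
```

Because items are built from templates over explicit feature grids, such data can be produced automatically at scale, which is the property the abstract highlights.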
The first word that came to mind when seeing the AI-generated picture? Goofy.
My work pictured by AI – Abigail Licata
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Elisa Pellegrino
In the style of Joan Miró. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Piermatteo Morucci
In the style of computational art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Monica Lancheros
"Relationship between the production of speech and of orofacial movements." - By Monica Lancheros
What is this work about? This study investigated the relationship between speech and non-speech gestures (or orofacial movements) in order to determine whether motor activities that use the same orofacial effectors recruit similar neural networks. Results suggest that the production of speech and of non-speech gestures activates the same brain circuits; however, those circuits follow different patterns of activation for speech and for non-speech gestures. These findings suggest that speech has underlying neural architectures that are specialized for its production and that differentiate it from other oromotor-related movements.
The first word that came to mind when seeing the AI-generated picture? Brain circuits.
My work pictured by AI – Adrian Bangerter
In the style of Aubrey Beardsley. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Paola Merlo
In the style of Pixar animations. ©With Midjourney – AI & NCCR Evolving Language.