My work pictured by AI – Kinkini Bhadra
"Think to speak: What if a computer could decode what you want to say?" - By Kinkini Bhadra
What is this work about? For people affected by neurological conditions like aphasia, who have intact thoughts but disrupted speech, a computer that decodes speech directly from neural signals and converts it into audible speech could be life-changing. Recent research has demonstrated that imagined speech can be decoded from brain signals, but most of this work has focused on developing better machine learning tools; the human brain itself can also be trained to improve control of a brain-computer interface (BCI). Our study used a BCI to decode covertly spoken syllables and showed improved BCI control performance after just 5 days of training in 11 out of 15 healthy participants. This indicates the brain’s ability to adapt and learn a new skill such as speech imagery, and it opens up new possibilities for speech prosthesis and rehabilitation.
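To make the idea of "decoding" more concrete, here is a minimal sketch of the kind of classification step a speech-imagery BCI performs, using synthetic data and an off-the-shelf classifier; it is purely illustrative and not the pipeline used in the study.

```python
# Minimal sketch of what "decoding covertly spoken syllables" can mean in practice:
# classify short EEG epochs into syllable classes from simple features.
# All data here is synthetic; the feature choice and classifier are illustrative only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features = 120, 32                 # e.g. band-power features from 32 channels
X = rng.normal(size=(n_trials, n_features))    # placeholder EEG features
y = rng.integers(0, 2, size=n_trials)          # two imagined syllables (hypothetical classes)

# Inject a weak class-dependent shift so the toy decoder has something to find
X[y == 1, :5] += 0.8

decoder = LinearDiscriminantAnalysis()
accuracy = cross_val_score(decoder, X, y, cv=5).mean()
print(f"Cross-validated decoding accuracy: {accuracy:.2f}")   # chance level is 0.50
```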
The first word that came to mind when seeing the AI-generated picture? Communication.
My work pictured by AI – Moritz M. Daum Group
"Differences between monolingual and bilingual children's communicative behaviour." - By Moritz M. Daum group
What is this work about? This paper talks about a new way of thinking about how children learn to communicate. The idea is that when kids have different kinds of experiences talking with others, it affects how they communicate in the future. If they have lots of experiences where talking doesn’t work well, they will learn to use more ways to communicate and be more flexible when they talk. The authors use bilingual children as an example to explain this idea. They talk about how growing up with two languages affects how kids learn to communicate. Children who speak only one language and those who speak two or more languages communicate differently. Children who speak two languages are better at understanding what their communication partner is trying to say. They also adapt more easily to what the other person needs and use gestures to explain things more often. They are better at fixing misunderstandings and responding in a way that makes sense. The general idea is, however, not limited to bilingual communication but can also be applied to other challenging communicative situations.
The first word that came to mind when seeing the AI-generated picture? Confused.
My work pictured by AI – Daniel Friedrichs
In the style of Edward Hopper. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo
In the style of Surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alejandra Hüsser
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Sebastian Sauppe
"Neural signatures of syntactic variation in speech planning." - By Sebastian Sauppe
What is this work about? Planning to speak is a challenge for the brain, and the challenge varies between and within languages. Yet, little is known about how neural processes react to these variable challenges beyond the planning of individual words. Here, we examine how fundamental differences in syntax shape the time course of sentence planning. Most languages treat alike (i.e., align with each other) the two uses of a word like “gardener” in “the gardener crouched” and in “the gardener planted trees.” A minority keeps these formally distinct by adding special marking in one case, and some languages display both aligned and nonaligned expressions. Exploiting such a contrast in Hindi, we used electroencephalography (EEG) and eye tracking and found that this difference is associated with distinct patterns of neural processing and gaze behavior during early planning stages, preceding phonological word form preparation. Planning sentences with aligned expressions induces larger synchronization in the theta frequency band, suggesting higher working memory engagement, and draws more visual attention to agents than planning nonaligned sentences, suggesting delayed commitment to the relational details of the event. Furthermore, plain, unmarked expressions are associated with larger desynchronization in the alpha band than expressions with special markers, suggesting more engagement in information processing to keep overlapping structures distinct during planning. Our findings contrast with the observation that the form of aligned expressions is simpler, and they suggest that the global preference for alignment is driven not by its neurophysiological effect on sentence planning but by other sources, possibly by aspects of production flexibility and fluency or by sentence comprehension. This challenges current theories on how production and comprehension may affect the evolution and distribution of syntactic variants in the world’s languages.
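As a rough illustration of what “synchronization in the theta band” and “desynchronization in the alpha band” refer to, the toy sketch below computes band-limited power from a synthetic EEG signal; the study’s actual time-frequency analysis is considerably more involved.

```python
# Toy illustration of band power, the quantity behind "theta synchronization" and
# "alpha desynchronization": average spectral power within a frequency band.
# The signal here is synthetic single-channel "EEG", not data from the study.
import numpy as np
from scipy.signal import welch

fs = 250                                    # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)                # 10 s of fake EEG
eeg = (1.5 * np.sin(2 * np.pi * 6 * t)      # strong 6 Hz (theta) component
       + 0.5 * np.sin(2 * np.pi * 10 * t)   # weaker 10 Hz (alpha) component
       + np.random.default_rng(0).normal(scale=0.5, size=t.size))

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def band_power(freqs, psd, lo, hi):
    """Mean power spectral density within [lo, hi] Hz."""
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

print("theta (4-7 Hz) power:", band_power(freqs, psd, 4, 7))
print("alpha (8-12 Hz) power:", band_power(freqs, psd, 8, 12))
```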
The first word that came to mind when seeing the AI-generated picture? Seeing into the mind.
My work pictured by AI – Richard Hahnloser
"Songbirds work around computational complexity by learning song vocabulary independently of sequence. " - By Richard Hahnloser
What is this work about? How does a young songbird learn its song? How does it compare the immature vocalizations it produces to the adult template syllables it hears and strives to imitate? It turns out that young birds have a very efficient way of learning their song vocabulary: they identify, for each target syllable they hear, the closest vocalization in their developing repertoire. Thus, songbirds are efficient vocabulary learners. The process they use to assign vocal errors to their vocalizations is computationally similar to the strategy used by taxi companies to dispatch their taxis to customers.
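The taxi-dispatch analogy corresponds to the classic assignment problem. The sketch below illustrates both the simple “pick the closest vocalization for each target” rule described above and the one-to-one, dispatch-style assignment, using made-up acoustic distances rather than real birdsong data.

```python
# The taxi-dispatch analogy is the classic assignment problem: pair each target
# syllable with a distinct developing vocalization so the total mismatch is minimal.
# Distances below are made up; real work would use an acoustic similarity measure.
import numpy as np
from scipy.optimize import linear_sum_assignment

# rows = adult target syllables, columns = juvenile vocalizations,
# entries = acoustic distance between target and vocalization (illustrative numbers)
distance = np.array([
    [0.2, 0.9, 0.7],
    [0.8, 0.1, 0.6],
    [0.5, 0.7, 0.3],
])

# "For each target, pick the closest vocalization" -- the simple rule described above
greedy = distance.argmin(axis=1)
print("greedy matches (target -> vocalization):", dict(enumerate(greedy)))

# The taxi-dispatch version: one-to-one assignment minimizing the total distance
rows, cols = linear_sum_assignment(distance)
print("optimal assignment:", dict(zip(rows, cols)),
      "total distance:", distance[rows, cols].sum())
```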
The first word that came to mind when seeing the AI-generated picture? Cubism.
My work pictured by AI – Théophane Piette
In the style of Henri Rousseau. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Jessie C. Adriaense
"Parental care as joint action in common marmosets: coordination during infant transfer." - By Jessie C. Adriaense
What is this work about? Joint actions require various coordination mechanisms in order to achieve a successful joint outcome. To understand how joint action evolved in humans, research requires a comparative approach, investigating whether other animals have similar motoric and mental coordination skills that would facilitate their joint actions. Common marmosets are cooperative breeders, just like humans, and thus their parental care system forms an ideal model to further investigate the different proximate mechanisms of joint action. This study focuses on infant transfers in marmosets, a highly important and risky joint action during which both parents are required to coordinate efficiently. How exactly marmosets achieve a successful transfer, and what the relevant traits are, is unknown. To this end, we analyzed motor coordination during transfers, including micro-analyses of coordination signals such as touch and mutual gaze between parents. All our data were collected in captive housing as the first stage of this project, and we are developing a protocol for research in the field to further understand how ecological conditions impact this behavior.
The first word that came to mind when seeing the AI-generated picture? Alliance.
My work pictured by AI – Chantal Oderbolz
In the style of William Eggleston. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Aris Xanthos
In the style of Pop-art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Aris Xanthos
"Learning Phonological Categories." - By Aris Xanthos (co-authored with John Goldsmith)
What is this work about? The paper explains how computers can be taught to recognize speech sounds in any language. In human language, there are sound units that distinguish meaning, and these units are called phonemes. The paper shows how a computer can learn to recognize these phonemes from raw speech data, without being told explicitly what the phonemes are. We use several mathematical techniques belonging to a family of methods called “unsupervised learning” to analyze the speech data and group similar sounds together. The resulting groups correspond to phonemes, which are the basic building blocks of language. This research helps us better understand how aspects of natural languages can be learnt by machines or by humans.
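As a toy illustration of the general idea (not the specific techniques used in the paper), the sketch below groups made-up acoustic feature vectors into phoneme-like clusters with a standard unsupervised algorithm.

```python
# Toy sketch of the general idea: group speech sounds by similarity, without labels.
# The features and the choice of k-means are illustrative; the paper uses other
# unsupervised techniques and works from real speech data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Pretend feature vectors (e.g. formant-like measurements) for three sound types
sounds = np.vstack([
    rng.normal(loc=[300, 2300], scale=40, size=(20, 2)),   # roughly /i/-like
    rng.normal(loc=[700, 1200], scale=40, size=(20, 2)),   # roughly /a/-like
    rng.normal(loc=[300, 800],  scale=40, size=(20, 2)),   # roughly /u/-like
])

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(sounds)
print("cluster sizes:", np.bincount(clusters))   # ideally three groups of 20
```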
The first word that came to mind when seeing the AI-generated picture? Language and computer.
My work pictured by AI – Moritz M. Daum Group
In the style of Andy Warhol. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Nikhil Phaniraj
In a futuristic style. ©With Midjourney – AI & Nikhil Phaniraj.
My work pictured by AI – Daniel Friedrichs
"Speaking Fast and Slow: Evidence for Anatomical Influence on Temporal Dynamics of Speech” - By Daniel Friedrichs.
What is this work about? We explored the connection between mandible length and the temporal dynamics of speech. Our study involved testing speakers with different mandible sizes and observing how their speech timing was affected. We found that mandible length can indeed influence the time it takes to open and close the mouth, which in turn can affect the length of syllables in speech. This finding is particularly important for language evolution, as the human jaw has undergone significant changes throughout human history. For example, the jaw has decreased in size due to softer diets since the transition from hunter-gatherer to agricultural societies. By considering the movements of the mandible as similar to those of a pendulum, it becomes apparent that the duration of an oscillation, or period, should depend largely on its length. This analogy suggests that humans in the distant past might have spoken more slowly due to slower mouth opening and closing movements, resulting in slower transmission of information. If this were true, it could also have had an impact on the evolution of the human brain, as humans would have had to process linguistic information at lower frequencies (for example, previous studies have shown that the brain tracks the speech signal at frequencies that correspond to the lengths of syllables). It seems possible that, over time, the human brain has adapted to changes in human jaw anatomy, resulting in the speech and language patterns we observe today. Our research sheds light on the fascinating relationship between anatomy and speech, and how changes in our physical makeup can influence the way we communicate.
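As a back-of-the-envelope illustration of the pendulum analogy, the sketch below computes the period of a simple pendulum, T = 2π√(L/g), for two hypothetical jaw lengths; the numbers are placeholders, not measurements from the study, and a real mandible is of course not a simple pendulum.

```python
# Back-of-the-envelope version of the pendulum analogy: T = 2*pi*sqrt(L/g),
# so a shorter pendulum (a shorter jaw) completes each open-close cycle faster.
# The lengths are illustrative placeholders, not measurements from the study.
import math

def pendulum_period(length_m, g=9.81):
    """Period of an idealized simple pendulum of the given length, in seconds."""
    return 2 * math.pi * math.sqrt(length_m / g)

for label, length in [("longer hypothetical jaw", 0.12), ("shorter hypothetical jaw", 0.10)]:
    T = pendulum_period(length)
    print(f"{label}: period = {T:.3f} s, cycles per second = {1 / T:.2f}")
```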
The first word that came to mind when seeing the AI-generated picture? Adaptation.
My work pictured by AI – Piermatteo Morucci
In the style of computational art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Paola Merlo
In the style of Pixar animations. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Abigail Licata
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.