My work pictured by AI – Sarah Saneei
"Computations supporting language functions and dysfunctions in artificial and biological neural networks." - By Sarah Saneei
What is this work about? This research aims to find the best stimuli (inputs) to present to the brain so as to evoke the desired brain-signal responses (the strongest activation of neurons), using deep learning approaches. We will use fMRI and ECoG recordings to prepare the data for the model, and as inputs we plan to use text and audio.
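To make the approach more concrete, here is a minimal sketch of the kind of stimulus-optimisation loop this involves, assuming a trained encoding model that maps a stimulus embedding (from text or audio) to predicted brain responses. Everything here (the encoder architecture, the dimensions, target_channel) is an illustrative placeholder rather than the project's actual pipeline; the encoder is randomly initialised purely to show the loop.

```python
import torch
import torch.nn as nn

# Hypothetical encoding model: maps a 128-dim stimulus embedding to the
# predicted responses of 64 recording channels. In the real project this
# would be trained on fMRI/ECoG data; here it is random, for illustration.
encoder = nn.Sequential(
    nn.Linear(128, 256),
    nn.ReLU(),
    nn.Linear(256, 64),
)
for p in encoder.parameters():
    p.requires_grad_(False)  # the trained model stays frozen

# Optimise the stimulus embedding itself so that the predicted response
# of one target channel is maximised (gradient ascent on the input).
stimulus = torch.randn(1, 128, requires_grad=True)
optimizer = torch.optim.Adam([stimulus], lr=0.05)
target_channel = 3

for step in range(200):
    optimizer.zero_grad()
    response = encoder(stimulus)
    loss = -response[0, target_channel]  # negate for gradient ascent
    loss.backward()
    optimizer.step()

print(f"predicted activation: {encoder(stimulus)[0, target_channel].item():.3f}")
```

In practice the optimised embedding would then have to be decoded back into a presentable text or audio stimulus; that step is omitted here.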
The first word that came to mind when seeing the AI-generated picture? /
My work pictured by AI – Nikhil Phaniraj
"Mathematical modelling of marmoset vocal learning suggests a dynamic template matching mechanism." - By Nikhil Phaniraj
What is this work about? Vocal learning plays an important role during speech development in human infants and is vital for language. However, the complex structure of language makes it enormously challenging to quantify and track vocal changes in humans. Consequently, animals with simpler vocal communication systems are powerful tools for understanding the mechanisms underlying vocal learning. While human infants show the most drastic vocal changes, many adult animals, including humans, continue to show vocal learning in the form of a much-understudied phenomenon called vocal accommodation. Vocal accommodation is often seen when people adapt their word choice, pronunciation and speech rate to those of their conversational partner. The same phenomenon is seen in common marmosets, a highly voluble Brazilian monkey species with a simpler communication system than ours. In this project, I developed a mathematical model that explains the basic principles and rules underlying marmoset vocal accommodation. The model provides crucial insights into the mechanisms underlying vocal learning in adult animals and how they might differ from vocal learning in infant animals and humans.
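As a rough intuition for what a dynamic template matching mechanism could look like (a deliberately simplified toy, not the published model), imagine each marmoset holding an internal call template that drifts a small step toward every call it hears from its partner:

```python
import numpy as np

# Toy illustration of a dynamic template-matching update: each animal
# keeps an internal call template and nudges it toward the partner's
# calls, so the two templates gradually converge (vocal accommodation).
rng = np.random.default_rng(0)

template_a = np.array([7.0, 0.30])   # e.g. [pitch (kHz), duration (s)]
template_b = np.array([8.0, 0.40])
alpha = 0.1                          # step size of the update

for exchange in range(50):
    # Each animal produces a noisy call around its current template ...
    call_a = template_a + rng.normal(0, 0.05, size=2)
    call_b = template_b + rng.normal(0, 0.05, size=2)
    # ... and shifts its template a small step toward the partner's call.
    template_a += alpha * (call_b - template_a)
    template_b += alpha * (call_a - template_b)

print("template A:", template_a)
print("template B:", template_b)
```

After enough exchanges the two templates end up close together, which is the qualitative signature of vocal accommodation.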
The first word that came to mind when seeing the AI-generated picture? Monkey-learning.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Aris Xanthos
In the style of Pop Art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo
In the style of Surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Diana Mazzarella
"Speaker trustworthiness: Shall confidence match evidence?" - By Diana Mazzarella
What is this work about? Speakers can convey information with varying degrees of confidence, and this typically impacts the extent to which their messages are accepted as true. Confident speakers are more likely to be believed than unconfident ones. Crucially, though, this benefit comes with additional risks. Confident speakers put their reputation at stake: if their message turns out to be false, they are more likely to suffer a reputational loss than unconfident speakers. In this paper, we investigate the extent to which perceived speaker trustworthiness is affected by evidence. Our experiments show that the reputation of confident speakers is not damaged when their false claims are supported by strong evidence, but it is damaged when their true claims are based on weak evidence.
The first word that came to mind when seeing the AI-generated picture? Trust me.
My work pictured by AI – Chantal Oderbolz
"Tracking the prosodic hierarchy in the brain." - By Chantal Oderbolz
What is this work about? The speech signal carries hierarchically organized acoustic and linguistic information. Recent research suggests that the brain uses brain waves, called cortical oscillations, to process this information. Oscillations in the theta frequency range (4–8 Hz) in particular have been found to be important: theta oscillations process the acoustic energy in the speech signal associated with the timing of syllables. However, there is also slower information in the speech signal that corresponds to stress and intonation patterns, which are part of the prosody – the rhythm and melody – of a language.
To better understand how the brain processes these different levels at the same time, we conducted an experiment with 30 participants who listened to German sentences with manipulated stress and intonation patterns. We found that the brain is able to simultaneously process the syllable, stress and intonation patterns of speech. However, changes in stress patterns disrupted the brain’s ability to track syllables with theta oscillations. Conversely, the brain was able to compensate for changes in intonation patterns by using linguistic knowledge. Additionally, we found that individuals varied in their ability to process the prosodic structure of the speech signal, with some participants better able to compensate for acoustic changes than others. Overall, our results support the idea that the brain uses a hierarchical organization of cortical oscillations to process the speech signal.
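For intuition, the sketch below shows the kind of signal processing such analyses rest on (a toy example, not the study's pipeline): a simulated speech envelope with a roughly 5 Hz syllable rhythm is band-pass filtered to the theta range and compared with a noisy simulated neural signal that tracks it.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Toy example: theta-band (4-8 Hz) tracking of a speech envelope.
fs = 250                                   # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
envelope = 1 + np.sin(2 * np.pi * 5 * t)   # ~5 Hz syllable rhythm
neural = envelope + 0.5 * np.random.default_rng(0).normal(size=t.size)

# Band-pass both signals to the theta range (4-8 Hz).
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
env_theta = filtfilt(b, a, envelope)
neural_theta = filtfilt(b, a, neural)

# A simple tracking measure: correlation between the theta-band signals.
r = np.corrcoef(env_theta, neural_theta)[0, 1]
print(f"theta-band tracking (correlation): {r:.2f}")
```

Real analyses would use measures such as cerebro-acoustic coherence across many trials and channels, but the core idea of isolating the theta band and relating it to the speech envelope is the same.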
The first word that came to mind when seeing the AI-generated picture? Nostalgia.
My work pictured by AI – Jessie C. Adriaense
In the style of William Blake. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Yaqing Su
In the style of Cubism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – EduGame Team
In the style of fantasy and sci-fi. ©With Midjourney – AI & NCCR Evolving Language.
