My work pictured by AI – Sarah Saneei
"Computations supporting language functions and dysfunctions in artificial and biological neural networks." - By Sarah Saneei
What is this work about? This research uses deep learning approaches to find the stimuli (inputs) that, when presented to the brain, best reproduce a target pattern of brain signals, i.e. the strongest activation of neurons. We will use fMRI and ECoG recordings to prepare the data for the model, and as inputs we plan to use text and audio.
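For the technically curious, here is a minimal sketch of the underlying idea in Python: starting from a random stimulus representation, gradient ascent nudges it toward whatever a trained encoding model predicts will activate neurons most strongly. The EncodingModel below is a hypothetical stand-in for such a trained network, not the project's actual model.

```python
# Minimal sketch: gradient-based search for a stimulus that maximizes
# a predicted brain response. EncodingModel is a hypothetical stand-in
# for a network trained to map stimulus features to fMRI/ECoG signals.
import torch

class EncodingModel(torch.nn.Module):
    """Toy stand-in: maps a 128-dim stimulus embedding to one voxel/electrode."""
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = EncodingModel()  # in reality: trained on fMRI/ECoG recordings
model.eval()

# Start from a random stimulus embedding and ascend the predicted response.
stimulus = torch.randn(1, 128, requires_grad=True)
optimizer = torch.optim.Adam([stimulus], lr=0.05)

for _ in range(200):
    optimizer.zero_grad()
    loss = -model(stimulus).mean()  # negative sign: maximize activation
    loss.backward()
    optimizer.step()

print(f"predicted activation after optimization: {model(stimulus).item():.3f}")
```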
The first word that came to mind when seeing the AI-generated picture? /
My work pictured by AI – Richard Hahnloser
"Songbirds work around computational complexity by learning song vocabulary independently of sequence. " - By Richard Hahnloser
What is this work about? How does a young songbird learn its song? How does it compare the immature vocalizations it produces to the adult template syllables it hears and strives to imitate? It turns out that young birds have a very efficient way of learning their song vocabulary: they identify, for each target syllable they hear, the closest vocalization in their developing repertoire. Thus, songbirds are efficient vocabulary learners. The process by which they assign vocal errors to their vocalizations is computationally similar to the strategy taxi companies use to dispatch their taxis to customers.
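The taxi-dispatch strategy is known in computer science as the assignment problem. As an illustration of the analogy (with made-up distances standing in for real acoustic comparisons), the optimal assignment can be computed in one SciPy call:

```python
# Taxi-dispatch analogy: optimally assign each target syllable (customer)
# to the closest developing vocalization (taxi). The cost matrix is made up;
# in a real analysis it would hold acoustic distances between the heard
# template syllables and the bird's own immature vocalizations.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[i, j] = acoustic distance from target syllable i to vocalization j
cost = np.array([
    [0.2, 0.9, 0.7],
    [0.8, 0.1, 0.6],
    [0.5, 0.4, 0.3],
])

syllables, vocalizations = linear_sum_assignment(cost)  # minimizes total cost
for s, v in zip(syllables, vocalizations):
    print(f"target syllable {s} -> vocalization {v} (distance {cost[s, v]})")
print("total mismatch:", cost[syllables, vocalizations].sum())
```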
The first word that came to mind when seeing the AI-generated picture? Cubism.
My work pictured by AI – Abigail Licata
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – EduGame Team
In the style of fantasy and Sci-fi. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Chantal Oderbolz
"Tracking the prosodic hierarchy in the brain." - By Chantal Oderbolz
What is this work about? The speech signal carries hierarchically organized acoustic and linguistic information. Recent research suggests that the brain uses brain waves, called cortical oscillations, to process this information. Oscillations in the theta frequency range (4-8 Hz) have been found to be especially important: theta oscillations track the acoustic energy in the speech signal associated with the timing of syllables. However, the speech signal also carries slower information that corresponds to stress and intonation patterns, which are part of the prosody – the rhythm and melody – of a language.
To better understand how the brain processes these different levels at the same time, we conducted an experiment with 30 participants who listened to German sentences with manipulated stress and intonation patterns. We found that the brain is able to simultaneously process the syllable, stress and intonation patterns of speech. However, changes in stress patterns disrupted the brain’s ability to track syllables with theta oscillations. Conversely, the brain was able to compensate for changes in intonation patterns by using linguistic knowledge. Additionally, we found that individuals varied in their ability to process the prosodic structure of the speech signal, with some participants better able to compensate for acoustic changes than others. Overall, our results support the idea that the brain uses a hierarchical organization of cortical oscillations to process the speech signal.
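As a rough signal-processing illustration of what "tracking syllables with theta oscillations" refers to, the Python sketch below extracts the amplitude envelope of a speech-like signal and band-passes it in the theta range. The audio is synthetic here; a real analysis would use recorded sentences and neural data.

```python
# Sketch: isolate the syllable-rate (theta, 4-8 Hz) modulations of a signal.
# The "speech" is synthetic: a 220 Hz carrier whose loudness pulses at ~5 Hz,
# roughly the rate at which syllables occur in natural speech.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16_000                        # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
audio = np.sin(2 * np.pi * 220 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 5 * t))

envelope = np.abs(hilbert(audio))  # broadband amplitude envelope

# 4-8 Hz Butterworth band-pass keeps only the syllable-rate modulations
# that theta oscillations are thought to track.
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
theta_envelope = filtfilt(b, a, envelope)

spectrum = np.abs(np.fft.rfft(theta_envelope))
freqs = np.fft.rfftfreq(theta_envelope.size, 1 / fs)
print(f"peak modulation frequency: {freqs[spectrum.argmax()]:.1f} Hz")  # ~5 Hz
```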
The first word that came to mind when seeing the AI-generated picture? Nostalgia.
My work pictured by AI – Fabio J. Fehr
In the style of comics and superheroes. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Nikhil Phaniraj
In a futuristic style. ©With Midjourney – AI & Nikhil Phaniraj.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Moritz M. Daum Group
"Differences between monolingual and bilingual children's communicative behaviour." - By Moritz M. Daum group
What is this work about? This paper talks about a new way of thinking about how children learn to communicate. The idea is that when kids have different kinds of experiences talking with others, it affects how they communicate in the future. If they have lots of experiences where talking doesn’t work well, they will learn to use more ways to communicate and be more flexible when they talk. The authors use bilingual children as an example to explain this idea: growing up with two languages affects how kids learn to communicate. Children who speak only one language and those who speak two or more languages communicate differently. Children who speak two languages are better at understanding what their communication partner is trying to say. They also adapt more easily to what the other person needs and use gestures to explain things more often. They are better at fixing misunderstandings and responding in a way that makes sense. The general idea is, however, not limited to bilingual communication but can also be applied to other challenging communicative situations.
The first word that came to mind when seeing the AI-generated picture? Confused.
My work pictured by AI – Jessie C. Adriaense
In the style of William Blake. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Huw Swanborough
In the style of Bauhaus. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Chantal Oderbolz
In the style of William Eggleston. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Aris Xanthos
"Learning Phonological Categories." - By Aris Xanthos (co-authored with John Goldsmith)
What is this work about? The paper explains how computers can be taught to recognize speech sounds in any language. In human language, there are sounds that carry meaning, and these sounds are called phonemes. The paper shows how a computer can learn to recognize these phonemes from raw speech data, without being told explicitly what the phonemes are. We use several mathematical techniques belonging to a family of methods called “unsupervised learning” to analyze the speech data and group similar sounds together. The resulting groups correspond to phonemes, which are the basic building blocks of language. This research helps us better understand how aspects of natural languages can be learnt by machines or by humans.
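As a toy illustration of that unsupervised-learning idea (k-means here is just one member of the family, not necessarily the paper's own method), the Python sketch below groups synthetic "acoustic feature" vectors without any labels; the discovered clusters play the role of phonemes.

```python
# Toy sketch: discover phoneme-like categories without labels.
# The 2-D features are synthetic stand-ins for acoustic measurements
# (loosely modelled on the first two vowel formants, F1 and F2).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
sounds = np.vstack([
    rng.normal(loc=[300, 2200], scale=50, size=(100, 2)),  # /i/-like tokens
    rng.normal(loc=[700, 1200], scale=50, size=(100, 2)),  # /a/-like tokens
    rng.normal(loc=[300, 800], scale=50, size=(100, 2)),   # /u/-like tokens
])

# Cluster with no labels; each discovered group corresponds to a "phoneme".
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(sounds)
print("tokens per cluster:", np.bincount(kmeans.labels_))
print("cluster centres (F1, F2 in Hz):")
print(kmeans.cluster_centers_.round())
```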
The first word that came to mind when seeing the AI-generated picture? Language and computer.
My work pictured by AI – Moritz M. Daum Group
In the style of Andy Warhol. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alejandra Hüsser
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.