My work pictured by AI – Kinkini Bhadra
"Think to speak: What if a computer could decode what you want to say?" - By Kinkini Bhadra
What is this work about? For people affected by neurological conditions like aphasia, who have intact thoughts but disrupted speech, a computer that decodes speech directly from neural signals and converts it into audible speech could be life-changing. Recent research has demonstrated that imagined speech can, to some extent, be decoded from brain signals. While much of this research has focused on developing machine learning tools, the human brain itself can also be trained to improve BCI control. Our study used a Brain-Computer Interface (BCI) to decode covertly spoken syllables and showed improved BCI control performance after just five days of training in 11 out of 15 healthy participants. This indicates that the brain can adapt and learn new skills such as speech imagery, and it opens up new possibilities for speech prostheses and rehabilitation.
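To give a concrete flavour of what "decoding covertly spoken syllables" involves, here is a minimal sketch in Python: band-power features are extracted from EEG epochs and fed to a simple classifier. The frequency bands, sampling rate, classifier and data below are illustrative assumptions, not the pipeline used in our study.

```python
# Minimal sketch of decoding imagined syllables from EEG epochs.
# Illustrative only: features, bands, and model are assumptions, and the data are random.
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def band_power_features(epochs, fs=250, bands=((4, 8), (8, 13), (13, 30))):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_features)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))   # mean power per band and channel
    return np.concatenate(feats, axis=-1)

# Hypothetical data: 60 imagined-syllable trials, 32 channels, 2 s at 250 Hz
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 32, 500))
labels = rng.integers(0, 2, size=60)                 # two hypothetical syllable classes

X = band_power_features(epochs)
clf = LinearDiscriminantAnalysis()
print("decoding accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```

In a real BCI, the classifier's output would drive feedback to the participant in real time, which is what makes it possible to train the user, and not only the machine.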
The first word that came to mind when seeing the AI-generated picture? Communication.
My work pictured by AI – Richard Hahnloser
"Songbirds work around computational complexity by learning song vocabulary independently of sequence. " - By Richard Hahnloser
What is this work about? How does a young songbird learn its song? How does it compare the immature vocalizations it produces to the adult template syllables it hears and strives to imitate? It turns out that young birds have a very efficient way of learning their song vocabulary: for each target syllable they hear, they identify the closest vocalization in their developing repertoire. Thus, songbirds are efficient vocabulary learners. The process they use to assign vocal errors to their vocalizations is computationally similar to the strategy taxi companies use to dispatch their taxis to customers.
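The taxi analogy can be made concrete with a small, hypothetical example: if each tutor syllable and each juvenile vocalization is summarized by a few acoustic features, matching targets to nearby vocalizations becomes an assignment problem of the kind dispatching algorithms solve. The features and distances below are invented for illustration and do not come from the study.

```python
# Toy illustration of the "taxi dispatch" analogy: each heard target syllable is
# paired with a vocalization in the bird's current repertoire by minimizing the
# total acoustic distance of the matching. Feature values are made up.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Hypothetical acoustic features (e.g. pitch, duration) for tutor syllables...
targets = np.array([[2.0, 0.5], [5.0, 1.0], [8.0, 0.2]])
# ...and for the juvenile's current, immature vocalizations
repertoire = np.array([[7.5, 0.3], [2.5, 0.6], [4.0, 1.2]])

cost = cdist(targets, repertoire)             # pairwise acoustic distances
rows, cols = linear_sum_assignment(cost)      # optimal one-to-one matching
for t, r in zip(rows, cols):
    print(f"target syllable {t} <- vocalization {r} (distance {cost[t, r]:.2f})")
```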
The first word that came to mind when seeing the AI-generated picture? Cubism.
My work pictured by AI – EduGame Team
In the style of fantasy and Sci-fi. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alejandra Hüsser
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Moritz M. Daum Group
In the style of Andy Warhol. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Jamil Zaghir
"Human-Machine Interactions, a battle of language acquisition." - By Jamil Zaghir
What is this work about? Human-machine interactions have an impact on language acquisition for both actors. On the one hand, technologies are able to “learn” a language from text written by humans through machine learning, whether to perform a specific task or to chat with humans. On the other hand, humans tend to learn a pseudo-language to make their interactions with the technology more efficient.
The first word that came to mind when seeing the AI-generated picture? Interactiveness.
My work pictured by AI – Théophane Piette
In the style of Henri Rousseau. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Kinkini Bhadra
In the style of Pablo Picasso. ©With Midjourney – AI & Kinkini Bhadra.
My work pictured by AI – Aris Xanthos
In the style of Pop-art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Chantal Oderbolz
"Tracking the prosodic hierarchy in the brain." - By Chantal Oderbolz
What is this work about? The speech signal carries hierarchically organized acoustic and linguistic information. Recent research suggests that the brain uses brain waves, called cortical oscillations, to process this information. Oscillations in the theta frequency range (4-8 Hz) have been found to be especially important: theta oscillations track the acoustic energy in the speech signal associated with the timing of syllables. However, the speech signal also contains slower information that corresponds to stress and intonation patterns and is part of the prosody – the rhythm and melody – of a language.
To better understand how the brain processes these different levels at the same time, we conducted an experiment with 30 participants who listened to German sentences with manipulated stress and intonation patterns. We found that the brain is able to simultaneously process the syllable, stress and intonation patterns of speech. However, changes in stress patterns disrupted the brain’s ability to track syllables with theta oscillations. Conversely, the brain was able to compensate for changes in intonation patterns by using linguistic knowledge. Additionally, we found that individuals varied in their ability to process the prosodic structure of the speech signal, with some participants better able to compensate for acoustic changes than others. Overall, our results support the idea that the brain uses a hierarchical organization of cortical oscillations to process the speech signal.
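As a rough illustration of what "tracking syllables with theta oscillations" means computationally, the sketch below band-pass filters a synthetic speech envelope and a synthetic neural signal into the theta range and correlates them. Real analyses use recorded EEG/MEG, actual speech and more refined measures such as cerebro-acoustic coherence; everything here is an illustrative assumption.

```python
# Minimal sketch of theta-band tracking of the speech envelope (synthetic data).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)

# Toy "speech": a carrier amplitude-modulated at ~5 Hz, roughly the syllable rate
speech = np.sin(2 * np.pi * 200 * t) * (1 + np.sin(2 * np.pi * 5 * t))
envelope = np.abs(hilbert(speech))         # broadband amplitude envelope

# Toy "neural" signal that partially follows the envelope, plus noise
neural = 0.6 * envelope + np.random.default_rng(0).standard_normal(t.size)

def theta_band(x, fs, lo=4.0, hi=8.0):
    """Zero-phase band-pass filter in the theta range."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

env_theta = theta_band(envelope, fs)
neu_theta = theta_band(neural, fs)
print("theta-band tracking (correlation):", np.corrcoef(env_theta, neu_theta)[0, 1])
```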
The first word that came to mind when seeing the AI-generated picture? Nostalgia.
My work pictured by AI – Chantal Oderbolz
In the style of William Eggleston. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alexandra Bosshard
In the style of a coloring book. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Huw Swanborough
"Acoustic Factors in Salience and Aversiveness of Infant Cries - Objective Physiological Evaluation and Cerebral Responses." - By Huw Swanborough
What is this work about? New-born infants are entirely dependent on caregiver attention and support, unable to care for themselves. Crying therefore plays a vital role in allowing infants to capture caregivers' attention and communicate aversive states, and an effective cry must be sufficiently salient in the acoustic environment for this to succeed. However, optimal crying is a balancing act: overly salient sounds are often highly aversive to the listener, potentially eliciting detrimental caregiver responses that range from reduced care quality to, in extreme cases, infant abuse or infanticide. An optimal infant cry, then, may have evolved to be maximally salient while mitigating the aversive quality of the cry itself. One mechanism through which this optimisation may occur is cry pitch contours: new-born infants have been shown to produce cries with the pitch/accent contour of the language prevalent in the environment in which they gestated, and we hypothesise that these native accent contours may give infants a survival advantage by mitigating the aversive nature of their cries without affecting the perceptual salience of the signals. To this end, we will investigate subjective and objective measures of aversion and salience in response to infant cries and their interaction with native/non-native accent contours, as well as the impact this has on the neural circuits underlying cry perception in adults.
The first word that came to mind when seeing the AI-generated picture? Crying.
My work pictured by AI – Daniel Friedrichs
In the style of Edward Hopper. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Sebastian Sauppe
In the style of Hieronymus Bosch. ©With Midjourney – AI & NCCR Evolving Language.