My work pictured by AI – Kinkini Bhadra
"Think to speak: What if a computer could decode what you want to say?" - By Kinkini Bhadra
What is this work about? For people affected by neurological conditions such as aphasia, who have intact thoughts but disrupted speech, a computer that decodes speech directly from neural signals and converts it into audible speech could be life-changing. Recent research has demonstrated that imagined speech can be decoded from brain signals. While much of this research has focused on developing machine learning tools, there is also potential for training the human brain itself to improve BCI control. Our study used a Brain-Computer Interface (BCI) to decode covertly spoken syllables and showed improved BCI control after just five days of training in 11 out of 15 healthy participants. This demonstrates the brain’s ability to adapt and learn a new skill like speech imagery, and opens up new possibilities for speech prostheses and rehabilitation.
The first word that came to mind when seeing the AI-generated picture? Communication.
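The decoding step at the heart of such a BCI can be illustrated with a toy sketch. Everything below is invented for illustration: the syllable labels ("fo", "gi"), the synthetic feature vectors standing in for EEG band-power values, and the nearest-centroid classifier are assumptions, not the study's actual pipeline.

```python
import random

random.seed(0)

# Hypothetical setup: each covert-speech trial yields a feature vector
# (think EEG band-power values); here the features are simulated, with the
# two imagined syllables producing slightly different mean activity.

def make_trial(syllable):
    base = 1.0 if syllable == "fo" else -1.0
    return [base + random.gauss(0, 1.0) for _ in range(8)]

def centroid(trials):
    n = len(trials)
    return [sum(t[i] for t in trials) / n for i in range(len(trials[0]))]

def classify(trial, centroids):
    # Assign the trial to the nearest class centroid (squared distance).
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda s: dist(trial, centroids[s]))

train = {s: [make_trial(s) for _ in range(40)] for s in ("fo", "gi")}
centroids = {s: centroid(train[s]) for s in train}

test = [(s, make_trial(s)) for s in ("fo", "gi") for _ in range(25)]
accuracy = sum(classify(t, centroids) == s for s, t in test) / len(test)
print(f"decoding accuracy: {accuracy:.2f}")
```

In the study, the interesting part is the closed loop: participants see the decoder's output and, over days of training, adapt their speech imagery so that accuracy improves, something this offline sketch does not capture.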
My work pictured by AI – Théophane Piette
"Animal’s Brain can follow the beat: investigating the link between vocal rhythm and brain oscillations." - By Théophane Piette
What is this work about? The relationship between speech rhythmicity and neural oscillations is an important component of speech perception, and especially of comprehension. However, even though similar rhythms have been described in non-human primates, and neural oscillations are a basic property of animal brains, we still do not know how animal brains process rhythmic information. By identifying similarities and differences in rhythm across animal species, as well as its connection with brain oscillations, we hope to uncover the common rules that govern the rhythmic production and processing of vocal signals in animals. These results will help us understand how speech fits into, or departs from, these basic rules, giving us new insight into the evolution of language’s complex hierarchical structure and a better understanding of how brains perceive vocal signals.
The first word that came to mind when seeing the AI-generated picture? /
My work pictured by AI – Fabio J. Fehr
"A variational auto-encoder for Transformers with Nonparametric Variational Information Bottleneck." - By Fabio J. Fehr
What is this work about? Today, Transformer language models dominate the natural language processing domain. In our work, we introduce a new perspective on these models, which in turn provides new emerging capabilities!
The first word that came to mind when seeing the AI-generated picture? Superhero!
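The "variational" idea underlying such models can be sketched in a few lines. This is a generic single-Gaussian VAE latent, not the paper's Nonparametric Variational Information Bottleneck, which generalizes the idea to the whole set of Transformer token embeddings; the toy encoder mapping below is an invented stand-in.

```python
import math
import random

random.seed(0)

# Stand-in "encoder": a deterministic toy mapping from an input value to the
# parameters (mu, log_var) of a Gaussian over the latent. A real model would
# learn this mapping.
def encode(x):
    mu = 0.5 * x
    log_var = -1.0
    return mu, log_var

def sample_latent(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, 1),
    # so gradients can flow through mu and log_var during training.
    eps = random.gauss(0, 1)
    return mu + math.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    # Closed-form KL( N(mu, sigma^2) || N(0, 1) ): the regularizer that
    # presses the latent toward the prior, creating the "bottleneck".
    return 0.5 * (math.exp(log_var) + mu * mu - 1.0 - log_var)

mu, log_var = encode(2.0)
z = sample_latent(mu, log_var)
kl = kl_to_standard_normal(mu, log_var)
print(f"z = {z:.3f}, KL = {kl:.3f}")
```

The bottleneck forces the model to keep only the information about the input that is worth paying the KL cost for, which is the mechanism behind the new capabilities the summary alludes to.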
My work pictured by AI – Alexandra Bosshard
"Sequencing in common marmoset call structures." - By Alexandra Bosshard
What is this work about? Over the last twenty years, researchers have become increasingly interested in how non-human animals communicate, and in what such findings might tell us about the development of our own language. By applying methods borrowed from computational linguistics, we were able to show that the highly social common marmoset monkey strings calls together to form larger sequences of up to nine calls in length. This is superficially similar to the way we combine meaningful units, like words, into phrases or sentences: marmosets seem to follow a similar set of rules when stringing their calls together into larger structures. We conclude that the vocal systems of non-human animals may be built up in more complex ways than previously thought.
The first word that came to mind when seeing the AI-generated picture? Complexity.
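One of the simplest computational-linguistics tools for studying call sequencing is a bigram transition analysis. The sketch below is purely illustrative: the call-type labels ("phee", "trill", "twitter") are names used for marmoset calls in the literature, but the sequences themselves are made up, and the study's actual methods are not reproduced here.

```python
from collections import Counter

# Toy call sequences (invented data, real-looking call-type labels).
sequences = [
    ["phee", "phee", "trill"],
    ["phee", "trill", "twitter"],
    ["trill", "twitter", "twitter", "phee"],
    ["phee", "phee", "trill", "twitter"],
]

# Count adjacent call pairs (bigrams) across all sequences.
bigrams = Counter(
    (a, b) for seq in sequences for a, b in zip(seq, seq[1:])
)

# Normalize counts into transition probabilities P(next call | current call).
totals = Counter()
for (a, _), n in bigrams.items():
    totals[a] += n
transitions = {(a, b): n / totals[a] for (a, b), n in bigrams.items()}

for (a, b), p in sorted(transitions.items()):
    print(f"P({b} | {a}) = {p:.2f}")
```

Strongly skewed transition probabilities, as opposed to calls following each other at random, are one signature of the kind of sequencing rules the summary describes.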
My work pictured by AI – Diana Mazzarella
"Speaker trustworthiness: Shall confidence match evidence?" - By Diana Mazzarella
What is this work about? Speakers can convey information with varying degrees of confidence, and this typically impacts the extent to which their messages are accepted as true. Confident speakers are more likely to be believed than unconfident ones. Crucially, though, this benefit comes with additional risks. Confident speakers put their reputation at stake: if their message turns out to be false, they are more likely to suffer a reputational loss than unconfident speakers. In this paper, we investigate the extent to which perceived speaker trustworthiness is affected by evidence. Our experiments show that the reputation of confident speakers is not damaged when their false claims are supported by strong evidence, but it is damaged when their true claims are based on weak evidence.
The first word that came to mind when seeing the AI-generated picture? Trust me.
