My work pictured by AI – Kinkini Bhadra
"Think to speak: What if a computer could decode what you want to say?" - By Kinkini Bhadra
What is this work about? For people affected by neurological conditions such as aphasia, who have intact thoughts but disrupted speech, a computer that decodes intended speech directly from neural signals and converts it into audible speech could be life-changing. Recent research has demonstrated that imagined speech can be decoded from brain signals. While much of this work has focused on developing machine learning tools, there is also potential for training the human brain itself to improve BCI control. Our study used a Brain-Computer Interface (BCI) to decode covertly spoken syllables and showed improved BCI control after just five days of training in 11 out of 15 healthy participants. This demonstrates the brain's ability to adapt and learn new skills such as speech imagery, and it opens up new possibilities for speech prostheses and rehabilitation.
The first word that came to mind when seeing the AI-generated picture? Communication.
My work pictured by AI – Sarah Saneei

"Computations supporting language functions and dysfunctions in artificial and biological neural networks." - By Sarah Saneei
What is this work about? This research aims to identify the best stimuli (inputs) that can be presented to the brain to elicit the desired neural responses (the strongest activation of neurons), using deep learning approaches. We will use fMRI and ECoG recordings to prepare the data for the model, and as inputs we plan to use text and audio.
The first word that came to mind when seeing the AI-generated picture? /
My work pictured by AI – Abigail Licata

"The impact of semantic similarity on neurocognitive mechanisms underlying conceptual representation in healthy bilinguals." - By Abigail Licata
What is this work about? The neural underpinnings of semantic representations involve a distributed network of cortical regions that integrate multimodal information relating to concepts. These semantic representations are formed dynamically through novel experience and information, including linguistic input. Most models of semantic knowledge and its structure in the brain have been based on Western monolingual populations and fail to capture the rich and diverse multilingual experience that is the reality for the majority of the global population. In multilingual speakers, a given concept is represented by multiple labels, each label comprising its own phonological and lexico-semantic connections between and within languages. Moreover, evidence from linguistics and cognitive science suggests behavioral and physiological cross-linguistic differences in several conceptual domains and their respective boundaries, including colors, household containers, motion events, and odors. Therefore, in the multilingual speaker, increased inter-language connections at the phonological, lexico-semantic, and conceptual levels may interact with language-specific properties inherent to word meaning and subsequent categorization (i.e., lexico-semantic features), altering the relevance of certain properties of the concept itself and its relational association to other concepts. Whether this alteration leads to differences in the quantity and quality of semantic representations and their associations in multilinguals of typologically distinct versus typologically similar languages is unclear, and it forms the central question of this thesis. Implications of these findings may extend to patients with semantic dementia, a language-related neurodegenerative disease that destroys conceptual knowledge over time.
The first word that came to mind when seeing the AI-generated picture? Eclecticism.
My work pictured by AI – Jessie C. Adriaense

In the style of William Blake. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Yaqing Su

In the style of cubism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Adrian Bangerter

In the style of Aubrey Beardsley. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Jessie C. Adriaense

"Parental care as joint action in common marmosets: coordination during infant transfer." - By Jessie C. Adriaense
What is this work about? Joint actions require various coordination mechanisms in order to achieve a successful joint outcome. To understand how joint action evolved in humans, research requires a comparative approach, investigating whether other animals have similar motor and mental coordination skills that would facilitate their joint actions. Common marmosets are cooperative breeders, just like humans, and their parental care system thus forms an ideal model for investigating the different proximate mechanisms of joint action. This study focuses on infant transfers in marmosets, a highly important and risky joint action for which both parents need to coordinate efficiently. How exactly marmosets achieve a successful transfer, and which traits are relevant, is unknown. To this end, we analyzed motor coordination during transfers, including micro-analyses of coordination signals such as touch and mutual gaze between parents. All data for this first stage of the project were collected in captive housing, and we are developing a protocol for research in the field to further understand how ecological conditions affect this behavior.
The first word that came to mind when seeing the AI-generated picture? Alliance.
My work pictured by AI – Alexandra Bosshard
In the style of a coloring book. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Huw Swanborough
In the style of Bauhaus. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Yaqing Su

"A deep hierarchy of predictions enables on-line meaning extraction in a computational model of human speech comprehension." - By Yaqing Su
What is this work about? Real-time speech comprehension poses great challenges for both the brain and language models. We show that hierarchically organized predictions integrating nonlinguistic and linguistic knowledge provide a more comprehensive account of behavioral and neurophysiological responses to speech than the next-word predictions generated by GPT-2.
The first word that came to mind when seeing the AI-generated picture? Embedded.
My work pictured by AI – Paola Merlo
In the style of Pixar animations. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo
In the style of Surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Théophane Piette
In the style of Henri Rousseau. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Moritz M. Daum Group
"Differences between monolingual and bilingual children's communicative behaviour." - By Moritz M. Daum group
What is this work about? This paper presents a new way of thinking about how children learn to communicate. The idea is that the kinds of experiences children have when talking with others affect how they communicate in the future. If they have lots of experiences where talking doesn’t work well, they learn to use more ways to communicate and become more flexible when they talk. The authors use bilingual children as an example to explain this idea, discussing how growing up with two languages affects how kids learn to communicate. Children who speak only one language and those who speak two or more languages communicate differently. Children who speak two languages are better at understanding what their communication partner is trying to say. They also adapt more easily to what the other person needs and more often use gestures to explain things. They are better at fixing misunderstandings and responding in a way that makes sense. The general idea is, however, not limited to bilingual communication but can also be applied to other challenging communicative situations.
The first word that came to mind when seeing the AI-generated picture? Confused.
My work pictured by AI – Piermatteo Morucci
In the style of computational art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – EduGame Team
In the style of fantasy and Sci-fi. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Jamil Zaghir
In a futuristic style. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Jamil Zaghir
"Human-Machine Interactions, a battle of language acquisition." - By Jamil Zaghir
What is this work about? Human-machine interactions affect language acquisition for both actors. On the one hand, technologies are able to “learn” a language from human-written text through machine learning, whether to perform a specific task or to chat with humans. On the other hand, humans tend to learn a pseudo-language to make their interactions with the technology more efficient.
The first word that came to mind when seeing the AI-generated picture? Interactiveness.
My work pictured by AI – Nikhil Phaniraj
In a futuristic style. ©With Midjourney – AI & Nikhil Phaniraj.
My work pictured by AI – Monica Lancheros
"Relationship between the production of speech and of orofacial movements." - By Monica Lancheros
What is this work about? This study investigated the relationship between speech and non-speech gestures (orofacial movements) in order to determine whether motor activities that use the same orofacial effectors recruit similar neural networks. The results suggest that producing speech and non-speech gestures activates the same brain circuits; however, those circuits follow different patterns of activation for speech than for non-speech gestures. These findings suggest that speech relies on underlying neural architectures that are specialized for its production and that differentiate it from other oromotor movements.
The first word that came to mind when seeing the AI-generated picture? Brain circuits.
My work pictured by AI – Elisa Pellegrino
In the style of Joan Miro. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Fabio J. Fehr
"A variational auto-encoder for Transformers with Nonparametric Variational Information Bottleneck." - By Fabio J. Fehr
What is this work about? Transformer language models dominate the natural language processing domain today. In our work, we introduce a new perspective on these models, which in turn provides new emerging capabilities!
The first word that came to mind when seeing the AI-generated picture? Superhero!
My work pictured by AI – Kinkini Bhadra
In the style of Pablo Picasso. ©With Midjourney – AI & Kinkini Bhadra.
My work pictured by AI – Chantal Oderbolz
In the style of William Eggleston. ©With Midjourney – AI & NCCR Evolving Language.