My work pictured by AI – Sarah Saneei
"Computations supporting language functions and dysfunctions in artificial and biological neural networks." - By Sarah Saneei
What is this work about? This research aims to find the best stimuli (inputs) that can be presented to the brain to elicit the strongest neural activation, using deep learning approaches. We will use fMRI and ECoG recordings to prepare the data for the model, and as inputs we plan to use text and audio.
The first word that came to mind when seeing the AI-generated picture? /
Explore more illustrations!
My work pictured by AI – Paola Merlo
"Blackbird's language matrices (BLMs): a new task to investigate disentangled generalization in neural networks." - By Paola Merlo
What is this work about? Current successes of machine learning architectures are based on computationally expensive algorithms and prohibitively large amounts of data. We need to develop tasks and data that train networks to reach more complex and more compositional skills. In this paper, we illustrate Blackbird’s language matrices (BLMs), a novel grammatical task modelled on intelligence tests usually based on visual stimuli. The dataset is generatively constructed to support investigations of current models’ linguistic mastery and of their ability to generalize. We present the logic of the task, the method to automatically construct data on a large scale, and the architecture to learn them. Through error analysis and several experiments on variations of the dataset, we demonstrate that this language task and the data that instantiate it provide a new, challenging testbed for understanding generalization and abstraction.
The first word that came to mind when seeing the AI-generated picture? Goofy.
My work pictured by AI – Elisa Pellegrino
In the style of Joan Miro. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Théophane Piette
In the style of Henri Rousseau. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Fabio J. Fehr
In the style of comics and superheros. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Monica Lancheros
"Relationship between the production of speech and of orofacial movements." - By Monica Lancheros
What is this work about? This study investigated the relationship between speech and non-speech gestures (or orofacial movements) to determine whether motor activities that use the same orofacial effectors recruit similar neural networks. Results suggest that the production of speech and non-speech gestures activates the same brain circuits; however, those circuits follow different patterns of activation for speech and non-speech gestures. These findings suggest that speech has underlying neural architectures that are specialized for its production and that differentiate it from other oromotor movements.
The first word that came to mind when seeing the AI-generated picture? Brain circuits.
My work pictured by AI – Aris Xanthos
In the style of Pop-art. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Kinkini Bhadra
In the style of Pablo Picasso. ©With Midjourney – AI & Kinkini Bhadra.
My work pictured by AI – Moritz M. Daum Group
In the style of Andy Warhol. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Daniel Friedrichs
"Speaking Fast and Slow: Evidence for Anatomical Influence on Temporal Dynamics of Speech” - By Daniel Friedrichs.
What is this work about? We explored the connection between mandible length and the temporal dynamics of speech. Our study involved testing speakers with different mandible sizes and observing how their speech timing was affected. We found that mandible length can indeed influence the time it takes to open and close the mouth, which in turn can affect the length of syllables in speech. This finding is particularly important for language evolution, as the human jaw has undergone significant changes throughout human history. For example, the jaw has decreased in size due to softer diets since the transition from hunter-gatherer to agricultural societies. By considering the movements of the mandible as similar to those of a pendulum, it becomes apparent that the duration of an oscillation, or period, should depend on its length. This analogy suggests that humans in the distant past might have spoken more slowly due to slower mouth opening and closing movements, resulting in slower transmission of information. If this were true, it could also have had an impact on the evolution of the human brain, as humans would have had to process linguistic information at lower frequencies (for example, previous studies have shown that the brain tracks the speech signal at frequencies that correspond to the lengths of syllables). It seems possible that, over time, the human brain has adapted to changes in human jaw anatomy, resulting in the speech and language patterns we observe today. Our research sheds light on the fascinating relationship between anatomy and speech, and how changes in our physical makeup can influence the way we communicate.
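The pendulum analogy above can be made concrete with the small-angle period formula for a simple pendulum, T = 2π√(L/g): a longer pendulum swings more slowly. A minimal sketch, with purely illustrative lengths (these numbers are assumptions, not measurements from the study, and a jaw is of course not a literal simple pendulum):

```python
import math

def pendulum_period(length_m, g=9.81):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Hypothetical "pendulum" lengths in metres -- illustrative only.
shorter = pendulum_period(0.08)
longer = pendulum_period(0.10)

print(f"shorter pendulum: {shorter * 1000:.0f} ms per oscillation")
print(f"longer pendulum:  {longer * 1000:.0f} ms per oscillation")
```

Since the period grows with the square root of the length, a longer effective jaw length would mean slower open-close cycles, and hence longer syllables, which is the intuition the paragraph above describes.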
The first word that came to mind when seeing the AI-generated picture? Adaptation.
My work pictured by AI – Chantal Oderbolz
In the style of William Eggleston. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Alejandra Hüsser
In the style of surrealism. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Volker Dellwo
"Mothers reveal more of their vocal identity when talking to babies." - By Volker Dellwo
What is this work about? Voice timbre – the unique acoustic information in a voice by which its speaker can be recognized – is particularly critical in mother-infant interaction. Vocal timbre is necessary for infants to recognize their mothers as familiar both before and after birth, providing a basis for social bonding between infant and mother. The exact mechanisms underlying infant voice recognition are unknown. Here, we show – for the first time – that mothers’ vocalizations contain more detail of their vocal timbre through adjustments to their voices known as infant-directed speech (IDS) or baby talk, resulting in utterances in which individual recognition is more robust. Using acoustic modelling (k-means clustering of Mel Frequency Cepstral Coefficients) of IDS in comparison with adult-directed speech (ADS), we found across a variety of languages from different cultures that voice timbre clusters in IDS are significantly larger than comparable clusters in ADS. This effect leads to a more detailed representation of timbre in IDS with subsequent benefits for recognition. Critically, an automatic speaker identification Gaussian-mixture model based on Mel Frequency Cepstral Coefficients showed significantly better performance when trained with IDS as opposed to ADS. We argue that IDS has evolved as part of a set of adaptive evolutionary strategies that serve to promote indexical signalling by caregivers to their offspring, thereby promoting social bonding via voice and supporting language acquisition.
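The Gaussian-mixture speaker-identification idea mentioned above can be sketched in a few lines: fit one GMM per speaker on that speaker's MFCC frames, then attribute a test utterance to the speaker whose model assigns it the highest average log-likelihood. This is a minimal illustration only – random vectors stand in for real MFCC frames, and the speaker names, frame counts, and component numbers are assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def fake_mfccs(center, n_frames=200):
    """Synthetic stand-in for MFCC frames: n_frames x 13 coefficients."""
    return rng.normal(loc=center, scale=1.0, size=(n_frames, 13))

# Training data: one matrix of MFCC frames per (hypothetical) speaker.
train = {"speaker_a": fake_mfccs(0.0), "speaker_b": fake_mfccs(3.0)}

# Fit one Gaussian-mixture model per speaker on that speaker's frames.
models = {
    name: GaussianMixture(n_components=4, random_state=0).fit(frames)
    for name, frames in train.items()
}

def identify(frames):
    # Choose the speaker whose GMM gives the highest average log-likelihood.
    return max(models, key=lambda name: models[name].score(frames))

print(identify(fake_mfccs(3.0)))  # attributed to speaker_b
```

The study's finding, in these terms, is that models trained on IDS frames separated speakers better than models trained on ADS frames, because IDS spreads each mother's timbre over a larger, more distinctive region of MFCC space.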
Comment about the picture from the author? The study is about ‘voice recognition’ and the advantage that infant-directed speech offers in learning a voice. I am not sure someone would conclude this from looking at the pictures.
My work pictured by AI – Richard Hahnloser
In the style of Pablo Picasso. ©With Midjourney – AI & NCCR Evolving Language.
My work pictured by AI – Paola Merlo
In the style of Pixar animations. ©With Midjourney – AI & NCCR Evolving Language.
