Silicon brains
Valentina Borghesani is exploring how meaning is created in our brains. To do so, she works with aphasia patients, but increasingly also with AI language models that can be used to simulate processes in the brain.
by Roger Nickl.
Lemon, sour, juicer: Which two of these three words do you think belong together most closely? And what about “guitar, bass, violin”, “tiger, lion, turtle” or “pizza, pineapple, pepperoni”? In a study, Valentina Borghesani presented participants with a large number of such word triplets, which varied in how concrete or abstract they were. The participants were asked to name the two most closely related terms. The cognitive neuroscientist also gave the same task to various AI language models.
From the results of this study, the researcher, together with colleagues from the CNeuromod team at the University of Montreal, has now developed a freely available dataset that lets researchers test and compare the semantic knowledge of humans and machines. “I think our dataset is a good way to test how close you can get to semantic representation in the human brain with AI,” says the scientist, who conducts research at the Neurocentre of the University of Geneva and the NCCR Evolving Language. “The idea now is that researchers can use it to test the performance of their language models.”
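How such a triplet test can be put to a model is easy to sketch: pick the pair of words whose vector representations are most similar. The snippet below is a minimal, hypothetical illustration with hand-made toy vectors and a plain cosine similarity; it is not the dataset or code from the study, where the word representations would come from the language models themselves.

```python
# Toy illustration of the triplet task: given three words, pick the two whose
# embedding vectors are most similar. The 4-dimensional vectors are made up
# for demonstration; real comparisons would use embeddings from actual models.
from itertools import combinations
import numpy as np

toy_embeddings = {
    "lemon":  np.array([0.9, 0.8, 0.1, 0.0]),
    "sour":   np.array([0.8, 0.9, 0.0, 0.1]),
    "juicer": np.array([0.2, 0.1, 0.9, 0.8]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def closest_pair(triplet, embeddings):
    """Return the pair of words with the highest cosine similarity."""
    return max(combinations(triplet, 2),
               key=lambda pair: cosine(embeddings[pair[0]], embeddings[pair[1]]))

print(closest_pair(("lemon", "sour", "juicer"), toy_embeddings))
# -> ('lemon', 'sour')
```

Comparing the pairs a model picks with the pairs humans pick, triplet by triplet, is one simple way to quantify how closely the model’s semantic space matches ours.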
When words lose their meaning
Valentina Borghesani is interested in language and thinking. She studies how semantic content, i.e. our knowledge, is created and stored in our brains, how it is lost again in neurodegenerative diseases, and which brain regions are involved in these processes. Language plays an important role here, but not the only one. We can use language to communicate mental concepts, that is, ideas about the world, and it is usually also the medium through which we learn such concepts. Yet language and thinking are not always congruent. “There are concepts for which we have no label, no words,” says Borghesani, “and there is the reverse: words that we cannot initially link to any concept – for example, when we learn a new language.” Human intelligence, the researcher says, is ultimately much more than language.
To gain new insights into the neurobiological basis of language and thought and into the emergence of semantic knowledge, Valentina Borghesani works with aphasia patients, but also, increasingly, with AI-powered language models. In people suffering from one particular form of aphasia – the semantic variant of primary progressive aphasia (svPPA) – words increasingly lose their meaning, and patients also have difficulty recognizing objects and faces. “With the help of such neurodegenerative diseases, it is possible to investigate how impairments in specific brain areas affect certain linguistic and semantic functions,” explains the researcher.
In this way, she hopes to gain new insights into how and where our brain creates meaning and what role language plays in this. And not only that: “The more we know about these connections, the more we can do for patients suffering from this form of aphasia,” says Borghesani. The findings from her laboratory could one day lead to more effective rehabilitation therapies, as well as diagnostic procedures that detect the onset of aphasia at an early stage.
Virtual lesions
Working with aphasia patients is central to Valentina Borghesani’s research, but the use of AI is becoming increasingly important. “AI models are now so good that we can use them almost like in-silico brains for our research, to do experiments with them,” says the cognitive scientist. “They can help us learn more about our own brains.” By training an AI on different data, or by deliberately damaging a model with virtual lesions analogous to aphasia, the researchers at the Geneva Neurocentre can use it to make and test predictions about specific activities (or their absence) in the human brain.
Such models can be used, for example, to study the dissociation of syntax and semantics and to determine which areas of our brain are involved, as was done in a recent study. “Working with AI models is relatively new in our field of research, and we still need to know more about how these models work,” says Valentina Borghesani. “But we are gradually getting to the point where we can profitably use them for virtual tests of our theories about the human brain.”
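What a virtual lesion can look like is easy to illustrate in miniature: silence part of a network and compare its behaviour before and after. The sketch below is a hypothetical toy example of that general idea, using a random, untrained two-layer network; it is not the models or the lesioning procedure used at the Geneva Neurocentre.

```python
# Toy sketch of a "virtual lesion": knock out a fraction of the hidden units
# in a tiny network and measure how much its output changes. Purely
# illustrative; the network is random and untrained.
import numpy as np

rng = np.random.default_rng(0)

W1 = rng.normal(size=(16, 8))   # input -> hidden weights
W2 = rng.normal(size=(8, 4))    # hidden -> output weights

def forward(x, lesion_mask=None):
    """Run the toy network; optionally silence hidden units via a 0/1 mask."""
    hidden = np.tanh(x @ W1)
    if lesion_mask is not None:
        hidden = hidden * lesion_mask   # "lesioned" units contribute nothing
    return hidden @ W2

x = rng.normal(size=16)
intact = forward(x)

# Lesion roughly half of the hidden units at random.
mask = (rng.random(8) > 0.5).astype(float)
lesioned = forward(x, lesion_mask=mask)

print("output change caused by the lesion:", np.linalg.norm(intact - lesioned))
```

In actual experiments, the idea is that a trained model would be damaged in targeted components, so that the resulting deficits can be compared with the impairments seen in patients.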
Simulating human conversation
The flow of knowledge also goes in the other direction: the research of neuroscientists like Valentina Borghesani can contribute to improving AI models – especially when it comes to questions of semantics, where existing language models still have large gaps. ChatGPT can, for example, compose coherent texts in no time at all, write summaries or translate texts from one language into almost any other. “The system really excels at that,” says Borghesani. “It mimics human conversation, it simulates human behavior and intelligence.” But the machine’s language capability is largely limited to written text. And it produces language without really understanding its content.
“The human language system, on the other hand, is multimodal,” says Borghesani. It encompasses spoken and written language. And it is grounded in physical experience – auditory, visual and tactile – and in knowledge about the world. AI models lack both. That is why they struggle, for example, to understand metaphors or to see similarities that a human would immediately notice. “The more we know about how human language and human intelligence work, the better we can tell AI experts in which direction they should develop language models further to improve their performance,” explains the researcher.
With her work, Valentina Borghesani wants to help improve the semantic representations – the linguistic understanding – of AI models. At the same time, however, she raises a caveat: “When we talk about how we can optimize these models, we should also always ask ourselves why we want to do this in the first place.” If the goal is to make working with texts easier, or to create virtual assistants that simplify our lives, for example with literature searches or translations, there is nothing to be said against developing AI models further. “However, if the long-term goal is to create an artificial general intelligence, I’m not really sure I want that, or whether improved language processing would even take us there,” says Valentina Borghesani. For the moment, the neuroscientist is mainly interested in more powerful models that help her better understand our brain, our intelligence and our ability to speak.