The CIMCYC Joins a Project to Develop AI-Powered Speech Neuroprostheses

Tue, 11/25/2025 - 15:03

The NeurSpeechXAI project will apply frontier technologies to restore communication to people with speech loss, using brain-computer interfaces and advanced artificial intelligence algorithms.

The University of Granada (UGR) has secured funding in the national call for Research Projects in the Field of Artificial Intelligence 2025 to develop NeurSpeechXAI. This project seeks to create speech neuroprostheses capable of decoding a person's communicative intention directly from their brain activity. The initiative received 478,900 euros, placing it among the 69 proposals funded out of a total of 840 applications and making it one of only three awarded to the UGR.

The initiative is led by José Andrés González, a researcher from the UGR's Higher Technical School of Computer and Telecommunications Engineering. The team also includes Ana Chica and Marc Ouellet, researchers from the CIMCYC's Cognitive Neuroscience research group.

NeurSpeechXAI is part of a coordinated project developed in collaboration with the University of the Basque Country and the Basque Center on Cognition, Brain and Language (BCBL) in San Sebastián, with the common objective of advancing the decoding of speech and language from brain signals using advanced artificial intelligence algorithms.

NeurSpeechXAI addresses a pressing challenge: assisting people who have lost the ability to speak due to neurological injuries or neurodegenerative diseases such as Amyotrophic Lateral Sclerosis (ALS). In the most severe cases, these patients may be in a locked-in state, fully conscious but unable to communicate verbally.

The project seeks to develop neuroprostheses that transform brain signals into text or synthesized voice, using algorithms similar to those employed by virtual assistants like Siri or Alexa, but adapted to interpret neural activity directly. This approach promises faster, more direct, and more natural communication, opening new avenues of autonomy for individuals with severe motor disabilities.
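To make the analogy with speech recognition concrete, the sketch below shows, purely as an illustration, how a decoder of this kind can be structured in Python with PyTorch: a recurrent network maps multichannel neural features to character sequences and is trained with a CTC loss, as in many automatic speech recognition systems. Every name, shape, and architectural choice here is an assumption made for the example, not the project's actual design.

```python
# Hypothetical sketch of a neural speech decoder; all shapes/names are assumed.
import torch
import torch.nn as nn

class NeuralSpeechDecoder(nn.Module):
    """Maps a time series of neural features to per-frame character logits."""
    def __init__(self, n_channels=128, hidden=256, n_chars=30):
        super().__init__()
        # A bidirectional GRU encodes temporal context in the recording.
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        # Linear head emits logits over characters plus the CTC blank symbol.
        self.head = nn.Linear(2 * hidden, n_chars + 1)

    def forward(self, x):           # x: (batch, time, channels)
        h, _ = self.rnn(x)
        return self.head(h)         # (batch, time, n_chars + 1)

# Toy training step with random data standing in for real recordings.
model = NeuralSpeechDecoder()
feats = torch.randn(4, 200, 128)             # 4 trials, 200 frames, 128 channels
log_probs = model(feats).log_softmax(-1)     # CTC expects log-probabilities
targets = torch.randint(1, 31, (4, 20))      # dummy character indices (0 = blank)
loss = nn.CTCLoss(blank=0)(
    log_probs.transpose(0, 1),               # CTC wants (time, batch, classes)
    targets,
    torch.full((4,), 200, dtype=torch.long), # input lengths
    torch.full((4,), 20, dtype=torch.long),  # target lengths
)
loss.backward()                              # gradients for an optimizer step
print(f"CTC loss: {loss.item():.3f}")
```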

The research will combine artificial intelligence algorithms with different methods of recording brain activity, both invasive and non-invasive, to analyze how language is encoded at different levels. Furthermore, it will incorporate Explainable AI (XAI) techniques, which make it possible to identify which brain patterns drive the models’ decisions, providing valuable tools for clinical validation and for advancing knowledge of the human brain.
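For the XAI side, one widely used family of techniques is gradient-based attribution. The hypothetical sketch below applies a simple input-times-gradient saliency analysis to the decoder from the previous example to rank which recording channels most influenced a prediction; the article does not specify which XAI methods the project will use, so this stands only for the general idea.

```python
# Hypothetical saliency analysis; reuses NeuralSpeechDecoder from the sketch above.
import torch

model = NeuralSpeechDecoder()                         # class defined previously
feats = torch.randn(1, 200, 128, requires_grad=True)  # one trial of neural features

logits = model(feats)                                 # (1, time, classes)
score = logits.mean(dim=1)[0].max()                   # strongest predicted class
score.backward()                                      # gradients w.r.t. the input

# |input x gradient| highlights which frames/channels drove the decision;
# such maps can then be compared against known speech-related brain areas.
saliency = (feats * feats.grad).abs().detach().squeeze(0)  # (time, channels)
top_channels = saliency.sum(dim=0).topk(5).indices
print("Most influential channels:", top_channels.tolist())
```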

José Andrés González, the project's Principal Investigator, explains that the work starts from "algorithms similar to those that allow assistants like Siri or Alexa to understand human language, but adapted to interpret brain activity directly. The practical application is enormous: consider well-known cases like that of the scientist Stephen Hawking. Although he relied on a very slow and limited system, technologies like those we are developing could, in the future, allow for much more natural and rapid communication for people in the same situation."

NeurSpeechXAI brings together researchers from the Mind, Brain and Behavior Research Center (CIMCYC), specialists from the Neurosurgery Unit at the Virgen de las Nieves University Hospital, and the Neural Interfacing Lab at Maastricht University (Netherlands), a global leader in speech-focused brain-computer interfaces. This collaboration will integrate knowledge in engineering, neuroscience, psychology, linguistics and clinical practice, covering the entire process, from the acquisition of neural signals to the validation of AI models.

The project will generate multimodal datasets that will be published under open science principles, boosting future research in neuroprostheses, AI and cognitive neuroscience.