A neurotechnology start-up claims to have achieved something that sounds like science fiction: transmitting a word from one person’s dream into another’s while both were asleep.
REMspace says its experiment involved two lucid dreamers: one heard a random word through headphones and uttered it in their dream, while the second dreamer picked up the same word eight minutes later.
If validated, the experiment would mark a significant leap in brain-computer interface (BCI) technology, tapping into the intersection of artificial intelligence, neuroimaging and sleep science.
However, the technology is still in its infancy, experts tell The National. They say the cost and complexity remain high, and mainstream use is a distant prospect.
What the experiment entails
REMspace says it used sleep-monitoring headgear and a “dream language” system to track two lucid dreamers as they slept.
Both participants were synchronised using EEG and other sensors to identify when they entered rapid-eye movement (REM) sleep, the stage in which vivid dreams occur.
During the experiment, one sleeper was played a random word through headphones.
The company claims the participant heard the cue, spoke it inside their dream, and that the second dreamer later recognised the same word in their own dream, a process it describes as “dream-to-dream communication”.
However, the claim has not been independently verified or published in a peer-reviewed journal, and scientists say there is no clear evidence showing that information was actually transmitted between the two brains.
How plausible is it?
David Melcher, professor of psychology and co-principal investigator at the Centre for Brain and Health at NYU Abu Dhabi, says that while scientists can already interpret some brain activity, the idea of transmitting words between dreamers remains uncertain.
He says researchers are now able to “successfully guess what information people are seeing, what people perceive or think about”, using tools such as EEG and functional magnetic resonance imaging, techniques widely used in cognitive neuroscience. However, transmitting information between sleepers is “more unclear”.
“There are some studies that suggest that at some (but not all) stages of sleep, hearing a sound or word can activate the brain regions processing sounds,” Prof Melcher says.
REMspace’s focus on lucid dreaming adds another layer of complexity, he says, since “there are very few studies of whether lucid dreamers hear and understand words or not”.
While at least one study has suggested it might be possible, “further research is required”, he adds.
Conducting such experiments, he adds, requires monitoring multiple physiological signals and using complex decoding tools.
“You would need to detect what sleep stage they are in (EEG plus eye movement, cardiac and respiratory measures), then have some way of decoding their brain activity.”
Such research also demands significant investment. “The neuroscience equipment is costly,” he says, “but also to run these studies takes months of time from trained neuroscientists.”
As for whether dream-to-dream communication could have practical value, Prof Melcher says the appeal may be largely recreational. “I imagine that people would find it interesting to dream together, but it is not clear if there are any other benefits other than it potentially being fun.”
Technology still far from real-world use
From a clinical perspective, Dr Bobby Jose, a neurosurgery specialist at Medcare Royal Speciality Hospital in Dubai, says the technology needed to decode or transmit words from dreams does not yet exist.
“It is not possible to decode specific information from dreams,” he says, adding that while there is “a lot of electrical activity going on” in the brain during sleep, current tools cannot “decode and decipher what a person is thinking”.
To make such communication feasible, scientists would need advanced brain–computer interfaces capable of translating electrical brain impulses into words or thoughts, Dr Jose says. “EEG and a decoding device to translate EEG impulses into meaningful words or thoughts of an individual would be required.”
He also warns that such developments could raise serious ethical and privacy issues. “Such technology would be a powerful tool to intrude into a person’s intimate privacy and this may not be ethical,” he says, adding that for now, the field remains far from any practical application.
“It is still in the experimental stage and may take years before we can take it to a meaningful level to accurately communicate and decipher dreams.”
What the science says
Lucid dreaming, the state in which people are aware they are dreaming and can sometimes influence what happens, is a well-established phenomenon supported by decades of sleep research.
A 2025 study published in The Journal of Neuroscience, titled Electrophysiological Correlates of Lucid Dreaming: Sensor and Source Level Signatures, found distinct brain-activity patterns that differentiate lucid dreaming from ordinary REM sleep.
Using EEG analysis, researchers identified reductions in beta power (12–30 Hz) in right parietal regions and increased alpha-band connectivity, evidence that lucid dreaming represents a hybrid state between wakefulness and dreaming.
Earlier research has also shown that limited two-way communication with dreamers is possible under certain conditions.
A 2021 study published in Current Biology, titled Real-time dialogue between experimenters and dreamers during REM sleep and led by Ken Paller at Northwestern University, demonstrated that lucid dreamers could perceive external cues such as spoken words or light flashes and respond accurately with eye movements or facial twitches.
The study involved 36 participants across multiple labs and confirmed real-time “interactive dreaming”, a first in sleep research.
These findings show that external stimuli such as sound, light or touch can influence dream content and enable basic interaction between a dreamer and an awake researcher.
However, no peer-reviewed study has ever demonstrated the transmission of information between two separate dreamers, and scientists say REMspace’s claim remains unverified and outside the scope of current neurotechnology.
How close is it to becoming mainstream?
Despite rapid progress in neurotechnology, experts say that applications involving dream communication remain confined to research settings.
The global BCI market is projected to reach $12.4 billion by 2035, up from $2.6 billion in 2024, according to a 2024 market report by Precedence Research. The growth will be driven largely by medical and assistive uses such as stroke rehabilitation, prosthetics and communication for people with paralysis, not dream-state experiments, the research agency said.
The most advanced peer-reviewed demonstrations of “interactive dreaming,” such as the study published in Current Biology, involve simple external cues and eye-movement responses observed in laboratory settings.
By contrast, commercial neurotechnology headsets like Muse and NextMind can only track general brainwave activity, and none are capable of decoding or transmitting specific words or imagery.
Regulators also classify any device that records or stimulates the brain as a medical device, subject to clinical testing and ethics approval.
The US Food and Drug Administration has so far approved BCIs only for therapeutic use in patients with severe disabilities, not for consumer communication or entertainment.
Given these constraints, researchers say there is no current pathway for mainstream or consumer use of dream-to-dream technology.