The researchers are based at Stanford Medicine and collaborating institutions. Their AI model, SleepFM, processes full polysomnography (PSG) recordings. PSG is a comprehensive, multi-parameter sleep study used to evaluate how a subject’s body functions during sleep.
How AI reads the language of sleep
PSG monitors brain waves, breathing, eye movements, muscle activity, heart rhythms, and blood oxygen levels. SleepFM aims to go beyond just sleep disorders by treating these signals as a single physiological dataset.
The researchers analysed the largest dataset of its kind: 585,000 hours of sleep recordings from 65,000 people. SleepFM slices each recording into five-second chunks, letting the model pick out patterns much as large language models handle words and sentences.
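The chunking step can be pictured as a simple windowing operation. The sketch below is illustrative, not the authors' pipeline; the channel count and sampling rate are assumptions.

```python
import numpy as np

# Hypothetical sketch: split a multi-channel PSG recording into fixed
# five-second windows, analogous to tokenising text for a language model.
# The 128 Hz sampling rate and 4 channels are illustrative assumptions.
SAMPLE_RATE_HZ = 128
WINDOW_SECONDS = 5
WINDOW_SAMPLES = SAMPLE_RATE_HZ * WINDOW_SECONDS

def segment_recording(signal: np.ndarray) -> np.ndarray:
    """Split a (channels, samples) array into (windows, channels, WINDOW_SAMPLES)."""
    channels, samples = signal.shape
    n_windows = samples // WINDOW_SAMPLES  # drop any trailing partial window
    trimmed = signal[:, : n_windows * WINDOW_SAMPLES]
    return trimmed.reshape(channels, n_windows, WINDOW_SAMPLES).transpose(1, 0, 2)

# Example: 60 seconds of fake 4-channel data -> 12 five-second "tokens"
recording = np.random.randn(4, 60 * SAMPLE_RATE_HZ)
tokens = segment_recording(recording)
print(tokens.shape)  # (12, 4, 640)
```

Each five-second window then plays the role a word or sub-word token plays in a language model.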
Training across multiple body systems
SleepFM is regarded as a breakthrough for its ability to combine multiple signal sources. It can simultaneously process brain activity, muscle movement, and breathing patterns. Tracking multiple body systems allows SleepFM to detect when physiological signals drift out of phase during sleep.
The researchers trained the model to learn how different body systems interact using a leave-one-out contrastive learning method. The technique works by withholding one signal and matching it against a representation built from the others.
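A minimal sketch of that idea, assuming an InfoNCE-style objective (the authors' exact loss may differ): each modality's embedding for a window is pulled towards the average embedding of the remaining modalities for the same window, and pushed away from other windows in the batch.

```python
import numpy as np

# Illustrative leave-one-out contrastive loss over per-modality embeddings.
# Shapes and the temperature value are assumptions for the sketch.
def loo_contrastive_loss(embeddings: np.ndarray, temperature: float = 0.1) -> float:
    """embeddings: (modalities, batch, dim), assumed L2-normalised."""
    n_mod, batch, _ = embeddings.shape
    total = 0.0
    for m in range(n_mod):
        anchor = embeddings[m]                                # (batch, dim)
        # Leave modality m out; average and renormalise the rest.
        rest = np.delete(embeddings, m, axis=0).mean(axis=0)  # (batch, dim)
        rest /= np.linalg.norm(rest, axis=1, keepdims=True)
        logits = anchor @ rest.T / temperature                # (batch, batch)
        # InfoNCE: the matching window (the diagonal) is the positive pair.
        log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        total += -np.diag(log_probs).mean()
    return total / n_mod

rng = np.random.default_rng(0)
emb = rng.normal(size=(3, 8, 16))            # 3 modalities, batch of 8 windows
emb /= np.linalg.norm(emb, axis=2, keepdims=True)
loss = loo_contrastive_loss(emb)
```

Minimising this loss forces each modality's representation to be predictable from the others, which is what lets the model notice when one body system falls out of sync.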
Predicting illness years in advance
To test whether sleep data alone could forecast future diseases, the team merged medical records from a single clinic with sleep data. SleepFM predicted the onset of 130 conditions, including dementia, cancer, Parkinson’s disease, and heart attack. The model achieved C-index scores above 0.8, meaning that in more than 8 out of 10 patient comparisons it correctly ranked which patient was at higher risk.
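The C-index can be computed on toy data to make its meaning concrete. This simplified version ignores censoring (real survival-analysis evaluations handle patients who never develop the condition); all values below are invented.

```python
import numpy as np

# Toy concordance index: over all usable patient pairs, the fraction in
# which the model assigns the higher risk score to the patient whose
# disease event occurs first. Ties in risk score count as half-concordant.
def concordance_index(event_times, risk_scores) -> float:
    event_times = np.asarray(event_times, dtype=float)
    risk_scores = np.asarray(risk_scores, dtype=float)
    concordant, comparable = 0.0, 0
    n = len(event_times)
    for i in range(n):
        for j in range(i + 1, n):
            if event_times[i] == event_times[j]:
                continue                      # tied event times: skip the pair
            comparable += 1
            earlier = i if event_times[i] < event_times[j] else j
            later = j if earlier == i else i
            if risk_scores[earlier] > risk_scores[later]:
                concordant += 1
            elif risk_scores[earlier] == risk_scores[later]:
                concordant += 0.5
    return concordant / comparable

# Perfectly ranked toy data: shorter time-to-event gets a higher risk score.
times = [2.0, 5.0, 1.0, 8.0]
risks = [0.7, 0.4, 0.9, 0.1]
print(concordance_index(times, risks))  # 1.0
```

A score of 0.5 corresponds to random ranking and 1.0 to perfect ranking, so values above 0.8 indicate strong discrimination.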
The researchers are now working to improve SleepFM and integrate data from wearable devices.