On March 3, a research group led by Associate Professor Kenichi Fukui of the Institute of Scientific and Industrial Research, Osaka University, and Professor Takashi Kato of the university's Graduate School of Dentistry announced the development of an AI technology that applies machine learning to sounds recorded by smartphones and tablets in order to visualize and evaluate individual sleep patterns.
Until now, sleep quality could be measured only at specialized facilities and hospitals. With one in five Japanese reportedly suffering from insomnia, there has been long-standing demand for a system that can be used easily at home to capture sleep patterns, which vary with physical condition and environment.
The main challenge was distinguishing sleep-related sounds, such as tooth grinding, body movement, and snoring, from environmental sounds such as air conditioners and voices. The research group combined multiple machine learning methods to develop a technique that extracts sleep-related sounds with high precision and automatically maps them onto a two-dimensional plane according to their acoustic characteristics. In a sleep experiment, the proposed method successfully visualized sleep patterns.
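The article does not disclose the group's exact algorithms, but the general idea of separating sound clips by their acoustic characteristics and placing them on a two-dimensional plane can be sketched as follows. This is a minimal illustration, not the authors' method: the two features (RMS energy and zero-crossing rate), the synthetic "snore-like" and "ambient" clips, and the simple k-means clustering are all assumptions chosen for clarity.

```python
import numpy as np

def clip_features(clip):
    """Two illustrative per-clip features: loudness (RMS) and
    zero-crossing rate (a rough proxy for spectral content)."""
    rms = np.sqrt(np.mean(clip ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(clip))) > 0)
    return np.array([rms, zcr])

def kmeans(X, k=2, iters=50):
    """Minimal k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(1, k):
        # Next center: the point farthest from all chosen centers.
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic stand-ins for recorded clips (8000 samples at 8 kHz):
# loud low-frequency "snore-like" tones vs. quiet broadband "ambient" noise.
rng = np.random.default_rng(1)
t = np.arange(8000) / 8000.0
snores = [0.8 * np.sin(2 * np.pi * 90 * t) + 0.05 * rng.standard_normal(8000)
          for _ in range(5)]
ambient = [0.05 * rng.standard_normal(8000) for _ in range(5)]

X = np.array([clip_features(c) for c in snores + ambient])
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize so both features weigh equally
labels = kmeans(X, k=2)
# Each row of X is now a coordinate on a 2-D feature plane suitable for plotting,
# and `labels` groups the clips by sound type.
```

With richer features (e.g. MFCCs) and a dimensionality-reduction step such as t-SNE or PCA in place of an already-2-D feature vector, the same pipeline shape yields the kind of 2-D sound map the article describes.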
If applying this technology makes comfortable, personalized sleep possible, expectations are rising that it will lead to higher-quality sleep, for example through smartphone and tablet apps for self-management at home and through the control of lighting and air conditioning according to individual sleep patterns.
Paper information: [AAAI Workshop Proceedings] Personal Sleep Pattern Visualization via Clustering on Sound Data