Commit f9a4538 (parent: d217c37)
Showing 7 changed files with 202 additions and 22 deletions.
@@ -0,0 +1,40 @@
---
title: '{{ replace .File.ContentBaseName "-" " " | title }}'
date: {{ .Date }}
draft: true
tags: [""]
---

[This review is intended solely for my personal learning]

Paper Info
> DOI:
> Title:
> Authors:

## Prior Knowledge

## Goal

## Method

## Results

## Conclusion

## Limitations

# Thoughts

---

## Reference
* The paper:
* This note was written with the assistance of Generative AI and is based on the content and results presented in the original paper.
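
For context, this is a Hugo archetype: creating a post with `hugo new` (e.g., `hugo new posts/some-paper.md`) scaffolds a review from this template, deriving the title from the file name (hyphens replaced with spaces, then title-cased, per the `replace ... | title` pipeline) and setting the date to the creation time via `.Date`.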
@@ -0,0 +1,4 @@
---
title: ''
draft: true
---
@@ -0,0 +1,75 @@
---
title: "[LR] Interfacing with Lucid Dreams"
date: 2024-09-25T20:33:52+08:00
draft: false
tags: ["Human-Computer Interaction"]
---

[This review is intended solely for my personal learning]

Paper Info - LuciEntry
> DOI: 10.1145/3613905.3649123
> Title: LuciEntry: A Modular Lab-based Lucid Dreaming Induction Prototype
> Authors: Po-Yao (Cosmos) Wang, Nathaniel Lee Yung Xiang, Rohit Rajesh, Antony Smith Loose, Nathan Semertzidis, and Florian ‘Floyd’ Mueller

Paper Info - DreamCeption
> DOI: 10.1145/3613905.3649121
> Title: DreamCeption: Towards Understanding the Design of Targeted Lucid Dream Mediation
> Authors: Po-Yao (Cosmos) Wang, Rohit Rajesh, Antony Smith Loose, Nathaniel Lee Yung Xiang, Nathalie Overdevest, Nathan Semertzidis, and Florian ‘Floyd’ Mueller

## Prior Knowledge

Lucid dreaming is a phenomenon wherein sleepers become aware that they are dreaming while asleep, often enabling manipulation of dream content and offering potential benefits such as enhanced creativity, nightmare alleviation, and stress relief. Past research has focused on techniques for lucid dream induction (e.g., wake-back-to-bed, mnemonic induction) and has explored the use of interactive technologies—such as auditory or visual cues—to influence dream content. However, effectively automating or streamlining this process, and helping dreamers shape specific dream topics, remains a challenge.

## Goal

Both **LuciEntry** and **DreamCeption** explore how interactive systems can be harnessed to facilitate lucid dreaming:

- **LuciEntry** seeks to simplify and automate the induction of lucid dreams in a lab setting, featuring a modular and autonomous platform that detects REM and delivers multiple cues (visual, auditory, electrical) at the right moment to trigger lucidity. By reducing researchers’ workloads and increasing reliability, LuciEntry aims to make the study of lucid dreaming more systematic and accessible.

- **DreamCeption** focuses on shaping or “inserting” specific dream themes once a lucid dream is detected, thereby expanding what lucid dreamers can do after attaining lucidity.

## Method

#### LuciEntry

1. **Wake-Back-to-Bed Protocol**: Participants sleep 4 hours uninterrupted, then are briefly awakened for cognitive training (e.g., MILD).
2. **Modular Architecture**: A headband with EEG and EOG electrodes connects to a Raspberry Pi server. When sustained REM is detected, the server automatically triggers external cues—LED flashing, binaural beats, and 40 Hz tACS—without researcher intervention (a minimal sketch of this trigger loop follows the list).
3. **Emergency Button**: Users can halt stimulation at any time, ensuring safety and peace of mind.
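
A minimal sketch of the kind of closed-loop trigger loop such a server might run. The epoch length, thresholds, toy classifier, and cue function are illustrative assumptions, not details from the paper:

```python
import random
import time

# Illustrative values only; the paper's actual REM-scoring pipeline is not specified here.
EPOCH_SECONDS = 30        # standard sleep-scoring epoch length
REM_EPOCHS_REQUIRED = 10  # consecutive REM epochs required before cueing (assumed)

def classify_epoch(eeg_power: float, eog_activity: float) -> bool:
    """Toy REM detector: low-amplitude EEG plus high eye-movement activity."""
    return eeg_power < 0.3 and eog_activity > 0.7

def deliver_cues() -> None:
    """Stand-in for the real actuators: LED flashing, binaural beats, 40 Hz tACS."""
    print("cue: flash LEDs, play binaural beats, start 40 Hz tACS")

def main() -> None:
    consecutive_rem = 0
    emergency_stop = False  # in LuciEntry this is a physical button the sleeper can press
    while not emergency_stop:
        # A real system would stream these features from the EEG/EOG headband.
        eeg_power, eog_activity = random.random(), random.random()
        consecutive_rem = consecutive_rem + 1 if classify_epoch(eeg_power, eog_activity) else 0
        if consecutive_rem >= REM_EPOCHS_REQUIRED:
            deliver_cues()
            consecutive_rem = 0  # re-arm and wait for the next sustained REM period
        time.sleep(EPOCH_SECONDS)

if __name__ == "__main__":
    main()
```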

#### DreamCeption

1. **Closed-Loop Detection**: Employs brain (EEG) and eye (EOG) sensors to identify when users enter a lucid dream.
2. **Targeted Stimuli**: Once lucidity is signaled (participants move their eyes in a pre-agreed left-right pattern), the system provides stimuli—visual (light), auditory (sound effects), and even haptic or galvanic vestibular stimulation—corresponding to a chosen dream theme (e.g., “scuba diving”). A sketch of the eye-signal detection follows this list.
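
A simplified sketch of detecting a deliberate left-right-left-right eye signal from a horizontal EOG channel. The sampling rate, amplitude threshold, and timing window are assumptions for illustration, not values from the paper:

```python
import numpy as np

FS = 250           # assumed EOG sampling rate (Hz)
THRESHOLD = 100.0  # assumed amplitude (µV) marking a deliberate eye sweep

def detect_lrlr(eog: np.ndarray, max_gap_s: float = 1.0) -> bool:
    """Return True if the trace shows four alternating-polarity sweeps
    (left-right-left-right) close together in time."""
    above = np.where(eog > THRESHOLD)[0]
    below = np.where(eog < -THRESHOLD)[0]
    # Merge supra-threshold samples into one time-ordered event stream.
    events = sorted([(i, +1) for i in above] + [(i, -1) for i in below])
    # Collapse runs of same-polarity samples into single events.
    collapsed = [events[0]] if events else []
    for idx, pol in events[1:]:
        if pol != collapsed[-1][1]:
            collapsed.append((idx, pol))
    # Look for four alternating sweeps within the allowed gap.
    for k in range(len(collapsed) - 3):
        window = collapsed[k:k + 4]
        if all((b[0] - a[0]) / FS <= max_gap_s for a, b in zip(window, window[1:])):
            return True
    return False

# Toy usage: a synthetic trace with two left-right sweep pairs.
trace = np.zeros(4 * FS)
for start, sign in [(0, +1), (FS, -1), (2 * FS, +1), (3 * FS, -1)]:
    trace[start:start + FS // 4] = sign * 150.0
print(detect_lrlr(trace))  # True
```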

## Results

#### LuciEntry

- In a pilot study with three overnight sessions, two participants reported achieving short lucid dreams after receiving the visual/audio/electrical cues.
- Demonstrated “dream incorporation,” where external stimuli (flashing lights, sounds) were woven into dream narratives (e.g., seeing brake lights in a racing dream).
- Identified system hurdles such as sensor calibration, headband comfort, and ensuring fully autonomous operation.

#### DreamCeption

- Illustrates how well-timed “dream prime” stimuli can lead lucid dreamers to incorporate specific elements (e.g., ocean sounds, bubble haptics) into their dream worlds.
- Underscores that real-time detection of lucidity is crucial for delivering prompts effectively.

## Conclusion

Taken together, **DreamCeption** and **LuciEntry** exemplify how HCI-driven solutions can deepen our engagement with lucid dreaming. **DreamCeption** offers a vision of _content-rich dream design_, enabling users to “sculpt” their dream environment. Meanwhile, **LuciEntry** addresses _scalable, automated induction_, promising a more robust framework for controlled experiments and eventual personal use. Both open exciting avenues in dream engineering—where carefully timed interventions harness the dreamer’s brain state to either reliably induce or intricately shape dream content.

These two prototypes illustrate an emerging intersection of immersive design, biofeedback, and sleep science, pushing beyond conventional VR experiences into the realm of dreams.

## Limitations
1. **Signal Quality**: EEG and EOG readings are prone to interference from movement or improper electrode placement, potentially impeding real-time detection.
2. **Short Lucid Durations**: While users became aware of dreaming, many reported only fleeting moments of lucidity. Lengthening such episodes remains a challenge.
3. **Wearability**: Discomfort from wearing headsets or electrodes overnight can disrupt sleep and reduce data reliability.

---

## Reference
* The paper:
  * LuciEntry: https://dl.acm.org/doi/10.1145/3613905.3649123
  * DreamCeption: https://dl.acm.org/doi/10.1145/3613905.3649121
* This note was written with the assistance of Generative AI and is based on the content and results presented in the original papers.
@@ -0,0 +1,57 @@
---
title: "[LR] Binocular Vision SSVEP BCI for Dual-Frequency Modulation"
date: 2024-10-12T20:32:51+08:00
draft: false
tags: ["Brain–Computer Interface", "SSVEP"]
---

[This review is intended solely for my personal learning]

Paper Info
> DOI: 10.1109/TBME.2022.3212192
> Title: A Binocular Vision SSVEP Brain–Computer Interface Paradigm for Dual-Frequency Modulation
> Authors: Yike Sun, Liyan Liang, Jingnan Sun, Xiaogang Chen, Runfa Tian, Yuanfang Chen, Lijian Zhang, and Xiaorong Gao

## Prior Knowledge
- **SSVEP and BCIs:** Steady-State Visual Evoked Potentials (SSVEPs) are brain responses elicited by periodic visual stimuli. Their robustness and high signal-to-noise ratio make them a cornerstone of non-invasive BCI research.
- **Dual-Frequency Stimulation:** Traditional dual-frequency paradigms, such as the checkerboard arrangement, allow the encoding of more targets but are hampered by intermodulation artifacts (see the note after this list), which can compromise signal quality.
- **Binocular Vision Approach:** By using circularly polarized light to deliver a different frequency to each eye, the binocular vision paradigm minimizes interference from intermodulation harmonics, thereby enhancing signal fidelity.
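
For concreteness (standard signal-processing background, not a formula restated from the paper): when two frequencies $f_1$ and $f_2$ drive the same nonlinear system, such as the visual pathway, responses appear not only at the harmonics $n f_1$ and $m f_2$ but also at intermodulation frequencies

$$
f_{\mathrm{IM}} = \left| n f_1 \pm m f_2 \right|, \qquad n, m = 1, 2, 3, \ldots
$$

These components can collide with other targets' fundamentals or harmonics, which is why delivering $f_1$ and $f_2$ to separate eyes helps keep the response spectrum clean.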

## Goal
To design and evaluate a novel dual-frequency SSVEP paradigm based on binocular vision that suppresses intermodulation harmonics and enhances overall BCI performance, particularly in training-free applications.

## Method
The study was structured around two primary experiments:
1. **Experiment 1: Offline SNR Analysis**
   - **Participants:** 9 subjects.
   - **Design:** A 6-target experiment comparing the binocular vision paradigm with the traditional checkerboard arrangement.
   - **Measurements:** Signal-to-noise ratios (SNRs) were calculated for broadband, narrowband, and intermodulation components to assess the quality of the evoked potentials (a common SNR definition is given at the end of this section).

2. **Experiment 2: Online BCI Evaluation**
   - **Participants:** 12 subjects.
   - **Design:** A 40-target, training-free online experiment.
   - **Analysis:** Used a customized Filter Bank Dual-Frequency Canonical Correlation Analysis (FBDCCA) algorithm to decode the SSVEP responses, with additional offline analysis using Task-Related Component Analysis (TRCA) for comparison.

Stimuli were presented on a circularly polarized display in which alternating odd and even rows carried different frequencies, ensuring each eye received a distinct stimulus.
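
The paper defines its own broadband, narrowband, and intermodulation SNR estimators; for orientation only, a narrowband definition commonly used in the SSVEP literature compares the spectral amplitude at the stimulation frequency with the mean amplitude of its $2K$ neighboring bins:

$$
\mathrm{SNR}(f) = 20 \log_{10} \frac{A(f)}{\frac{1}{2K} \sum_{k=1}^{K} \left[ A(f + k\,\Delta f) + A(f - k\,\Delta f) \right]} \ \text{dB}
$$

where $A(\cdot)$ is the amplitude spectrum and $\Delta f$ the frequency resolution; the roughly 2 dB intermodulation reduction reported below is on this logarithmic scale.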

## Results
- **Improved Signal Quality:**
  The binocular vision paradigm yielded significantly higher broadband and narrowband SNRs and reduced intermodulation noise by approximately 2 dB compared with the traditional checkerboard setup.

- **Enhanced BCI Performance:**
  In the online experiment, the training-free system achieved an average Information Transfer Rate (ITR) of 104.56 bits/min—nearly double that of the conventional approach (the standard ITR formula is given after this list). Offline analyses further confirmed the robustness of the binocular paradigm.

- **Effective Algorithm Adaptation:**
  The tailored FBDCCA algorithm successfully decoded dual-frequency responses, providing high classification accuracy without the need for extensive training (a simplified decoding sketch also follows the list).
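
For reference, ITR in SSVEP studies is conventionally computed with Wolpaw's formula, where $N$ is the number of targets, $P$ the classification accuracy, and $T$ the time per selection in seconds:

$$
\mathrm{ITR} = \frac{60}{T} \left( \log_2 N + P \log_2 P + (1 - P) \log_2 \frac{1 - P}{N - 1} \right) \ \text{bits/min}
$$

With $N = 40$ targets and perfect accuracy ($P = 1$), the bracketed term reduces to $\log_2 40 \approx 5.32$ bits, so an ITR of 104.56 bits/min corresponds to a selection time of roughly 3 s; the paper's exact accuracy and timing are not restated here.

The paper's FBDCCA extends filter-bank CCA to dual-frequency reference sets. As a rough orientation, below is a minimal single-frequency FBCCA-style classifier; the sampling rate, sub-band edges, harmonic count, and sub-band weighting are common choices from the FBCCA literature, and the dual-frequency reference construction specific to FBDCCA is not reproduced:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250        # assumed sampling rate (Hz)
HARMONICS = 3   # harmonics per reference set
SUBBANDS = [(8 * k, 90) for k in range(1, 4)]  # illustrative filter-bank edges (Hz)

def cca_corr(x: np.ndarray, y: np.ndarray) -> float:
    """Largest canonical correlation between the column spaces of x and y."""
    qx, _ = np.linalg.qr(x - x.mean(axis=0))
    qy, _ = np.linalg.qr(y - y.mean(axis=0))
    return float(np.linalg.svd(qx.T @ qy, compute_uv=False)[0])

def reference(freq: float, n_samples: int) -> np.ndarray:
    """Sin/cos reference signals at freq and its harmonics, shape (n_samples, 2*HARMONICS)."""
    t = np.arange(n_samples) / FS
    comps = []
    for h in range(1, HARMONICS + 1):
        comps += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.stack(comps, axis=1)

def fbcca_classify(eeg: np.ndarray, freqs: list[float]) -> int:
    """Return the index of the most likely target; eeg has shape (n_samples, n_channels)."""
    n = eeg.shape[0]
    weights = [(k + 1) ** -1.25 + 0.25 for k in range(len(SUBBANDS))]  # Chen et al. weighting
    scores = []
    for f in freqs:
        ref = reference(f, n)
        rho = 0.0
        for (lo, hi), w in zip(SUBBANDS, weights):
            b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype="band")
            sub = filtfilt(b, a, eeg, axis=0)  # sub-band component of the EEG
            rho += w * cca_corr(sub, ref) ** 2
        scores.append(rho)
    return int(np.argmax(scores))

# Toy usage: a noisy 12 Hz SSVEP-like signal should be classified as 12 Hz.
t = np.arange(2 * FS) / FS
eeg = np.sin(2 * np.pi * 12 * t)[:, None] + 0.5 * np.random.randn(2 * FS, 1)
print(fbcca_classify(eeg, [10.0, 12.0, 15.0]))  # expect index 1
```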

## Conclusion
This study demonstrates that the binocular vision approach can effectively suppress intermodulation harmonics, resulting in improved SSVEP signal quality and enhanced BCI performance. The integration of hardware (circularly polarized displays) with the specialized FBDCCA algorithm paves the way for scalable, training-free BCI systems capable of handling a larger number of targets.

## Limitations
The study was conducted under controlled laboratory conditions with specialized hardware (circularly polarized displays), which may limit immediate real-world application. The small sample size also calls for validation with a more diverse population. Future work should extend the FBDCCA algorithm to other dual-frequency paradigms and BCI configurations to broaden the approach's practical utility.

---

## Reference
* The paper: https://ieeexplore.ieee.org/document/9911680
* This note was written with the assistance of Generative AI and is based on the content and results presented in the original paper.
@@ -1 +1 @@
- <script type="text/javascript" src={{ "/js/canvas-nest.js" | relURL }} count=100 color="255,255,255" opacity=1></script>
+ <script type="text/javascript" src={{ "/js/canvas-nest.js" | relURL }} count=80 color="102,255,178" opacity=1></script>
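
For context (based on canvas-nest's documented usage, not shown in this diff): the library reads its configuration from attributes on its own script tag, where `count` sets how many particles are drawn, `color` is an `R,G,B` string for the particles and their connecting lines, and `opacity` their transparency. This change therefore reduces the particle count from 100 to 80 and switches the color from white (255,255,255) to a mint green (102,255,178).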