Commit

add posts

HowardHsuuu committed Feb 15, 2025
1 parent d217c37 commit f9a4538
Showing 7 changed files with 202 additions and 22 deletions.
40 changes: 40 additions & 0 deletions archetypes/LR.md
@@ -0,0 +1,40 @@
---
title: '{{ replace .File.ContentBaseName "-" " " | title }}'
date: {{ .Date }}
draft: true
tags: [""]
---

[This review is intended solely for my personal learning]

Paper Info
> DOI:
> Title:
> Authors:
## Prior Knowledge


## Goal


## Method


## Results


## Conclusion


## Limitations


## Thoughts


---

## Reference
* The paper:
* This note was written with the assistance of Generative AI and is based on the content and results presented in the original paper.
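
Usage note (an assumption based on Hugo's standard archetype lookup, not anything stated in this commit): with this template saved as `archetypes/LR.md`, a new review skeleton can be scaffolded with `hugo new --kind LR content/posts/LR/<paper-name>.md`, which fills in the title and date placeholders above.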
12 changes: 6 additions & 6 deletions config/_default/config.yml
@@ -25,12 +25,12 @@ languages:
- name: Topics
url: posts/
weight: 10
# - name: About Me
# url: aboutme/
# weight: 20
# - name: Tags
# url: tags/
# weight: 15
- name: About Me
url: aboutme/
weight: 20
- name: Tags
url: tags/
weight: 15
zh-hant:
languageName: "Traditional Chinese"
disabled: true
4 changes: 4 additions & 0 deletions content/posts/DeFi Analysis/_index.md
@@ -0,0 +1,4 @@
---
title: ''
draft: true
---
34 changes: 19 additions & 15 deletions content/posts/LR/LLM&ToM.md → content/posts/LR/LLM-ToM.md
@@ -1,30 +1,33 @@
---
title: "[LR] Unveiling Theory of Mind in Large Language Models"
date: 2024-09-23T19:24:43+08:00
draft: false
tags: ["Cognitive Neuroscience", "Large Language Models"]
---

[This review is intended solely for my personal learning]

Paper Info
> arXiv:2309.01660
> Title: Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain
> Authors: Mohsen Jamali, Ziv M. Williams, and Jing Cai
## Prior Knowledge
- **Theory of Mind (ToM)**:
A complex cognitive capacity, tied to our conscious mind and mental states, that allows us to infer another's beliefs and perspective. Through ToM, humans can create intricate mental representations of other agents and realize that others may hold beliefs that differ from our own or from objective reality.
- **True- and False-belief Task**
  * True-belief task: assesses whether someone understands that another person's belief is correctly aligned with reality.
  * False-belief task: assesses whether someone understands that another person's belief is **not** aligned with reality (e.g., the belief diverges from reality after a change to the environment that the person did not witness).
  * A critical test for ToM is the false-belief task.
  * Both tasks are evaluated by giving the participant a scenario and asking "fact questions" and "belief questions", which concern the reality of the scenario and a character's belief in it, respectively; a toy example follows at the end of this section.
  * These tasks are designed to test whether the individual can attribute mental states (including potentially false beliefs) to others.
- **ToM in the human brain**
> Human brain imaging studies have provided substantial evidence for the brain network that supports our ToM ability, including the temporoparietal junction, superior temporal sulcus and the dorsal medial prefrontal cortex (dmPFC)
> Research has identified single neurons in the dorsal medial prefrontal cortex that exhibit selective modulations for true- versus false-belief trials during the period of questions, suggesting a particular role in processing others' beliefs and potentially subserving ToM ability. (Reference 1)
> These neurons displayed a consistent difference in their firing rates when the other's beliefs were true compared to when the other's beliefs were false. These neurons therefore reliably changed their activities in relation to the other's beliefs despite variations in the specific statements and scenarios within each trial type, providing evidence for the specific tuning of human neurons to ToM computations.
- **Some Premises**:
LLMs have been shown to exhibit a certain level of ToM (the January 2022 version of GPT-3 (davinci-002) performs comparably to seven-year-old children, while the November 2022 version (davinci-003) performs comparably to nine-year-old children).
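
To make the trial structure concrete, here is a toy sketch of how true- and false-belief trials pair a "fact question" with a "belief question". The scenario, questions, and answers are hypothetical stand-ins, not the paper's stimuli.

```python
# Illustrative only: the shape of a true- vs. false-belief trial.
# All text here is a hypothetical stand-in for the paper's stimuli.
trials = [
    {
        "type": "false_belief",
        "scenario": ("Anna puts her keys in the drawer and leaves the room. "
                     "While she is away, Ben moves the keys to the shelf."),
        "fact_question": "Where are the keys now?",
        "fact_answer": "shelf",                      # reality
        "belief_question": "Where does Anna think the keys are?",
        "belief_answer": "drawer",                   # belief diverges from reality
    },
    {
        "type": "true_belief",
        "scenario": ("Anna puts her keys in the drawer and watches "
                     "Ben move them to the shelf."),
        "fact_question": "Where are the keys now?",
        "fact_answer": "shelf",
        "belief_question": "Where does Anna think the keys are?",
        "belief_answer": "shelf",                    # belief matches reality
    },
]

for trial in trials:
    print(trial["type"], "->", trial["belief_question"], trial["belief_answer"])
```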

---
@@ -54,16 +57,17 @@ The LLMs evaluated (open-source models): Falcon (1b, 7b, 40b), LLaMa (3b, 7b, 13
- Falcon-40b model showed the highest decoding accuracy of 81%
- Smaller models (<7b parameters) had an average decoding accuracy of 67%
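
As a rough illustration of the population-decoding idea (my sketch, not the paper's pipeline), a linear decoder can be trained to classify trial type from per-trial embeddings; the sizes and data below are synthetic placeholders, so the accuracy will hover near chance.

```python
# Sketch of decoding ToM trial type from a population of "artificial
# neurons" (embeddings). Data are random placeholders, so expect ~50%.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_units = 120, 512                  # hypothetical sizes
X = rng.normal(size=(n_trials, n_units))      # per-trial embedding vectors
y = rng.integers(0, 2, size=n_trials)         # 0 = true-belief, 1 = false-belief

decoder = LogisticRegression(max_iter=1000)
acc = cross_val_score(decoder, X, y, cv=5).mean()
print(f"cross-validated decoding accuracy: {acc:.2f}")
```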
## Conclusion
#### Parallel 1 - Selective response to true- or false-belief trials
In multiple large models, the presence of embeddings that displayed ToM-related modulations (absent in smaller models, which struggle with false-belief trials) indicates that hidden embeddings facilitate the models' ToM performance. Moreover, ToM trial types can be robustly decoded from the population of artificial neurons (embeddings), indicating a consistent encoding of ToM features.

Both systems (Human Brain and LLM) contain neurons that directly respond to the perspective of others. A substantial proportion of artificial neurons responds selectively to true- or false-belief trials, mirroring prefrontal neurons in humans exhibiting changes in firing rates for different trial types.
#### Parallel 2 - Distribution of ToM-responding neurons
* LLM layers with high percentages of ToM-responding embeddings peaked in the middle and high layers and were consistently near zero in the input layers (neither confined to one layer nor randomly distributed).
* Similarly distributed areas can be identified in the human brain, as the frontal, temporal and parietal cortices are regarded as regions for high-level cognitive processing (ToM-related activity within lower input-processing areas such as the occipital lobe is minimal).
* Also, the artificial layers exhibiting ToM responses were located in contiguous layers, analogous to the highly interconnected structure of ToM brain areas.
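
A minimal sketch of how such a per-layer selectivity profile could be computed, assuming access to per-unit activations for each trial type; the data here are random, so the "selective" fraction sits near the 5% false-positive rate of the test.

```python
# Fraction of units per layer whose activations differ significantly
# between true- and false-belief trials (random stand-in data).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
n_units, n_trials = 256, 60
for layer in range(0, 24, 6):                      # a few hypothetical layers
    acts_true = rng.normal(size=(n_trials, n_units))
    acts_false = rng.normal(size=(n_trials, n_units))
    _, p = ttest_ind(acts_true, acts_false, axis=0)
    print(f"layer {layer:2d}: {(p < 0.05).mean():.1%} selective units")
```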
#### Other conclusions
* LLMs had higher accuracies on both fact and belief questions in true-belief trials than in false-belief trials.
* Larger models did better, especially on false-belief trials.
* As model size increased, decoding accuracies, the percentage of significant neurons, and ToM performance all increased.
## Limitations
1. The study was limited to open-source LLMs. Future research could examine higher-performing, proprietary models like GPT-4.
@@ -78,4 +82,4 @@ Both systems (Human Brain and LLM) contain neurons that directly respond to the
* Some of the papers cited in this paper
1. https://www.nature.com/articles/s41586-021-03184-0
2. https://arxiv.org/vc/arxiv/papers/2302/2302.02083v1.pdf
* This note was written with the assistance of Generative AI; it quotes from the original paper and is based on the content and results presented in it.
75 changes: 75 additions & 0 deletions content/posts/LR/Lucid-interface.md
@@ -0,0 +1,75 @@
---
title: "[LR] Interfacing with Lucid Dreams"
date: 2024-09-25T20:33:52+08:00
draft: false
tags: ["Human-Computer Interaction"]
---

[This review is intended solely for my personal learning]

Paper Info - LuciEntry
> DOI: 10.1145/3613905.3649123
> Title: LuciEntry: A Modular Lab-based Lucid Dreaming Induction Prototype
> Authors: Po-Yao (Cosmos) Wang, Nathaniel Lee Yung Xiang, Rohit Rajesh, Antony Smith Loose, Nathan Semertzidis, and Florian ‘Floyd’ Mueller
Paper Info - DreamCeption
> DOI: 10.1145/3613905.3649121
> Title: DreamCeption: Towards Understanding the Design of Targeted Lucid Dream Mediation
> Authors: Po-Yao (Cosmos) Wang, Rohit Rajesh, Antony Smith Loose, Nathaniel Lee Yung Xiang, Nathalie Overdevest, Nathan Semertzidis, and Florian ‘Floyd’ Mueller
## Prior Knowledge

Lucid dreaming is a phenomenon wherein sleepers become aware of dreaming while asleep, often enabling manipulation of dream content and providing potential benefits such as enhanced creativity, nightmare alleviation, and stress relief. Past research has focused on techniques for lucid dream induction (e.g., wake-back-to-bed, mnemonic induction) and explored the use of interactive technologies—such as auditory or visual cues—to influence dream content. However, effectively automating or streamlining this process, as well as helping dreamers shape specific dream topics, remains a challenge.

## Goal

Both **LuciEntry** and **DreamCeption** explore how interactive systems can be harnessed to facilitate lucid dreaming:

- **LuciEntry** seeks to simplify and automate the induction of lucid dreams in a lab setting, featuring a modular and autonomous platform that detects REM and delivers multiple cues (visual, auditory, electrical) at the right moment to trigger lucidity. By reducing researchers’ workloads and increasing reliability, LuciEntry aims to make the study of lucid dreaming more systematic and accessible.

- **DreamCeption** focuses on shaping or “inserting” specific dream themes once a lucid dream is detected, thereby expanding what lucid dreamers can do after attaining lucidity.

## Method

#### LuciEntry

1. **Wake-Back-to-Bed Protocol**: Participants sleep 4 hours uninterrupted, then awaken briefly for cognitive training (e.g., MILD).
2. **Modular Architecture**: A headband with EEG and EOG electrodes connects to a Raspberry Pi server. When sustained REM is detected, the server automatically triggers external cues—LED flashing, binaural beats, and 40 Hz tACS—without researcher intervention.
3. **Emergency Button**: Users can halt stimulation at any time, ensuring safety and peace of mind.
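
The closed-loop control flow can be sketched as a simple polling loop. This is an illustration only: the sleep-staging classifier and cue drivers below are hypothetical stand-ins, not LuciEntry's actual implementation.

```python
# Sketch of LuciEntry-style closed-loop cueing: wait for sustained REM,
# then fire the cues. Stage detection and cue delivery are stubbed out.
import time

REM_SUSTAIN_S = 120   # hypothetical threshold for "sustained REM"
POLL_S = 5

def current_stage() -> str:
    """Stand-in for an EEG/EOG sleep-staging classifier."""
    return "REM"      # placeholder

def trigger_cues() -> None:
    """Stand-in for the LED, binaural-beat, and 40 Hz tACS drivers."""
    print("cueing: LED flashes + binaural beats + 40 Hz tACS")

rem_since = None
while True:
    if current_stage() == "REM":
        rem_since = rem_since or time.monotonic()
        if time.monotonic() - rem_since >= REM_SUSTAIN_S:
            trigger_cues()
            break
    else:
        rem_since = None  # REM interrupted; reset the timer
    time.sleep(POLL_S)
```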

#### DreamCeption

1. **Closed-Loop Detection**: Employs brain (EEG) and eye (EOG) sensors to identify when users enter a lucid dream.
2. **Targeted Stimuli**: Once lucidity is detected (participants move their eyes in a specific pattern, e.g., left-right signals), the system provides stimuli—visual (light), auditory (sound effects), and even haptic or galvanic vestibular stimulation—corresponding to a chosen dream theme (e.g., “scuba diving”).
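
A toy version of the eye-signal check, assuming a cleaned EOG trace in which leftward and rightward eye movements produce negative and positive deflections; the threshold and units are hypothetical.

```python
# Detect a left-right-left-right (LRLR) signal in an EOG trace by
# tracking alternating signed threshold crossings.
import numpy as np

def detect_lrlr(eog: np.ndarray, thresh: float = 100.0) -> bool:
    """Return True if the last four deflections spell L, R, L, R."""
    marks = []
    for sample in eog:
        if sample > thresh and (not marks or marks[-1] != "R"):
            marks.append("R")
        elif sample < -thresh and (not marks or marks[-1] != "L"):
            marks.append("L")
    return "".join(marks[-4:]) == "LRLR"

# Hypothetical trace: four alternating deflections.
trace = np.concatenate([np.full(10, -150.0), np.full(10, 150.0)] * 2)
print(detect_lrlr(trace))  # True
```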

## Results

#### LuciEntry

- In a pilot study with three overnight sessions, two participants reported achieving short lucid dreams after receiving the visual/audio/electrical cues.
- Demonstrated “dream incorporation,” where external stimuli (flashing lights, sounds) were woven into dream narratives (e.g., seeing brake lights in a racing dream).
- Identified system hurdles such as sensor calibration, headband comfort, and ensuring fully autonomous operation.

#### DreamCeption

- Illustrates how well-timed “dream prime” stimuli can lead lucid dreamers to incorporate specific elements (e.g., ocean sounds, bubble haptics) into their dream worlds.
- Underscores that real-time detection of lucidity is crucial to deliver prompts effectively.

## Conclusion

Taken together, **DreamCeption** and **LuciEntry** exemplify how HCI-driven solutions can deepen our engagement with lucid dreaming. **DreamCeption** offers a vision of _content-rich dream design_, enabling users to “sculpt” their dream environment. Meanwhile, **LuciEntry** addresses _scalable, automated induction_, promising a more robust framework for controlled experiments and eventual personal use. Both open exciting avenues in dream engineering—where carefully timed interventions harness the dreamer’s brain state to either reliably induce or intricately shape dream content.

These two prototypes illustrate an emerging intersection of immersive design, biofeedback, and sleep science, pushing beyond conventional VR experiences into the realm of dreams.

## Limitations
1. **Signal Quality**: EEG and EOG readings can be prone to interference from movement or improper electrode placement, potentially impeding real-time detection.
2. **Short Lucid Durations**: While users became aware of dreaming, many reported only fleeting moments of lucidity. Lengthening such episodes remains a challenge.
3. **Wearability**: Discomfort from wearing headsets or electrodes overnight can disrupt sleep and reduce data reliability.

---

## Reference
* The paper:
* LuciEntry: https://dl.acm.org/doi/10.1145/3613905.3649123
* DreamCeption: https://dl.acm.org/doi/10.1145/3613905.3649121
* This note was written with the assistance of Generative AI and is based on the content and results presented in the original papers.
57 changes: 57 additions & 0 deletions content/posts/LR/SSVEP-BCI.md
@@ -0,0 +1,57 @@
---
title: "[LR] Binocular Vision SSVEP BCI for Dual-Frequency Modulation"
date: 2024-10-12T20:32:51+08:00
draft: false
tags: ["Brain–Computer Interface", "SSVEP"]
---

[This review is intended solely for my personal learning]

Paper Info
> DOI: 10.1109/TBME.2022.3212192
> Title: A Binocular Vision SSVEP Brain–Computer Interface Paradigm for Dual-Frequency Modulation
> Authors: Yike Sun, Liyan Liang, Jingnan Sun, Xiaogang Chen, Runfa Tian, Yuanfang Chen, Lijian Zhang, and Xiaorong Gao
## Prior Knowledge
- **SSVEP and BCIs:** Steady-State Visual Evoked Potentials (SSVEPs) are brain responses elicited by periodic visual stimuli. Their robustness and high signal-to-noise ratio make them a cornerstone of non-invasive BCI research.
- **Dual-Frequency Stimulation:** Traditional dual-frequency paradigms, such as the checkerboard arrangement, allow the encoding of more targets but are hampered by intermodulation artifacts, which can compromise signal quality.
- **Binocular Vision Approach:** By using circularly polarized light to deliver different frequencies to each eye, the binocular vision paradigm minimizes interference from intermodulation harmonics, thereby enhancing signal fidelity.

## Goal
To design and evaluate a novel dual-frequency SSVEP paradigm based on binocular vision that suppresses intermodulation harmonics and enhances overall BCI performance, particularly in training-free applications.

## Method
The study was structured around two primary experiments:
1. **Experiment 1: Offline SNR Analysis**
- **Participants:** 9 subjects.
- **Design:** A 6-target experiment comparing the binocular vision paradigm with the traditional checkerboard arrangement.
- **Measurements:** Signal-to-noise ratios (SNRs) were calculated for broadband, narrowband, and intermodulation components to assess the quality of the evoked potentials (a toy SNR computation follows this section).

2. **Experiment 2: Online BCI Evaluation**
- **Participants:** 12 subjects.
- **Design:** A 40-target training-free online experiment.
- **Analysis:** Utilized a customized Filter Bank Dual-Frequency Canonical Correlation Analysis (FBDCCA) algorithm to decode the SSVEP responses, with additional offline analysis using Task-Related Component Analysis (TRCA) for comparison.

Stimuli were presented on a circularly polarized display, where alternating odd and even rows contained different frequencies, ensuring each eye received a distinct stimulus.
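
To make the narrowband SNR measure concrete, here is a toy computation (my sketch, not the paper's pipeline): power at the stimulation frequency is compared against the mean power of neighbouring frequency bins.

```python
# Toy narrowband SSVEP SNR: stimulus-frequency power over the mean
# power of nearby bins. Sampling rate, duration, and frequency are
# hypothetical values, not the paper's settings.
import numpy as np

fs, dur, f_stim = 250, 4.0, 12.0
t = np.arange(0, dur, 1 / fs)
eeg = np.sin(2 * np.pi * f_stim * t) \
    + np.random.default_rng(2).normal(0.0, 1.0, t.size)

spec = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)

k = np.argmin(np.abs(freqs - f_stim))            # bin at the stimulus frequency
neighbours = np.r_[spec[k - 5:k], spec[k + 1:k + 6]]
snr_db = 10 * np.log10(spec[k] / neighbours.mean())
print(f"narrowband SNR at {f_stim} Hz: {snr_db:.1f} dB")
```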

## Results
- **Improved Signal Quality:**
The binocular vision paradigm yielded significantly higher broadband and narrowband SNRs, and reduced intermodulation noise by approximately 2 dB compared to the traditional checkerboard setup.

- **Enhanced BCI Performance:**
In the online experiment, the training-free system achieved an average Information Transfer Rate (ITR) of 104.56 bits/min—nearly double that of the conventional approach; a worked example of the ITR formula follows this list. Offline analyses further confirmed the robustness of the binocular paradigm.

- **Effective Algorithm Adaptation:**
The tailored FBDCCA algorithm successfully decoded dual-frequency responses, providing high classification accuracy without the need for extensive training.
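
The reported ITR can be sanity-checked with the standard Wolpaw formula. N = 40 matches the experiment; the accuracy and selection time below are hypothetical values chosen only to show the arithmetic, and they land close to the reported 104.56 bits/min.

```python
# Standard Wolpaw ITR: bits per selection scaled to bits per minute.
# P = 0.90 and T = 2.5 s are hypothetical, not the paper's parameters.
from math import log2

def itr_bits_per_min(n: int, p: float, t_sec: float) -> float:
    bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / t_sec

print(f"{itr_bits_per_min(40, 0.90, 2.5):.1f} bits/min")  # ~103.8
```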

## Conclusion
This study demonstrates that the binocular vision approach can effectively suppress intermodulation harmonics, resulting in improved SSVEP signal quality and enhanced BCI performance. The innovative integration of hardware (circularly polarized displays) and the specialized FBDCCA algorithm paves the way for scalable, training-free BCI systems capable of handling a larger number of targets.

## Limitations
The study was conducted under controlled laboratory conditions with specialized hardware like circularly polarized displays, which might limit its immediate real-world application. Moreover, the small sample size calls for further research with a more diverse population to validate the results. Future work should also focus on expanding the scope of the FBDCCA algorithm to ensure its effectiveness across various dual-frequency paradigms and BCI configurations, thereby enhancing the practical utility of the approach.

---

## Reference
* The paper: https://ieeexplore.ieee.org/document/9911680
* This note was written with the assistance of Generative AI and is based on the content and results presented in the original paper.
2 changes: 1 addition & 1 deletion layouts/partials/extend_footer.html
@@ -1 +1 @@
<script type="text/javascript" src={{ "/js/canvas-nest.js" | relURL }} count=100 color="255,255,255" opacity=1></script>
<script type="text/javascript" src={{ "/js/canvas-nest.js" | relURL }} count=80 color="102,255,178" opacity=1></script>
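
For context (based on canvas-nest.js's documented script-tag attributes, not anything stated in this commit): `count` sets the number of animated particles, `color` is an RGB triple for the connecting lines, and `opacity` their transparency. The change reduces the particle count from 100 to 80 and shifts the lines from white to a pale green.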
