How to apply time synchronization properly #127

Open

ChadChen68 opened this issue Mar 24, 2025 · 5 comments

@ChadChen68

I am currently setting up an LSL (Lab Streaming Layer) connection between my Meta Quest Pro and my host PC. However, I am still facing issues with time synchronization in the inlet stream.
I have tried using processing_flags=pylsl.proc_ALL, but the problem persists. My implementation is based on the ReceiveAndPlot.py example.
I have read the LSL time synchronization documentation, which mentions that a main clock is required to synchronize timestamps. I believe I am already using my host PC's clock as that main clock.

I also tried using time_correction from the GetTimeCorrection.py example, but I don't fully understand this part of its documentation:

> Returns the current time correction estimate. This is the number that needs to be added to a time stamp that was remotely generated via local_clock() to map it into the local clock domain of this machine.
Should I simply use local_clock() + timeout?
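Or is the correct usage something more like this? (Just my guess from that docstring; inlet here is an already-created StreamInlet.)

offset = inlet.time_correction()  # estimated offset between the two clocks
samples, timestamps = inlet.pull_chunk(timeout=0.0)
corrected = [ts + offset for ts in timestamps]  # mapped into my PC's clock domain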

Here is my setup:

  • Host PC: Receives data and event streams (running in Python).
  • Data stream: A homemade EEG headset (running in Python), sending data to the host PC.
  • Event stream: Meta Quest Pro sends event data to the host PC (task running in Unity).
  • All three devices are on the same LAN (wired, no Wi-Fi).

My current code is below:

import pylsl
from typing import List


class Inlet:
    def __init__(self, info: pylsl.StreamInfo):
        # proc_ALL enables clock sync, dejittering, monotonization, and thread safety.
        self.inlet = pylsl.StreamInlet(info, processing_flags=pylsl.proc_ALL)
        self.name = info.name()
        self.channel_count = info.channel_count()

    def pull_and_plot(self):
        pass


class MarkerInlet(Inlet):
    def pull_and_plot(self):
        markers, timestamps = self.inlet.pull_chunk(timeout=0.0)
        if markers and timestamps:
            for marker, timestamp in zip(markers, timestamps):
                print(f"Marker: {marker}, Timestamp: {timestamp}")


class DataInlet(Inlet):
    def pull_and_plot(self):
        # Plotting omitted here; just drain the buffer.
        data, timestamps = self.inlet.pull_chunk(timeout=0.0)


def main():
    inlets: List[Inlet] = []
    print("Looking for streams")
    streams = pylsl.resolve_streams()
    for info in streams:
        if info.type() == 'Markers':
            if info.nominal_srate() != pylsl.IRREGULAR_RATE \
                    or info.channel_format() != pylsl.cf_string:
                print('Invalid marker stream ' + info.name())
            print('Adding marker inlet: ' + info.name())
            inlets.append(MarkerInlet(info))
        elif info.nominal_srate() != pylsl.IRREGULAR_RATE \
                and info.channel_format() != pylsl.cf_string:
            print('Adding data inlet: ' + info.name())
            inlets.append(DataInlet(info))
        else:
            print('Don\'t know what to do with stream ' + info.name())
    while True:
        for inlet in inlets:
            inlet.pull_and_plot()


if __name__ == '__main__':
    main()

@ChadChen68 (author) commented May 8, 2025

Hi,
Since processing_flags=pylsl.proc_clocksync doesn't seem to be working, I still haven't figured out how to properly synchronize the two devices. I've read the LSL documentation and related GitHub issues, but they haven't helped much.

So I had an idea, though I'm not sure if it makes sense.
Since I can't change the timestamps coming out of an inlet directly, what if I take the data from the inlet, compute timestamp + time_correction, and then re-send the result via a new outlet? I could then receive that stream again with another inlet.

That way, both data streams would be aligned to the same clock (i.e., my PC's clock).
Does that make sense?
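Roughly like this (a rough sketch; the stream name 'CorrectedMarkers' is just for illustration):

import pylsl

src = pylsl.resolve_byprop('type', 'Markers')[0]
inlet = pylsl.StreamInlet(src)  # no processing flags: raw sender-clock timestamps

info = pylsl.StreamInfo('CorrectedMarkers', 'Markers', 1,
                        pylsl.IRREGULAR_RATE, pylsl.cf_string)
outlet = pylsl.StreamOutlet(info)

while True:
    marker, ts = inlet.pull_sample()
    offset = inlet.time_correction()  # sender clock -> my PC's clock
    outlet.push_sample(marker, ts + offset)  # re-stamped in my PC's clock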

The reason I believe the synchronization isn't working is that I tested the same task using NeuroNexus (Nuronlink) + MATLAB, and the N1 latency was about 172 ms. But when I use my custom device with LSL in Unity, N1 shows up at around 200 ms.

However, if I send events using UDP, the N1 latency is correct.

@cboulay (collaborator) commented May 8, 2025

Data are timestamped at the outlet in the push_sample or push_chunk call.
Then, when the inlet pulls the data, the timestamps it receives are by default in the original sending computer's clock, which has no meaning for the receiving computer. By using a flag that enables clock synchronization, you're converting the timestamp from the origin clock to the receiving clock.

LSL always uses its local_clock, which is std::chrono::steady_clock. When you create a connection between an outlet and an inlet, a background thread maintains the offset between the receiver's local_clock and the sender's local_clock, at least as well as it can be estimated over the network (<1 ms error over Ethernet, worse over Wi-Fi). When you enable timestamp post-processing, the timestamps are converted from the sender's local_clock to the receiver's local_clock using this offset.
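To make that concrete, here is a minimal sketch (the stream chosen is simply whichever resolves first); both approaches should yield timestamps in the receiver's local_clock domain:

import pylsl

info = pylsl.resolve_streams()[0]

# Approach 1: let LSL apply the offset for you.
inlet_auto = pylsl.StreamInlet(info, processing_flags=pylsl.proc_clocksync)
sample, ts_local = inlet_auto.pull_sample()

# Approach 2: pull raw sender-clock timestamps and apply the offset yourself.
inlet_raw = pylsl.StreamInlet(info)
offset = inlet_raw.time_correction()  # receiver local_clock minus sender local_clock
sample, ts_remote = inlet_raw.pull_sample()
ts_local_manual = ts_remote + offset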

Quick aside: LabRecorder does not do this conversion and instead stores the clock offsets separately, so they can be used during file loading for a more accurate clock correction.
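For example, when loading such a recording offline with pyxdf, the stored offsets are applied for you (the filename here is hypothetical; synchronize_clocks=True is the default):

import pyxdf

streams, header = pyxdf.load_xdf('recording.xdf', synchronize_clocks=True)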

> Since the processing_flags=proc_clocksync doesn't seem to be working...
> The reason I believe the synchronization isn't working is that I tested the same task using NeuroNexus (Nuronlink) + MATLAB, and the N1 latency was about 172 ms. But when I use my custom device with LSL in Unity, N1 shows up at around 200 ms.
> However, if I send events using UDP, the N1 latency is correct.

How are you measuring N1 latency? Events from LSL4Unity? What is your neural data source?

@ChadChen68 (author)

I'm sorry, I use Neuroscan to record EEG and MATLAB to send events to Neuroscan, not Nuronlink. In our experience, N1 normally appears at around 170 ms for most subjects, even in oddball or n-back tasks. To validate my process, I compared that against EEG recorded with the LSL-based system, where N1 appears at 200 ms or later. We suspect the synchronization across my multiple devices is causing something to go wrong. I simply used the ERP profile to check when N1 appears in the two conditions, and it did show that our LSL-based device has the longer latency.

I am not sure what "neural data source" refers to. I use a handmade EEG headset from our lab, which sends EEG over the LAN to the host PC. Unity sends events to the host PC using LSL from the LSL4Unity folder, just like the SimpleOutletTriggerEvent sample.


@cboulay (collaborator) commented May 9, 2025

There are two unknown sources of delay:

  1. the time between the physical voltage sampling and when that sample gets timestamped by LSL in PC1;
  2. the time between when the Unity app says it displayed a stimulus and when it actually manifested.

If delay 1 is long and unaccounted for, it would actually push your N1 earlier, so let's ignore it for now.

For delay 2, it depends on whether you're doing audio or video stimuli. For audio, Unity has terrible latency and variability (I don't know how Unity rhythm games work; they must use their own audio library). For video, it depends on whether you're using single-, double-, or triple-buffering.

The best thing you can do to characterize your latency is to put a photosensor in your HMD (under a towel or something to keep it dark otherwise) and connect it to your signal acquisition system, preferably through an aux input, though you might be able to connect it directly to an electrode.
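If it helps, an offline check of such a recording could look roughly like this (a sketch; it assumes you already have marker timestamps and detected photodiode onset times in the same clock, in seconds):

import numpy as np

def display_latencies(marker_times, photodiode_times):
    # For each marker, find the first photodiode onset at or after it.
    marker_times = np.asarray(marker_times)
    photodiode_times = np.asarray(photodiode_times)
    idx = np.searchsorted(photodiode_times, marker_times)
    valid = idx < len(photodiode_times)  # markers that have a following onset
    return photodiode_times[idx[valid]] - marker_times[valid]

# latencies = display_latencies(marker_ts, onset_ts)
# print(latencies.mean(), latencies.std())  # mean display latency and jitter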

@ChadChen68 (author)

Thank you, cboulay. Sorry for the late reply. I use video stimuli for my task. I think what you describe is the screen/video synchronization from "Timing Synchronization in EEG Experiments". I'll definitely give that a try, and if I see any improvement I'll let you know. Thanks again for the helpful suggestion!
