
MCC128 on RPi5: Fluctuation in number of samples returned by a_in_scan_start() and a_in_scan_read() #68

Closed
AdamSorrel opened this issue Jun 12, 2024 · 6 comments


@AdamSorrel

I am running a continuous voltage acquisition on the MCC128 on a Raspberry Pi 5 (with a PREEMPT_RT kernel). The DAQ card runs at 10,000 samples/second and I read 1000 samples every 0.1 s using Python's sched module. In theory I should be getting 1000 samples every 100 ms, but in practice this number fluctuates quite significantly.
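For reference, this is roughly the kind of setup described above, sketched with the daqhats mcc128 API and Python's sched module. The channel mask, buffer size, address, and rate below are assumptions for illustration, not values taken from the attached script:

```python
import sched
import time

from daqhats import mcc128, OptionFlags, AnalogInputMode, AnalogInputRange

# Assumptions: HAT at address 0, three single-ended channels, +/-10 V range.
hat = mcc128(0)
hat.a_in_mode_write(AnalogInputMode.SE)
hat.a_in_range_write(AnalogInputRange.BIP_10V)

CHANNEL_MASK = 0b0111          # channels 0-2
SAMPLE_RATE = 10000            # requested rate per channel (assumption)
READ_INTERVAL = 0.1            # seconds between scheduled reads

# With OptionFlags.CONTINUOUS the samples_per_channel argument only sizes the
# internal buffer; the scan runs until a_in_scan_stop() is called.
hat.a_in_scan_start(CHANNEL_MASK, 100000, SAMPLE_RATE, OptionFlags.CONTINUOUS)

scheduler = sched.scheduler(time.monotonic, time.sleep)

def read_block():
    scheduler.enter(READ_INTERVAL, 1, read_block)   # re-arm the next read first
    t_request = time.time_ns()
    result = hat.a_in_scan_read(samples_per_channel=-1, timeout=5.0)
    t_done = time.time_ns()
    print(len(result.data), "samples,", t_request, "->", t_done)

scheduler.enter(READ_INTERVAL, 1, read_block)
scheduler.run()
```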

I am not so much concerned about the exact number of samples, but it is critical that I can timestamp each value with reasonable precision.

I was first concerned that my scheduling on the RPi5 was not precise enough, but I am recording both when the request is scheduled (second row) and when the data has just been retrieved (third row). As you can see from the first row of the attached plot, the number of samples is typically around 980, but it fluctuates significantly, from 900 all the way to 1100 samples per cycle. This is worrisome, because 100 samples is equivalent to 10 ms, which would be an unacceptable error in my case.

Question: I don't understand the hardware well enough to assess the source of this sample-count fluctuation. Can I rely on the timestamps being correct and "back-calculate" the timestamps for however many samples I get? Or is there actually some changing offset that will throw my data off?

The last row is just the voltage output. This is largely irrelevant, since the DAQ card is not connected to anything, but I have added it just to show the output. I am attaching the raw data used to generate the figure as well as the code I was running (see daqTest2.txt). The latter is Python code, but I can't seem to attach a .py file directly.

[Figure: fluctuation of measurements - samples per cycle, scheduled request times, read-completion times, and raw voltage]

daqTest2.txt
outputDaq3raw.txt

@shaunmccance

shaunmccance commented Jun 13, 2024

I am following this with interest, as I have a similar issue. Note that I am not an MCC expert.

The apparent variation may be due to a lag in the hardware buffering the data for output. My read interval is shorter, so while I was always getting data, on some occasions I got slightly fewer samples and sometimes slightly more. Overall, however, the hardware seems to deliver the correct number of samples over a given time, as it balances out once all the data is received.

Or the apparent variation may be down to lag in the software. If the hardware-timed board can be assumed to be as close to perfect timing as possible, the buffered data would be sequential, with a known periodicity and a documented lag between reading subsequent channels. Time complexity and lag in the software would then cause the readings to deviate from this fixed read rate.

Also, are you reading the full buffer or just 1000 samples? You may be accumulating samples in the buffer, so the values you read are actually only the next set of samples from the buffer, not the most recent set as you might expect (this would also eventually overflow the buffer).

Also, when working with multiple channels the data is received interleaved, so if channels 2 and 3 are missing from one buffer read, the first two data items on the subsequent read are actually the end of the previous sample set. So it is important not to discard unused data.

Time basing is a bit of an issue, and I am curious about the best practice for this. I have timestamped the data incrementally from a known starting time, using a wired trigger on the board pulled high by a software thread. The thread starts the ADC while recording a timestamp for the activation. This happens after the program is fully loaded and confirmed running, so with minimal time complexity and lag, and is then restarted periodically for confidence. This is perhaps not perfect but works fine. As said, I would be interested to learn about best practice or your approach here.

Hope this helps. As mentioned, I am curious to see an official comment on this.

@AdamSorrel
Author

AdamSorrel commented Jun 13, 2024

Hi Shaun,

Thank you for the message.

> I am following this with interest, as I have a similar issue. Note that I am not an MCC expert.

> The apparent variation may be due to a lag in the hardware buffering the data for output. My read interval is shorter, so while I was always getting data, on some occasions I got slightly fewer samples and sometimes slightly more. Overall, however, the hardware seems to deliver the correct number of samples over a given time, as it balances out once all the data is received.

I agree that it is probably a hardware buffer. I have just checked, and the mean over the period presented here is 999.45, so pretty much exactly the 1000 expected.

> Or the apparent variation may be down to lag in the software. If the hardware-timed board can be assumed to be as close to perfect timing as possible, the buffered data would be sequential, with a known periodicity and a documented lag between reading subsequent channels. Time complexity and lag in the software would then cause the readings to deviate from this fixed read rate.

I was thinking about the software speed, but from what I have checked, the timing between the read request and the actual read finishing seems really precise, unless I am misunderstanding something (see the second and third plots in the post above and note the units: they are in the nanosecond to microsecond range, if this is to be believed). I am using Python's time.time_ns() to retrieve these times. I have also just re-run the same test using time.clock_gettime_ns(time.CLOCK_REALTIME) with practically identical results (the recorded software timing is still quite precise).
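For illustration, a small sketch of how the two clock sources mentioned above could be compared around a single read; it assumes a continuous scan is already running on the HAT (as in the earlier sketch):

```python
import time

from daqhats import mcc128

hat = mcc128(0)   # assumption: a continuous scan was already started on this HAT

# Record both clock sources immediately before and after one read
t0_wall = time.time_ns()
t0_real = time.clock_gettime_ns(time.CLOCK_REALTIME)
result = hat.a_in_scan_read(samples_per_channel=-1, timeout=5.0)
t1_real = time.clock_gettime_ns(time.CLOCK_REALTIME)
t1_wall = time.time_ns()

print("read duration:", (t1_wall - t0_wall) / 1e3, "us via time.time_ns()")
print("read duration:", (t1_real - t0_real) / 1e3, "us via CLOCK_REALTIME")
print("samples returned:", len(result.data))
```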

I am not sure I fully understand what you mean, so please correct me, but I agree that the readout time appears to be equivalent to about 10-20 samples' worth of time in my case, so I most commonly get around 980-990 samples per cycle, and occasionally there is a spike which "flushes" the rest. However, I should be reading all the samples in the buffer (see below), so I cannot wrap my head around this.

> Also, are you reading the full buffer or just 1000 samples? You may be accumulating samples in the buffer, so the values you read are actually only the next set of samples from the buffer, not the most recent set as you might expect (this would also eventually overflow the buffer).

I am reading all samples available in the buffer:
hat.a_in_scan_read(samples_per_channel=-1, timeout=5.0)

> Also, when working with multiple channels the data is received interleaved, so if channels 2 and 3 are missing from one buffer read, the first two data items on the subsequent read are actually the end of the previous sample set. So it is important not to discard unused data.

I hope that you are referring to a scenario where one doesn't read out all the data from the buffer, correct? Or do you mean that some data would not be collected from the last channel? I was worried about that, but I have hard-coded a check that the data is always divisible by 3 (my number of channels), so I think that if I had inconsistent readouts I would see it, because occasionally the output would simply not divide correctly. I have not seen that happen so far.
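To make the carry-over idea explicit, here is a hedged sketch (the helper name is mine, not from the attached script) of keeping a remainder between reads so that each processed block is always a whole number of frames for the 3 channels mentioned above:

```python
NUM_CHANNELS = 3      # number of channels in the scan (from this thread)
leftover = []         # interleaved samples carried over from the previous read

def take_whole_frames(new_data):
    """Combine carried-over samples with the new read and return only complete
    frames (a multiple of NUM_CHANNELS); keep the remainder for the next call."""
    global leftover
    combined = leftover + list(new_data)
    usable = (len(combined) // NUM_CHANNELS) * NUM_CHANNELS
    frames = combined[:usable]
    leftover = combined[usable:]
    return frames

# Usage with the read call quoted above:
#   result = hat.a_in_scan_read(samples_per_channel=-1, timeout=5.0)
#   frames = take_whole_frames(result.data)
```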

> Time basing is a bit of an issue, and I am curious about the best practice for this. I have timestamped the data incrementally from a known starting time, using a wired trigger on the board pulled high by a software thread. The thread starts the ADC while recording a timestamp for the activation. This happens after the program is fully loaded and confirmed running, so with minimal time complexity and lag, and is then restarted periodically for confidence. This is perhaps not perfect but works fine. As said, I would be interested to learn about best practice or your approach here.

That is interesting; I had not considered that. I have no idea what the best practice is, I am just kind of winging it. I take it your wired trigger is not implemented via the on-board clock of the MCC DAQ card, is it? Is it through the TRIG port?
I have basically just compiled the kernel with the PREEMPT_RT patch (this should reduce kernel latency and make it "more real-time") and I will be testing isolating a core for running my program. Mostly this is working pretty well. Based on what I see here, the time at which the data comes in is within about a 200 microsecond range, which feels good enough for now. So in a nutshell, from all I can tell, the Raspberry Pi is precise enough. My plan is to back-calculate the timestamp for each sample by dividing equal intervals between the time-of-readout-request timestamp and the time-of-data-arrival timestamp, hopefully after identifying how long it takes to read out the data and shifting the second timestamp by that amount.
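A minimal sketch of that back-calculation, under the assumption that the samples are spread evenly between the request and arrival timestamps and that a fixed readout offset (a placeholder value here) can be subtracted from the arrival time:

```python
def back_calculate_timestamps(t_request_ns, t_arrival_ns, num_samples,
                              readout_offset_ns=0):
    """Assign evenly spaced timestamps (in ns) to num_samples samples,
    assuming the last sample was taken just before the data arrived,
    minus a separately measured readout offset."""
    t_last = t_arrival_ns - readout_offset_ns
    if num_samples <= 1:
        return [t_last]
    step = (t_last - t_request_ns) / (num_samples - 1)
    return [t_request_ns + i * step for i in range(num_samples)]
```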

> Hope this helps. As mentioned, I am curious to see an official comment on this.

@shaunmccance

shaunmccance commented Jun 13, 2024

> I take it your wired trigger is not implemented via the on-board clock of the MCC DAQ card, is it? Is it through the TRIG port?

Yes, there are TRIG and CLK connectors, and I was using both. If I have this the right way round, CLK links the timing between boards so they all grab readings at the same moment (a master and slave scenario). TRIG is a line that can be pulled high to start data acquisition, so that one (on the master board) is coupled to a GPIO pin. Then recording the time and switching the GPIO can be done at the same instant (near as damn it, at least).
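As a rough illustration of that arrangement (the gpiozero library, the GPIO pin number, and the rate/buffer values below are assumptions, not details from this thread), arming the scan on an external trigger and timestamping the moment the line is raised might look like:

```python
import time

from daqhats import mcc128, OptionFlags, TriggerModes
from gpiozero import DigitalOutputDevice   # assumed GPIO library; any would do

TRIGGER_GPIO = 17                          # hypothetical pin wired to TRIG
trigger_line = DigitalOutputDevice(TRIGGER_GPIO, initial_value=False)

hat = mcc128(0)
hat.trigger_mode(TriggerModes.RISING_EDGE)

# EXTTRIGGER arms the scan; it waits for the TRIG input before sampling starts
hat.a_in_scan_start(0b0111, 100000, 10000,
                    OptionFlags.CONTINUOUS | OptionFlags.EXTTRIGGER)

# Raise the line and record the start time as close together as possible
start_time_ns = time.time_ns()
trigger_line.on()
```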

> I hope that you are referring to a scenario where one doesn't read out all the data from the buffer, correct?

Kind of. It seemed to be reading all the data, but "all" the data wasn't always there, e.g. as if the hardware was buffering the readings but you called the a_in_scan_read() function in the middle of the hardware job, so there wasn't necessarily a full set of readings for the most recent timesteps. But then the rest would come through on the next a_in_scan_read() call. Hope that makes sense.

With regard to the timestamps, I was calculating the timestamp from a starting point as mentioned, but then tracking any drift from the RPi clock time and triggering a reset if it drifted too much. That seemed fine unless there was an additional load on the RPi core it was running on.
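A hedged sketch of that drift check, assuming a known start timestamp and a nominal per-sample period; the tolerance is a made-up placeholder, and the comparison is only approximate because the newest frame is at most one read interval old when the check runs:

```python
import time

SAMPLE_PERIOD_NS = 100_000     # nominal period at 10 kS/s (assumption)
MAX_DRIFT_NS = 1_000_000       # 1 ms tolerance before resyncing (placeholder)

def drift_check(start_time_ns, frames_received):
    """Compare the incrementally computed time of the newest frame with the
    Pi's clock and report whether a resync of the start timestamp is needed."""
    expected_ns = start_time_ns + frames_received * SAMPLE_PERIOD_NS
    drift_ns = time.time_ns() - expected_ns
    return drift_ns, abs(drift_ns) > MAX_DRIFT_NS
```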

@AdamSorrel
Author

AdamSorrel commented Jun 13, 2024

> Yes, there are TRIG and CLK connectors, and I was using both. If I have this the right way round, CLK links the timing between boards so they all grab readings at the same moment (a master and slave scenario). TRIG is a line that can be pulled high to start data acquisition, so that one (on the master board) is coupled to a GPIO pin. Then recording the time and switching the GPIO can be done at the same instant (near as damn it, at least).

I see, that is quite clever. I am luckily working with just one board, so that is a bit less of a headache, but maybe I should consider using TRIG. Perhaps the variable delay is not at the readout but at the software trigger point? I think the scan gets triggered on time, but maybe processing the trigger in the DAQ card is a source of variance? I had not considered that option.
Is there any data on, or any way to test, the software and hardware trigger accuracy?

> Kind of. It seemed to be reading all the data, but "all" the data wasn't always there, e.g. as if the hardware was buffering the readings but you called the a_in_scan_read() function in the middle of the hardware job, so there wasn't necessarily a full set of readings for the most recent timesteps. But then the rest would come through on the next a_in_scan_read() call. Hope that makes sense.

Yeah, that makes sense to me and that is a concerning scenario with regard to timing precision. I think understanding this would be the most important point for me.

> With regard to the timestamps, I was calculating the timestamp from a starting point as mentioned, but then tracking any drift from the RPi clock time and triggering a reset if it drifted too much. That seemed fine unless there was an additional load on the RPi core it was running on.

I see. That is cool. I will consider implementing that somehow.

@nwright-mcc
Collaborator

This area is primarily for reporting issues with the daqhats library source and is not monitored by MCC support.

Please use the MCC forum at https://forum.digilent.com/forum/39-measurement-computing-mcc/ to get help with the hardware and library.

@AdamSorrel
Author

@shaunmccance I have reposted the question on the Digilent forum (here). So far with no satisfactory answer.
