Added Digital Compression #414
base: master
Conversation
```csharp
decompressedValue = decompressionScale * compressedValue + decompressionOffset;
while (time > (lastTime + samplingrate))
{
    lastTime += samplingrate;
```
I wonder if rounding errors could potentially introduce additional data points between state changes. Do you think it might cause problems downstream if `points.Count > m_samples`?
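As a quick illustration of the concern, a hypothetical sketch (variable names mirror the diff, but the scenario assumes `samplingrate` is a floating-point interval rather than an exact tick count):

```csharp
using System;

// Repeatedly adding an interval that is not exactly representable in
// binary accumulates error, so the loop's boundary comparison can flip
// and emit one point more (or fewer) than expected.
double samplingrate = 1000.0 / 60.0;   // hypothetical sample spacing, not exactly representable
double lastTime = 0.0;

for (int i = 0; i < 600; i++)
    lastTime += samplingrate;

Console.WriteLine(lastTime == 600 * samplingrate);   // typically False
Console.WriteLine(lastTime - 600 * samplingrate);    // small nonzero residue
```

If the interval is instead stored as an integral number of ticks in a `long`, the repeated addition is exact and this particular failure mode goes away.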
`m_samples` gets removed from the blob, so I am not sure that even matters.
By definition we assume these "Digitals" are evenly sampled, so it will generate an evenly sampled flat line until it hits a state change. In theory it is possible that we lose some points in between or generate extra ones, but since it's a flat line anyway I am not too concerned about that.
Occasionally we may compress a latched analog this way, which I suppose could be an issue, but I believe we already assume a fixed sampling rate in a few places when doing math, so I am not sure that really matters either.
Off the top of my head, my main concern would be this:
openXDA/Source/Libraries/FaultData/DataAnalysis/DataGroup.cs
Lines 323 to 331 in f0ccd33
```csharp
// If the data being added matches the parameters for this data group, add the data to the data group
// Note that it does not have to match Asset
if (startTime == m_startTime && endTime == m_endTime && samples == m_samples)
{
    m_dataSeries.Add(dataSeries);
    return true;
}

return false;
```
If the actual number of decoded data points doesn't match `DataGroup.m_samples`, then the `DataGroup.Add()` method will fail to even include the digital series in the `DataGroup`. This is called directly by `DataGroup.FromData()`, so the decoded data would just be silently dropped on the floor, and we would be wondering why it's not getting returned to the visualization.

Also, I suppose I'm assuming that there will be a follow-up PR that includes changes to `DataGroup.FromData()`?
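One possible guard, sketched as a hypothetical helper (the names `decodedPoints`, `expectedSamples`, and `interval` are assumptions, not existing openXDA API): normalize the decoded point count before the series ever reaches `DataGroup.Add()`.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical post-decode normalization: since the regenerated region is a
// flat line, trimming extra points or padding with the latched value is lossless.
static void NormalizeCount(List<(DateTime Time, double Value)> decodedPoints, int expectedSamples, TimeSpan interval)
{
    // Drop trailing points produced by drift past the expected end time.
    while (decodedPoints.Count > expectedSamples)
        decodedPoints.RemoveAt(decodedPoints.Count - 1);

    // Pad by repeating the last latched value at the fixed sampling interval.
    while (decodedPoints.Count < expectedSamples)
    {
        var last = decodedPoints[decodedPoints.Count - 1];
        decodedPoints.Add((last.Time + interval, last.Value));
    }
}
```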
Note with the switch to use …
This adds a new compression option for data blobs. It is used if < 10% of points represent actual changes in values (e.g. Digitals).

It compresses the data into a byte array containing:

- \# of samples (int)
- SeriesID (int)
- First Timestamp (long)
- Ticks between samples (long)
- Offset and scaling for values (2 doubles)

And the data for every change in Value, in the form:

- Timestamp (~~short~~ long)
- Value (ushort)

With this PR there is no difference except in the length of the data blob.
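For concreteness, a minimal sketch of reading a blob laid out as described above. Field order and types follow the list; everything else (the little-endian `BinaryReader` layout, the `Decode` name, and the expansion logic) is an assumption rather than the PR's actual implementation:

```csharp
using System;
using System.Collections.Generic;
using System.IO;

static List<(DateTime Time, double Value)> Decode(byte[] blob)
{
    using var reader = new BinaryReader(new MemoryStream(blob));

    int samples = reader.ReadInt32();      // # of samples (not used to bound the loop here)
    int seriesID = reader.ReadInt32();     // SeriesID
    long time = reader.ReadInt64();        // first timestamp, in ticks
    long interval = reader.ReadInt64();    // ticks between samples
    double scale = reader.ReadDouble();    // decompression scale
    double offset = reader.ReadDouble();   // decompression offset

    var points = new List<(DateTime, double)>();
    double value = 0.0D;

    // Each stored entry marks a state change; the flat line between
    // changes is regenerated at the fixed sampling interval.
    while (reader.BaseStream.Position < reader.BaseStream.Length)
    {
        long changeTime = reader.ReadInt64();     // timestamp of the change
        ushort compressed = reader.ReadUInt16();  // compressed value

        // Fill the flat line up to (but not including) the change...
        while (time < changeTime)
        {
            points.Add((new DateTime(time), value));
            time += interval;
        }

        // ...then decompress and emit the new state, mirroring
        // decompressionScale * compressedValue + decompressionOffset.
        value = scale * compressed + offset;
        points.Add((new DateTime(time), value));
        time += interval;
    }

    return points;
}
```

Clamping the expansion to `samples` (rather than relying on the fixed interval alone) would be one way to guarantee the point count that the `DataGroup.Add()` check above expects.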