I'd like to calculate the motion data within a region of frames, maybe specified by the same rules as cropping: [width, height, top_left_x, top_left_y]. I'm currently assessing movement across sections of an audience, and generating cropped videos of these regions before computing motion is very slow and introduces undesirable artifacts.
Even better would be an option to specify a grid dividing the frame's height and width, with QoM calculated within each cell, but I can accept the efficiency loss of reading the video once per region if a grid would require too many changes to the existing functions.
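To make the grid idea concrete, here is a minimal sketch of what per-cell QoM could look like, using absolute frame differencing over numpy arrays. `grid_qom` is a hypothetical helper, not part of the existing functions, and the frame-difference sum is only one possible QoM definition:

```python
import numpy as np

def grid_qom(frames, rows, cols):
    """Quantity of motion per grid cell, from absolute frame differences.

    frames: sequence of greyscale frames as 2-D arrays.
    Returns an array of shape (n_frames - 1, rows, cols).
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    h, w = frames[0].shape
    # Cell boundaries; the last cell absorbs any remainder pixels.
    row_edges = np.linspace(0, h, rows + 1, dtype=int)
    col_edges = np.linspace(0, w, cols + 1, dtype=int)
    out = np.zeros((len(frames) - 1, rows, cols))
    for t in range(1, len(frames)):
        diff = np.abs(frames[t] - frames[t - 1])
        for r in range(rows):
            for c in range(cols):
                cell = diff[row_edges[r]:row_edges[r + 1],
                            col_edges[c]:col_edges[c + 1]]
                out[t - 1, r, c] = cell.sum()
    return out
```

Reading the video once and slicing each decoded frame this way would avoid both the re-encoding artifacts and the per-region decoding cost.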
So, if I understand correctly: so far you've cropped parts of your videos and then calculated the QoM on the cropped files.
Could you tell me more about the artifacts you are getting on the cropped video?
As for your suggestion, it could be quite demanding computationally, yes. But if you think such a function would be relevant for your work, we can discuss it further!
There is, by the way, another possibility: extracting the motion vectors from MPEG files and running a new motion analysis on those (much faster). I have worked a little on this but haven't yet managed to get exactly what I wanted. If we do, it would then be straightforward either to specify a region of interest with `FFmpeg` to get the QoM, or to compute a divided grid over the extracted motion vectors.
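Assuming the extracted vectors arrive as flat arrays of block positions and displacements (a hypothetical layout — the actual fields depend on how the vectors are exported), binning their magnitudes into a grid could look like this sketch:

```python
import numpy as np

def grid_motion_magnitude(mv_x, mv_y, positions_x, positions_y,
                          frame_w, frame_h, rows, cols):
    """Sum motion-vector magnitudes into a rows x cols grid.

    mv_x, mv_y: per-block motion components; positions_*: block centres
    in pixels. All inputs are 1-D sequences of equal length (an assumed
    layout for extracted vectors, not a fixed format).
    """
    mag = np.hypot(np.asarray(mv_x, float), np.asarray(mv_y, float))
    # Map each block centre to its grid cell, clamped to valid indices.
    r = np.clip((np.asarray(positions_y) * rows) // frame_h, 0, rows - 1)
    c = np.clip((np.asarray(positions_x) * cols) // frame_w, 0, cols - 1)
    grid = np.zeros((rows, cols))
    np.add.at(grid, (r.astype(int), c.astype(int)), mag)
    return grid
```

Summing magnitudes is only one way to combine the vectors per cell; means, directional histograms, or divergence-style summaries are equally possible once the binning is in place.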
The videos are in compressed formats because of their size (50 fps, UHD, and many minutes long). Cropping requires re-encoding, and so far every format I've tried introduces new colour shifts at quasi-regular intervals (maybe at key frames, maybe not). So the QoM extracted by motiondata() on a compressed region looks like this:
The timing of those spikes varies per cropped region, so it's not an artifact of the original video. I can threshold them out and replace them with NaNs, but it's not a perfect process.
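For reference, the thresholding I'm doing is roughly like the sketch below: a robust median/MAD cutoff over the QoM series, with spikes set to NaN. Both the helper name and the z-threshold are my own choices, not anything standard:

```python
import numpy as np

def despike(qom, z=5.0):
    """Replace spike samples with NaN using a median/MAD threshold.

    qom: 1-D sequence of per-frame QoM values. Samples more than `z`
    scaled median absolute deviations from the median are treated as
    compression spikes and set to NaN. The choice of z is ad hoc.
    """
    x = np.asarray(qom, dtype=float).copy()
    med = np.nanmedian(x)
    mad = np.nanmedian(np.abs(x - med))
    if mad == 0:
        # Degenerate series: no spread to threshold against.
        return x
    # 1.4826 rescales the MAD to be comparable to a standard deviation.
    x[np.abs(x - med) > z * 1.4826 * mad] = np.nan
    return x
```

The imperfection is exactly what you'd expect: genuine large movements near a spike can be clipped, and small colour-shift spikes can slip under the threshold.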
As to the other suggestion: it could be another route to the same information, though the analytical challenge would be how to combine the vectors informatively over set regions, and it sounds like some of the needed data isn't easily accessible.