
More motion analysis #210

Open
alexarje opened this issue Mar 10, 2021 · 4 comments
Assignees
Labels
enhancement New feature or request

Comments

@alexarje
Contributor

alexarje commented Mar 10, 2021

It would be interesting to explore some more analysis, including:

  • motion energy, motion smoothness, motion entropy (as described here).

  • spectral information (FFT) and z-transforms of QoM data. An important question, then, is what type of windowing to use for the analysis.
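As a starting point for the windowing question, a minimal sketch of a windowed FFT of a QoM series could look like this (the rectified 1.5 Hz sine is a synthetic stand-in for real QoM data, and the Hann window is just one reasonable choice):

```python
import numpy as np

# Synthetic stand-in for a QoM series sampled at the video frame rate;
# a rectified 1.5 Hz sine repeats at 3 Hz, so we expect a 3 Hz peak.
fps = 25
t = np.arange(0, 10, 1 / fps)
qom = np.abs(np.sin(2 * np.pi * 1.5 * t))

# Remove the mean and apply a Hann window before the FFT to reduce
# spectral leakage from the strong DC component of QoM data.
windowed = (qom - qom.mean()) * np.hanning(len(qom))
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(len(qom), d=1 / fps)

peak_freq = freqs[np.argmax(spectrum)]  # dominant periodicity in Hz
```

With a 10 s signal at 25 fps the frequency resolution is 0.1 Hz, so the peak lands on the expected 3 Hz bin.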

@alexarje alexarje added the enhancement New feature or request label Mar 10, 2021
@joachimpoutaraud
Contributor

joachimpoutaraud commented Sep 1, 2022

According to Cross et al. (2021), motion energy is computed from difference images of consecutive frame pairs in each video, so that any pixel whose luminance changes by more than 10 units is classified as “moving”. The mean number of moving pixels per frame, taken over the whole movie, then gives an ME index for that video.
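A minimal sketch of that computation on raw luminance frames (assuming 8-bit grayscale arrays; the tiny synthetic frames below are illustrative only) could look like this:

```python
import numpy as np

def motion_energy_index(frames, threshold=10):
    """ME index: mean number of 'moving' pixels per frame pair, where a
    pixel is 'moving' if its luminance changes by more than `threshold`
    units between consecutive frames (after Cross et al., 2021)."""
    frames = np.asarray(frames, dtype=np.int16)  # avoid uint8 wrap-around
    diffs = np.abs(np.diff(frames, axis=0))      # frame-to-frame differences
    moving = diffs > threshold                   # boolean "moving" mask
    return moving.sum(axis=(1, 2)).mean()        # mean moving pixels per pair

# Two 4x4 synthetic luminance frames: three pixels change by more than 10.
f0 = np.zeros((4, 4), dtype=np.uint8)
f1 = f0.copy()
f1[0, 0] = 50
f1[1, 1] = 11
f1[2, 2] = 200
me = motion_energy_index([f0, f1])  # -> 3.0
```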

After using ffmpeg to display the average difference between all values of the Y (luminance) plane in the current frame and the corresponding values of the previous frame, I found that motion energy can be interpreted as the quantity of motion. Here is a simple Python example:

import subprocess
import re
import matplotlib.pyplot as plt
import numpy as np
import cv2

# Print the per-frame YDIF metric (mean luminance difference between
# consecutive frames) with ffmpeg's signalstats filter.
command = 'ffmpeg -i input_files/video/test.mp4 -vf "signalstats,metadata=print:key=lavfi.signalstats.YDIF" -an -f null -'
process = subprocess.Popen(command, shell=True, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, universal_newlines=True)
out, err = process.communicate()

# Extract the YDIF value for each frame from the ffmpeg log.
ydif = re.split(r'\s', out)
matching = [float(s.split('=')[1]) for s in ydif if "lavfi.signalstats.YDIF" in s]

# Read the frame rate to get a time axis in seconds.
cap = cv2.VideoCapture('input_files/video/test.mp4')
fps = int(cap.get(cv2.CAP_PROP_FPS))

# Plot the normalized YDIF values over time.
plt.figure(figsize=(12, 2))
plt.bar(np.arange(len(matching) - 1) / fps, np.asarray(matching[1:]) / max(matching[1:]))

However, I will not implement it in the toolbox since it gives essentially the same result as the QoM. On the other hand, I found an interesting project on extracting motion energy features from video using a pyramid of spatio-temporal Gabor filters. I will now focus on extracting motion smoothness.
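For reference, the core idea behind such filters can be sketched in a few lines: a quadrature pair of Gabor filters (even and odd phase) whose squared responses sum to a phase-invariant energy envelope. This is only a 1-D temporal illustration of the principle, not that project's pyramid implementation:

```python
import numpy as np

def temporal_gabor_energy(signal, fps, f0=2.0, sigma=0.25):
    """Quadrature-pair Gabor energy: the squared responses of an even
    (cosine) and an odd (sine) Gabor filter sum to a phase-invariant
    motion-energy envelope at the tuned frequency f0 (in Hz)."""
    t = np.arange(-3 * sigma, 3 * sigma, 1 / fps)
    envelope = np.exp(-t**2 / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * f0 * t)
    odd = envelope * np.sin(2 * np.pi * f0 * t)
    e = np.convolve(signal, even, mode='same')
    o = np.convolve(signal, odd, mode='same')
    return e**2 + o**2

# A 2 Hz sinusoid matched to the filter gives a near-constant envelope,
# regardless of the signal's phase at any given sample.
sig = np.sin(2 * np.pi * 2.0 * np.arange(0, 4, 1 / 50))
energy = temporal_gabor_energy(sig, fps=50)
```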

@joachimpoutaraud
Contributor

joachimpoutaraud commented Nov 22, 2022

Motion smoothness

I have implemented a new velocity parameter in the dense optical flow function.

When set to True, it computes the dense optical flow velocity according to a distance in meters to the image (focal length), so that the flow is returned in meters per second. The distance parameter defaults to None, but setting it can be useful if the distance is known in advance. An additional parameter, angle_of_view (default 0), can also be set for reporting flow in meters per second. As a result, it is now possible to compute motion smoothness using the number of velocity peaks per meter (NoP) as an index, as described here.
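As a rough sketch of such an index (not the toolbox implementation), the NoP could be computed by counting local maxima in the speed signal with SciPy's find_peaks and normalising by the distance travelled; the synthetic velocity profiles below are illustrative only:

```python
import numpy as np
from scipy.signal import find_peaks

def number_of_peaks_per_meter(velocity, fps):
    """Count peaks in the speed signal and normalise by the distance
    travelled (rectangle-rule integral of speed). Lower = smoother."""
    speed = np.abs(np.asarray(velocity, dtype=float))
    peaks, _ = find_peaks(speed)
    distance = speed.sum() / fps  # metres travelled
    return len(peaks) / distance

# Synthetic profiles in m/s: one smooth bell vs. the same with jitter.
t = np.arange(0, 2, 1 / 50)
smooth = np.sin(np.pi * t / 2)
jerky = smooth + 0.3 * np.sin(10 * np.pi * t)
nop_smooth = number_of_peaks_per_meter(smooth, fps=50)
nop_jerky = number_of_peaks_per_meter(jerky, fps=50)
```

The jerky profile covers roughly the same distance but with many more velocity peaks, so its NoP comes out much higher.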

The main drawback of this new parameter is that it relies on OpenCV's optical flow, which takes a long time to process a video.

More information on how to implement it can be found in the MGT wiki documentation.

@joachimpoutaraud
Contributor

Motion entropy

Based on the velocity parameter, it is also possible to compute the acceleration of motion between consecutive frames as follows:

  def get_acceleration(self, velocity, fps):

      acceleration = np.zeros(len(velocity))
      velocity = np.abs(velocity)

      for i in range(len(acceleration) - 1):
          # finite-difference acceleration: dv / dt, with dt = 1 / fps
          acceleration[i] = (velocity[i + 1] - velocity[i]) / (1 / fps)

      return acceleration[:-1]

That way, if the distance and angle_of_view parameters are accurately filled out, one gets a precise estimate of the acceleration of motion (expressed in meters per second squared). Finally, the entropy of the acceleration array is computed to obtain the motion entropy as described here.
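A minimal sketch of such an entropy measure (assuming Shannon entropy over a histogram of the acceleration values; the bin count is an arbitrary choice here) could be:

```python
import numpy as np

def acceleration_entropy(acceleration, bins=50):
    """Shannon entropy (bits) of the distribution of acceleration values:
    histogram, normalise to probabilities, then sum -p * log2(p)."""
    hist, _ = np.histogram(acceleration, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # empty bins contribute 0 * log(0) -> 0
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
irregular = rng.uniform(-1, 1, 10_000)  # widely spread accelerations
regular = np.full(10_000, 0.5)          # perfectly constant acceleration
h_irregular = acceleration_entropy(irregular)
h_regular = acceleration_entropy(regular)
```

Constant acceleration lands in a single histogram bin and yields zero entropy, while widely spread values approach the log2 of the bin count.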

Here is an overview of the results obtained for the dance.avi video, with experimental parameters set to distance=3.5 and angle_of_view=80. A precise angle of view for computing optical flow velocity can be derived from the camera’s effective focal length. Here is more information on how to calculate it.
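For reference, the horizontal angle of view follows from the effective focal length f and the sensor width w as AOV = 2·atan(w / 2f); a small helper (the full-frame sensor width below is just an example value) could be:

```python
import math

def angle_of_view(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view in degrees: 2 * atan(w / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Example: a full-frame sensor (36 mm wide) with a 20 mm lens.
aov = angle_of_view(36, 20)  # roughly 84 degrees
```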

(figure: velocity plot for dance.avi)

@alexarje
Contributor Author

Very cool! I wonder whether this MV Tractus project could be a way to get motion vectors à la optical flow without OpenCV?
