This repository contains the Python code for our paper accepted at ACM MMSys'20. You can access the paper through this link.
- Language: Python
- Required Packages: numpy, pandas, matplotlib, scipy, scikit-learn, scikit-image and opencv
- Need to install 'OpenCV' link
- Need to install 'transform360' developed by 'facebook' link
- The following link would be helpful for installation link
- Need to install 'FFMPEG' link
- To install the required packages, run one of the following commands:
- Python 2
pip install numpy pandas matplotlib scipy scikit-learn scikit-image
- Python 3
pip3 install numpy pandas matplotlib scipy scikit-learn scikit-image
We collected a head and eye gaze movement dataset from 20 volunteers over 10 360-degree VR videos. Due to the restriction on the file size allowed on GitHub, we provide an external link to our dataset.
- The data provided in this repository is sample data (some JSON files and video segments).
- How to generate the eye gaze heat map
python3 eye_gaze_heatmap.py
The command above generates the eye gaze heatmap shown below and saves it in the 'figure' directory as 'eye_gaze_heatmap.png'.
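For intuition, a heatmap of this kind can be sketched as follows: hypothetical (yaw, pitch) gaze samples binned into a 1-degree grid and smoothed. This is an illustration only, not the actual `eye_gaze_heatmap.py`, and the function name and bin sizes are our own choices.

```python
import numpy as np

def gaze_heatmap(yaws, pitches, width=360, height=180, k=11):
    """Build a smoothed 2D heatmap from gaze samples given in degrees,
    with yaw in [-180, 180) and pitch in [-90, 90)."""
    # Bin the gaze samples into a 1-degree grid over the full sphere.
    heat, _, _ = np.histogram2d(
        pitches, yaws,
        bins=[height, width],
        range=[[-90, 90], [-180, 180]],
    )
    # Separable moving-average smoothing so sparse samples blend into
    # a continuous heatmap (pure NumPy, no extra dependency needed).
    kernel = np.ones(k) / k
    heat = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 1, heat)
    heat = np.apply_along_axis(lambda v: np.convolve(v, kernel, mode="same"), 0, heat)
    peak = heat.max()
    # Normalize to [0, 1] so the result can be shown with matplotlib's imshow.
    return heat / peak if peak > 0 else heat
```

The normalized array can then be rendered with `matplotlib.pyplot.imshow` and saved with `savefig`.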
- How to plot yaw & pitch deviation from center of front face
python3 gaze_deviation.py
python3 gaze_deviation_visualization.py
- The first command, 'python3 gaze_deviation.py', calculates the yaw and pitch deviation from the center of the front face in the cube map representation, which is saved as a 'pickle' file in the 'data' directory to be used by the following script.
- 'gaze_deviation_visualization.py' saves the deviation plots, each named 'type_of_data'_'duration'.png (e.g., pitch_2.png) in the 'figure' directory. The figure below is 'pitch_5.png', which shows the deviation pattern when 5-second video segments are streamed.
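The wrap-around arithmetic behind such a deviation can be sketched as below. This is a minimal example assuming gaze angles in degrees with the front-face center at (yaw=0, pitch=0); the actual script operates on the cube-map representation and the helper name is our own.

```python
def deviation_from_front(yaw_deg, pitch_deg):
    """Signed (yaw, pitch) deviation of a gaze sample from the center of
    the front face at (0, 0), with yaw wrapped into [-180, 180)."""
    # Wrap yaw so that, e.g., 190 degrees becomes -170 degrees.
    yaw_dev = (yaw_deg + 180.0) % 360.0 - 180.0
    # Pitch needs no wrapping; clamp defensively to the valid range.
    pitch_dev = max(-90.0, min(90.0, pitch_deg))
    return yaw_dev, pitch_dev
```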
- Pyramid encoding in video: Run the following command. The pyramid-encoded video segments can be found in 'video/segments/pyra'.
python3 pyramid_b_encoding.py
The pyramid representation would look as below.
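As a rough illustration only (this is not the transform360 pyramid projection, and the helper is hypothetical), the quality falloff on non-front faces is similar in spirit to downsampling a face and blowing it back up:

```python
import numpy as np

def pyramid_downsample(face, factor):
    """Keep every `factor`-th pixel of a cube face, then upsample back by
    repetition -- mimicking the detail loss that pyramid encoding applies
    to non-front regions. Assumes the face dimensions divide by `factor`."""
    small = face[::factor, ::factor]
    return np.repeat(np.repeat(small, factor, axis=0), factor, axis=1)
```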
- Pyramid decoding in video: The following command converts the pyramid representation back into the cube representation. However, even with the same structure as the cube format, the decoded video has reduced size and degraded quality compared to the original cube map representation. You can find the decoded video in 'video/segments/pyra_decoded'.
python3 pyramid_b_decoding.py
You can compare the quality of the decoded video frame with the original cube frame by clicking the images below.
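One way to quantify such a comparison (our own sketch, not part of the released scripts) is the PSNR between a decoded frame and the original cube frame:

```python
import numpy as np

def psnr(original, decoded, max_val=255.0):
    """Peak signal-to-noise ratio in dB between two frames of the same
    shape (e.g., uint8 arrays loaded from the two images)."""
    diff = original.astype(np.float64) - decoded.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher values mean the decoded frame is closer to the original; identical frames give infinite PSNR.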
"Original Cube Frame"
- Calculate saliency scores
python3 saliency_json.py
The command above calculates the saliency scores of video frames and saves them in JSON files (in the 'json' directory). To improve computation efficiency, we sample every n-th (in our case, 10th) frame for saliency scoring.
The content of a JSON file would look as below. Please refer to the paper for a detailed explanation of the JSON files.
{"0": {"saliency": "1.0", "row": "274", "column": "767", "name": "L", "width": "256"}, "1": {"saliency": "0.9852603104565192", "row": "0", "column": "892", "name": "R", "width": "132"}, "2": {"saliency": "0.9852603104565192", "row": "0", "column": "0", "name": "B", "width": "124"}, "3": {"saliency": "0.8944914751693509", "row": "92", "column": "107", "name": "B", "width": "256"}, "4":
...
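As a small illustration of how these files can be consumed (field names are taken from the sample above; the helper name is our own), each entry can be parsed with the standard library:

```python
import json

def load_salient_regions(text):
    """Parse a saliency JSON string into a list of region dicts, ordered
    by the numeric entry index. Each region carries its saliency score,
    top-left (row, column), cube face name, and patch width."""
    data = json.loads(text)
    return [
        {
            "saliency": float(r["saliency"]),
            "row": int(r["row"]),
            "column": int(r["column"]),
            "face": r["name"],
            "width": int(r["width"]),
        }
        for _, r in sorted(data.items(), key=lambda kv: int(kv[0]))
    ]

# Two entries from the sample above.
sample = ('{"0": {"saliency": "1.0", "row": "274", "column": "767", '
          '"name": "L", "width": "256"}, '
          '"1": {"saliency": "0.9852603104565192", "row": "0", '
          '"column": "892", "name": "R", "width": "132"}}')
regions = load_salient_regions(sample)
```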
- SALI360 Encoding
python3 sali_encode.py
The command above generates video files whose frames would look as below. You can find the encoded video in 'video/segments/sali_encoded'. Sliced salient regions are concatenated to the pyramid representation.
- SALI360 Decoding
python3 sali_decode.py
The command above decodes the video generated by the previous command. You can find the decoded video in 'video/segments/sali_decoded'. The decoded frames would look as below. When zoomed in, most parts of the decoded frames have degraded quality as in the pyramid representation, while visually salient regions are stitched back in high quality.
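The stitching step can be sketched as follows. This is a hypothetical helper, not the actual `sali_decode.py` code: it overwrites a degraded region of a decoded frame with its high-quality salient patch at the (row, column) position recorded in the saliency JSON.

```python
import numpy as np

def stitch_patch(frame, patch, row, col):
    """Paste a high-quality salient patch into the decoded frame at the
    given top-left (row, col), replacing the degraded pixels in place."""
    h, w = patch.shape[:2]
    frame[row:row + h, col:col + w] = patch
    return frame
```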