
Make video_sample and other video commands 360° aware #336

Open
realchrisolin opened this issue Mar 21, 2019 · 9 comments

@realchrisolin

This is more of a feature request than a bug and I'll probably be doing a lot of the work for this myself, but I still wanted to create some sort of "ticket" for tracking purposes and upstream feedback.

Issue: Images derived from 360° videos using video_sample are not injected with the metadata needed for them to be recognized as 360° images, resulting in bad segments when uploaded.

The current solution I'm looking at is integrating https://www.github.com/google/spatial-media to detect whether 360° metadata exists in the video and to add the metadata to any images created from it. It's designed for Python 2.7, but so far I haven't had any issues using it on 3.7, so this shouldn't be a pain point if and when this tool is updated for Python 3.7 compatibility.
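
For reference, a rough sketch of what that check and injection could look like from the command line. The spatialmedia CLI usage below follows the project's README, and the file names are placeholders; treat the exact invocation as an assumption to verify against the repo:

# examine a video for existing spherical (360°) metadata
python spatialmedia stitched_video.mp4

# inject spherical metadata into a copy of a video that lacks it
python spatialmedia -i stitched_video.mp4 stitched_video_injected.mp4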

@joshinils

Still a problem, it seems.

I want to extract samples from a GoPro Max video, because the GoPro Max single-image time-lapse setting only allows a minimum interval of 2 seconds, which is too long for my liking.

I need to manually stitch the video with GoPro's proprietary non-Linux (i.e. Windows) software; only then can I sample from that video (which has spherical metadata).

So now that mapillary_tools supports Python 3, can this be implemented?
Or do I need to manually inject the metadata into the sampled images so that they are detected as spherical?

I have only looked at the intermediate files while ffmpeg was still running, though.

@ptpt
Member

ptpt commented Jun 9, 2023

@joshinils Have you tried direct uploading? The server will stitch the frames for you.

mapillary_tools process_and_upload gopro_max.mp4

@joshinils

joshinils commented Jun 9, 2023

No, I need to correctly correlate the images to the GPX track.
The timestamp metadata is wrong in either the video or the GPX track; I don't remember anymore which, from when I recorded them.

But JOSM, with a plugin, has the tooling to move the images along the GPX track and save the new location into the EXIF data.

Only after that can I upload the images at their correct locations.

Since I need to process the images with a script I wrote for this purpose anyway, I manually inject the 360° metadata myself: https://gist.github.com/joshinils/2ca6dd75ac6c1093b4d4fd2d60c68145

See my (German) OSM diary entry: https://www.openstreetmap.org/user/cyton/diary/401670
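
For context, here is a minimal sketch of this kind of GPano injection for a single sampled frame, assuming exiftool is available and the frame is a full equirectangular image; the file name and pixel dimensions are placeholders, and the linked gist may do it differently:

# mark a sampled frame as an equirectangular 360° panorama via XMP-GPano tags
# (dimensions are placeholders; use the actual frame size)
exiftool \
  -XMP-GPano:ProjectionType=equirectangular \
  -XMP-GPano:UsePanoramaViewer=true \
  -XMP-GPano:FullPanoWidthPixels=5760 \
  -XMP-GPano:FullPanoHeightPixels=2880 \
  -XMP-GPano:CroppedAreaImageWidthPixels=5760 \
  -XMP-GPano:CroppedAreaImageHeightPixels=2880 \
  -XMP-GPano:CroppedAreaLeftPixels=0 \
  -XMP-GPano:CroppedAreaTopPixels=0 \
  sampled_frame.jpg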

@ptpt
Member

ptpt commented Jun 9, 2023

@joshinils Thanks for the context.

Curious why you use an external GPS tracker. The GoPro Max does support GPS and embeds the GPS track in the video. See https://help.mapillary.com/hc/en-us/articles/360012674619-GoPro-MAX

@joshinils

joshinils commented Jun 9, 2023

  1. I did not assume a GPS track was included in the video.
  2. After stitching, it is gone from the output video.
  3. So only the unstitched *.360 video off the SD card has a GPS track; the stitched *.mov does not.
  4. I don't know how to extract the GPX track from the .360 file and include it in the .mov file (see the sketch after this list).
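
One possible way to get the embedded track out, sketched on the assumption that exiftool can read the GoPro telemetry from the .360 file and that its bundled gpx.fmt format file is reachable (give the full path to gpx.fmt if it is not in the current directory):

# dump the embedded GPS telemetry from the unstitched .360 into a GPX file
exiftool -ee -p gpx.fmt GS015642.360 > GS015642.gpx

The resulting GPX file could then presumably be used for geotagging the sampled images, for example via JOSM, or via a GPX geotag source if mapillary_tools supports one.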

When I do:
--geotag_source "gopro_videos" --geotag_source_path GS015642.360

mapillary_tools video_process GS025642.mov --geotag_source "gopro_videos" --geotag_source_path GS015642.360 --video_sample_distance -1 --video_sample_interval 0.1 --skip_process_errors --interpolation_use_gpx_start_time

or --geotag_source "exif" --geotag_source_path GS015642.360

mapillary_tools video_process GS025642.mov --geotag_source "exif" --geotag_source_path GS015642.360 --video_sample_distance -1 --video_sample_interval 0.1 --skip_process_errors --interpolation_use_gpx_start_time

The images are not geotagged.

(But still, the images do not contain 360° metadata)

@joshinils

joshinils commented Jun 9, 2023

Simply feeding in the unstitched .360 video file (which does include some GPS metadata) does not work either; the images are not geolocated. Neither

mapillary_tools video_process GS015642.360 --geotag_source "exif" --video_sample_distance -1 --video_sample_interval 0.1 --skip_process_errors

nor

mapillary_tools video_process GS015642.360

inserts location data or other metadata (like an accurate, millisecond-precision timestamp of when the frame was taken) that would allow me to correlate after the fact with JOSM.

@waldyrious

I'm facing a similar issue. I tried to extract snapshots from a .360 video using the video_process subcommand (without a custom --geotag_source), and the resulting snapshots are not recognized as 360° panoramas. Indeed they aren't: they seem to be the unrolled sides of a cube map (i.e. without the top and bottom sides, and with noticeable seams at the edges) rather than a full equirectangular projection of the sphere:

[Screenshot: GS024154_0_002438]

(notice the artifacts: duplicated image features along the vertical seams)

Now, this wouldn't be such an issue if Mapillary did process them as panoramas, even if they were lacking the top and bottom, because those are typically the least relevant bits of the image anyway. However, I tried uploading a small sequence just to make sure, and indeed it's shown on Mapillary as a regular, non-panoramic image, albeit a wide one:

[Screenshot: the uploaded frame displayed on Mapillary as a regular, non-panoramic image]

So for now I gave in and uploaded the full video, parking maneuvers and all, and will likely ask for the noisy snapshots to be deleted after they're available on Mapillary.com. But this is extremely cumbersome; it would be much easier if video_process were able to extract 360° images correctly, which would allow me to cut out those bits and then upload the trimmed sequence from the set of images instead of from the original video.

@joshinils

@waldyrious those aren't even stitched correctly.
The GoPro Max video output is not stitched in camera.
You need to use a tool afterwards to stitch them.

I've come across a little C program, which I've forked and made some changes to: https://github.com/joshinils/max2sphere
It works under Linux; the official program by GoPro does not. max2sphere also allows more control over what happens, but it's more difficult to use: you have to compile it yourself, extract the frames with ffmpeg, and merge them again with ffmpeg afterwards. It takes a long time and uses a lot of RAM and disk space in the process.
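
Roughly, that pipeline looks like the sketch below. The stream indices, frame rate, and directory names are placeholders (check the ffprobe output for the actual video tracks), and the max2sphere invocation itself is omitted; see the repo README for its exact options:

# list the streams in the raw .360 file; it contains two video tracks
ffprobe GS015642.360

# extract both video tracks to individual frames (stream indices may differ)
mkdir -p track0 track1 sphere
ffmpeg -i GS015642.360 -map 0:0 track0/frame%06d.png
ffmpeg -i GS015642.360 -map 0:5 track1/frame%06d.png

# run max2sphere on each frame pair to produce equirectangular frames in sphere/
# (see the max2sphere README for its command line options)

# merge the equirectangular frames back into a video
ffmpeg -framerate 30 -i sphere/frame%06d.png -c:v libx264 -pix_fmt yuv420p stitched.mp4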

@waldyrious

waldyrious commented Jun 2, 2024

Thanks for the pointer, @joshinils. However, it's clear that the Mapillary uploader is able to correctly stitch .360 videos and extract snapshots from them; so presumably, the video_process subcommand should be able to do the same. It doesn't sound like additional logic needs to be developed, but rather that the logic the desktop uploader already uses could be reused.

In my case, the cleanup changes I need to do are small enough that uploading the entire video file, and then requesting deletion of specific snapshots generated from it, is a tolerable workaround; but if mapillary_tools gains the ability to correctly extract 360° images from .360 videos (which IIUC is what this issue is primarily about?) I'd gladly make use of it instead.
