decompose the recordings into RGB folder and depth folder #67

Open
random-guest opened this issue Jun 2, 2023 · 11 comments

Comments

@random-guest

I have taken a recording using the FaceID camera.

I want to get two folders from the recording: a depth folder containing the depth images, and an RGB folder containing the RGB images.

I would also like to get the camera_intrinsics.json file.

Can you please guide me through this process?

Thank you in advance for your time.

@marek-simonik
Owner

If you want to get RGB images, depth maps and metadata of an existing 3D video, you can export it e.g. into the .r3d format (see #7 for more details) or into EXR + JPG (the EXR files contain float32 depth values). Both export options generate a JSON metadata file (either metadata or metadata.json).
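For the camera intrinsics file asked about above, a minimal sketch (in Python) of inspecting the exported metadata and writing the intrinsics out separately; the metadata filename and the intrinsics field name "K" are assumptions here, so check the exported file's keys first:

```python
import json

# Minimal sketch, assuming the export produced a file called "metadata.json".
with open("metadata.json") as f:
    meta = json.load(f)

print(sorted(meta.keys()))  # inspect what the export actually contains

# Hypothetical: if an intrinsics entry (e.g. "K") is present, write it out
# as the camera_intrinsics.json file the original question asked for.
if "K" in meta:
    with open("camera_intrinsics.json", "w") as f:
        json.dump({"K": meta["K"]}, f, indent=2)
```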

@wing-kit

wing-kit commented Jun 8, 2023

@marek-simonik This is exactly my use case: reproducing experimental results from research papers. However, I am not familiar with the EXR format. Could you suggest how to convert EXR to PNG, or how to read EXR files using OpenCV/NumPy? Also, what is the depth scale to convert back to meters?

@marek-simonik
Owner

Please try to use the OpenEXR Python package to read the EXR files instead of OpenCV/NumPy; OpenCV does not seem to be able to read float32 EXR images. The EXR files contain one float32 channel, where each float32 pixel represents the depth of that pixel in meters (so there is no need to convert it).
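A minimal sketch of reading such an EXR depth map into a NumPy array with the OpenEXR package, and optionally dumping it as a 16-bit PNG for viewing; the depth filename is hypothetical, and the channel name is taken from the file header rather than hard-coded:

```python
import OpenEXR
import Imath
import numpy as np

exr = OpenEXR.InputFile("0.exr")  # hypothetical name of one exported depth frame
header = exr.header()
dw = header["dataWindow"]
width = dw.max.x - dw.min.x + 1
height = dw.max.y - dw.min.y + 1

# The file contains a single float32 channel; read whichever channel is present.
channel_name = list(header["channels"].keys())[0]
raw = exr.channel(channel_name, Imath.PixelType(Imath.PixelType.FLOAT))
depth_m = np.frombuffer(raw, dtype=np.float32).reshape(height, width)  # depth in meters

print(depth_m.shape, float(depth_m.min()), float(depth_m.max()))

# PNG cannot store float32; for visualization, e.g. store millimeters as 16-bit.
import cv2
cv2.imwrite("0_depth_mm.png", (depth_m * 1000.0).astype(np.uint16))
```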

@random-guest
Author

Is the depth image resolution constant at (256, 192) or (640, 480)?

Can't I get anything higher than that?

The RGB image resolution without higher-quality LiDAR recording enabled is 720x960, and when it is enabled it becomes 1440x1920.

I tried recording a video with both settings and exporting it to .r3d, and the depth size does not change.

Is there a way to save the live RGBD stream over Wi-Fi/USB to the .r3d format?

random-guest reopened this Jun 17, 2023
@marek-simonik
Owner

@random-guest Unfortunately, it is currently not possible to get a higher resolution of depth images (to the best of my knowledge, the depth maps have the highest possible resolution provided by Apple's APIs).

> Is there a way to save the live RGBD stream over Wi-Fi/USB to the .r3d format?

There is no support for saving live streams into .r3d.

@random-guest
Author

Thank you for the fast reply.

@random-guest
Author

Do you recommend a way to decrease the resolution of the recorded RGB photos to match that of the depth maps, directly from the mobile app?

random-guest reopened this Jun 19, 2023
@wing-kit

@random-guest Streaming from ARKit on iOS is limited to 256x192. However, instead of ARKit depth streaming, it is feasible to get a higher-resolution depth image from an AVFoundation AVCaptureDevice.

@wing-kit

BTW, if you use Python, it is definitely feasible and straightforward to implement what you mentioned (a sketch follows this list):

- Resize images using OpenCV
- Save the RGBD stream using the record3d Python client and OpenCV
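A minimal sketch of that second point, using the record3d Python library together with OpenCV to save a live stream into an RGB folder and a depth folder. The callback-style API follows the record3d demo script; the folder names, and resizing the RGB frames down to the depth resolution, are choices made here, not requirements:

```python
import os
from threading import Event

import cv2
import numpy as np
from record3d import Record3DStream

os.makedirs("rgb", exist_ok=True)
os.makedirs("depth", exist_ok=True)

new_frame = Event()
session = Record3DStream()
session.on_new_frame = new_frame.set                  # fired for every received frame
session.on_stream_stopped = lambda: print("stream stopped")

devices = Record3DStream.get_connected_devices()      # device must be reachable (e.g. via USB)
session.connect(devices[0])

i = 0
while True:
    new_frame.wait()
    new_frame.clear()

    rgb = session.get_rgb_frame()                     # H x W x 3 image, RGB channel order
    depth = session.get_depth_frame()                 # float32 depth map in meters

    # Optionally downscale RGB to the depth resolution; cv2.resize takes (width, height).
    rgb_small = cv2.resize(rgb, (depth.shape[1], depth.shape[0]), interpolation=cv2.INTER_AREA)

    cv2.imwrite(f"rgb/{i}.jpg", cv2.cvtColor(rgb_small, cv2.COLOR_RGB2BGR))
    np.save(f"depth/{i}.npy", depth)                  # keep float32 depth lossless
    i += 1
```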

@marek-simonik
Owner

The Record3D app does not support decreasing the resolution of the RGB images to match the size of the depth images (I think that having a 192x256 px RGB image would be useless). I agree with what @wing-kit wrote; i.e., it would be best to use OpenCV's cv2.resize() to resize the RGB images for your specific purposes.
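For frames that have already been exported, a minimal batch version of that cv2.resize() suggestion (folder names are hypothetical; note that cv2.resize expects the target size as (width, height)):

```python
import glob
import os

import cv2

rgb_dir, out_dir = "rgb", "rgb_192x256"   # hypothetical folder names
os.makedirs(out_dir, exist_ok=True)

for path in glob.glob(os.path.join(rgb_dir, "*.jpg")):
    img = cv2.imread(path)
    # Downscale 720x960 (or 1440x1920) RGB frames to the 192x256 depth resolution.
    small = cv2.resize(img, (192, 256), interpolation=cv2.INTER_AREA)
    cv2.imwrite(os.path.join(out_dir, os.path.basename(path)), small)
```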

It's true that AVFoundation provides a higher LiDAR resolution (320x240 px at max), but if I'm not mistaken, AVFoundation does not provide camera pose estimates like ARKit does, which is why Record3D does not use AVFoundation.

@wing-kit

True. AVFoundation does not give pose, and the FPS is much lower.
