This repository has been archived by the owner on Aug 9, 2024. It is now read-only.
Up to now, most of the data preprocessing methods are for 2D images or tabular dataframes, but my dataset consists of 3D images, so I have to do the preprocessing outside Keras. Another problem is that my dataset is too large to fit in memory, so I use an .h5 file to load the data batch by batch. As a result, I can't apply sklearn's preprocessing, such as standardization or normalization. Are you planning to add more APIs for 3D data, for example 3D data augmentation?
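As a workaround for the sklearn limitation, a minimal sketch of batch-wise standardization over an .h5 file: one pass to accumulate global statistics, then a second pass yielding standardized batches. The file path and the dataset name `X` are hypothetical, and this assumes `h5py` is installed.

```python
import numpy as np
import h5py

def standardize_h5(path, dataset="X", batch_size=32):
    """Two-pass, batch-wise standardization of a dataset too large for memory.

    Pass 1 accumulates the sum and sum of squares batch by batch to get a
    global mean and std; pass 2 yields standardized batches.
    """
    with h5py.File(path, "r") as f:
        data = f[dataset]
        n = data.shape[0]
        total = 0.0
        total_sq = 0.0
        count = 0
        for i in range(0, n, batch_size):
            batch = data[i:i + batch_size].astype(np.float64)
            total += batch.sum()
            total_sq += np.square(batch).sum()
            count += batch.size
        mean = total / count
        std = np.sqrt(total_sq / count - mean ** 2)
        for i in range(0, n, batch_size):
            yield (data[i:i + batch_size] - mean) / std
```

The standardized batches can be fed straight into `model.fit` via a generator, so the full volume never has to be resident in memory.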
Can we specify this? Sometimes 3D data augmentation also means synthetic datasets, i.e. a 3D dataset where you need to augment inside a minimal 3D engine (OpenGL, Vulkan) over camera, light, and object parameters.
I don't think we currently have a minimal scenegraph to drive.
In that case, would a Keras interface over https://github.com/tensorflow/graphics be a plausible roadmap item?
@bhack
My data format is (nb_channels, x, y, z). They are not real images; each mesh node carries a value, something like a density cloud. So I want to do data augmentation similar to 2D image data, borrowing the concepts of 2D image classification for my own project.
Right now I rotate my original data several times and save the results, then combine those synthetic datasets with the original one into a single training dataset. If Keras could do in-place 3D data augmentation like the 2D ImageDataGenerator(), it would reduce memory usage and save a lot of preprocessing time.
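The rotate-and-save step above can instead be done on the fly, so no rotated copies ever hit disk. A minimal numpy-only sketch, assuming cubic spatial dimensions so that 90-degree rotations preserve shape (the function names are illustrative, not a Keras API):

```python
import numpy as np

def random_rotate_3d(volume, rng):
    """Rotate a (channels, x, y, z) volume by a random multiple of 90 degrees
    about a randomly chosen pair of spatial axes (axes 1-3)."""
    axis_pairs = [(1, 2), (1, 3), (2, 3)]
    axes = axis_pairs[rng.integers(len(axis_pairs))]
    k = int(rng.integers(4))  # number of quarter turns
    return np.rot90(volume, k=k, axes=axes)

def augmenting_generator(batches, seed=0):
    """Wrap a batch iterator (e.g. batches read from an .h5 file), rotating
    each sample at training time instead of storing rotated copies."""
    rng = np.random.default_rng(seed)
    for batch in batches:
        yield np.stack([random_rotate_3d(v, rng) for v in batch])
```

For arbitrary angles rather than quarter turns, `scipy.ndimage.rotate` with `reshape=False` could replace `np.rot90`, at the cost of interpolation.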
Please let me know if I didn't explain my question clearly. Thanks.