
Efficient way to infer visibility of object endpoints #1153

Open
zadaninck opened this issue Oct 18, 2024 · 3 comments
Labels: first answer provided, question (Question, not yet a bug ;))

Comments


zadaninck commented Oct 18, 2024

Describe the issue

Hi,
I am using BlenderProc to simulate dropping 2000+ objects onto a flat surface, with a camera located above the surface. The objects are approximately rectangular cuboids.
After rendering the simulation I want to know, for each individual object, whether its endpoints are visible or occluded. The endpoints are defined as the outer 10% of the object along the length axis of the cuboid (an approximate drawing can be seen in the image below). Is it possible to perform this detection using BlenderProc, and if so, can it be done efficiently? I am already calculating the occlusion percentage by iterating over each vertex of each object and checking whether it is visible from the camera POV, but I'd also like to detect whether the endpoints are visible.

[image: approximate drawing of the cuboid endpoint regions]

Minimal code example

for obj in product_objs:
    # Already implemented: count visible vertices per object
    visible_vertices = 0
    for vert in obj.vertices:
        vert_world = local2world(vert)
        vert_camera = project_points(vert_world)
        visible_vertices += is_visible(vert_camera)

    occlusion_perc = visible_vertices / len(obj.vertices)

    # Would like to add:
    endpoints = get_endpoints(obj)

    # Then check whether each endpoint is fully visible
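The `get_endpoints` helper above does not exist yet; as a self-contained sketch (plain NumPy, assuming the vertices are available in the object's local frame and the cuboid's length axis is a known local axis), the two endpoint regions could be selected like this:

```python
import numpy as np

def endpoint_vertex_masks(local_verts, axis=0, fraction=0.1):
    """Split a cuboid-like vertex cloud into its two endpoint regions.

    local_verts: (N, 3) array of vertices in the object's local frame,
    where `axis` is the length axis of the cuboid.
    Returns two boolean masks selecting the vertices that fall into the
    outer `fraction` of the length extent on each side.
    """
    coords = np.asarray(local_verts)[:, axis]
    lo, hi = coords.min(), coords.max()
    length = hi - lo
    mask_lo = coords <= lo + fraction * length   # first endpoint region
    mask_hi = coords >= hi - fraction * length   # second endpoint region
    return mask_lo, mask_hi
```

An endpoint could then be reported as fully visible when every vertex selected by its mask passes the existing per-vertex visibility check.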

Files required to run the code

No response

Expected behavior

Detection of object endpoints

BlenderProc version

v2.7.1

@zadaninck zadaninck added the question Question, not yet a bug ;) label Oct 18, 2024
@march038

Hi @zadaninck ,

we are actually facing a similar question in our latest post in Issue #1150.
Verifying whether specific objects, or parts of them, are visible is definitely useful for many different applications!

cornerfarmer (Member) commented Oct 22, 2024

Hey @zadaninck and @march038,

there are multiple ways to do this:

  • If you only want to know whether an object is visible from the camera at all, you can use bproc.camera.visible_objects(cam2world_matrix) to get a list of all visible objects. However, this will not allow you to distinguish between object parts.
  • Alternatively, to check whether a given 3D point is visible from the camera, you can use the unproject/project methods:
bvh_tree = bproc.object.create_bvh_tree_multi_objects(objs)
# Project your 3D points into 2D pixel space
points2D = bproc.camera.project_points(points3D)
# Send rays through 2D points and get hit distance
hit_distances = bproc.camera.depth_at_points_via_raytracing(bvh_tree, points2D, return_dist=True)
# Compute distance of original 3D points from camera
point3D_distances = np.linalg.norm(points3D - bproc.camera.get_camera_pose()[None, :3,3], axis=-1)
# If hit_distance equals actual distance, then the 3D points are visible in the image
points_visible = np.abs(point3D_distances - hit_distances) < 1e-2

You might need to adjust the threshold in the last line. Also, each method supports a frame parameter to select the specific camera frame/pose you want to use.
Let me know if that works; maybe it makes sense to put this into an extra method.
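The distance comparison at the end of the snippet could be factored into a small reusable helper. A minimal sketch in plain NumPy, assuming `cam2world` is the 4x4 matrix returned by `bproc.camera.get_camera_pose()` and `hit_distances` comes from the ray-tracing call above (the function name is illustrative, not an existing BlenderProc API):

```python
import numpy as np

def points_visible_from_cam(points3D, cam2world, hit_distances, tol=1e-2):
    """Compare ray-hit distances against true camera-to-point distances.

    A point counts as visible if the first surface hit along its viewing
    ray lies at (approximately) the same distance as the point itself,
    i.e. nothing occludes it.
    """
    cam_pos = np.asarray(cam2world)[:3, 3]  # camera location in world space
    true_dist = np.linalg.norm(np.asarray(points3D) - cam_pos, axis=-1)
    return np.abs(true_dist - np.asarray(hit_distances)) < tol
```

The `tol` parameter plays the role of the hard-coded 1e-2 threshold above and can be tuned per scene scale.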

@march038

Hi @cornerfarmer ,

thank you for your response! Personally we think it would be a good idea to put this into an extra method.

It would make sense to add functionality so that it can be applied either to points or to objects. For objects, as you proposed, there could be a threshold parameter so the user can decide whether the whole object needs to be visible or only, e.g., 70% of it for the object to count as visible.

I looked for a bpy functionality to sample e.g. 1000 points on a mesh and then run the script you gave above on these points, but couldn't find one, so I'm open to your ideas.
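One self-contained option for that sampling step, independent of bpy, is area-weighted barycentric sampling over the mesh triangles in plain NumPy (a sketch; `sample_points_on_mesh` is an illustrative name, not an existing bpy or BlenderProc function):

```python
import numpy as np

def sample_points_on_mesh(vertices, faces, n_points, rng=None):
    """Uniformly sample points on a triangle mesh (area-weighted).

    vertices: (V, 3) float array; faces: (F, 3) int array of triangle
    vertex indices. Triangles are picked proportionally to their area,
    then uniform barycentric coordinates are drawn inside each one.
    """
    rng = np.random.default_rng() if rng is None else rng
    v = np.asarray(vertices, dtype=float)
    f = np.asarray(faces)
    a, b, c = v[f[:, 0]], v[f[:, 1]], v[f[:, 2]]
    # Triangle areas via the cross product
    areas = 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)
    tri = rng.choice(len(f), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates; fold r1 + r2 > 1 back into the triangle
    r1, r2 = rng.random(n_points), rng.random(n_points)
    flip = r1 + r2 > 1.0
    r1[flip], r2[flip] = 1.0 - r1[flip], 1.0 - r2[flip]
    return a[tri] + r1[:, None] * (b[tri] - a[tri]) + r2[:, None] * (c[tri] - a[tri])
```

The sampled points could then be fed directly into the project/ray-tracing snippet above in place of `points3D`.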
