Test coverage for different camera models #40

Open
jwnimmer-tri opened this issue May 16, 2023 · 2 comments
Labels
bug Something isn't working

Comments

@jwnimmer-tri
Contributor

Our current tests use a single "camera model" (width, height, center, fov, etc.). We should expand our test coverage to ensure that any model the user specifies ends up being correctly applied.

@SeanCurtis-TRI
Contributor

Some of the camera intrinsics have non-obvious mappings in Blender.

Non-uniform focal lengths

In Blender, we create the effect of non-uniform focal lengths using two mechanisms:

  • Defining a "baseline" focal length.
    • Blender's projection matrix doesn't allow for anisotropy. So, we configure the camera to have a single symmetric focal length.
  • Using pixel aspect ratio to introduce anisotropy.
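How those two mechanisms might combine can be sketched in pure Python (no bpy needed). The function name, the fy-based baseline, and the particular pixel-aspect convention are my own illustration, not this repo's actual code:

```python
import math

def blender_camera_settings(width, height, fx, fy):
    """Sketch: derive Blender-style camera settings from pinhole intrinsics.

    The baseline is a single symmetric (vertical) field of view taken from
    fy; anisotropy (fx != fy) is then expressed via the render pixel aspect
    ratio, since Blender's projection matrix doesn't allow anisotropy.
    """
    # Baseline symmetric focal length, expressed as a vertical FOV.
    angle_y = 2.0 * math.atan2(height / 2.0, fy)
    # One plausible convention: scale pixel width by fy / fx. (The exact
    # orientation/sign of this convention is an assumption for illustration.)
    pixel_aspect_x = fy / fx
    pixel_aspect_y = 1.0
    return {"angle_y": angle_y,
            "pixel_aspect_x": pixel_aspect_x,
            "pixel_aspect_y": pixel_aspect_y}

# Example: a 640x480 image with fy = 240 gives a 90-degree vertical FOV.
settings = blender_camera_settings(640, 480, fx=320.0, fy=240.0)
```

In a real Blender session, these values would be written to `camera.data.angle_y` and the render settings' `pixel_aspect_x` / `pixel_aspect_y`.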

Setting the baseline focal length

Currently, we explicitly set the vertical field of view. We converged on this via a bit of trial and error (as documented in this review conversation). Setting camera.data.angle_x = params.fov_x did not work.

Further investigation in Blender suggests that three other parameters can contribute here: sensor_width, sensor_height, and sensor_fit.

The fields of view, focal lengths, and image dimensions are all interconnected in Drake's camera parameters. In Blender, there are the image dimensions (a render setting), but there are also separate sensor dimensions (the so-called sensor_width and sensor_height). If the sensor aspect ratio doesn't match the camera's intrinsics, simple operations can lead to surprising outcomes.
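For reference, the standard pinhole relationships tying these quantities together (the helper names are mine; this is textbook geometry, not this repo's code):

```python
import math

def focal_from_fov(image_extent, fov):
    # f = extent / (2 * tan(fov / 2)), per the usual pinhole model.
    return image_extent / (2.0 * math.tan(fov / 2.0))

def fov_from_focal(image_extent, f):
    # The inverse relationship: fov = 2 * atan(extent / (2 * f)).
    return 2.0 * math.atan2(image_extent / 2.0, f)

# Example: a 640-pixel-wide image with a 90-degree horizontal FOV
# implies fx = 320 pixels.
fx = focal_from_fov(640, math.pi / 2)
```

Because width, height, fov_x, fov_y, fx, and fy are coupled this way, over-specifying any of them inconsistently (e.g., via mismatched sensor dimensions) leads to the surprises described above.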

The test we currently have works for a square image. However, prodding around inside Blender shows that the value of sensor_fit (combined with the ratio of the sensor dimensions) can significantly impact the final rendering, playing havoc with the baseline focal length. We don't actively do anything with the sensor dimensions, but we should.

Options:

  • We can assume that the camera created by the glTF import will have sensor_fit set to AUTO and sensor dimensions defined in such a way that we can always safely set angle_y.
  • We can explicitly set the sensor size "appropriately". I'm not quite sure what the "appropriate" value is -- although I have my suspicions. I suspect if sensor aspect matches image aspect ratio we're good. But I don't understand yet how the sensor dimensions feed into anything else (possibly nothing else).
    • I don't think we can programmatically set the fit (I haven't figured out where to find the bpy enumeration values).
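If the suspicion in the second option is right (the sensor aspect ratio should match the image aspect ratio), deriving an "appropriate" sensor size could look like this pure-Python sketch. Keeping sensor_width fixed and deriving sensor_height is my assumption for illustration:

```python
def matched_sensor_size(image_width, image_height, sensor_width=36.0):
    """Return (sensor_width, sensor_height) whose aspect ratio matches the
    image's, holding sensor_width fixed (36 mm is Blender's default)."""
    sensor_height = sensor_width * image_height / image_width
    return sensor_width, sensor_height

# Example: a 640x480 render with a 32 mm-wide sensor needs a 24 mm height.
w, h = matched_sensor_size(640, 480, sensor_width=32.0)
```

In Blender, the results would be assigned to `camera.data.sensor_width` and `camera.data.sensor_height`; whether this alone is sufficient is exactly the open question above.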

Adding anisotropy

The current pixel aspect ratio logic, while a bit counter-intuitive, seems robust and correct (assuming the baseline field of view is correct).


Blender fun and games:

I defined the following two functions in a Blender console session:

def cam(w, h, fov_x = None, fov_y = None):
    # `c` is the camera data block, bound below via
    # bpy.data.objects.get("Camera").data.
    c.sensor_width = w
    c.sensor_height = h
    if fov_x is not None:
        c.angle_x = fov_x
    elif fov_y is not None:
        c.angle_y = fov_y

and

def cam_fix(w, h, fov_x = None, fov_y = None):
    c.sensor_height = h
    c.sensor_width = w
    if fov_x is not None:
        c.sensor_fit = 'HORIZONTAL'
        c.angle_x = fov_x
    elif fov_y is not None:
        c.sensor_fit = 'VERTICAL'
        c.angle_y = fov_y

For a square image output aspect ratio, I executed the following (with the indicated results):

  c = bpy.data.objects.get("Camera").data
  cam(32, 32, pi / 2)       # Expected scene framing.
  cam(32, 32, None, pi / 2) # Expected scene framing.
  cam(32, 24, pi / 2)       # Expected scene framing.
  cam(32, 24, None, pi / 2) # Focal length decreased; scene appears to draw away.
  cam(24, 32, pi / 2)       # Expected scene framing.
  cam(24, 32, None, pi / 2) # Focal length increased; scene appears to draw closer.

  cam_fix(32, 32, pi / 2)       # Expected scene framing.
  cam_fix(32, 32, None, pi / 2) # Expected scene framing.
  cam_fix(32, 24, pi / 2)       # Expected scene framing.
  cam_fix(32, 24, None, pi / 2) # Expected scene framing.
  cam_fix(24, 32, pi / 2)       # Expected scene framing.
  cam_fix(24, 32, None, pi / 2) # Expected scene framing.

The result is different if the output image is rectangular. Simply declaring the sensor_fit is insufficient. Further investigation required.

But as far as this issue goes, we should make sure we test output images with various aspect ratios as well as anisotropic focal lengths.

@SeanCurtis-TRI
Contributor

We also need to investigate the sensor aspect ratio output in our glTF files (e.g., in test code). Where does the aspect ratio come from, and does it inform the sensor dimensions in Blender? Should we be doing something explicit about that on the Drake side?

@jwnimmer-tri jwnimmer-tri added the bug Something isn't working label May 23, 2023