Adds function to define camera configs through intrinsic matrix #617

Merged 31 commits on Aug 29, 2024
Commits
8c671ba
add intrinsic matrix to initial config
pascal-roth Jul 2, 2024
29c6be4
some fixes
pascal-roth Jul 2, 2024
5e78e20
add tests for new initialization, change to classmethod
pascal-roth Jul 2, 2024
cf3e9f9
adjust changelog
pascal-roth Jul 2, 2024
3e7f833
Apply suggestions from code review
pascal-roth Jul 3, 2024
b369236
add more docs
pascal-roth Jul 3, 2024
9e99d0e
Merge branch 'feature/cam-init-intrinsic-matrix' of github.com:isaac-…
pascal-roth Jul 3, 2024
d5e729f
fixes
pascal-roth Jul 3, 2024
4801278
allow setting of vertical aperture (bugfix as never adjusted for usd …
pascal-roth Jul 3, 2024
71a37b8
formatter
pascal-roth Jul 3, 2024
5ead7d4
Merge branch 'main' into feature/cam-init-intrinsic-matrix
pascal-roth Jul 28, 2024
ffea731
Merge branch 'main' into feature/cam-init-intrinsic-matrix
pascal-roth Aug 15, 2024
06dc697
fix
pascal-roth Aug 16, 2024
20f6b48
formatter
pascal-roth Aug 16, 2024
8760a0f
Merge branch 'main' into feature/cam-init-intrinsic-matrix
pascal-roth Aug 16, 2024
603567e
changlog
pascal-roth Aug 16, 2024
076e767
remove stereolabs
pascal-roth Aug 16, 2024
9ea0c7b
fix
pascal-roth Aug 16, 2024
68ef701
remove change
pascal-roth Aug 16, 2024
1a6ade2
carb warn
pascal-roth Aug 17, 2024
fe86869
add intrinsics test to tiled camera
pascal-roth Aug 28, 2024
559466c
Merge branch 'feature/cam-init-intrinsic-matrix' of github.com:isaac-…
pascal-roth Aug 28, 2024
c73b621
fix spawn test
pascal-roth Aug 28, 2024
db7bf41
Merge branch 'main' into feature/cam-init-intrinsic-matrix
pascal-roth Aug 28, 2024
71caf9c
fixes a bunch of nits
Mayankm96 Aug 29, 2024
0c4cd18
Merge branch 'main' into feature/cam-init-intrinsic-matrix
pascal-roth Aug 29, 2024
8b2a588
fixes doc build complaints
Mayankm96 Aug 29, 2024
dd1f1c4
Merge branch 'feature/cam-init-intrinsic-matrix' of github.com:isaac-…
Mayankm96 Aug 29, 2024
e212ca6
minor doc thing
Mayankm96 Aug 29, 2024
16a30ce
test intentionally fail
pascal-roth Aug 29, 2024
0e960e3
Merge branch 'feature/cam-init-intrinsic-matrix' of github.com:isaac-…
pascal-roth Aug 29, 2024
2 changes: 1 addition & 1 deletion source/extensions/omni.isaac.lab/config/extension.toml
@@ -1,7 +1,7 @@
[package]

# Note: Semantic Versioning is used: https://semver.org/
version = "0.22.2"
version = "0.22.3"

# Description
title = "Isaac Lab framework for Robot Learning"
21 changes: 21 additions & 0 deletions source/extensions/omni.isaac.lab/docs/CHANGELOG.rst
@@ -1,6 +1,27 @@
Changelog
---------

0.22.3 (2024-08-28)
~~~~~~~~~~~~~~~~~~~

Added
^^^^^

* Added a class method to initialize camera configurations with an intrinsic matrix in the
:class:`omni.isaac.lab.sim.spawner.sensors.PinholeCameraCfg` and
:class:`omni.isaac.lab.sensors.ray_caster.patterns_cfg.PinholeCameraPatternCfg` classes.

Fixed
^^^^^

* Fixed the ray direction in :func:`omni.isaac.lab.sensors.ray_caster.patterns.patterns.pinhole_camera_pattern` to
point to the center of the pixel instead of the top-left corner.
* Fixed the clipping of the "distance_to_image_plane" depth image obtained using the
:class:`omni.isaac.lab.sensors.ray_caster.RayCasterCamera` class. Previously, the distances were clipped
during ray casting, before the depth image was generated. Now, the clipping is applied after the depth image
is generated, which matches the behavior of the USD camera.


0.22.2 (2024-08-21)
~~~~~~~~~~~~~~~~~~~

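
As a hedged usage sketch of the new class method (import path taken from the changelog entry above; values are illustrative and not part of this PR):

# minimal sketch, assuming the module path given in the changelog entry above
from omni.isaac.lab.sensors.ray_caster.patterns_cfg import PinholeCameraPatternCfg

# intrinsic matrix in row-major order: [f_x, 0, c_x, 0, f_y, c_y, 0, 0, 1]
intrinsics = [500.0, 0.0, 320.0, 0.0, 500.0, 240.0, 0.0, 0.0, 1.0]
pattern_cfg = PinholeCameraPatternCfg.from_intrinsic_matrix(
    intrinsic_matrix=intrinsics, width=640, height=480, focal_length=24.0
)
# the equivalent class method on PinholeCameraCfg is assumed to accept the same arguments
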
@@ -5,7 +5,6 @@

from __future__ import annotations

import math
import numpy as np
import re
import torch
@@ -117,6 +116,9 @@ def __init__(self, cfg: CameraCfg):
rot = torch.tensor(self.cfg.offset.rot, dtype=torch.float32).unsqueeze(0)
rot_offset = convert_orientation_convention(rot, origin=self.cfg.offset.convention, target="opengl")
rot_offset = rot_offset.squeeze(0).numpy()
# ensure the vertical aperture is set, otherwise fall back to the default that preserves square pixels
if self.cfg.spawn.vertical_aperture is None:
self.cfg.spawn.vertical_aperture = self.cfg.spawn.horizontal_aperture * self.cfg.height / self.cfg.width
# spawn the asset
self.cfg.spawn.func(
self.cfg.prim_path, self.cfg.spawn, translation=self.cfg.offset.pos, orientation=rot_offset
@@ -243,6 +245,12 @@ def set_intrinsic_matrices(
"horizontal_aperture_offset": (c_x - width / 2) / f_x,
"vertical_aperture_offset": (c_y - height / 2) / f_y,
}

# TODO: Adjust to handle aperture offsets once supported by omniverse
# Internal ticket from rendering team: OM-42611
if params["horizontal_aperture_offset"] > 1e-4 or params["vertical_aperture_offset"] > 1e-4:
carb.log_warn("Camera aperture offsets are not supported by Omniverse. These parameters are ignored.")

# change data for corresponding camera index
sensor_prim = self._sensor_prims[i]
# set parameters for camera
@@ -541,17 +549,21 @@ def _update_intrinsic_matrices(self, env_ids: Sequence[int]):
# get camera parameters
focal_length = sensor_prim.GetFocalLengthAttr().Get()
horiz_aperture = sensor_prim.GetHorizontalApertureAttr().Get()
vert_aperture = sensor_prim.GetVerticalApertureAttr().Get()
horiz_aperture_offset = sensor_prim.GetHorizontalApertureOffsetAttr().Get()
vert_aperture_offset = sensor_prim.GetVerticalApertureOffsetAttr().Get()
# get viewport parameters
height, width = self.image_shape
# calculate the field of view
fov = 2 * math.atan(horiz_aperture / (2 * focal_length))
# calculate the focal length in pixels
focal_px = width * 0.5 / math.tan(fov / 2)
# extract intrinsic parameters
f_x = (width * focal_length) / horiz_aperture
f_y = (height * focal_length) / vert_aperture
c_x = width * 0.5 + horiz_aperture_offset * f_x
c_y = height * 0.5 + vert_aperture_offset * f_y
# create intrinsic matrix for depth linear
self._data.intrinsic_matrices[i, 0, 0] = focal_px
self._data.intrinsic_matrices[i, 0, 2] = width * 0.5
self._data.intrinsic_matrices[i, 1, 1] = focal_px
self._data.intrinsic_matrices[i, 1, 2] = height * 0.5
self._data.intrinsic_matrices[i, 0, 0] = f_x
self._data.intrinsic_matrices[i, 0, 2] = c_x
self._data.intrinsic_matrices[i, 1, 1] = f_y
self._data.intrinsic_matrices[i, 1, 2] = c_y
self._data.intrinsic_matrices[i, 2, 2] = 1

def _update_poses(self, env_ids: Sequence[int]):
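
For reference, a small worked example of the updated intrinsics computation above, using the default focal length and horizontal aperture of the pinhole camera pattern config shown later in this diff (a sketch, not part of the change):

focal_length = 24.0            # default focal length
horizontal_aperture = 20.955   # default horizontal aperture
width, height = 640, 480
horizontal_aperture_offset = vertical_aperture_offset = 0.0

# fallback when no vertical aperture is given: preserve square pixels
vertical_aperture = horizontal_aperture * height / width   # 15.71625

f_x = (width * focal_length) / horizontal_aperture         # ~733.0
f_y = (height * focal_length) / vertical_aperture          # ~733.0 (square pixels)
c_x = width * 0.5 + horizontal_aperture_offset * f_x       # 320.0
c_y = height * 0.5 + vertical_aperture_offset * f_y        # 240.0
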
@@ -86,6 +86,8 @@ def pinhole_camera_pattern(
pixels = torch.vstack(list(map(torch.ravel, grid))).T
# convert to homogeneous coordinate system
pixels = torch.hstack([pixels, torch.ones((len(pixels), 1), device=device)])
# move each pixel coordinate to the center of the pixel
pixels += torch.tensor([[0.5, 0.5, 0]], device=device)
# get pixel coordinates in camera frame
pix_in_cam_frame = torch.matmul(torch.inverse(intrinsic_matrices), pixels.T)

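
To illustrate the half-pixel shift above, a minimal sketch (the intrinsic matrix values are assumed): without the offset, the ray through pixel (0, 0) passes through the pixel's top-left corner; with it, through the pixel's center.

import torch

# example intrinsics: f_x = f_y = 100, principal point at (2, 2)
K = torch.tensor([[100.0, 0.0, 2.0], [0.0, 100.0, 2.0], [0.0, 0.0, 1.0]])
pixel = torch.tensor([0.0, 0.0, 1.0])  # homogeneous pixel coordinate

corner_dir = torch.inverse(K) @ pixel                                     # ray through the top-left corner
center_dir = torch.inverse(K) @ (pixel + torch.tensor([0.5, 0.5, 0.0]))   # ray through the pixel center
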
@@ -79,6 +79,7 @@ class PinholeCameraPatternCfg(PatternBaseCfg):

Longer lens lengths give a narrower FOV; shorter lens lengths give a wider FOV.
"""

horizontal_aperture: float = 20.955
"""Horizontal aperture (in mm). Defaults to 20.955mm.

@@ -87,15 +88,84 @@ class PinholeCameraPatternCfg(PatternBaseCfg):
Note:
The default value is the horizontal aperture of a 35 mm spherical projector.
"""

vertical_aperture: float | None = None
"""Vertical aperture (in mm). Defaults to None.

Emulates sensor/film height on a camera. If None, then the vertical aperture is calculated based on the
horizontal aperture and the aspect ratio of the image to maintain square pixels. In this case, the vertical
aperture is calculated as:

.. math::
\text{vertical aperture} = \text{horizontal aperture} \times \frac{\text{height}}{\text{width}}
"""

horizontal_aperture_offset: float = 0.0
"""Offsets Resolution/Film gate horizontally. Defaults to 0.0."""

vertical_aperture_offset: float = 0.0
"""Offsets Resolution/Film gate vertically. Defaults to 0.0."""

width: int = MISSING
"""Width of the image (in pixels)."""

height: int = MISSING
"""Height of the image (in pixels)."""

@classmethod
def from_intrinsic_matrix(
cls,
intrinsic_matrix: list[float],
width: int,
height: int,
focal_length: float = 24.0,
) -> PinholeCameraPatternCfg:
r"""Create a :class:`PinholeCameraPatternCfg` class instance from an intrinsic matrix.

The intrinsic matrix is a 3x3 matrix that maps points in the 3D camera frame to 2D pixel coordinates in
the image plane. The matrix is defined as:

.. math::
I_{cam} = \begin{bmatrix}
f_x & 0 & c_x \\
0 & f_y & c_y \\
0 & 0 & 1
\end{bmatrix},

where :math:`f_x` and :math:`f_y` are the focal lengths along the x and y directions, while :math:`c_x` and :math:`c_y` are the
principal point offsets along the x and y directions respectively.

Args:
intrinsic_matrix: Intrinsic matrix of the camera in row-major format.
The matrix is defined as [f_x, 0, c_x, 0, f_y, c_y, 0, 0, 1]. Shape is (9,).
width: Width of the image (in pixels).
height: Height of the image (in pixels).
focal_length: Focal length of the camera (in cm). Defaults to 24.0 cm.

Returns:
An instance of the :class:`PinholeCameraPatternCfg` class.
"""
# extract parameters from matrix
f_x = intrinsic_matrix[0]
c_x = intrinsic_matrix[2]
f_y = intrinsic_matrix[4]
c_y = intrinsic_matrix[5]
# resolve parameters for usd camera
horizontal_aperture = width * focal_length / f_x
vertical_aperture = height * focal_length / f_y
horizontal_aperture_offset = (c_x - width / 2) / f_x
vertical_aperture_offset = (c_y - height / 2) / f_y

return cls(
focal_length=focal_length,
horizontal_aperture=horizontal_aperture,
vertical_aperture=vertical_aperture,
horizontal_aperture_offset=horizontal_aperture_offset,
vertical_aperture_offset=vertical_aperture_offset,
width=width,
height=height,
)


@configclass
class BpearlPatternCfg(PatternBaseCfg):
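
A minimal round-trip sketch of the class method above (import path assumed from the changelog; values are illustrative): the apertures derived from an intrinsic matrix reproduce the input focal lengths in pixels when plugged back into the same formulas.

from omni.isaac.lab.sensors.ray_caster.patterns_cfg import PinholeCameraPatternCfg  # path assumed

# intrinsic matrix in row-major order: [f_x, 0, c_x, 0, f_y, c_y, 0, 0, 1]
K = [600.0, 0.0, 320.0, 0.0, 600.0, 240.0, 0.0, 0.0, 1.0]
cfg = PinholeCameraPatternCfg.from_intrinsic_matrix(K, width=640, height=480, focal_length=24.0)

# horizontal_aperture = 640 * 24 / 600 = 25.6, vertical_aperture = 480 * 24 / 600 = 19.2
f_x = cfg.width * cfg.focal_length / cfg.horizontal_aperture   # 600.0
f_y = cfg.height * cfg.focal_length / cfg.vertical_aperture    # 600.0
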
@@ -264,14 +264,19 @@ def _update_buffers_impl(self, env_ids: Sequence[int]):
ray_starts_w = math_utils.quat_apply(quat_w.repeat(1, self.num_rays), self.ray_starts[env_ids])
ray_starts_w += pos_w.unsqueeze(1)
ray_directions_w = math_utils.quat_apply(quat_w.repeat(1, self.num_rays), self.ray_directions[env_ids])

# ray cast and store the hits
# note: we set the max distance to 1e6 during the ray-casting. This is because we clip the distance
# to the image plane and the distance to the camera to the maximum distance afterwards, in order to
# match the USD camera behavior.

# TODO: Make ray-casting work for multiple meshes?
# necessary for regular dictionaries.
self.ray_hits_w, ray_depth, ray_normal, _ = raycast_mesh(
ray_starts_w,
ray_directions_w,
mesh=RayCasterCamera.meshes[self.cfg.mesh_prim_paths[0]],
max_dist=self.cfg.max_distance,
max_dist=1e6,
return_distance=any(
[name in self.cfg.data_types for name in ["distance_to_image_plane", "distance_to_camera"]]
),
@@ -286,9 +291,13 @@
(ray_depth[:, :, None] * ray_directions_w),
)
)[:, :, 0]
# apply the maximum distance after the transformation
distance_to_image_plane = torch.clip(distance_to_image_plane, max=self.cfg.max_distance)
self._data.output["distance_to_image_plane"][env_ids] = distance_to_image_plane.view(-1, *self.image_shape)
if "distance_to_camera" in self.cfg.data_types:
self._data.output["distance_to_camera"][env_ids] = ray_depth.view(-1, *self.image_shape)
self._data.output["distance_to_camera"][env_ids] = torch.clip(
ray_depth.view(-1, *self.image_shape), max=self.cfg.max_distance
)
if "normals" in self.cfg.data_types:
self._data.output["normals"][env_ids] = ray_normal.view(-1, *self.image_shape, 3)

@@ -346,17 +355,23 @@ def _compute_intrinsic_matrices(self):
"""Computes the intrinsic matrices for the camera based on the config provided."""
# get the sensor properties
pattern_cfg = self.cfg.pattern_cfg

# check if vertical aperture is provided
# if not, then it is auto-computed based on the aspect ratio to preserve square pixels
if pattern_cfg.vertical_aperture is None:
pattern_cfg.vertical_aperture = pattern_cfg.horizontal_aperture * pattern_cfg.height / pattern_cfg.width

# compute the intrinsic matrix
vertical_aperture = pattern_cfg.horizontal_aperture * pattern_cfg.height / pattern_cfg.width
f_x = pattern_cfg.width * pattern_cfg.focal_length / pattern_cfg.horizontal_aperture
f_y = pattern_cfg.height * pattern_cfg.focal_length / vertical_aperture
f_y = pattern_cfg.height * pattern_cfg.focal_length / pattern_cfg.vertical_aperture
c_x = pattern_cfg.horizontal_aperture_offset * f_x + pattern_cfg.width / 2
c_y = pattern_cfg.vertical_aperture_offset * f_y + pattern_cfg.height / 2
# allocate the intrinsic matrices
self._data.intrinsic_matrices[:, 0, 0] = f_x
self._data.intrinsic_matrices[:, 0, 2] = c_x
self._data.intrinsic_matrices[:, 1, 1] = f_y
self._data.intrinsic_matrices[:, 1, 2] = c_y

# save focal length
self._focal_length = pattern_cfg.focal_length

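
A toy illustration of why the maximum distance is now applied after converting the raw ray depth into a distance to the image plane (numbers assumed; mirrors the order of operations above):

max_distance = 10.0
ray_depth = 20.0    # true hit distance along an off-axis ray (distance to camera)
cos_theta = 0.8     # cosine of the angle between the ray and the optical axis

# previous behavior: clip during ray casting, then project -> min(20, 10) * 0.8 = 8.0
clipped_before_projection = min(ray_depth, max_distance) * cos_theta

# new behavior: project, then clip -> min(20 * 0.8, 10) = 10.0, matching the USD camera
clipped_after_projection = min(ray_depth * cos_theta, max_distance)
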
@@ -7,6 +7,7 @@

from typing import TYPE_CHECKING

import carb
import omni.isaac.core.utils.prims as prim_utils
import omni.kit.commands
from pxr import Sdf, Usd
@@ -99,9 +100,21 @@ def spawn_camera(
attribute_types = CUSTOM_PINHOLE_CAMERA_ATTRIBUTES
else:
attribute_types = CUSTOM_FISHEYE_CAMERA_ATTRIBUTES
# custom attributes in the config that are not USD Camera parameters
non_usd_cfg_param_names = ["func", "copy_from_source", "lock_camera", "visible", "semantic_tags"]

# TODO: Adjust to handle aperture offsets once supported by omniverse
# Internal ticket from rendering team: OM-42611
if cfg.horizontal_aperture_offset > 1e-4 or cfg.vertical_aperture_offset > 1e-4:
carb.log_warn("Camera aperture offsets are not supported by Omniverse. These parameters will be ignored.")

# custom attributes in the config that are not USD Camera parameters
non_usd_cfg_param_names = [
"func",
"copy_from_source",
"lock_camera",
"visible",
"semantic_tags",
"from_intrinsic_matrix",
]
# get camera prim
prim = prim_utils.get_prim_at_path(prim_path)
# create attributes for the fisheye camera model