
Conversation

@vsnever (Contributor) commented Dec 6, 2019

This adds a new 2D observer called FovCamera that launches rays from the observer's origin point over a specified field of view in spherical coordinates. Each pixel of the final image represents a solid angle of collection inside an azimuth-altitude rectangle.
When developing a new diagnostic, before selecting lines of sight, setting up the optics, etc., it may be useful to obtain an image of the field of view of that diagnostic as a function of azimuthal and altitudinal angles. VectorCamera can be used for that purpose; however, I think a dedicated, simpler class is better.
I propose it for Raysect, but if you find it more suitable for Cherab, then I'm not against moving it to Cherab.
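For context, a minimal sketch of how per-pixel ray directions over such an azimuth-altitude field of view can be generated. This is illustrative only, not the PR code: fov_directions and its parameters are hypothetical names, and the convention used (+z view direction, +y up, altitude measured from the equatorial plane) is the one discussed later in this thread.

    import numpy as np

    def fov_directions(nx, ny, fov_azimuth, fov_altitude):
        # Pixel-centre azimuth/altitude angles in radians, symmetric about (0, 0).
        phi = np.deg2rad(((np.arange(nx) + 0.5) / nx - 0.5) * fov_azimuth)
        theta = np.deg2rad(((np.arange(ny) + 0.5) / ny - 0.5) * fov_altitude)
        phi, theta = np.meshgrid(phi, theta, indexing='ij')
        # Horizontal-coordinate convention (see the discussion below): altitude is
        # measured from the equatorial plane, azimuth about the +y ("up") axis,
        # and (0, 0) is aligned with the +z view direction.
        return np.stack([np.sin(phi) * np.cos(theta),   # x
                         np.sin(theta),                 # y (up)
                         np.cos(phi) * np.cos(theta)],  # z (view direction)
                        axis=-1)                        # shape (nx, ny, 3), unit vectors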

@mattngc (Member) previously approved these changes Dec 30, 2019 and left a comment:

Thanks Vlad for this contribution. Looks good to me. I've done some testing too and it works nicely in my use cases. I'd say it's ready to merge as is. I'll just leave it up another two days in case one of the others wants to comment.

@mattngc self-assigned this Dec 30, 2019
@mattngc added this to the Release v0.7 milestone Dec 30, 2019
@CnlPepper (Member) commented Dec 30, 2019

I was going to add a camera called LightProbe that performs even sampling over a sphere, with optional range restriction and with configurable mapping to a rectangle. Equi-angle sampling would be one of the possible mapping options, as would equi-area. The functionality of the FOVCamera would end up being a subset of the LightProbe. As such I'd prefer we don't merge this unless it is generalised further... i.e. implementing a full light probe.

The mapping would be a pluggable mapping class. The angle restriction would be centred around the +ve z-axis as per other cameras.

@CnlPepper (Member) commented:

@vsnever could I make a request that new features are discussed via the issue tracker before implementation and a merge request? You are producing some great code, however receiving merge requests out of the blue containing new features puts us in a difficult position when those requests clash with planned functionality. To ensure that neither your time nor ours is wasted, and that there are no hard feelings, we would prefer it if an issue is raised proposing a feature before any merge requests are sent. We can then decide on the approach before generating code.

cdef:
    np.ndarray solid_angle1d

# Per-row solid angle: delta_phi * (sin(theta_top) - sin(theta_bottom)),
# with angles converted from degrees to radians.
solid_angle1d = (pi / 180.) * self.azimuth_delta * (np.sin((pi / 180.) * (self._altitude + 0.5 * self.altitude_delta)) -
                                                    np.sin((pi / 180.) * (self._altitude - 0.5 * self.altitude_delta)))
@CnlPepper (Member) commented Dec 30, 2019:

This doesn't look right to me. d(SA) = sin(theta) d(theta) d(phi). Then shouldn't the surface integral over an element spanning [theta0, theta1] and [phi0, phi1] include a cos term?

@vsnever (Contributor, Author) replied:

Here, the altitude angle is measured from the equatorial plane, not from the pole (the z-axis also lies in the equatorial plane, which is easier when dealing with a field of view), so d(SA) = cos(theta) d(theta) d(phi).

@CnlPepper (Member) commented Dec 30, 2019:

Not sure I followed the description. The Raysect standard is for the camera to look along the +ve z-axis, with the "up" direction being the +ve y-axis. The altitude here should be the equivalent of the pitch in math.transform.rotate(), while the azimuth should be the yaw, with (0, 0) degrees being aligned with the z-axis. Is this what you mean?

@CnlPepper (Member) commented Dec 30, 2019:

If it is, then it should be cos(theta) d(theta) d(phi) as you say, since the circumference decreases as the altitude angle increases.

@vsnever (Contributor, Author) replied:

The camera looks along the +ve z-axis and the "up" direction is the +ve y-axis, just like in all other Raysect cameras. However, the altitude and azimuth angles are calculated as in the horizontal coordinate system, so the z-axis points not at the zenith but at the north point (see the scheme). For a FoV with angles of view (phi, theta), the azimuth angle goes from -phi/2 to phi/2 and the altitude angle goes from -theta/2 to theta/2. The total solid angle observed is 2 * phi * sin(theta/2).

@CnlPepper (Member) replied:

Ah we are describing the same thing, great.
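As a quick numerical check of this (illustrative Python sketch with assumed FoV values, not PR code): integrating d(SA) = cos(theta) d(theta) d(phi) across each altitude row and summing recovers the total 2 * phi * sin(theta/2) quoted above.

    import numpy as np

    fov_phi, fov_theta = 90.0, 40.0   # field of view in degrees (illustrative values)
    ny = 64                           # number of altitude rows

    # Pixel edges in altitude, measured from the equatorial plane.
    edges = np.deg2rad(np.linspace(-0.5 * fov_theta, 0.5 * fov_theta, ny + 1))
    delta_phi = np.deg2rad(fov_phi)   # full azimuth span, shared by every row

    # Integrating d(SA) = cos(theta) d(theta) d(phi) across each altitude row:
    row_solid_angles = delta_phi * (np.sin(edges[1:]) - np.sin(edges[:-1]))

    total = row_solid_angles.sum()
    expected = 2 * np.deg2rad(fov_phi) * np.sin(np.deg2rad(fov_theta) / 2)
    assert np.isclose(total, expected)   # 2 * phi * sin(theta / 2)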

        return rays

    cpdef double _pixel_sensitivity(self, int x, int y):
        return self._sensitivity
@CnlPepper (Member) commented:

There is a finite solid angle that varies per pixel. The solid angle should be included in this calculation; the sensitivity then becomes an effective area. If a power pipeline is used (which isn't a great idea), everything should then be correctly normalised.

@CnlPepper (Member) replied:

Though we do ignore this for the pinhole camera, so it's best to follow the same pattern, i.e. assume the sampling is being done through an infinitesimal pinhole.
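For reference, the weighting that was considered here (and then set aside in favour of the pinhole convention) would look roughly like this. A sketch only, with hypothetical standalone names; the solid angle follows the per-row formula from the earlier snippet:

    import numpy as np

    def pixel_sensitivity(sensitivity, azimuth_delta, altitude, altitude_delta):
        # Fold the per-pixel solid angle into the sensitivity so a power pipeline
        # normalises correctly: delta_phi * (sin(theta_top) - sin(theta_bottom)).
        d2r = np.pi / 180.0
        solid_angle = d2r * azimuth_delta * (np.sin(d2r * (altitude + 0.5 * altitude_delta)) -
                                             np.sin(d2r * (altitude - 0.5 * altitude_delta)))
        return sensitivity * solid_angle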

@vsnever (Contributor, Author) commented Dec 30, 2019

> @vsnever could I make a request that new features are discussed via the issue tracker before implementation and a merge request?

Yes, I agree, I should have opened a discussion in the issue tracker before adding any new code. I'm sorry I didn't do that. The only reason I started a pull request without opening a discussion first is that for me it wasn't really new code; I created it back in the summer as part of the ITER synthetic diagnostics, along with some other code I proposed earlier for Cherab.

@vsnever (Contributor, Author) commented Dec 30, 2019

> I was going to add a camera called LightProbe that performs even sampling over a sphere, with optional range restriction and with configurable mapping to a rectangle.

I’ll try to implement a LightProbe camera class and a couple of mapping classes for simple cases (e.g. equirectangular projection and cylindrical equal-area projection).
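Sketches of those two mappings, from spherical angles to normalised image coordinates (u, v) in [0, 1] x [0, 1]. Illustrative Python, not the eventual mapping classes; function names are hypothetical, phi is azimuth in [-pi, pi] and theta is altitude in [-pi/2, pi/2]:

    import numpy as np

    def equirectangular(phi, theta):
        # Equi-angle mapping: both axes linear in angle.
        return 0.5 + phi / (2 * np.pi), 0.5 + theta / np.pi

    def cylindrical_equal_area(phi, theta):
        # Lambert cylindrical equal-area: v is linear in sin(theta), so equal
        # image areas subtend equal solid angles.
        return 0.5 + phi / (2 * np.pi), 0.5 * (1 + np.sin(theta))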

@CnlPepper (Member) commented:

Excellent, thanks. Those are exactly the projections I meant.

@CnlPepper dismissed mattngc’s stale review December 30, 2019 20:51

Change of scope.

@mattngc assigned CnlPepper and unassigned mattngc Jan 1, 2020
@mattngc (Member) commented Jan 1, 2020

Like where this is going. Alex has a clearer vision for this feature, so I'm handing over reviewership.

@CnlPepper removed this from the Release v0.7 milestone Nov 7, 2020