New field-of-view observer #337
base: development
Conversation
mattngc left a comment
Thanks Vlad for this contribution. Looks good to me. I've done some testing too and it works nicely in my use cases. I'd say it's ready to merge as is. I'll just leave it up another 2 days in case one of the others wants to comment.
I was going to add a camera called LightProbe that performs even sampling over a sphere, with optional range restriction and with a configurable mapping to a rectangle. Equi-angle sampling would be one of the possible mapping options, as would equi-area. The functionality of the FOVCamera would end up being a subset of the LightProbe. As such, I'd prefer we don't merge this unless it is generalised further, i.e. implementing a full light probe. The mapping would be a pluggable mapping class. The angle restriction would be centred around the +ve Z-axis, as per the other cameras.
@vsnever could I make a request that new features are discussed via the issue tracker before implementation and a merge request. You are producing some great code, however receiving merge requests out of the blue containing new features puts us in a difficult position when those requests clash with planned functionality. To ensure neither your time nor ours is wasted, and that there are no hard feelings, we would prefer it if an issue were raised proposing a feature before any merge requests are sent. We can then decide on the approach before generating code.
```cython
cdef:
    np.ndarray solid_angle1d

solid_angle1d = (pi / 180.) * self.azimuth_delta * (np.sin((pi / 180.) * (self._altitude + 0.5 * self.altitude_delta)) -
                                                    np.sin((pi / 180.) * (self._altitude - 0.5 * self.altitude_delta)))
```
This doesn't look right to me. d(SA) = sin(theta) d(theta) d(phi), so shouldn't the surface integral over an element spanning [theta0, theta1] and [phi0, phi1] include a cos term?
Here, the altitude angle is measured from the equatorial plane, not from the pole (the Z-axis also lies in the equatorial plane; it is easier when dealing with a field of view), so d(SA) = cos(theta) d(theta) d(phi).
Not sure I followed the description. The Raysect standard is for the camera to look along the +ve z-axis, with the "up" direction being the +ve y-axis. The altitude here should be the equivalent of the pitch in math.transform.rotate(), while the azimuth should be the yaw, with (0, 0) degrees being aligned with the z-axis. Is this what you mean?
If it is, then it should be cos(theta) d(theta) d(phi) as you say, since the circumference decreases as the altitude angle increases.
The camera looks along the +ve z-axis and the "up" direction is the +ve y-axis, just like in all other Raysect cameras. However, the altitude and azimuth angles are calculated as in the horizontal coordinate system, so the z-axis points not at the azimuth point but at the north point (see the scheme). For a FoV with angles of view (phi, theta), the azimuth angle goes from -phi/2 to phi/2 and the altitude angle goes from -theta/2 to theta/2. The total solid angle observed is 2 * phi * sin(theta/2).
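For reference, that total follows directly from the element quoted above. With the altitude measured from the equatorial plane, d(SA) = cos(theta) d(theta) d(phi), so (angles in radians):

$$
\Omega = \int_{-\phi/2}^{\phi/2} \int_{-\theta/2}^{\theta/2} \cos\theta' \, d\theta' \, d\phi'
       = \phi \, \big[ \sin\theta' \big]_{-\theta/2}^{\theta/2}
       = 2\,\phi\,\sin(\theta/2).
$$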
Ah, we are describing the same thing, great.
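As a quick illustration of the agreement (not part of the patch): summing the per-row band formula used in the code telescopes to the closed form above. A hypothetical check, with phi, theta and ny chosen arbitrarily:

```python
import numpy as np

# hypothetical sanity check: per-row solid-angle bands vs. the closed form
phi, theta = 90.0, 60.0   # field of view in degrees (azimuth, altitude)
ny = 512                  # number of pixel rows

altitude_delta = theta / ny
altitude = -0.5 * theta + altitude_delta * (np.arange(ny) + 0.5)  # row centres

d2r = np.pi / 180.0
bands = d2r * phi * (np.sin(d2r * (altitude + 0.5 * altitude_delta)) -
                     np.sin(d2r * (altitude - 0.5 * altitude_delta)))

total = 2.0 * d2r * phi * np.sin(0.5 * d2r * theta)
print(bands.sum(), total)  # the two values agree to machine precision
```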
```cython
        return rays

    cpdef double _pixel_sensitivity(self, int x, int y):
        return self._sensitivity
```
There is a finite solid angle that varies per pixel. The solid angle should be included in this calculation; the sensitivity then becomes an effective area. If a power pipeline is used (which isn't a great idea), everything should then be correctly normalised.
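For illustration only, folding the per-pixel solid angle into the sensitivity might look like the sketch below, where `_solid_angle` is a hypothetical per-row array holding the band values computed earlier (as the next comment notes, the pinhole-camera pattern ignores this):

```cython
# hypothetical sketch: weight the sensitivity by the per-pixel solid angle,
# so a power pipeline would be correctly normalised
cpdef double _pixel_sensitivity(self, int x, int y):
    # _solid_angle[y] would hold the azimuth-altitude band value for row y
    return self._sensitivity * self._solid_angle[y]
```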
Though we do ignore this for the pinhole camera, so best to follow the same pattern, i.e. assume the sampling is being done through an infinitesimal pinhole.
Yes, I agree, I should have opened a discussion in the issue tracker before adding any new code. I'm sorry I didn't do that. The only reason I started a pull request without opening a discussion first is that, for me, it wasn't really new code; I created it back in the summer as a part of ITER synthetic diagnostics, as well as some other code I proposed earlier for Cherab.
I'll try to implement a LightProbe camera class and a couple of mapping classes for simple cases (e.g. equirectangular projection and cylindrical equal-area projection).
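A minimal sketch of what such pluggable mappings could look like, assuming a hypothetical `pixel_to_angles` interface (none of these names are an agreed Raysect API):

```python
import numpy as np

class SphereMapping:
    """Hypothetical base class: maps a pixel (u, v) in [0, 1]^2 to (azimuth, altitude) in radians."""

    def pixel_to_angles(self, u, v):
        raise NotImplementedError

class EquirectangularMapping(SphereMapping):
    """Equi-angle mapping: both image axes are linear in angle."""

    def pixel_to_angles(self, u, v):
        return (2.0 * u - 1.0) * np.pi, (2.0 * v - 1.0) * 0.5 * np.pi

class CylindricalEqualAreaMapping(SphereMapping):
    """Equal-area mapping: the vertical axis is linear in sin(altitude),
    so every pixel row subtends the same solid angle."""

    def pixel_to_angles(self, u, v):
        return (2.0 * u - 1.0) * np.pi, np.arcsin(2.0 * v - 1.0)
```

The equal-area property follows from the band formula discussed above: with sin(altitude) linear in v, each row's solid angle Δφ · Δ(sin altitude) is constant.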
Excellent, thanks. Those are exactly the projections I meant.
I like where this is going. Alex has a clearer vision for this feature, so I'm handing over reviewership.
This adds a new 2D observer called FovCamera that launches rays from the observer's origin point over a specified field of view in spherical coordinates. Each pixel of the final image represents a solid angle of collection inside an azimuth-altitude rectangle.

When developing a new diagnostic, before selecting lines of sight, setting the optics, etc., it may be useful to obtain an image in the field of view of that diagnostic as a function of azimuthal and altitudinal angles. VectorCamera can be used for that purpose, however I think that having a dedicated, simpler class is better.

I propose it for Raysect, but if you find it more suitable for Cherab, then I'm not against moving it to Cherab.
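Under the convention agreed in the review thread (camera looks along the +ve z-axis, "up" is the +ve y-axis, azimuth as yaw, altitude as pitch from the equatorial plane), a pixel index maps to a ray direction roughly as in this hypothetical sketch; the actual FovCamera internals live in the diff and may differ:

```python
import numpy as np

def pixel_direction(ix, iy, nx, ny, fov_azimuth, fov_altitude):
    """Hypothetical helper: ray direction for pixel (ix, iy) of an nx-by-ny image.

    Azimuth spans [-fov_azimuth/2, fov_azimuth/2] and altitude spans
    [-fov_altitude/2, fov_altitude/2] (both in degrees), per the PR discussion.
    """
    d2r = np.pi / 180.0
    azimuth = d2r * fov_azimuth * ((ix + 0.5) / nx - 0.5)
    altitude = d2r * fov_altitude * ((iy + 0.5) / ny - 0.5)
    # z-axis forward, y-axis up: azimuth acts as yaw, altitude as pitch,
    # so pixel centre (0, 0) degrees points along +z
    return np.array([np.cos(altitude) * np.sin(azimuth),
                     np.sin(altitude),
                     np.cos(altitude) * np.cos(azimuth)])
```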