Planetarium applications of sphviewer #22

Open · wants to merge 9 commits into base: master

Conversation

@athob commented Dec 1, 2019

I've developed a new implementation that supports additional projection types, starting with a first new one: the azimuthal equidistant projection. Along with it come some simplifications of the C module code, as well as a correction to the way rendermodule was summing up the projected kernels.

Until now, sphviewer handled two types of projection: perspective (r = fixed value) and isometric (r = infinity). With this version, one can use an azimuthal equidistant projection of the scene, with the camera target at its center by default. This is done by passing the string value 'fisheye' to a new 'projection' parameter in the kwargs of the Camera object given to Scene. However, this default usage isn't the most valuable one.

The interest of the azimuthal equidistant projection is that it is a suitable format for most planetarium projectors: in these, the central point of the projection generally corresponds to the zenith of the dome. Because of this, placing the camera target at this default central location isn't a comfortable configuration for a planetarium spectator.

Instead, the 'projection' kwarg of Camera can be tweaked by appending to 'fisheye' the desired zenith angle of the target, in degrees. For example, 'fisheye45' places the target at a latitude of +45 degrees above the horizon, while 'fisheye90' places it at the horizon (additionally, 'fisheye0' leaves the target at the zenith, 'fisheye180' moves it all the way to the nadir, and any negative zenith angle - i.e. 'fisheye-***' - moves the target behind the spectator).
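
To make the string convention concrete, here is a minimal sketch of how such a 'projection' value could be parsed; the parameter name and the 'fisheye' prefix come from this PR, but the helper below is purely illustrative, not the actual Camera.py code:

def parse_projection(projection):
    # Hypothetical parser for the 'projection' kwarg described above.
    if projection.startswith('fisheye'):
        suffix = projection[len('fisheye'):]
        zenith_deg = float(suffix) if suffix else 0.0  # 'fisheye' alone keeps the target at the zenith
        return 'fisheye', zenith_deg
    return projection, None

parse_projection('fisheye60')  # -> ('fisheye', 60.0)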

The implementation itself required modifying:

  • Camera.py, Scene.py, and Render.py to include the treatment of the new projection kwarg, in particular Scene.py, where the camera parameters given to scene.scene are updated with respect to the zenith angle;
  • extensions/scenemodule.c to add a new projection case, implementing the corresponding (3D => 2D pixelized coordinates) projection maths while accounting for the smoothing lengths (see the sketch after this list);
  • and extensions/rendermodule.c, which required a different treatment of the kernel summation (using the smoothing lengths previously returned by scene.scene), as the projected kernels have distortions that depend on the position of their centroids.
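
To illustrate the projection maths mentioned for scenemodule.c, here is a minimal Python sketch of the azimuthal equidistant (3D => 2D pixel) mapping, assuming camera-centred coordinates with the viewing axis along z; it is only an outline of the maths, not the actual C implementation:

import numpy as np

def fisheye_project(x, y, z, xsize, ysize, max_zenith_deg=90.0):
    # Each particle direction is a point on the unit sphere of viewing angles;
    # the azimuthal equidistant projection maps the zenith angle (angle from
    # the viewing axis) linearly onto the radial distance from the image centre.
    r3d = np.sqrt(x**2 + y**2 + z**2)
    zenith = np.arccos(z / r3d)                 # radians from the viewing axis
    azimuth = np.arctan2(y, x)                  # position angle around that axis
    rho = zenith / np.radians(max_zenith_deg)   # radius in units of the half field of view
    px = 0.5 * xsize * (1.0 + rho * np.cos(azimuth))
    py = 0.5 * ysize * (1.0 + rho * np.sin(azimuth))
    return px, py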

For the latter point, I needed to add the 'projection' type as an argument of render.render. Additionally, I made a design choice that led me to also give it the 'extent' as another argument. This design choice is purely experimental and does not serve any purpose at the moment: although the Camera 'zoom' does not really have a utility when it comes to planetarium projection, I decided to use it as a way to narrow or broaden the maximum zenith angle shown in the azimuthal equidistant projection. In that design, a zoom of 1 corresponds to the planetarium-style 90-degree maximum zenith angle, while a zoom near 0 broadens it to almost 180 degrees, and a really large zoom narrows it to near 0 degrees. Because the aforementioned kernel distortions need to know the centroids' positions with respect to these maximum zenith angles, the 'extent' values are used as a reference.
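
Just to illustrate the intent, here is one possible zoom-to-angle mapping that reproduces the limits quoted above; it is a guess for illustration only, not necessarily the relation implemented in the PR:

import numpy as np

def max_zenith_angle(zoom):
    # Illustrative: zoom = 1 -> 90 deg, zoom -> 0 -> ~180 deg, large zoom -> ~0 deg.
    return np.degrees(2.0 * np.arctan(1.0 / zoom))

[max_zenith_angle(z) for z in (0.01, 1.0, 100.0)]  # ~[178.9, 90.0, 1.1] degrees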

Lastly, before implementing this new projection scheme, I had to prepare the code so it could accommodate the addition of more projection schemes. For that, I've made some simplifications and corrections:

  • the code of scene.scene for assigning kview particles has been simplified, and corrected to consider particles outside of the field of view whose kernels still intersect it (this and this commit)
  • the code of render.render for dealing with remainder particles in extensions.rendermodule.c has been simplified to avoid having two identical blocks of code (this commit)
  • the implementation of the variable tt in render.render has been modified: a float version, tt_f, is now used so that kernels no longer have to align with the pixel grid, while tt is still used for the convolution implementation (this commit)

That's it for now; I will be sharing the maths I used for my implementation of the azimuthal equidistant projection. In the meantime, let me know if you have any thoughts about it. I also apologize for any misuse of the "pull request" feature of GitHub: this is all very new to me!

Edit: added some links to specific commits for readability.

athob and others added 8 commits January 1, 2019 16:56
  • Simplified the code for dealing with the remainder particles
  • Simplified the kview computation to include particles whose kernel intersects the field of view despite the particle itself lying outside of it
  • Corrected the kview computation to remove the inclusion of particles with hsml[i] < 0 (equivalent to the old zpart > 0 condition)
  • Detangled tt by separating the kernel pixel size (the tt variable) from the kernel projected size (the new tt_f float variable), allowing kernels with sizes under the resolution to be computed properly
  • Added the "-ffast-math" compiler flag by default. This speeds up the code by more than a factor of 3.
@alejandrobll marked this pull request as ready for review December 2, 2019 10:42
@alejandrobll (Owner)

Thanks a lot Adrien. Indeed, this would be a very nice feature to have implemented. I'll try to review it as soon as I find some time.
This is something I considered doing several times, so I am really glad that you did it. At the moment I am slightly concerned that the new projection seems to require heavy additions to rendermodule.c. I would naively say that, given that a projection is something that affects how the particles look, it should be part of scenemodule.c, but I might be being naive. Let me have a look.
Thanks!

@athob (Author) commented Dec 2, 2019

Thanks! Indeed, there have been a few significant changes to rendermodule.c.

Some of these were improvements to the core sphviewer implementation before the addition of the new projection.

  • In particular, I have made this simplification to the code that used to duplicate the entire block that is looped over to compute the kernel convolution and the local image. That duplicated block (lines 122 to 140 of the old rendermodule.c in this commit diff) was there to deal with the remainder particle left over from the set of particles given to a single thread. Instead, I removed that remainder-particle case and added a line (new line 99 in this commit diff) in the main loop to include it as an additional iteration.
  • Additionally, I made a second, less significant modification to this file before implementing the projection, but this time it was more of a minor correction to the way core sphviewer works. Before this correction, the way the smoothing length of the kernel was implemented in render.render forced that length to be aligned with the pixel grid: the variable tt representing that smoothing length was used inside the cubic_kernel function, which returns the kernel contribution of a particle to a pixel (line 116 of the old rendermodule.c in this commit diff), while being forced to be an integer (line 106) because of its use as the maximum value of the convolution loop counters (lines 113 and 114). However, the tt used in the cubic_kernel function has no reason to be an integer (i.e. aligned with the pixel grid) and should instead be the original float value of the corresponding t[i] for particle i. For that, I've duplicated the tt assignment into a tt_f assignment that holds that float value, leaving tt as the integer used for the convolution loop counters while tt_f is used inside cubic_kernel (a small sketch of this follows below). As a note, a similar minor correction could be made to the variables xx and yy, and I think future corrections would target this too.
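
For clarity, here is the tt vs tt_f distinction in a tiny Python sketch, assuming t_i is the projected smoothing length of particle i in pixel units; the names follow the discussion above, but this is illustrative, not the C code:

def kernel_extent(t_i):
    # tt bounds the convolution loops, so it must be an integer number of pixels;
    # tt_f keeps the original float value and is what the cubic kernel should see.
    tt = int(t_i)
    tt_f = t_i
    return tt, tt_f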

Now, for the projection implementation, rendermodule.c did get some additions because kernels in this projection are distorted, unlike in the core projections, which keep circular kernels. I'll insist on that aspect when sharing the maths, for clarity, but one way to look at it for now is via the Tissot indicatrix associated with the projection: the Tissot indicatrix characterizes the local distortions introduced when projecting a sphere onto a flat surface. In practice, it looks at the projected shapes that result from projecting identical circles distributed over the surface of that sphere. Since kernels in 3D space project to circles on the sphere of viewing angles, the distortions the Tissot indicatrix reveals also affect these kernels after the projection. Core sphviewer could only compute the convolution of perfectly circular kernels, so it was not suitable for an accurate projection of the kernels in this new scheme. Instead, I needed to develop a special convolution to treat these distortions: the modifications are highlighted in this commit diff, where I added a conditional structure (block between lines 109 and 175 of the new file); the classic case (previously between lines 99 and 118) has been moved into the else branch (now between lines 152 and 174), and the first branch is the new projection case (between lines 109 and 149).

@alejandrobll self-assigned this Dec 2, 2019
@athob (Author) commented Dec 3, 2019

Before sharing the maths, I thought I should provide a short code snippet that could be used as a proof of concept. However, while working on the more complex animated version of that snippet (see below), it revealed some 'segmentation fault (core dumped)' fatal errors. After looking into their cause, I found they were due to interference between my new treatment that includes intersecting kernels with centroids outside the field of view and the new low-resolution treatment. Since the former can leave particles with xx and yy coordinates in render.render that are not within the (xsize, ysize) domain, some local_image[yy*xsize+xx] += mm accumulations could end up crashing. I have resolved it in my latest commit.
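
In Python terms, the fix amounts to clipping the accumulation to the image domain, along the lines of the guard below (the variable names mirror the rendermodule.c discussion, but this is only an illustrative sketch of the idea, not the actual C change):

def deposit(local_image, xsize, ysize, xx, yy, mm):
    # Skip pixels that fall outside the image: out-of-view centroids with
    # intersecting kernels can otherwise produce out-of-range indices.
    if 0 <= xx < xsize and 0 <= yy < ysize:
        local_image[yy * xsize + xx] += mm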

Now for the proof of concept: using QuickView similarly to this tutorial, we can just add the kwarg 'projection' to QuickView in the following way (with the camera target placed at a latitude of 30 degrees above the horizon):

import h5py
from sphviewer.tools import QuickView

with h5py.File('darkmatter_box.h5py','r') as f:
    pdrk = f['PartType1/Coordinates'][...]  # load particle coordinates (.value is deprecated in recent h5py)

QuickView(pdrk, r=1, projection='fisheye60')

Keep in mind that the center of the image is supposed to be the zenith when projected onto a planetarium dome, with the part facing the spectator being the upper half of the image (the lower half being behind them - note also that when using the method QuickView.imsave, this ends up reversed). Obviously, seeing the projection in action isn't easy with only this, so here's a (slightly heavier) snippet where we rotate the camera around its target:

import h5py
import numpy as np
from matplotlib import pyplot as plt
import sphviewer as sph

with h5py.File('darkmatter_box.h5py','r') as f:
    pdrk = f['PartType1/Coordinates'][...]  # load particle coordinates

mdrk = np.ones(len(pdrk))
scene = sph.Scene(sph.Particles(pdrk,mdrk))
scene.update_camera(r=1, p=0, projection='fisheye60')
render = sph.Render(scene)
render.set_logscale()
img = render.get_image()
extent = render.get_extent()

plt.ion()
fig = plt.figure()
ax = fig.add_subplot(111)

imshow = ax.imshow(img, extent=extent)

for phase in np.linspace(0, 300, 51):
    scene.update_camera(p=phase)
    render = sph.Render(scene)
    render.set_logscale()
    imshow.set_data(render.get_image())
    fig.canvas.draw()
    fig.canvas.flush_events()

Note that this time I haven't set the 'origin' kwarg of imshow to 'lower' as is done in QuickView, so the target is in the lower half of the view (it just comes down to my preference).

I wasn't expecting to spend so much time debugging today, so I'll work on assembling the maths in a proper format to be shared tomorrow.

@alejandrobll (Owner)

Thanks! If you could send me the math by email, that would be great.

I'll wait for the math, but from what I have seen so far, it seems to me that the implementation of the projection involves a few steps: (1) define the camera, (2) project the particles onto a sphere, and (3) perform the rendering.

I was thinking that instead of restricting the projection of the particles onto the sphere to your particular projection, it would be much more general to write a dedicated function that does it so that it can be used with any future projection. After all, the only things that matter are the angular coordinates and the angular size of the "smoothed" sphere of each particle, no? Once this is calculated, each projection is just a mapping of these coordinates onto the plane.

I did some quick maths, and it seems to me that it could be possible, in principle, to calculate all the particle coordinates in scenemodule.c (as you do), and perhaps redefine the smoothing lengths of the particles so that they are characterized by two values, hx and hy. If that works, we could minimize the changes in rendermodule.c so that it does not depend on the projection chosen at all. This would not come for free, as we would have to introduce two smoothing lengths, perpendicular to each other. Do you think this might work, or am I missing something?
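
To illustrate what I mean by two smoothing lengths, here is a minimal sketch of how the render step could evaluate an elliptical kernel once hx and hy are provided per particle (illustrative Python only, not an actual rendermodule.c change):

import numpy as np

def elliptical_cubic_kernel(dx, dy, hx, hy):
    # dx, dy: pixel offsets from the particle centroid; hx, hy: smoothing
    # lengths along the two orthogonal image directions, in pixel units.
    q = np.sqrt((dx / hx)**2 + (dy / hy)**2)
    return np.where(q <= 0.5, 1.0 - 6.0*q**2 + 6.0*q**3,
                    np.where(q <= 1.0, 2.0*(1.0 - q)**3, 0.0))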

If this works, then we could create a very convenient framework for future extensions.

@alejandrobll (Owner)

Do you have any thoughts? I thought some more about this and tried a quick implementation of the logic outlined in my previous message. It seems to work for me. If you check the equirectangular branch that I just pushed to the repository, you will see that I barely change rendermodule.c and make all the relevant calculations in Scene.py (they should be done in scenemodule.c, but at this stage this is just a proof of concept).

The idea is that particles get projected onto the sphere (radius = 1). (At the moment this is not a separate function, but it should be, as it has nothing to do with the projection!) Then, the projection is applied to the angular coordinates (and to the smoothing lengths too!). For the particular projection that I have chosen, the smoothing lengths are the same in the x and y directions of the projection, so this could be done with only one value of the smoothing length per particle. However, I updated rendermodule.c so that it uses two different smoothing lengths, one in each direction, so that it can draw ellipses if needed.
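
For reference, the equirectangular mapping itself is just a linear relation between the angular coordinates on the unit sphere and the pixel coordinates, something like the sketch below (illustrative only, not the branch's code):

import numpy as np

def equirectangular(lon, lat, xsize, ysize):
    # lon in [-pi, pi] and lat in [-pi/2, pi/2] map linearly onto the image.
    px = (lon + np.pi) / (2.0 * np.pi) * xsize
    py = (lat + 0.5 * np.pi) / np.pi * ysize
    return px, py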

I think this approach, or something close to it, will allow adding this feature while leaving the code clean and as modular as possible, minimizing the interaction between the classes. What do you think? Do you think your projection is suitable for this, or am I missing something?

Thanks,
Alejandro

@alejandrobll (Owner)

By the way, I tested it using the example from the repository:

import h5py
import numpy as np
from matplotlib import pyplot as plt
import sphviewer as sph

snap = h5py.File('darkmatter_box.h5py', 'r')
pos = snap['PartType1/Coordinates'][...]
h   = snap['PartType1/SmoothingLength'][...]

P = sph.Particles(pos, mass=np.ones(len(pos)), hsml=h)
S = sph.Scene(P)
S.update_camera(r='equirectangular', x=32, y=32, z=32, xsize=2000, ysize=1000)
R = sph.Render(S)
R.set_logscale()
img = R.get_image()
plt.imshow(img)
plt.show()

@athob (Author) commented Dec 4, 2019

Sorry, I was working on writing down the maths and was planning on answering your comment after sending them to you, since some aspects could be important for discussing the distortions. I'm going to try to answer your latest comments quickly anyway:

I'll wait for the math, but from what I have seen so far, it seems to me that the implementation of the projection involves a few steps: (1) define the camera, (2) project the particles onto a sphere, and (3) perform the rendering.

I was thinking that instead of restricting the projection of the particles onto the sphere to your particular projection, it would be much more general to write a dedicated function that does it so that it can be used with any future projection. After all, the only things that matter are the angular coordinates and the angular size of the "smoothed" sphere of each particle, no? Once this is calculated, each projection is just a mapping of these coordinates onto the plane.

The steps you described are not far from what I do, though it doesn't exactly project the particles onto a sphere, just kind of: the idea of projecting onto a sphere is only an intermediate step in getting to the final projection scheme. The azimuthal equidistant projection is a scheme designed to project a sphere onto a flat surface, so when establishing the maths I need to virtually pass through a projection of space onto the sphere. But when it comes down to the implementation, I just treat that sphere as the set of all possible directions starting from the camera position. I'm not sure if that makes sense? What I mean is that the projection is already "a mapping of these coordinates onto the plane", as you mentioned (at least for the scene.scene step; for render.render it's a bit more complex, as I'll discuss below).

I did some quick maths, and it seems to me that it could be possible, in principle, to calculate all the particle coordinates in scenemodule.c (as you do), and perhaps redefine the smoothing lengths of the particles so that they are characterized by two values, hx and hy. If that works, we could minimize the changes in rendermodule.c so that it does not depend on the projection chosen at all. This would not come for free, as we would have to introduce two smoothing lengths, perpendicular to each other. Do you think this might work, or am I missing something?

If this works, then we could create a very convenient framework for future extensions.

The fact that I had to give the projection and extent to render.render frustrated me quite a bit when I first implemented that new scheme, but I needed something working quickly and didn't give it more thought. The kernel convolution is quite different from the core projection schemes, as it depends on the position of the kernel centroid in the field of view. Specifically, the kernel becomes more distorted the further its centroid is from the zenith point: this distortion takes the form of a curved elongation of the kernel along the tangent axis perpendicular to the radial axis from the zenith point to the centroid. What I mean by curved is that the projected kernel doesn't end up as a straightforward elliptical kernel with its longer axis along the tangent axis, but rather "as if" that longer axis followed the circle arc that continues along this tangent axis.
Of course, this is a mild effect, but for accuracy it did matter. To do it correctly, I needed in some sense to virtually project back onto the unit sphere that served as the intermediate step in the projection framework. When summing mass contributions to each pixel, core sphviewer weights these contributions by a kernel coefficient that depends on the distance between the pixel coordinates and the centroid coordinates. In this new scheme, the change lies in how that distance is treated: instead of the usual Euclidean distance, it uses the spherical distance between the two points of the unit sphere corresponding to the pixel and the centroid. Together with that spherical distance it uses the smoothing length projected onto the unit sphere, effectively treating the entire kernel as its projection on the sphere in order to obtain its azimuthal-equidistant analogue.
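
In other words, the kernel weight for a pixel is evaluated from a great-circle distance rather than a Euclidean one. A minimal Python sketch of that idea (illustrative only, assuming both pixel and centroid have already been converted to zenith/azimuth angles on the unit sphere; not the rendermodule.c code):

import numpy as np

def spherical_distance(zen1, az1, zen2, az2):
    # Great-circle distance between two directions given by zenith/azimuth
    # angles (in radians) on the unit sphere.
    cos_d = (np.cos(zen1) * np.cos(zen2)
             + np.sin(zen1) * np.sin(zen2) * np.cos(az1 - az2))
    return np.arccos(np.clip(cos_d, -1.0, 1.0))

def kernel_weight(pixel_dir, centroid_dir, h_angular, cubic_kernel):
    # h_angular is the smoothing length projected onto the unit sphere (an angle);
    # cubic_kernel is the usual 1D kernel profile as a function of d / h.
    d = spherical_distance(*pixel_dir, *centroid_dir)
    return cubic_kernel(d / h_angular)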
As for the implementation, the coordinates returned by scene.scene and given to render.render are in pixel format, consistent with the way core sphviewer works. To convert the field-of-view pixel ranges ([0, xsize] and [0, ysize]) into the corresponding angular coordinates, I use extent as a reference. One could argue that I didn't need that, since the key use case has xsize = ysize and the planetarium-style projection of half the space, so that [0, xsize] and [0, ysize] would both correspond to [-90, 90] degrees of angular coordinates. However, as I mentioned in the opening comment:

although the Camera 'zoom' does not really have a utility when it comes to planetarium projection, I decided to use it as a way to narrow or broaden the maximum zenith angle shown in the azimuthal equidistant projection. In that design, a zoom of 1 corresponds to the planetarium-style 90-degree maximum zenith angle, while a zoom near 0 broadens it to almost 180 degrees, and a really large zoom narrows it to near 0 degrees.

While I labeled this design choice as purely experimental, I do believe it can serve a purpose in the future (after all, it is equivalent to a "fisheye" lens) and it fits well within the design of the zoom parameter. It also means that render.render cannot assume that [0, xsize] and [0, ysize] both correspond to [-90, 90] degrees of angular coordinates, and must know the extent.
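
Concretely, the pixel-to-angle conversion that render.render performs from the extent is essentially the linear mapping below (a hypothetical helper written here for illustration, not the actual code):

def pixel_to_angle(px, xsize, extent):
    # extent = (xmin, xmax, ymin, ymax), in degrees of angular coordinates;
    # pixel index px in [0, xsize] maps linearly onto [xmin, xmax].
    xmin, xmax = extent[0], extent[1]
    return xmin + (xmax - xmin) * px / float(xsize)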

Taking all this into account with respect to the double-smoothing-length solution you proposed: you have probably noticed by now, based on what I described, that it would be incompatible, as the kernels require more than just two smoothing lengths. However, I do agree that this sort of modularization would make a more convenient framework for future projections, so I've been giving it some thought. I think what it needs is not only the two smoothing lengths you proposed but perhaps also additional parameters encoding the orientation of the longer axis and the curvature applied to it. I believe something like that can be achieved by giving the kernel an angle corresponding to that orientation and a length corresponding to the radius of curvature. While continuing to write down the maths, I'll think more rigorously about this implementation.

Do you have any thoughts? I thought some more about this and tried a quick implementation of the logic outlined in my previous message. It seems to work for me. If you check the equirectangular branch that I just pushed to the repository, you will see that I barely change rendermodule.c and make all the relevant calculations in Scene.py (they should be done in scenemodule.c, but at this stage this is just a proof of concept).

The idea is that particles get projected onto the sphere (radius = 1). (At the moment this is not a separate function, but it should be, as it has nothing to do with the projection!) Then, the projection is applied to the angular coordinates (and to the smoothing lengths too!). For the particular projection that I have chosen, the smoothing lengths are the same in the x and y directions of the projection, so this could be done with only one value of the smoothing length per particle. However, I updated rendermodule.c so that it uses two different smoothing lengths, one in each direction, so that it can draw ellipses if needed.

I think this approach, or something close to it, will allow adding this feature while leaving the code clean and as modular as possible, minimizing the interaction between the classes. What do you think? Do you think your projection is suitable for this, or am I missing something?

Although my earlier discussion partly answers this comment, I wanted to give some observations on the equirectangular projection and the implementation in that branch. Notably, you've probably noticed from my explanation that the distortions in the equirectangular scheme are quite different from those in the azimuthal equidistant scheme, as the distorted kernels are indeed elliptical (sort of, more on that below) and readily aligned with the (x, y) axes of the field of view. This makes the equirectangular scheme very straightforward to implement, unlike the azimuthal equidistant scheme, where one must consider distorted kernels with different orientations and curved ellipticities depending on the centroid position. With this in mind, the correspondence between ttx, tty and tx[i], ty[i] won't be one-to-one but a more intertwined one that I still need to understand better.

Additionally, the distorted kernels can become very different from ellipses in both schemes: near the nadir/south pole for the azimuthal equidistant projection, and near both the nadir/south pole and the zenith/north pole for the equirectangular projection. In the latter case, any disc that contains the zenith position gets projected onto a curved band spreading all along the top part of the projected field. This is also something I will need to consider more rigorously.

Lastly, an observation that is more a matter of personal taste: I noticed you detect the equirectangular case through a condition self._camera_params['r'] == 'equirectangular' on the r parameter of the Camera. In theory (and perhaps you would prefer that), the same could be done for the azimuthal equidistant case. However, that would take away from the user the ability to define a target for the camera to point at from a certain distance, which, in my opinion, is quite a quality-of-life feature. Instead, I find my approach a bit more ergonomic, i.e. using a projection parameter that can also tell the software where to place that target in the resulting projected field of view. What do you think?

Well, that reply did not end up as quick as I planned, sorry for the long read! Back to formatting the maths!

@alejandrobll (Owner)

The steps you described are not far from what I do, though it doesn't exactly project the particles onto a sphere, just kind of: the idea of projecting onto a sphere is only an intermediate step in getting to the final projection scheme...

  • Yes, I understand what you describe. We are saying the same thing. This reinforces what I think, namely that we should implement an intermediate function that returns the particles' coordinates on the sphere. This has to be part of scenemodule.c. It will make it possible to implement a range of projections in the future, making the projection itself very transparent. I don't think it would be a bad idea to have an intermediate step that mediates between real coordinates and projected coordinates.

The fact that I had to give the projection and extent to render.render frustrated me quite a bit when I first implemented that new scheme, but I needed something working quickly and didn't give it more thought...

  • Well, the fact that this frustrated you for a while tells us that there is something that can be done differently here, no? I am convinced you don't need this information in this function. Given that the whole Universe is projected onto the sphere, extent is always [0, 2*pi] in one direction and [0, pi] in the other, unless you are trying to do something else that I don't fully understand. You say that the spread of the mass cannot be described by just two values, but I don't see why. The image is 2D, so you should be able to describe any transformation with two orthogonal directions.

For example, once you have the coordinates of each particle on the unit sphere, and you know how the smoothing lengths translate into delta_lat and delta_lon (the widths in latitude and longitude, respectively), you can use that information to calculate delta_x and delta_y in map coordinates for each particle located at the pixel (x, y). This should always be possible unless I am missing something (again!). I'll try to work on the next level of complexity after the equirectangular projection. Hopefully this helps clarify what I have in mind.
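
Something along these lines, for the equirectangular case (illustrative only, not the branch's code; delta_lon and delta_lat are the angular widths defined above):

import numpy as np

def angular_to_pixel_widths(delta_lon, delta_lat, xsize, ysize):
    # With extent fixed to [0, 2*pi] in longitude and [0, pi] in latitude,
    # the angular widths scale linearly into pixel widths hx and hy.
    hx = delta_lon / (2.0 * np.pi) * xsize
    hy = delta_lat / np.pi * ysize
    return hx, hy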

With this in mind, the correspondence between ttx, tty and tx[i], ty[i] won't be one-to-one but a more intertwined one that I still need to understand better.

  • You might be right, but I am still not convinced that this is the case. Need to check the maths.

Lastly, an observation that is more a matter of personal taste: I noticed you detect the equirectangular case through a condition self._camera_params['r'] == 'equirectangular' on the r parameter of the Camera. In theory (and perhaps you would prefer that), the same could be done for the azimuthal equidistant case. However, that would take away from the user the ability to define a target for the camera to point at from a certain distance, which, in my opinion, is quite a quality-of-life feature. Instead, I find my approach a bit more ergonomic, i.e. using a projection parameter that can also tell the software where to place that target in the resulting projected field of view. What do you think?

  • Yes, I see your point. I don't have a strong opinion on this. I did it because it was handy. However, if we project the whole universe, then r does not mean anything anymore. This was my logic when I first introduced the 'infinity' value for r, precisely because under the parallel projection it does not mean anything either. For constructing whole-sky projections, the only way to change the distance of the camera to a given object is by actually placing the unit sphere in different locations. For that, the x, y, z parameters of the camera should be the only relevant ones.
