
Color Correction #24

Open
jakiestfu opened this issue Feb 14, 2016 · 14 comments

Comments

@jakiestfu
Owner

Provide an optional parameter to color correct the resulting imagery to better reflect images such as the following:

@ungoldman
Contributor

@jakiestfu what color correction techniques are you using above?

@jakiestfu
Owner Author

@ungoldman From the wiki article:

Example of a Rayleigh-corrected, true-color full disk image created from the AHI sensor.

@mdcarter

This would be awesome.
Is it doable?

@jakiestfu
Owner Author

@mdcarter color correction, yes, but Rayleigh Correction, not sure...

@Joshfindit

Just a layman's observation based on reading Correction of Rayleigh scattering effects in cloud optical thickness retrievals: in order to correct for Rayleigh scattering (which affects the apparent optical density of clouds), it's necessary to measure that density more directly.
They mention using images taken at different wavelengths, and also images taken at different angles, neither of which is available from a single flat image.
Since the source that himawari.js pulls from appears to be a single flat image, calculating a Rayleigh scattering correction may not be technically possible.

@Joshfindit

The question is: what colour correction methods can get close enough to the same result?

@ungoldman
Contributor

You're not going to get quite the same result as above working with a single full-disk image, but you could use @celoyd's color-tweak suggestions from his Himawari 8 animation tutorial as a starting point. He's using convert, which is part of ImageMagick, so that tool is already available to this project with its current dependencies.

@jakiestfu
Owner Author

@ungoldman do you think you could provide a sample picture, or some code that outputs that picture?

@jakiestfu
Owner Author

This code works fine; however, it doesn't do much to the image other than brighten it and add contrast:

convert original.jpg -channel R -gamma 1.2 -channel G -gamma 1.1 +channel -sigmoidal-contrast 3,50% updated.jpg

original.jpg, updated.jpg
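For reference, the per-channel math behind that convert invocation can be sketched in pure Python. This is my reading of ImageMagick's documented gamma and sigmoidal-contrast curves, so treat it as an approximation rather than a reimplementation:

```python
import math

def gamma(x, g):
    # ImageMagick's -gamma g maps a normalized value x in [0, 1] to x**(1/g).
    # g > 1 brightens the midtones of that channel.
    return x ** (1.0 / g)

def sigmoidal_contrast(x, contrast, midpoint):
    # My understanding of -sigmoidal-contrast contrast,midpoint%: an S-curve
    # centered on `midpoint` with steepness `contrast`, rescaled so that
    # 0 maps to 0 and 1 maps to 1.
    s = lambda v: 1.0 / (1.0 + math.exp(contrast * (midpoint - v)))
    return (s(x) - s(0.0)) / (s(1.0) - s(0.0))

def adjust_pixel(r, g, b):
    # Mirrors `convert -channel R -gamma 1.2 -channel G -gamma 1.1
    # +channel -sigmoidal-contrast 3,50%` on one normalized RGB pixel:
    # brighten red and green slightly, then add contrast to all channels.
    r, g = gamma(r, 1.2), gamma(g, 1.1)
    curve = lambda v: sigmoidal_contrast(v, 3.0, 0.5)
    return curve(r), curve(g), curve(b)
```

Which matches what you're seeing: nudged warmth plus an overall contrast boost, nothing like an atmospheric correction.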

@celoyd

celoyd commented Feb 22, 2016

it doesn't do much to the image other than brighten and add contrast

Right. It tones down blues (and greens) too, but it is extremely simple compared to what CIRA RAMMB has done to the image on Wikipedia. Their code is based on what’s used with MODIS and VIIRS, which is nontrivial. They have a paper in review, so more details should be public soon. I’m pretty confident that it’s more complex than anything that makes sense to implement here. (But skip to the “However” heading for some ideas.)

Rationale for simple adjustment

We don’t have good access to data for complex adjustment

Our options are limited because we don’t have the bands that CIRA does. They’re able to calculate optical depth, cloud height, etc., from information that’s at best only kinda present in the RGB PNGs we’re looking at. And they can mix some of the NIR channel into the green channel to account for the green band being to the < λ (blue) side of the 550 nm peak visible reflectance of chlorophyll.
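(The NIR-into-green trick is just a linear blend per pixel. The sketch below is illustrative only: the blend fraction is a made-up placeholder, not CIRA's actual coefficient, and it assumes you had a co-registered NIR band, which we don't.)

```python
def hybrid_green(green, nir, fraction=0.1):
    # Blend a fraction of the near-infrared channel into green to compensate
    # for the sensor's green band sitting blueward of the ~550 nm chlorophyll
    # reflectance peak. `fraction=0.1` is a hypothetical placeholder value.
    return (1.0 - fraction) * green + fraction * nir
```

Vegetation is bright in NIR, so vegetated pixels come out greener; clouds and water, where green and NIR are closer, barely move.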

Sidebar: Why? Because the data is produced by the government of Japan, and they haven’t licensed and distributed it that way, as far as I can tell. I’ve signed up for their P-Tree service, but its TOS is vague and refers to other lengthier and confusinger TOSes. They seem to imagine forecasters, researchers, and resellers as the only potential users – which is completely understandable, but frustrating in our position.

So while I’m very grateful that they’ve done as much as they have to license and distribute this data – they’ve clearly worked hard to serve their intended users well – I would like to convince them to do a little more. I don’t have the language skills, the contacts, or (at the moment) the time to make a persuasive case for truly open data here. If someone else does, I’d be happy to contribute some sort of amicus brief based on professional experience with these issues. End sidebar.

We just don’t have the raw information that CIRA RAMMB does. If we could get it, it’s not clear (to me, yet) that we could “publish” it.

We don’t necessarily want complex adjustment

However! CIRA’s path is not necessarily ideal. While I would use the NIR→green trick if I could, everything else they’re doing is more science- than esthetics- or realism-oriented. A person in space would see Rayleigh scattering making the atmosphere bluer toward the horizon, for example, and BRDF effects, both of which the correction deliberately and efficiently removes. They would also see the halo of the atmosphere, which CIRA’s correction cuts out, and they would not see the smudgy artifacts that the correction sometimes introduces near the terminator (dawn and dusk): look west of India in the example image at full size. What CIRA’s doing is extremely impressive, cutting-edge correction. But it’s not necessarily what makes sense here – at least as I’ve been envisioning it.

However

I’m all for experimenting with more elaborate adjustment!

what colour correction methods can get close enough to the same result?

You can model the atmosphere as a spherical shell of known radii (in pixel dimensions) and do some very light trig to find the distance for which a given pixel’s ray intersects it, then use that field to weight a correction. And you can add low-res gridded elevation data (e.g.) to account for the fact that, for example, the Tibetan Plateau extends above most of the optical atmosphere. Instead of estimating, you can look up measured optical depth, from MODIS or from ground-based weather reports. Or you could pull in something like 6S, which is a standard atmospheric corrector that’s actually related to what CIRA is using. It’s really just a question of how far you want to go.
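The spherical-shell trig is genuinely light. Here's a sketch for an orthographic (parallel-ray) view, which is close enough for a geostationary full disk near the center; the shell radius is an illustrative assumption, not a measured value:

```python
import math

def atmosphere_path(r, shell_radius=1.02, earth_radius=1.0):
    # Length of a parallel view ray through a spherical atmosphere shell,
    # for a pixel at normalized distance r from the disk center (Earth
    # radius = 1). shell_radius is an assumed top-of-atmosphere radius.
    if r >= shell_radius:
        return 0.0  # ray misses the atmosphere entirely
    outer = math.sqrt(shell_radius ** 2 - r ** 2)
    if r < earth_radius:
        # Ray hits the surface: one pass down through the shell.
        return outer - math.sqrt(earth_radius ** 2 - r ** 2)
    # Limb pixel: the ray grazes the shell, passing through it twice.
    return 2.0 * outer
```

Weighting a blue/haze correction by this path length gives you almost nothing at disk center and a sharp ramp toward the limb, which is roughly where the visible blue haze lives.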

So

That’s why I’m pretty happy to use a simple/simplistic static adjustment. For input to ffmpeg, incidentally, I’ve been using this:

convert -gamma 1.33 -channel B -gamma 0.9 -channel G -gamma 0.975 +channel -sigmoidal-contrast 3,33%

Which is a bit light and low-contrast by itself, but whatever random color profiles ffmpeg makes up seem to put it about where I want it.

Just one last point and I’ll give you back the mic

You can download CIRA’s images if you prefer them for any reason! There could even be flags to produce, say:

  • CIRA
  • NICT
  • NICT with basic correction
  • NICT with experimental correction

Okay, I’ll be quiet now

@ungoldman
Contributor

Thanks @celoyd for taking time to get into the details! 😁 👌 💯

@Joshfindit

Re: hue correction:

I just ran convert -gamma 1.33 -channel B -gamma 0.9 -channel G -gamma 0.975 +channel -sigmoidal-contrast 3,33%, and got the following image:
[image: himawari earth, post-processed and resized]

The image at the top of this issue has quite a bit more green and Australia is noticeably different.
Is there a more correct colouring?

@celoyd

celoyd commented Feb 24, 2016

The image at the top of this issue has quite a bit more green and Australia is noticeably different.

Yes. This is mainly because CIRA does NIR→green mixing to account for the green channel not being on the chlorophyll peak.

Is there a more correct colouring?

Of the two, or theoretically? Could you be more specific?

@Joshfindit

Is there a more correct colouring of the two?
If it's the top image, as it seems to be, then this one would get closer by adding some green and nudging the red down a bit, but that's just off the top of my head.
