
[Bug]: How to unify grids to resolve this error? #128

Open

wharfbanger opened this issue May 3, 2024 · 18 comments

@wharfbanger

SBAS and PSI analysis on Otmanbozdagh Mud Volcano, Azerbaijan, 2020-2023

Fails at "SBAS Trend Correction":

---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
<ipython-input> in <cell line: 6>()
      4 inc = decimator(sbas.incidence_angle())
      5 yy, xx = xr.broadcast(topo.y, topo.x)
----> 6 trend_sbas = sbas.regression(unwrap_sbas.phase, [topo, topo*yy, topo*xx, topo*yy*xx, yy, xx, inc], corr_sbas)

/usr/local/lib/python3.10/dist-packages/pygmtsar/Stack_detrend.py in regression(data, variables, weight, valid_pixels_threshold, **kwargs)
    233         vshapes = [v[0].shape if v.ndim==3 else v.shape for v in variables]
    234         equals = np.all([vshape == dshape for vshape in vshapes])
--> 235         assert equals, f'ERROR: shapes of variables slices {vshapes} and data slice {dshape} differ.'
    236         #assert equals, f'{dshape} {vshapes}, {equals}'
    237         variables_stack = [v.chunk(dict(y=chunk2d[0], x=chunk2d[1])) for v in variables]

AssertionError: ERROR: shapes of variables slices [(776, 940), (776, 940), (776, 940), (776, 940), (776, 940), (776, 940), (776, 940)] and data slice (775, 940) differ.


@AlexeyPechnikov (Owner)

Use the xarray reindex_like() function to unify the grids: array1.reindex_like(array2).
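
For instance, a minimal sketch against the failing call above (an assumption: it reindexes topo and inc, the grids carrying the extra row, onto the unwrapped phase grid; verify which of your grids is actually off by one):

import xarray as xr

# align the 776-row grids onto the 775-row unwrapped phase grid;
# method='nearest' tolerates slightly offset coordinate values (assumed here)
topo = topo.reindex_like(unwrap_sbas.phase, method='nearest')
inc = inc.reindex_like(unwrap_sbas.phase, method='nearest')
# rebuild the derived regression variables from the aligned grid
yy, xx = xr.broadcast(topo.y, topo.x)
trend_sbas = sbas.regression(unwrap_sbas.phase, [topo, topo*yy, topo*xx, topo*yy*xx, yy, xx, inc], corr_sbas)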

@wharfbanger (Author) commented May 3, 2024

Thanks Alexey. Can you please provide code with the actual array1 and array2? Even better would be an updated Colab notebook.

@AlexeyPechnikov (Owner)

Here is the Xarray documentation: https://docs.xarray.dev/en/latest/generated/xarray.DataArray.interp_like.html

Example notebooks using the function are shared on my Patreon, but its usage is pretty obvious.

@wharfbanger (Author)

Thanks Alexey,

I modified the notebook using the Xarray function (as suggested by ChatGPT), and it successfully executed that step (SBAS Trend Correction) and the remaining SBAS steps. GREAT.

Unfortunately, there was an incomplete red progress indicator at a later PS processing step during materialization:

[screenshot: incomplete progress indicator]

In subsequent steps the PS plots looked empty:

[screenshots: two empty PS plots]

Here is my revised Colab notebook: https://drive.google.com/file/d/100g6LRKmWc-geL6OMFLv6Ytoa59VFNy4/view?usp=sharing

@AlexeyPechnikov (Owner)

There are some pixel-wise inconsistencies between the interferograms' valid pixels in the stack, which can be resolved as follows:

import numpy as np

# check pixel stacks: keep only pixels that are finite in every interferogram
validmask_ps = np.isfinite(intf_ps).all(axis=0)
# optionally, materialize to disk and open
validmask_ps = sbas.sync_cube(validmask_ps, 'validmask_ps')
validmask_ps.where(validmask_ps).plot.imshow()

disp_ps_pairs = sbas.los_displacement_mm(sbas.unwrap1d(
    intf_ps.where(validmask_ps)
    ))
disp_ps_pairs

@wharfbanger (Author) commented May 8, 2024

My notebook completed, but there was no correlation between PS and SBAS (SBAS vs PS Results Comparison). I exported the SBAS displacements (disp_sbas) and calculated the rate (mm/year) by linear regression.
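
For reference, a minimal sketch of such a rate estimate (assuming disp_sbas is an xarray displacement cube in mm with a datetime 'date' dimension; xarray converts datetime coordinates to nanoseconds for polyfit, hence the unit conversion):

# linear fit along the time axis; the slope comes out in mm per nanosecond
fit = disp_sbas.polyfit(dim='date', deg=1)
# convert the slope to mm/year
ns_per_year = 365.25 * 24 * 60 * 60 * 1e9
velocity = fit.polyfit_coefficients.sel(degree=1) * ns_per_year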

Otmanbozdagh_mud gave very different SBAS results from the SBAS Imperial Valley example (same area, same scenes):

Otmanbozdagh_mud (mm/yr):
[image]

Imperial_Valley (mm/yr):
[image]

You can see this makes a big difference in the maps (points labelled mm/year):

Otmanbozdagh_mud (mm/yr):
[image]

Imperial_Valley (mm/yr):
[image]

Can you provide an explanation for these different results? What parameters can I adjust?

@AlexeyPechnikov (Owner)

The examples are very different. In the Imperial Valley case, we remove very strong stratified plus turbulent atmospheric noise using Gaussian filtering, whereas the Otmanbozdagh mud volcano shows pretty good results with linear regression trend correction. How do you propose we unify the nature of these processes? :)

@wharfbanger (Author) commented May 11, 2024

It would be very useful to have a general-purpose example that can be applied to a variety of areas, with well-documented parameters. StaMPS has been very good in this respect, as results with default parameters are very good in a variety of cases (they agree with conventional levelling contours).

I have spent a great deal of time trying to adapt these examples to my own areas, but perhaps they are too specific? For example, I gave up trying to modify the Otmanbozdagh mud volcano notebook because the results were so different from the simpler Imperial Valley one (I can't imagine it is different atmospheric filtering). But the Imperial Valley example has very coarse resolution (60 m) with a large wavelength (400 m). I have spent many hours trying to decrease the Imperial Valley wavelength and grid size (to reduce the spatial smoothing) so I can examine smaller-scale phenomena, without success.

Is there anywhere I can access more documentation to allow me to better modify the examples? Would your book be helpful? As a Patreon subscriber, do I have access to the book?

Thanks, Mark

@AlexeyPechnikov (Owner)

Mark, StaMPS does not have any parameters for processing distributed scatterers at all; it is primarily a persistent scatterer (PS) analysis tool, as its name indicates. The PyGMTSAR Imperial Valley notebooks illustrate DS SBAS analysis in the absence of persistent scatterers, making it impossible to produce complete area coverage with StaMPS and other PSI tools. The Otmanbozdagh mud volcano is a completely different case, with a great density of PS. As illustrated in the notebook, both PSI and SBAS methods can produce very similar results there.

If you apply DS analysis as in the Imperial Valley case to sparse persistent scatterers, it does not work, for clear physical reasons. PyGMTSAR allows for both PS and DS analysis using PSI and SBAS techniques, but it is impossible to unify the different physics into a single standalone example. PS are smaller than a single Sentinel-1 pixel, affected by speckle noise, and often sparse. DS are larger than a pixel; speckle noise is not an issue for them because it is effectively removed by anti-aliasing filtering, and proper selection of the DS analysis scale typically produces dense but lower-resolution pixels.

There are many practical cases where not-so-rare PS can be grouped into DS, and all analysis methods work well enough, although they still have some limitations, as explained in my publications available on Patreon.

@wharfbanger (Author) commented May 12, 2024 via email

@AlexeyPechnikov (Owner)

The Imperial Valley example uses a wavelength of 400 meters for Gaussian filtering (antialiasing and speckle removal) and spatial averaging (multilooking) into a ~60-meter square grid (16x4=64 original pixels aggregated into one) to enhance the signal-to-noise ratio and correlation; a Goldstein filter is applied on top. This processing is targeted at highly decorrelated areas with high radiometric noise and strong atmospheric effects. It's possible to apply a much lower wavelength, starting from about 4 meters, and to exclude multilooking and the Goldstein filter if you have more frequent and higher-quality interferograms in a well-coherent area, as in the Golden Valley example. But why do you need to improve the SBAS results? Use them to start your PSI processing, utilizing SBAS-detected phase ramps to produce higher-resolution results, as demonstrated in the Otmanbozdagh mud volcano example. PSI 1D unwrapping is more accurate but requires a priori known phase corrections, which we define using the less accurate and lower-resolution SBAS analysis.
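
A minimal sketch of such a lighter configuration (hypothetical parameter choices, assuming the same sbas stack and pairs list as in the example notebooks):

# shorter 100m cut-off wavelength and no multilooking (coarsen omitted),
# suitable only for a well-coherent area; hypothetical parameters
phase = sbas.multilooking(sbas.phasediff(pairs), wavelength=100)
# skip the Goldstein filter and build the interferogram directly
intf = sbas.interferogram(phase)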

@wharfbanger (Author) commented May 12, 2024 via email

@wharfbanger (Author) commented May 12, 2024 via email

@AlexeyPechnikov (Owner)

Unfortunately, the images are not visible in your last messages.

In the Imperial Valley case, we use a wavelength of 400 meters to detect larger DS instead of the default ("gold standard") value of 200 meters. For small DS, we can apply a wavelength of 100 meters, and even 60 meters for PS detection. However, short wavelengths are tricky for SNAPHU unwrapping, so I do not recommend using a wavelength of less than 100 meters. It's preferable to use PSI analysis after SBAS to enhance your results.

import numpy as np

# Gaussian filtering with a 400m cut-off wavelength and 1x4 multilooking on Sentinel-1 intensity
intensity = sbas.multilooking(np.square(np.abs(data)), wavelength=400, coarsen=(1,4))
phase = sbas.multilooking(sbas.phasediff(pairs), wavelength=400, coarsen=(1,4))

Additionally, we can specify a more detailed output grid, such as 30 meters, using sbas.decimator(30) instead of the default 60-meter one:

# use default 60m resolution
decimator = sbas.decimator()
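# or the more detailed 30m output grid mentioned above
decimator = sbas.decimator(30)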

Commonly, there is no reason for a higher resolution output SBAS grid due to SNAPHU unwrapping error accumulation.

The Goldstein filter can introduce some spatial inaccuracies while it helps to clean up the interferograms' spatial spectrum:

# Goldstein filter expects square grid cells produced using multilooking
intf_filt = sbas.interferogram(sbas.goldstein(phase, corr, 32))

Typically, we do not need to exclude the filter, because it can be adjusted to use a smaller processing window of 16 pixels, as in sbas.goldstein(phase, corr, 16).

As shown in my other examples, for many InSAR cases the best approach is to apply SBAS analysis with default or nearly default parameters to produce stable results on a rough grid, then enhance it using PS analysis, also with default or near-default parameters, to calculate the most accurate displacements on the original Sentinel-1 resolution grid. Of course, SBAS and PSI results need to be consistent to validate the analyses. If you only need an intermediate output resolution like 30 meters, sometimes it is sufficient to apply only SBAS processing.

@wharfbanger (Author) commented May 13, 2024 via email

@wharfbanger (Author) commented May 14, 2024 via email

@AlexeyPechnikov (Owner)

For geocoding the results, you need to create the proper geocoding matrix like:

# use the original Sentinel-1 resolution (1 pixel spacing)
sbas.compute_geocode(1)

Select a suitable coarsening factor instead of 1 (1 works too, but it creates a matrix much larger than you actually need).
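
For example, a hedged sketch of geocoding a radar-coordinates grid once the matrix exists (the factor 15 and the disp_sbas grid are assumptions, not from this thread):

# build the geocoding matrix once, with a coarser factor (hypothetical value)
sbas.compute_geocode(15)
# project a radar-coordinates grid into geographic coordinates
disp_ll = sbas.ra2ll(disp_sbas)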

@wharfbanger (Author) commented May 18, 2024 via email
