[Bug]: How to unify grids to resolve this error? #128
Comments
Use xarray reindex_like() function to unify the grids: array1.reindex_like(array2). |
Thanks Alexey. Can you please provide code with actual array1 and array2? Even better would be an updated Colab notebook. |
Here is the Xarray documentation: https://docs.xarray.dev/en/latest/generated/xarray.DataArray.interp_like.html Example notebooks with the function usage are shared on my Patreon, but it is pretty obvious. |
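As an illustrative sketch only (array1 and array2 here are synthetic placeholders, not variables from the notebook), aligning one xarray grid onto another with reindex_like looks like this:

```python
import numpy as np
import xarray as xr

# Two grids whose y axes differ by one row, mimicking the
# (776, 940) vs (775, 940) shape mismatch reported in this issue.
array1 = xr.DataArray(np.zeros((776, 940)),
                      coords={'y': np.arange(776), 'x': np.arange(940)})
array2 = xr.DataArray(np.ones((775, 940)),
                      coords={'y': np.arange(775), 'x': np.arange(940)})

# Align array1 onto array2's grid; coordinates present in array2
# but missing from array1 would be filled with NaN.
array1_unified = array1.reindex_like(array2)
print(array1_unified.shape)  # (775, 940)
```

Note that reindex_like matches coordinate labels exactly, whereas interp_like (from the documentation linked above) interpolates values onto the target grid. |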
Thanks Alexey, I modified the notebook using the Xarray function (as suggested by ChatGPT) and it successfully executed that step (SBAS Trend Correction) and the remaining SBAS steps. GREAT. Unfortunately, there was an incomplete red progress indicator at a later PS processing step during materialization. In the subsequent steps the PS plots looked empty. Here is my revised Colab notebook: https://drive.google.com/file/d/100g6LRKmWc-geL6OMFLv6Ytoa59VFNy4/view?usp=sharing |
There are some pixel-wise inconsistencies between the interferograms' valid pixels in the stack, which can be resolved as follows:
|
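The snippet referenced above is not visible in this copy of the thread. One generic way to enforce pixel-wise consistency, sketched here with plain xarray rather than any specific PyGMTSAR call, is to mask every interferogram in the stack to the jointly valid pixels:

```python
import numpy as np
import xarray as xr

# Toy stack of 3 interferograms with NaN holes at different pixels.
phase = xr.DataArray(np.ones((3, 4, 5)), dims=('pair', 'y', 'x'))
phase[0, 0, 0] = np.nan
phase[1, 2, 3] = np.nan

# Keep only the pixels that are valid in every interferogram of the stack.
valid = np.isfinite(phase).all('pair')
phase_consistent = phase.where(valid)
print(int(valid.sum()))  # 18 of 20 pixels remain valid
```

After this masking, every interferogram in the stack shares the same set of valid pixels.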
My notebook completed, but there was no correlation between PS and SBAS (SBAS vs PS Results Comparison). I exported SBAS (disp-sbas) and calculated the rate (mm/year) by linear regression. Otmanbozdagh_mud gave very different SBAS results from the SBAS Imperial Valley example (same area, same scenes). You can see this makes a big difference in the maps (points labelled in mm/year). Can you provide an explanation for these different results? What parameters can I adjust? |
The examples are very different. In the Imperial Valley case, we remove very strong stratified plus turbulent atmospheric noise using Gaussian filtering, whereas the Otmanbozdagh mud volcano shows pretty good results with linear regression trend correction. How do you propose we unify the nature of these processes? :) |
It would be very useful to have a general-purpose example that can be applied to a variety of areas, with well-documented parameters. StaMPS has been very good in this respect, as results with default parameters are very good in a variety of cases (they agree with conventional levelling contours). I have spent a great deal of time trying to adapt these examples to my own areas, but perhaps they are too specific? For example, I gave up trying to modify the Otmanbozdagh mud volcano example because the results were so different from the simpler Imperial Valley one (I can't imagine it is different atmospheric filtering). But the Imperial Valley example has very coarse resolution (60 m) with a large wavelength (400 m). I have spent many hours trying to decrease the Imperial Valley wavelength and grid size (to reduce the spatial smoothing) so I can examine smaller-scale phenomena, without success. Is there anywhere I can access more documentation to allow me to better modify the examples? Would your book be helpful? As a Patreon subscriber, do I have access to the book? Thanks, Mark |
Mark, StaMPS does not have any parameters to process distributed scatterers at all; it is primarily a persistent scatterer (PS) analysis tool, as its name indicates. The PyGMTSAR Imperial Valley notebooks illustrate DS SBAS analysis in the absence of persistent scatterers, making it impossible to produce complete area coverage with StaMPS and other PSI tools. The Otmanbozdagh mud volcano is a completely different case, with a great density of PS. As illustrated in the notebook, both PSI and SBAS methods can produce very similar results there. If you apply DS analysis as in the Imperial Valley case to sparse persistent scatterers, it does not work, for physical reasons. PyGMTSAR allows for both PS and DS analysis using PSI and SBAS techniques, but it is impossible to unify the different physics into a single standalone example. PS are smaller than a single Sentinel-1 pixel, affected by speckle noise, and often sparse, whereas DS are larger than a pixel, so speckle noise is not an issue because it is effectively removed by anti-aliasing filtering; with proper selection of the DS analysis scale, this typically produces dense but lower-resolution pixels. There are many practical cases where not-so-rare PS can be grouped into DS, and all analysis methods work well enough, although they still have some limitations, as explained in my publications available on Patreon. |
Thanks Alexey, your explanations are useful and appreciated.

Imperial Valley: is it possible to reduce the grid spacing from 60 to 30 meters (or 15 meters)? If so, which parameters? Please note this is for my own study area in New Zealand, where there are many PS. I have already used the Imperial Valley example in this area and the results are consistent with StaMPS. However, I would like to see if the results can be improved, e.g. by reducing the averaging of DS using a smaller grid size. If that is not possible, then perhaps the Otmanbozdagh_mud example would be a better template?

Thanks, Mark
|
The Imperial Valley example uses a wavelength of 400 meters for Gaussian filtering (anti-aliasing and speckle removal) and spatial averaging (multilooking) into a ~60-meter square grid (16x4 = 64 original pixels aggregated into one) to enhance the signal-to-noise ratio and correlation; a Goldstein filter is applied as well. This processing is targeted at highly decorrelated areas with high radiometric noise and strong atmospheric effects. It is possible to apply a much lower wavelength, starting from about 4 meters, and to exclude the multilooking and the Goldstein filter if you have more frequent and higher-quality interferograms in a well-coherent area, as in the Golden Valley example. But why do you need to improve the SBAS results? Use them to start your PSI processing, utilizing the SBAS-detected phase ramps to produce higher-resolution results, as demonstrated in the Otmanbozdagh mud volcano example. PSI 1D unwrapping is more accurate but requires a priori known phase corrections, which we define using the less accurate and lower-resolution SBAS analysis. |
Thanks Alexey!

To answer your question: why do I need to improve SBAS results?

Here is my situation: I have a study covering a two-year time interval in a geothermal area with a subsidence bowl.

StaMPS results are shown in color shading (see legend). Note: ascending and descending passes are combined to give an estimate of vertical motion. White contours are from a ground-based levelling survey (same time period) showing a maximum of -90 mm/year. StaMPS is in good agreement at *-85 mm/year* (at the bowl center).

[image: image.png]

Now, the Imperial SBAS results in color, showing a maximum of *-60 mm/year*, quite nice, but I am hoping for better agreement with StaMPS and the levelling. So maybe we can just reduce the wavelength to ~100 m and the grid size to ~30 m? I have tried, but get errors because I don't know which parameters to change.

[image: image.png]
|
Thanks Alexey!

*Please disregard the last email, which contained an error.*

To answer your question: why do I need to improve SBAS results?

Here is my situation: I have a study covering a two-year time interval in a geothermal area with a subsidence bowl.

StaMPS results are shown in color shading (see legend). Note: ascending and descending passes are combined to give an estimate of vertical motion. White contours are from a ground-based levelling survey (same time period) showing a maximum of -90 mm/year. StaMPS is in good agreement at *-85 mm/year* (at the bowl center).

[image: image.png]

Now, the Imperial SBAS results in color, showing a maximum of *-30 mm/year*; the shape is good, but I am hoping for better agreement with StaMPS and the levelling in terms of rate (mm/year). So maybe we can just reduce the wavelength to ~30 m and the grid size to ~30 m? I have tried, but get errors because I don't know which parameters to change.

[image: image.png]
|
Unfortunately, the images are not visible in your last messages.

In the Imperial Valley case, we use a wavelength of 400 meters to detect larger DS instead of the default ("gold standard") value of 200 meters. For small DS, we can apply a wavelength of 100 meters, and even 60 meters for PS detection. However, this is tricky for SNAPHU unwrapping, so I do not recommend using a wavelength of less than 100 meters. It's preferable to use PSI analysis after SBAS to enhance your results.

```python
# Gaussian filtering 400m cut-off wavelength with multilooking 1x4 on Sentinel-1 intensity
intensity = sbas.multilooking(np.square(np.abs(data)), wavelength=400, coarsen=(1,4))
phase = sbas.multilooking(sbas.phasediff(pairs), wavelength=400, coarsen=(1,4))
```

Additionally, we can specify a more detailed output grid like 30 meters using sbas.decimator(30) instead of the default 60-meter one:

```python
# use default 60m resolution
decimator = sbas.decimator()
```

Commonly, there is no reason for a higher-resolution output SBAS grid due to SNAPHU unwrapping error accumulation. The Goldstein filter can introduce some spatial inaccuracies, while it helps to clean up the interferograms' spatial spectrum:

```python
# Goldstein filter expects square grid cells produced using multilooking
intf_filt = sbas.interferogram(sbas.goldstein(phase, corr, 32))
```

Typically, we do not need to exclude the filter because it can be adjusted to use a smaller processing window of 16 pixels, as sbas.goldstein(phase, corr, 16). As shown in my other examples, for many InSAR cases the best approach is to apply SBAS analysis with default or nearly default parameters to produce stable results on a rough grid, and then enhance it using PS analysis, also with default or close to default parameters, to calculate the most accurate displacements on the original Sentinel-1 resolution grid. Of course, SBAS and PSI results need to be consistent to validate the analyses. If you only need an intermediate output resolution like 30 meters, sometimes it is sufficient to apply only SBAS processing. |
Perfect, I'll give it a try with your suggestions.
I have attached the images FYI.
Mark
|
Thanks Alexey,

I took your advice and tried changing the Imperial Valley example to a 30 m grid:

```python
decimator = sbas.decimator(30)
```

This gives an error at a later step:

```python
tqdm_dask(unwrap_ll := sbas.ra2ll(unwrap.phase).persist(), desc='Geocoding')
```

```
Error: Transforming grid spacing (grid_dy, grid_dx) is smaller than
transform matrix spacing (trans_dy, trans_dx), call Stack.geocode()
with less coarsing
```

I tried changing coarsen=(1,4) to coarsen=(1,1), then coarsen=(1,4) to coarsen=(1,2), but still get the same error.

What other parameters do I need to change to run Imperial Valley on a 30 m grid?

Thanks,
Mark
|
For geocoding the results, you need to create the proper geocoding matrix like:

```python
# use the original Sentinel-1 resolution (1 pixel spacing)
sbas.compute_geocode(1)
```

Select a suitable coarsening factor instead of 1 (it works too but creates a matrix much larger than you actually need). |
Thanks Alexey, this solved my issue!
|
SBAS and PSI analysis on Otmanbozdagh Mud Volcano, Azerbaijan, 2020-2023
Fails at "SBAS Trend Correction":

```
---------------------------------------------------------------------------
AssertionError                            Traceback (most recent call last)
in ()
      4 inc = decimator(sbas.incidence_angle())
      5 yy, xx = xr.broadcast(topo.y, topo.x)
----> 6 trend_sbas = sbas.regression(unwrap_sbas.phase, [topo, topo*yy, topo*xx, topo*yy*xx, yy, xx, inc], corr_sbas)

/usr/local/lib/python3.10/dist-packages/pygmtsar/Stack_detrend.py in regression(data, variables, weight, valid_pixels_threshold, **kwargs)
    233     vshapes = [v[0].shape if v.ndim==3 else v.shape for v in variables]
    234     equals = np.all([vshape == dshape for vshape in vshapes])
--> 235     assert equals, f'ERROR: shapes of variables slices {vshapes} and data slice {dshape} differ.'
    236     #assert equals, f'{dshape} {vshapes}, {equals}'
    237     variables_stack = [v.chunk(dict(y=chunk2d[0], x=chunk2d[1])) for v in variables]

AssertionError: ERROR: shapes of variables slices [(776, 940), (776, 940), (776, 940), (776, 940), (776, 940), (776, 940), (776, 940)] and data slice (775, 940) differ.
```