---
editor_options:
  markdown:
    wrap: 72
output:
  word_document: default
  html_document:
    df_print: paged
---
# Image Processing {#image-processing}
The collection of imagery is challenging and time-consuming, with sensor
design and deployment often requiring the development of new
technologies. Driven by the growing demand for relevant information
about environmental change, great emphasis is placed on the creation of
"the next new satellite" or "a smaller, yet more powerful drone".
Although innovation in these physical technologies is critical for the
advancement of image collection, it does not guarantee that the imagery
will be useful. In reality, the pixel values collected by a sensor are
not purely the reflectance values from the surface of interest.
Depending on the wavelengths being observed, there may be variations in
the number of photons emitted from the light source and a variety of
atmospheric effects, such as scattering. There may also be slight
inconsistencies in images collected by the same sensor on the same day
or over adjacent areas, which would affect the quality of analyses.
::: {.box-content .learning-objectives-content}
::: {.box-title .learning-objectives-top}
#### Learning Objectives {.unnumbered}
:::
1. Relate common issues of image collection to relevant image
processing techniques
2. Understand the logic supporting the use of specific image processing
techniques
3. Explore a variety of processing methodologies employed by published
research.
:::
### Key Terms {.unnumbered}
pixel, spatial resolution, temporal resolution, radiometric resolution,
spectral resolution, geometric correction, atmospheric correction
## Overview
The application of correction methods that address these inconsistencies
is often described generally as "image processing". In this chapter we
will explore a variety of common image processing techniques and strive
to understand the logic behind applying one, or more, to remotely sensed
imagery. Before diving into specific processing workflows that render
imagery scientifically useful, however, it is important to review some
key terms.
First and foremost, *image processing,* or *digital image analysis,*
refers to any actions taken to improve the accuracy of one or more
components of raw imagery. In remote sensing science, the goal of image
processing is to generate a *product* that provides accurate and useful
information for scientific pursuit. This is in contrast to image
processing for artistic purposes, which could include many similar
steps but focuses on generating a product that is visually appealing.
*Noise* is another common term associated with image processing and
refers to any element of the data that is not wanted. There are a
variety of noise types, which we will discuss later. In contrast to
noise, *signal* describes the wanted components of the imagery. Together,
signal and noise provide guidance on what specific steps should be taken
during image processing. Before they are addressed, however, it is also
important to confirm the spatial accuracy of the data.
It is also important to review the elements of an *image*, or *raster*.
The terms image and raster are interchangeable and refer to a
collection of spatially adjacent *cells*, or *pixels*. The terms cell and
pixel are also interchangeable. An empty raster, or image, contains
pixels with no information. To say an image has been collected simply
means that a sensor has recorded information and stored that
information in adjacent pixels. Although this seems straightforward,
there is a multitude of environmental and engineering factors that can
affect the accurate collection of information. It is these confounding
factors that image processing attempts to resolve in hopes of generating
a data product that can be compared across space and time.
## Geometric correction
By definition, remotely sensed data is collected by a sensor at some
distance from the object(s) being observed. In many instances, the
sensor is in motion, and at the very least it is subject to influence
from environmental actors like wind. Interactions with phenomena like
wind can cause the instantaneous field of view (IFOV) to move slightly
during data collection, introducing spatial errors to the imagery. On
top of issues relating to sensor displacement, sensors in motion may
also observe adjacent areas with different topographic properties.
A combination of these two effects could be visualized by imagining an
airplane flying over a forested hillside. As the sensor collects data,
the plane can be buffeted by wind, shifting the direction of the
sensor's IFOV to a location that is not the original target. On top of
issues with sensor movement, the elevation of the ground is constantly
undulating, altering the distance at which the sensor observes the
landscape below. These two issues affecting the spatial components of
image collection compromise the accuracy of an image and need to be
corrected. The general term used to refer to the spatial correction of
an image is geometric correction.
### Orthoimagery
An orthoimage is an aerial photograph or satellite imagery geometrically
corrected so that the scale is uniform. Unlike orthoimages, the scale of
ordinary aerial images varies across the image, due to the changing
elevation of the terrain surface (among other things). The process of
creating an orthoimage from an ordinary aerial image is called
orthorectification. Photogrammetrists are the professionals who
specialize in creating orthorectified aerial imagery, and in compiling
geometrically-accurate vector data from aerial images.
Digital aerial photographs can be rectified using specialized
photogrammetric software that shifts image pixels toward or away from
the principal point of each photo in proportion to two variables: the
elevation of the point of the Earth's surface at the location that
corresponds to each pixel, and each pixel's distance from the principal
point of the photo. Aerial images need to be transformed from
perspective views into plan views before they can be used to trace the
features that appear on topographic maps, or to digitize vector features
in digital data sets.
Compare the photographs in Figure \@ref(fig:13-ortho). Both show the same
gas pipeline, which passes through hilly terrain. Note the deformation
of the pipeline route in the photo on the left relative to the shape of
the route on the orthoimage to the right. The deformation in the photo
is caused by relief displacement. The original photo would not serve
well on its own as a source for topographic mapping.
Think of it this way: where the terrain elevation is high, the ground is
closer to the aerial camera, and the photo scale is a little larger than
where the terrain elevation is lower. Although the altitude of the
camera is constant, the effect of the undulating terrain is to zoom in
and out. The effect of continuously-varying scale is to distort the
geometry of the aerial photo. This effect is called relief displacement.
### **Relief displacement**
An important component of geometric correction deals with the effects of
elevation on the pixels in an image. Changes in the terrain over which
an image is collected lead to inconsistencies in the distances at which
information is collected. These differences lead to objects in the image
appearing in locations that do not match reality, and they can be
rectified using the spatial information of the sensor, datum and object.
Examples of this effect can be observed in any photograph containing tall
structures, which appear to lean outward from the center of
the image, or principal point. The rectification of relief displacement
can be represented by the equation:
```{=tex}
\begin{equation}
d = rh/H
(\#eq:ortho)
\end{equation}
```
where d = relief displacement, r = distance from the principal point to
the image point of interest, h = difference in height between the datum
and the point of interest, and H = the height of the sensor above the
datum.
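To make the calculation concrete, Equation \@ref(eq:ortho) can be
expressed as a small R function. This is a minimal sketch with
hypothetical values; the function name and numbers are ours, chosen only
to illustrate the arithmetic.

```{r 13-relief-displacement-example, eval = FALSE}
# Relief displacement: d = r * h / H (the equation above)
# r: distance from the principal point to the image point (e.g., mm)
# h: height of the point of interest above (+) or below (-) the datum (m)
# H: height of the sensor above the datum (m)
# d is returned in the same units as r, since h/H is unitless
relief_displacement <- function(r, h, H) {
  r * h / H
}

# Hypothetical example: a point 1.2 mm from the principal point,
# 40 m above the datum, imaged from 800 m above the datum
relief_displacement(r = 1.2, h = 40, H = 800)  # 0.06 mm
```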
::: {.box-content .your-turn-content}
::: {.box-title .your-turn-top}
#### Your turn! {.unnumbered}
:::
<p>
What is the image displacement of a pixel that is 0.5 mm from the
principal point, 57 m below the datum, and collected from a sensor that
is 135 m above the datum?
</p>
:::
### **Georeferencing**
The removal of inaccuracies in the spatial location of an image can be
conducted using a technique called georeferencing. The basic concept of
georeferencing is to alter the coordinates of an image through their
association with highly precise coordinates collected on site using a GPS.
Coordinates collected in the field are often called ground points, or
control points, and form the basis of successful georeferencing. In
general, increasing the number and accuracy of ground points leads to
increased spatial accuracy of the image.
There are a variety of techniques used to transform the coordinates of
an image based on control points, all of which require some level of
mathematics. The more complex the polynomials used to transform the
dataset, the more accurate the output spatial coordinates will be. Of
course, the overall accuracy of the transformation depends on how
accurate the control points are. Upon transforming a raster's coordinates,
it is important to evaluate how accurate the output raster is compared
to the input raster. A common method of evaluating success is
calculating the square root of the mean of the squared errors, often
called the Root Mean Square Error (RMSE). In short, RMSE
represents the average distance between the output raster and the
ground control points. The smaller the RMSE, the more accurate the
transformation.
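As a minimal illustration, RMSE can be computed in R from the residuals
between each control point and its location in the transformed raster.
The residual values below are hypothetical.

```{r 13-rmse-example, eval = FALSE}
# Hypothetical residuals (m): offsets between each ground control point
# and its corresponding location in the transformed raster
dx <- c(1.2, -0.8, 0.5, -1.5)  # easting errors
dy <- c(-0.6, 1.1, -0.3, 0.9)  # northing errors

# RMSE: square root of the mean of the squared error distances
rmse <- sqrt(mean(dx^2 + dy^2))
rmse
```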
### **Georegistration (georectification)**
Similar to georeferencing, georegistration involves adjusting the raw
coordinates of an image to match more accurate ones. In the case of
georegistration, however, ground points collected in the field are
replaced with coordinates from a map or image that has been verified as
spatially accurate. This method could be considered a matching of two
products, enabling them to be analyzed together. An example
would be matching two images collected one year apart. If the first
image is georeferenced accurately, the second image can simply be
georegistered by identifying shared features, such as intersections or
buildings, and linking them through the creation of control points in
each image.
### **Resampling**
Despite all the efforts to accurately place an image in space, it is
likely that any spatial alterations also change the shape or alignment
of the image's pixels. These changes in pixel size within the
image, as well as potential changes in their orientation, alter the
capacity to evaluate the radiation values stored within them. Imagine
that a raw image is represented by a tablecloth. On top of the
tablecloth are thousands of pixels, all of uniform height and width. The
corners of this tablecloth each have X and Y coordinates. Now imagine
that you have to match these coordinates (the corners) to four more
accurate ground points that don't match the tablecloth. Completing
this task requires you to stretch one corner of the tablecloth
outwards while the other three corners are shifted inwards, leading the
tablecloth to alter its shape and, in doing so, altering the location
and shape of the raw pixels.
This lack of uniformity in pixel size and shape could compromise future
image analysis and needs to be corrected for. To do so, users can
implement a variety of methods to reassign the spatially accurate values
to spatially uniform pixels. This process of transforming an image from
one set of coordinates to another is called resampling (Parker 1983).
The most important concept to understand about resampling is also the
first step in the process: the creation of a new, empty raster
in which all cells are of equal size and are aligned north. The
transformed raster is overlaid with the empty raster of a
user-defined cell size before each empty cell is assigned a value based
on values from the transformed raster. Confusing? Perhaps, but there are
a variety of processes that can be used to assign cell values to the empty
raster, and we will discuss three in hopes of clarifying this
methodology.
**Nearest neighbor**
Nearest neighbor (NN) is the simplest method of resampling, as it
looks at only one pixel from the transformed raster. This pixel is
selected based on the proximity of its center to the center of the
empty cell, and its value is assigned without further transformation. The
simplicity of this method makes it excellent at preserving categorical
data, like land cover or aspect, but it struggles to capture transitions
between cells and can result in output rasters that appear somewhat
crude and blocky.
**Bilinear interpolation**
In contrast to NN resampling, bilinear interpolation (BI) uses the
values of multiple neighboring cells in the transformed raster to
determine the value for a single cell in the empty raster. Essentially,
the four cells with centers nearest to the center of the empty cell are
selected as input values. A weighted average of these four values is
calculated based on their distance from the empty cell, and this averaged
value becomes the value of the empty cell. The process of calculating
an average means that the output value is likely not the same as any of
the input values, but it remains within their range. These features make BI
ideal for datasets with continuous variables, like elevation, rather
than categorical ones.
**Cubic convolution**
Similar to bilinear interpolation, cubic convolution (CC) uses multiple
cells in the transformed raster to generate the output for a single cell
in the empty raster. Instead of using four neighbors, however, this
method uses 16. The idea supporting the use of the 16 nearest neighbors
is that it results in an output raster with cell values that are more
similar to each other than the values of the input raster. This effect
is called smoothing, and it is effective at removing noise, which makes CC
an ideal resampling method for imagery. There is one drawback to this
smoothing effect, however, as the output value of a cell may fall outside
the range of the 16 input cell values.
```{r 13-cubic, fig.cap = fig_cap, out.width = "75%", out.height="75%", echo = FALSE}
fig_cap <- paste("Demonstration of neighbour selection (red) using cubic convolution resampling to determine the value of a single cell (yellow) in an empty raster. (Hacker, CC-BY-4.0")
knitr::include_graphics(here::here("images", "13-cubic_convolution.png"))
```
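In practice, all three methods are implemented in common GIS software.
The sketch below (assuming the `terra` R package) illustrates the
workflow with synthetic rasters standing in for real imagery.

```{r 13-resample-example, eval = FALSE}
library(terra)

# A synthetic "transformed" raster with 10 m cells
src <- rast(nrows = 100, ncols = 100, xmin = 0, xmax = 1000,
            ymin = 0, ymax = 1000, vals = runif(10000))

# A new, empty raster defining the target grid (25 m cells, aligned north)
target <- rast(nrows = 40, ncols = 40, xmin = 0, xmax = 1000,
               ymin = 0, ymax = 1000)

# Assign values to the target grid with each method
nn <- resample(src, target, method = "near")      # nearest neighbor
bi <- resample(src, target, method = "bilinear")  # bilinear interpolation
cc <- resample(src, target, method = "cubic")     # cubic convolution
```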
## Atmospheric correction
Following logic similar to that supporting the need for geometric
correction, atmospheric correction is intended to minimize discrepancies
in pixel values within and across images that occur due to interactions
between observed radiation and the atmosphere. The severity of the
atmosphere's impact on an image relates to the amount of heterogeneity
in the atmosphere during image collection. The majority of
impacts are caused by the three main types of scattering, which were
presented in Chapter \@ref(fundamentals-of-remote-sensing).
### **Atmospheric windows**
A key characteristic of the Earth's atmosphere that impacts the
collection of passive remotely sensed data is the impediment of certain
wavelengths. If solar radiation of a specific wavelength cannot reach
the Earth's surface, it is impossible for a sensor to detect the radiance
of that wavelength. There are, however, certain regions of the
electromagnetic spectrum (EMS), called atmospheric windows, that are
less affected by absorption and scattering than others, and it is on the
observation of these regions that remote sensing relies. Even within
these windows, however, there are a variety of atmospheric constituents
that can affect image quality.
### **Clouds and shadows**
Two of the most common culprits in the disturbance of remotely sensed
imagery are clouds and shadows. Both are relatively transient, making
their inclusion in an image difficult to predict and rendering
their effects within an image relatively inconsistent. Beyond issues
of presence, each introduces unique challenges for image correction.
Clouds are an inherent component of Earth's atmosphere and therefore
warrant respect and care in image processing, rather than sighs
of frustration. You could imagine that a single image with 30% cloud
cover may not be entirely useful, but the permanent transience of clouds
means that data users must work to reduce their effects, if
not remove them entirely.
When approaching the removal of clouds, it is important to recall the
physics that drive Mie scattering (Chapter
\@ref(fundamentals-of-remote-sensing)). Essentially, water droplets and
other aerosols in the atmosphere scatter visible and near infrared
light, generating what appear to be white objects in the sky. Since the
visible and near infrared regions of the EMS fall within an atmospheric
window in which many sensors detect radiation, clouds can be recorded as
part of an image.
The removal of clouds is often referred to as masking and can prove
challenging depending on the region in question, as clouds also generate
cloud-shadows. You could imagine a study attempting to evaluate snow
cover over a landscape using imagery comprised of wavelengths in the
visible region of the EMS. If there was intermittent cloud cover, cloudy
areas with no snow could be classified as having snow, and snowy areas
under cloud-shadows might be classified as having no snow.
Martinuzzi et al. published a masking method to remove cloud and
cloud-shadow from Landsat imagery [@martinuzzi2007]. They proposed
utilizing radiation values collected in the blue (Band 1) and thermal
(Band 6) ranges of the EMS to differentiate clouds from objects on the
Earth's surface. To remove cloud-shadow, they suggested using information
collected from Band 4, in the near infrared region, combined with the
predicted location of shadows based on the spatial relationships of the
cloud, sun and sensor.
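A conceptual sketch of such a threshold-based mask is shown below,
assuming the `terra` R package. The rasters are synthetic stand-ins and
the thresholds are hypothetical, not the values published by Martinuzzi
et al.

```{r 13-cloud-mask-example, eval = FALSE}
library(terra)

# Synthetic stand-ins for a blue band (reflectance) and a thermal band (K)
blue    <- rast(nrows = 10, ncols = 10, vals = runif(100, 0, 0.4))
thermal <- rast(nrows = 10, ncols = 10, vals = runif(100, 270, 300))

# Clouds tend to be bright in the blue band and cold in the thermal band;
# the thresholds below are hypothetical, for illustration only
cloud <- (blue > 0.25) & (thermal < 285)

# Remove cloudy pixels from a band of interest by setting them to NA
masked <- ifel(cloud, NA, blue)
```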
Although Martinuzzi et al. presented a straightforward method for
masking cloud and cloud-shadow in an image, they did not address the
inclusion of shadows caused by land cover structures, such as trees or
buildings [@martinuzzi2007]. Shadows present unique problems for image
analysis, as they can shade out underlying structures and also be
classified as separate, individual objects. The former issue presents
problems for studies evaluating land cover, while the latter confounds
machine learning algorithms attempting to identify unique classes in the
image based on spectral similarities. Another confounding issue is that
the location and size of shadows change throughout the day in accordance
with the sun.
An important feature of any shadow is that the shaded area is still
illuminated, but only by skylight. The exclusion of direct
sunlight from the area creates a unique opportunity for shadow
identification and removal [@finlayson2001]. Finlayson et al.'s method
of shadow removal is complex, using derivative calculus to capitalize on
the fact that an illumination-invariant function can be recognized based
solely on surface reflectance. Although more complicated than Martinuzzi
et al.'s approach, it may be worth reviewing Finlayson et al.'s work if
you are interested in learning more about shadow removal.
### **Smoke and Haze**
Smoke and haze present unique issues for image processing, as they tend to
vary in presence, consistency and density. They also produce different
types of scattering, with smoke causing Mie scattering and haze causing
non-selective scattering. Makarau et al. demonstrated that haze
can be removed through the creation of a haze thickness map (HTM)
[@makarau2014]. This methodology is as complex as that of
Finlayson et al. [@finlayson2001] and is perhaps beyond the scope of
this book. It is important to note, though, that the removal of shadows,
clouds, smoke and haze relies on an understanding of how their respective
scattering types affect incoming solar radiation. Successful removal,
then, depends on using spectral bands that are not affected by the
particular type of scattering occurring within the image.
## Radiometric correction
We have discussed how the creation of an image by a remote sensor leads
to slight variations in spatial and atmospheric properties between
pixels and that these inconsistencies must be corrected for. In this
section, we will discuss some issues affecting the information within a
pixel and some common remedies. In essence, we will explore how the raw
digital numbers collected by a sensor can be converted to radiance and
reflectance.
### **Signal-to-noise**
A key concept of radiometric correction is the ratio of desired
information, or signal, to background information, or noise, within a
pixel. The signal-to-noise ratio (SNR) is a common way of presenting this
information and provides an overall statement about image quality. A
common method of calculating SNR is to divide the mean (µ) signal value
of the sensor by its standard deviation (σ), where the signal represents
an optical intensity (Equation \@ref(eq:snr)).
```{=tex}
\begin{equation}
SNR = \frac{\mu_{signal}}{\sigma_{signal}}
(\#eq:snr)
\end{equation}
```
It is clear from Equation \@ref(eq:snr) that the average signal
value of an instrument represents the value that its designers desire to
capture. It is also clear that an increase in signal leads to an
improved SNR. What remains unclear, however, is what causes a sensor to
observe and record undesired noise. In reality, there are
a variety of noise types that can affect the SNR of a sensor.
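As a quick illustration, SNR can be computed from repeated observations
of the same target. The numbers below are made up for demonstration and
do not describe any particular sensor.

```{r 13-snr-example, eval = FALSE}
# SNR = mean(signal) / sd(signal), as in Equation \@ref(eq:snr)
snr <- function(signal) mean(signal) / sd(signal)

# Hypothetical repeated observations of the same target
obs <- c(101, 98, 103, 100, 99, 102)
snr(obs)  # about 54
```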
**Photon shot noise**
The first type of noise we will address is due to inconsistencies in the
number of photons observed at different time intervals. Since photons
are quantum particles (they are the smallest measurable amount of
radiation), a sensor can only observe them in whole numbers. This,
coupled with other random fluctuations in radiation properties, means
that a sensor directed at the same location may observe 10 photons at
one time interval and 12 at a later interval. Over time, it is possible
to determine the average number of photons observed by a sensor, but
this randomness creates inconsistencies across individual observations.
Photon shot noise can be calculated as the square root of the signal,
which suggests that a stronger signal leads to a relative decrease in
shot noise.
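Because photon arrivals behave like a Poisson process, the standard
deviation of the counts approaches the square root of their mean. The
short simulation below (our own illustration) shows how, under shot
noise alone, SNR therefore grows roughly as the square root of the
signal.

```{r 13-shot-noise-example, eval = FALSE}
set.seed(42)

# Simulate photon counts for a weak and a strong signal
weak   <- rpois(10000, lambda = 10)    # mean of 10 photons per interval
strong <- rpois(10000, lambda = 1000)  # mean of 1000 photons per interval

# Shot noise (sd) is close to sqrt(mean), so SNR is close to sqrt(signal)
mean(weak) / sd(weak)      # about sqrt(10)   ~ 3.2
mean(strong) / sd(strong)  # about sqrt(1000) ~ 31.6
```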
**Pattern noise**
Also known as fixed-pattern noise, pattern noise arises in instruments
that have multiple sensors along a single array. Pattern noise occurs
due to one sensor collecting relatively higher values than others,
resulting in heightened brightness in the pixels it creates.
**Readout noise**
Readout noise arises from inconsistencies in the interactions of the
electronic devices that physically read out a measurement. Since
it is impossible to build a sensor without such devices, readout
noise is inherent in all sensors. Readout noise therefore corresponds to
any difference in pixel values when all sensors are exposed to identical
levels of illumination. There are a variety of technical methods used to
correct for this error, but the concepts and mathematics supporting them
are perhaps beyond the scope of this book. If you are interested in
learning more about readout noise, check out this
[webpage](http://spiff.rit.edu/classes/phys445/lectures/readout/readout.html "Readout Noise by Michael Richmond")
created by Michael Richmond.
**Thermal noise**
Another inherent type of sensor noise is thermal noise. Thermal noise
occurs in any device using electricity and is caused by the vibrations
of the device's charge carriers. This means that thermal noise can never
be fully removed from an image, although it can be reduced by lowering
the temperature of the environment in which the sensor operates.
::: {.box-content .your-turn-content}
::: {.box-title .your-turn-top}
#### Your turn! {.unnumbered}
:::
Calculate SNR or its associated values for various Landsat sensors:
1. OLI: µ = 5288.1, σ = 18.7
2. TM: σ = 0.4, µ = 5.8
3. ETM+: SNR = 22.3, µ = 13.4
:::
### **Radiometric normalization**
As discussed in Chapter \@ref(fundamentals-of-remote-sensing), the
normalization of radiometric values collected by a sensor enables the
comparison of data within and across images. A common normalization
output in remote sensing studies evaluating the environment is
reflectance, which can be calculated simply by dividing the radiance
value of a pixel observing the object(s) of interest by
the radiance value of a pixel observing a surface with 100% reflectance.
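The sketch below illustrates this ratio with hypothetical radiance
values; in practice, the reference radiance would come from a calibrated
panel or a modelled quantity under the same illumination.

```{r 13-reflectance-example, eval = FALSE}
# Hypothetical radiance recorded over the target
target_radiance <- c(42.1, 57.3, 12.8)

# Radiance of a surface with 100% reflectance under the same illumination
reference_radiance <- c(210.5, 190.9, 128.4)

# Reflectance is the ratio of the two, unitless on a 0-1 scale
reflectance <- target_radiance / reference_radiance
round(reflectance, 3)  # 0.200 0.300 0.100
```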
::: {.box-content .case-study-content}
::: {.box-title .case-study-top}
#### Case Study {.unnumbered}
:::
#### An overview of Landsat Processing {.unnumbered}
<p>
The field of remote sensing has witnessed the creation of a variety of
national programs designed to observe the Earth's surface. NASA's
Landsat mission is one of these programs and has enabled the collection
of terrestrial information from space for over 40 years. The imagery
collected from Landsat sensors has proven to be highly useful for
environmental monitoring and the mission is scheduled to continue into
the future.
Despite its success, however, the imagery collected by Landsat sensors
continues to face the same processing issues as most other sensors.
Variations in geometric and atmospheric characteristics exist within and
across images, and radiometric inconsistencies also occur. To combat
these issues, the United States Geological Survey (USGS) has implemented
a tiered image processing structure.
</p>
<p>
Level-1 processing is designed to address the geometric and radiometric
inconsistencies present in an image. The USGS utilizes a combination of
ground control points (GCP), digital elevation models (DEM) and internal
sensor calibrations to correct an image. Within Level 1 there are a
variety of sub-levels that users can select from, each varying based on
the amount of correction that has been completed. More information about
the specific of Landsat Level-1 processing can be found
[here](https://www.usgs.gov/core-science-systems/nli/landsat/landsat-level-1-processing-details "Landsat Level-1 Processing").
</p>
<p>
With the geometric and radiometric correction complete in Level-1,
Level-2 processing is designed to reduce atmospheric inconsistencies.
The output images created from Level-2 processing are considered to be
Science Products (images with corrections completed at a quality
suitable for scientific inquiry without further processing) and
accurately represent surface reflectance and surface temperature. Data
regarding surface reflectance is useful for evaluating a variety of
environmental metrics, such as land cover, while surface temperature
lends insight into vegetation health and the global energy balance. It is
important to note that there are two Collection levels at which Landsat
processing takes place, with Collection 2 representing the highest
quality processing stream. You can explore the differences between
Collections
[here](https://prd-wret.s3.us-west-2.amazonaws.com/assets/palladium/production/atoms/files/Landsat-C1vsC2-2021-0430-LMWS.pdf "Landsat Collection 1 vs 2"),
while information about Collection 2 images processed to a Level-2
standard can be found
[here](https://www.usgs.gov/core-science-systems/nli/landsat/landsat-collection-2-level-2-science-products "Landsat Collection 2 Level-2").
</p>
```{r 13-landsat-c2-l2, fig.cap = fig_cap, out.width="90%", echo = FALSE}
fig_cap <- paste("Three unqiue Landsat 8 Colleciton 2 images. In order from left to right: Level-1 Top of Atmosphere reflectance image (no atmospheric correction), Level-2 atmospherically corrected surface reflectance and Level-2 surface temperature. Images were collected on May 3, 2013 over the Sapta Kosh River in Bairawa, Nepal. Created by Michelle A. Bouchard, USGS. Public domain.")
knitr::include_graphics(here::here("images", "13-landsat-c2-l2.png"))
```
<p>
As you may realize, there are a variety of processing options that
Landsat users can select from. Each of these Collections and Levels
present different opportunities for scientific study and allow users to
customize processing streams based on the needs of their project. Such
flexibility is key for the continued improvement of Landsat products and
promotes the use of Landsat imagery across a broad range of users. It is
important to remember, however, that the selection of pre-processed data
for scientific inquiry requires the user to understand the foundations
upon which corrections were made. Be sure to draw on the fundamentals
learned in this chapter to evaluate the usefulness of any processed data
you consider in your work.
</p>
:::
## Image enhancement
So far in this chapter, we have discussed methods of correcting
spatial, atmospheric and radiometric errors that are commonly present in
remotely sensed images. While the removal of these artifacts is
necessary, it is also important to explore some common methods of
enhancing an image once the aforementioned corrections have been made.
Both image stretching and sharpening have roots in spatial and
radiometric correction, so keep your mind open to the inherent links
that arise.
### **Stretching**
Image stretching refers to the adjustment of the radiometric values of an
input image to better exploit its radiometric resolution.
In principle, the distribution of radiometric values within an image is
altered in a manner that improves its capacity to serve a desired
task. For instance, if an image collected with an 8-bit radiometric
resolution (256 radiometric values; 0-255, where 0 represents black and
255 represents white) appears too dark, it is likely that the
distribution of pixel values is centered on a radiometric value less
than 127 (the middle of the 8-bit scale), with the majority of pixels
closer to 0. In this case, the lowest value observed in the image can be
adjusted to 0 and the highest to 255, and the relationship defining this
change can then be applied to all other observed values.
```{r 13-stretch, fig.cap = fig_cap, out.width = "100%", echo = FALSE}
fig_cap <- paste("Example of how (a) the original distribution of radiometric values in a image is (b) stretched")
knitr::include_graphics(here::here("images", "13-stretch.png"))
```
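A minimal sketch of such a linear (min-max) stretch is shown below,
assuming an 8-bit image stored as a numeric matrix; the function and
values are our own illustration.

```{r 13-stretch-example, eval = FALSE}
# Linear min-max stretch: rescale observed values to the full 0-255 range
stretch_linear <- function(x) {
  round((x - min(x)) / (max(x) - min(x)) * 255)
}

# A hypothetical dark image whose values span only part of the 8-bit range
dark <- matrix(sample(10:80, 25, replace = TRUE), nrow = 5)
stretched <- stretch_linear(dark)
range(dark)       # narrow range, well below 255
range(stretched)  # 0 to 255
```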
### **Sharpening**
Image sharpening is often used to preserve the spectral information of a
pixel while improving its spatial resolution [@king2001]. This is
particularly useful for applications concerned with classification, such
as land cover evaluation, as transitions between pixels with different
values become clearer. For example, a variety of studies
demonstrate that Landsat 8 imagery can be sharpened using information
collected by a panchromatic sensor operating in unison with the other
multispectral sensors on board the satellite [@gilbertson2017]. This is
made possible because the spatial resolution of the panchromatic band
is 15 m, while the multispectral bands collect information at 30 m.
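One simple way to combine the two, not necessarily the method used in
the studies cited above, is the Brovey transform, in which each
multispectral band is scaled by the ratio of the panchromatic band to
the mean of the multispectral bands. A rough sketch with synthetic
rasters, assuming the `terra` package:

```{r 13-pansharpen-example, eval = FALSE}
library(terra)

# Synthetic stand-ins: 30 m multispectral bands and a 15 m panchromatic band
ms  <- rast(nrows = 50, ncols = 50, xmin = 0, xmax = 1500,
            ymin = 0, ymax = 1500, nlyrs = 3,
            vals = runif(50 * 50 * 3))
pan <- rast(nrows = 100, ncols = 100, xmin = 0, xmax = 1500,
            ymin = 0, ymax = 1500, vals = runif(100 * 100))

# Resample the multispectral bands to the panchromatic grid, then scale
# each band by pan / mean(ms) (the Brovey transform)
ms_15 <- resample(ms, pan, method = "bilinear")
sharp <- ms_15 * (pan / mean(ms_15))
```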
::: {.box-content .call-out-content}
::: {.box-title .call-out-top}
#### Call out {.unnumbered}
:::
<p>
If you are interested in learning more about how sharpening can impact
hyperspectral imagery, you can check out the work of @inamdar2020. Their
research demonstrates that recorded pixel values contain information from
areas beyond the traditional spatial boundary of a cell. These findings
have interesting implications for a variety of applications.
</p>
:::
## Summary
This chapter provided an overview of common image processing techniques
and discussed the logic that supports their usage. Overall, each
technique strives to create imagery that is consistent across time and
space in order for individual pixel values to be evaluated and/or
compared. Although necessary, these processes can take time and need to
be applied in accordance with the desired application. Understanding the
general workflow of image processing will allow you to determine what
steps should be taken to create the highest quality imagery for your
research.
### Reflection Questions {.unnumbered}
1. Define geometric correction and discuss one of its components.
2. List three resampling techniques and describe the differences.
3. What is the difference between cloud and smoke with regard to
scattering?
4. Why is the signal-to-noise ratio important for evaluating image
quality?
```{r include=FALSE}
knitr::write_bib(c(
.packages(), 'bookdown', 'knitr', 'rmarkdown', 'htmlwidgets', 'webshot', 'DT',
'miniUI', 'tufte', 'servr', 'citr', 'rticles'
), 'packages.bib')
```