- [Amanda] Hi, everyone, welcome to week four of Remote Sensing Foundations. This week, we will learn about image processing, filtering, masking, and classification, all using Google Earth Engine. At the end of this week, I hope you will have accomplished the following. First, I hope you'll understand how and why images are processed. Second, understand the difference between a composited and a raw image, then select and filter an image, apply an image mask to remove water, and understand different classification methods and when to apply each. And finally, you will classify an image and calculate the pixel area for each class. Readings this week and the included tutorials will help guide you through this list, which I know is a tall order. Let's get started.

Last week, we went into some detail about how we take satellite measurements and organize them into the imagery that we know of as satellite imagery. We learned that there are several types of geospatial data sets, but in remote sensing, raster data sets are the most common format.
Rasters organize information into cells or pixels, where rows and columns of pixels form a grid. Rasters are great for showing continuously varying information. The size of pixels in a raster determines its spatial resolution.

Raster images can contain one or more bands, each covering the same spatial area but containing different spectral information. When raster data contains bands from different parts of the electromagnetic spectrum, they are called multi-spectral images. We can visualize a multi-spectral image using any three bands of that image and displaying them in the red, green, and blue colors. Single bands can also be visualized in gray scale. Raster data sets of satellite images are typically stored in a GeoTIFF, or a stack of raster data sets, which can be organized into many types of maps, including surface maps, thematic maps, and attribute maps.

We also discussed in further detail what determines spatial resolution and the trade-off between grain and extent. The spatial grain of the landscape is the smallest resolution of the data.
Extent is the overall size of the landscape or area encompassed by the observation. Finally, we gave an example of how to display multi-band images and the difference between true color and false color image composites.

Okay, so let's dive right in here to talk about image processing and what that entails. Digital image processing refers to the steps that we take to turn satellite data into images. We can break them up into four basic operations: image restoration, image enhancement, image classification, and image transformation. Broadly speaking, image restoration is concerned with the correction and calibration of images in order to achieve as faithful a representation of the Earth's surface as possible, which is a fundamental goal in remote sensing. Image enhancement, on the other hand, is predominantly concerned with the modification of images to optimize their appearance to the visual system. So this is really important in terms of making maps. Visual analysis is a key element, even in digital image processing, and the effects of these techniques can be dramatic.
Image transformation, on the other hand, refers to the derivation of new imagery as a result of mathematical treatment of the raw image bands. And finally, image classification refers to the computer-assisted interpretation of images. Today, we will touch on all four types of image processing and go into some greater detail about image classification methods.

Okay, so image restoration is often defined by operations that aim to correct distorted or degraded image data to create a more faithful representation of the original scene. It's often also termed image pre-processing, because these operations or methods are normally done before doing any sort of further manipulation or analysis of the image data in order to extract information. And usually they involve correcting the distortions and degradations that are caused by several factors during the actual acquisition of the image data. So there are two main types of image restoration: radiometric and geometric.
Radiometric restoration, which the schematic is showing an example of here on the left, is basically defined as the removal or diminishment of distortions in the degree of electromagnetic energy registered by each detector. Many of these include things that happen to the sensors themselves or in the communication of the sensor data back to the computer on the ground.

There are a variety of things that can cause distortions in the values recorded for image cells or pixels. Some of the most common distortions for which correction procedures exist include what are called uniformly elevated values, which are due to things like atmospheric haze, which scatters short wavelength bands, particularly in the blue region. So if you're looking at the raw image of a scene with atmospheric haze, that image is going to appear uniformly more washed out and not have a high degree of contrast. A second common distortion is striping, which occurs when detectors go out of calibration.
This happened to one of the Landsat sensors about a decade ago. A third way that distortions can happen is with random noise. This is due to unpredictable or unsystematic performance of the sensor itself, or can also occur in the transmission of the data. And finally, we have what's called scan line dropout, which is due to signal loss from specific detectors, a sometimes common occurrence depending on the orbit path and what's going on with that particular satellite and where it's located.

The other type of restoration that I want to talk about is called geometric restoration. And by the way, I forgot to put up the examples there. So those are the four radiometric restorations at the bottom: uniformly elevated values, striping, random noise, and scan line dropout. Okay, so back to geometric restoration. For mapping purposes, it's essential for any form of remotely sensed imagery to be accurately registered to the map base, or basically the points on the ground.
With satellite imagery, the very high altitude of the sensing platform results in minimal image displacements due to topography or relief. This can definitely be more of an issue when you're talking about airborne sensors mounted on aircraft or drones. As a result, registration of the image can usually be achieved through the use of a systematic transformation process that warps, or changes, the entire image through the use of different types of equations or other methodologies, which I'll go over a little bit more in the next slide. And these are based on the known positions of a set of control points.

With airborne sensor data, however, the process can also be pretty complex. Not only are there systematic distortions related to tilt and varying altitude, which is shown here in the figure at the bottom left, but variable topography can also lead to a very irregular pattern of distortions that can be a lot harder to remove. So you can think of it a little along the lines of a 2D versus a 3D problem.
However, even the 2D problem is complex. With satellites, though we may not have to interact with topography as much, we instead have to consider a lot of other details, like the curvature of the earth, the rotation of the earth, the angle of the sensor or sensors in relation to those things, as well as the sensor orbit path. So needless to say, with respect to translating raw data from a sensor to an image, we need to know a lot of both physics and geometry.

Okay, so with both of these types of restoration, it's important to mention the procedures that we use to convert these raw, unitless relative reflectance values (remember, these are called digital numbers, or DNs) into the true measures of reflective power or radiance that we know of when we look at imagery.

So let's talk a little about the methods we typically use to correct images and why correction is often needed. First, why is it needed?
Well, as we saw from the last slide, raw digital images usually contain significant radiometric and geometric distortions, and these distortions make the raw images unusable, meaning they cannot be directly applied as a map base without subsequent processing. By applying geometric correction procedures, the distortions introduced by these factors are compensated for, so that the corrected image will have the highest practical geometric and radiometric accuracy and integrity, meaning they alter the image to make it better approximate what we see on the ground. At the top of the slide, I've put a list of why we use correction processes, many of which I also mentioned on the previous slide. We apply corrections because of variations in altitude, attitude, and velocity of the sensor platform, the Earth's curvature, the Earth's eastward rotation, atmospheric refraction, and relief displacement.

One way to begin the correction process is to use ground control points, or GCPs. I've also heard them referred to as tie points. GCPs are features of known ground location that can be accurately located in an image.
Numerous GCPs are located both in terms of their two image coordinates, the column and row numbers on the distorted image, and in terms of their ground coordinates. Using these GCPs with what are called transformation equations, we can correct the map.

One important set of methods uses types of transformation equations to correct the image. Resampling is the most common and basically involves generating equations, like the example on the slide, that equate the GCPs on the distorted image with where they are on the corrected image, and then applying those same equations to the remaining pixels to correct the rest of the image. Once the coefficients for these equations are determined, the distorted image coordinates for any map position can be precisely estimated.

Okay, so in the resampling method of correction, the coordinates of each element in the undistorted output matrix are transformed to determine their corresponding location in the original input, or distorted, image matrix.
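To make the idea of a transformation equation concrete, here is a minimal sketch in plain Python. This is illustrative only, not from the lecture slides: it fits a first-order (affine) transformation from exactly three GCPs by solving the resulting 3x3 linear system with Cramer's rule, whereas real correction workflows use many more GCPs, a least-squares fit, and often higher-order polynomials.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def fit_affine(gcps):
    """Fit x = a*col + b*row + c and y = d*col + e*row + f from exactly
    three non-collinear GCPs, each given as ((col, row), (x, y))."""
    (c1, r1), (x1, y1) = gcps[0]
    (c2, r2), (x2, y2) = gcps[1]
    (c3, r3), (x3, y3) = gcps[2]
    A = [[c1, r1, 1], [c2, r2, 1], [c3, r3, 1]]
    D = det3(A)

    def solve(rhs):
        # Cramer's rule: replace one column at a time with the right-hand side.
        out = []
        for j in range(3):
            M = [row[:] for row in A]
            for i in range(3):
                M[i][j] = rhs[i]
            out.append(det3(M) / D)
        return out

    a, b, c = solve([x1, x2, x3])
    d, e, f = solve([y1, y2, y3])
    return a, b, c, d, e, f

def apply_affine(coeffs, col, row):
    """Map an image (col, row) position to ground (x, y) coordinates."""
    a, b, c, d, e, f = coeffs
    return a * col + b * row + c, d * col + e * row + f
```

Once the six coefficients are estimated from the GCPs, the same two equations are applied to every remaining pixel, which is exactly the "apply those same equations to the rest of the image" step described above.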
The intensity value, or digital number, assigned to a cell in the output matrix is determined on the basis of the pixel values that surround its transformed position in the original input matrix. Three types of resampling methods are nearest neighbor, bilinear interpolation, and bicubic interpolation. These are good methods to know and understand for other types of processing and data manipulation in remote sensing, which we'll cover later in this course as well.

In nearest neighbor resampling, the DN of the transformed or corrected pixel is equal to the DN of its closest original or distorted pixel. Advantages of the nearest neighbor method are that it's relatively simple to implement and it does not involve altering the original pixel values. One disadvantage, however, is that the features in the output image may be offset spatially by up to 1/2 pixel, which can cause a disjointed or blocky appearance in the output image.
Bilinear interpolation is where the DN of the transformed or corrected pixel is equal to the distance-weighted average of the four nearest pixels, which gives a smoother image appearance than nearest neighbor but alters the original DN values. And finally, bicubic interpolation, or cubic convolution, evaluates a block of 16 pixels in the original image that surrounds the output pixel to determine the DN of the transformed or corrected pixel. This also leads to images that are smoother in appearance than nearest neighbor, and even a little smoother than bilinear interpolation. And there's an added benefit of making the image slightly sharper than bilinear interpolation.

But again, the original DN values are altered from the original image. So there are trade-offs in which of these methods you decide to use, between nearest neighbor, bilinear interpolation, and bicubic interpolation.
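The first two resampling rules can be sketched in a few lines of plain Python. This is an illustration of the idea, not Earth Engine code, and it ignores image boundaries; a bicubic version would do the same kind of weighting over a 4x4 block of 16 neighbors.

```python
def nearest_neighbor(grid, x, y):
    """DN of the pixel whose center is closest to the fractional sample
    position (x, y); original DN values are never altered."""
    return grid[round(y)][round(x)]

def bilinear(grid, x, y):
    """Distance-weighted average of the four pixels surrounding (x, y);
    smoother than nearest neighbor, but the output DN is a new value."""
    x0, y0 = int(x), int(y)      # upper-left of the surrounding 2x2 block
    fx, fy = x - x0, y - y0      # fractional offsets used as weights
    top = grid[y0][x0] * (1 - fx) + grid[y0][x0 + 1] * fx
    bottom = grid[y0 + 1][x0] * (1 - fx) + grid[y0 + 1][x0 + 1] * fx
    return top * (1 - fy) + bottom * fy
```

Note how `nearest_neighbor` simply copies an existing DN (hence the blocky look), while `bilinear` manufactures a new, smoothed value from the four neighbors.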
And when you're using Google Earth Engine, oftentimes when you are choosing how to display a map, or how to output a map that you're interested in working with in another program, you'll have to choose between nearest neighbor and a couple of other options for resampling, so that's just good to know.

Image enhancement is concerned with the modification of images to make them more suited to the vision capabilities of the human eye. Regardless of the extent of digital intervention, visual analysis invariably plays a very strong role in all aspects of remote sensing. Enhancement is the process of manipulating an image so that the result is more suitable than the original for a specific application. It should also be noted, though, that enhancement techniques are very problem specific. And while the range of image enhancement techniques is broad, here I will concentrate on two types: contrast stretching and composite generation.

Digital sensors have a wide range of output values to accommodate the strongly varying reflectance values that can be found in different environments.
However, in any single environment, it is often the case that only a narrow range of values will occur over most areas. Contrast manipulation procedures are thus essential to most visual analyses.

The figure at the bottom left shows a Landsat TM image displaying the visible red band in gray scale and its related histogram. Note that the values of the image are quite skewed to the left, and the image appears very dark. The right image of that figure shows the same image band after a linear stretch has been applied, along with its associated histogram.

So, by the way, why do we care about histograms in remote sensing? Histograms can provide a lot of information on images even without looking at the images themselves, such as the presence or absence of features and the distribution of those features. Histograms also help evaluate images statistically. So you can look at a histogram, and depending on its shape, you can evaluate it statistically. Shapes can be things like normal, skewed, or bimodal. Those are types of distributions, and I'm showing a number of other ones in the figure at the right.
And within these different histogram distributions, you can also gather statistics based on the shapes of the histograms. And that can tell you a lot about what's going on in the image as well. Histograms are also used in individual image enhancements, like the one that I'm showing at the left. And they're used in image classification, which we'll go over more in a bit here. And they're also used in image segmentation, as well as in helping to match images across space or through time. So when you're looking at a time series of different images, oftentimes the histogram, or the statistics that we evaluate within these histograms, can tell us a lot about what's going on in the landscape through time.
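The linear stretch mentioned above can be sketched in plain Python. This is illustrative only: real software stretches whole bands and commonly clips a small percentile at each tail first, while this version maps the occupied DN range of a list of values onto the full display range.

```python
def linear_stretch(dns, out_min=0, out_max=255):
    """Linearly remap the occupied DN range [min, max] onto the display
    range [out_min, out_max], so a narrow, dark histogram spreads across
    all available gray levels and contrast increases."""
    lo, hi = min(dns), max(dns)
    if hi == lo:
        return [out_min] * len(dns)  # flat image: nothing to stretch
    scale = (out_max - out_min) / (hi - lo)
    return [round(out_min + (dn - lo) * scale) for dn in dns]
```

For example, DNs crowded into the interval 50 to 70 get spread over the full 0 to 255 range, which is exactly why the stretched Landsat band in the figure no longer looks uniformly dark.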
Okay, so here are a few more examples of images and their associated histograms. And as you can see, the shapes of the histograms, again, can kind of tell us what's going on in the image. So a really tight or narrow histogram, like the one second from the left, is going to have very low contrast, and at the low end of the reflectance spectrum it's going to be very dark, as you can see in that image. Something with a normal histogram, like the one at the far left, has a better degree of contrast, depending on how thick the histogram is, or how many reflectance values it covers. The one that's being shown here has a pretty good balance.

You can also note two other shapes of histogram: skewed and bimodal. Those are very common shapes for histograms as well. For the bimodal one, the low reflectance values there, that first peak, are indicative of what you see in the image as basically a whole bunch of dark area, which in this case is water. And then with the skewed image, sometimes a landscape that has a high degree of human disturbance or human development can have an impact on the shape of the histogram. This one is called a skewed histogram, and as you can see, it's got a very pointy peak.
And that peak is not centered in the middle, as with the normal distribution histogram.

Okay, so the second enhancement technique that we want to talk about here is composite generation. And we can talk about composites in a couple of different ways in remote sensing, but for right now, let's stick to generation of composites for visual analysis. For visual analysis, color composites make use of the fullest capabilities of the human eye. Depending upon the graphic systems in use, composite generation ranges from simply selecting the bands to use to more involved procedures of band combinations and associated contrast-stretching techniques.

The figure at the left shows how a three-band composite is constructed from a multi-band image. This is the most basic way that we construct composites, and this is what I showed you last week in Google Earth Engine. When you display a multi-band image on the Google Earth Engine map, and you open up that layer dropdown box and go into the popup associated with that map, you can pick and choose which bands you want to visualize in the map.
And then the figures down at the bottom show several composite examples made with different band combinations from the same set of TM images. So based on what you're interested in looking at, it's a good idea to go through and play around with how you visualize your bands and what order you might visualize them in.

What is an image composite, and why do we make one? Well, we make composites for a variety of reasons, such as the examples I'm showing at the right. Three types of defects caused by sensor failures include stripes or banding, which have the appearance of defective lines and are due to variations in the response of individual detectors, which result in relatively higher or lower values along every sixth line, as is what happened with Landsat's MSS product. Line drops are a second type, which is when a number of adjacent pixels along a line may contain spurious DNs, and is usually caused by data transmission errors.
443 00:20:52,100 --> 00:20:57,100 Or, as a third type, bit errors, also called shot noise or random noise, 444 00:20:58,870 --> 00:21:00,750 which can occur in an image 445 00:21:00,750 --> 00:21:03,060 and which can be spiky in character 446 00:21:03,060 --> 00:21:05,160 in terms of the associated histograms 447 00:21:05,160 --> 00:21:07,810 and which cause images to have a salt and pepper 448 00:21:07,810 --> 00:21:09,383 or snowy appearance. 449 00:21:10,540 --> 00:21:14,430 We also make image composites for atmospheric interferences, 450 00:21:14,430 --> 00:21:18,543 such as clouds or smoke from fires or volcanic eruptions. 451 00:21:19,780 --> 00:21:21,550 Temporal composites, on the other hand, 452 00:21:21,550 --> 00:21:23,580 involve stacking a bunch of images 453 00:21:23,580 --> 00:21:25,770 of the same area through time 454 00:21:25,770 --> 00:21:27,920 and using pixel resampling methods 455 00:21:27,920 --> 00:21:30,350 to build an uncorrupted image. 456 00:21:30,350 --> 00:21:33,090 There are, again, numerous methods that we can use 457 00:21:33,090 --> 00:21:36,280 to build the composite from the stack of images. 458 00:21:36,280 --> 00:21:39,330 Three examples include: first, 459 00:21:39,330 --> 00:21:40,950 we can average all the pixel values 460 00:21:40,950 --> 00:21:42,200 within a certain confidence interval 461 00:21:42,200 --> 00:21:44,720 for a given pixel location through time. 462 00:21:44,720 --> 00:21:46,160 Second, we can take the median 463 00:21:46,160 --> 00:21:48,880 or maybe the mode pixel value. 464 00:21:48,880 --> 00:21:51,880 Or third, we can apply another statistic or sampling method 465 00:21:51,880 --> 00:21:55,380 if, say, all of the pixels in a given pixel location 466 00:21:55,380 --> 00:21:58,780 are clouded for the temporal composite time period, 467 00:21:58,780 --> 00:22:01,323 as can happen over tropical forests sometimes. 
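The median-based temporal compositing just described can be sketched as a toy example in NumPy. The stack values here are hypothetical, with NaN standing in for cloud-masked pixels; in Earth Engine itself the same idea is expressed with image-collection reducers rather than array code.

```python
import numpy as np

# Stack of 4 images of the same 2x2 area through time (hypothetical
# reflectances). NaN marks pixels masked out as cloudy in a given date.
stack = np.array([
    [[0.30, 0.10], [np.nan, 0.50]],
    [[0.32, 0.12], [0.44, np.nan]],
    [[0.31, np.nan], [0.46, 0.52]],
    [[0.90, 0.11], [0.45, 0.51]],   # 0.90: a leftover cloud-bright value
])

# Median through time at each pixel location, skipping masked values.
# The median resists the corrupted 0.90 value better than a mean would.
composite = np.nanmedian(stack, axis=0)
```

Every pixel of the composite comes out cloud-free even though no single input image was.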
468 00:22:02,880 --> 00:22:05,940 The way that we build composites in Google Earth Engine 469 00:22:05,940 --> 00:22:08,660 is typically to create an image collection 470 00:22:08,660 --> 00:22:11,370 by filtering a satellite product collection 471 00:22:11,370 --> 00:22:12,880 to the desired time range. 472 00:22:12,880 --> 00:22:16,000 We then write a function to build the image composite. 473 00:22:16,000 --> 00:22:18,070 We will explore how to do this 474 00:22:18,070 --> 00:22:20,123 in Google Earth Engine more next week. 475 00:22:22,290 --> 00:22:25,360 Let's talk now about image transformation. 476 00:22:25,360 --> 00:22:27,930 Digital image processing offers a limitless range 477 00:22:27,930 --> 00:22:30,940 of possible transformations on remotely sensed data. 478 00:22:30,940 --> 00:22:32,950 Some of these include things like 479 00:22:32,950 --> 00:22:35,730 color-space transformations, texture calculations, 480 00:22:35,730 --> 00:22:36,730 thermal transformations, 481 00:22:36,730 --> 00:22:39,030 and a wide variety of other ad hoc transformations, 482 00:22:39,030 --> 00:22:41,180 such as image ratioing, 483 00:22:41,180 --> 00:22:44,850 that can all be effectively accomplished on images. 484 00:22:44,850 --> 00:22:46,900 Because of their spectral significance 485 00:22:46,900 --> 00:22:49,970 in environmental monitoring and applications, however, 486 00:22:49,970 --> 00:22:51,700 for the purposes of this course, 487 00:22:51,700 --> 00:22:56,370 the two that we'll come to know and focus on most 488 00:22:56,370 --> 00:23:00,063 are vegetation indices and principal components analysis. 489 00:23:02,330 --> 00:23:05,490 There are a wide variety of vegetation indices 490 00:23:05,490 --> 00:23:07,210 that have been developed to help 491 00:23:07,210 --> 00:23:09,380 in the monitoring of vegetation. 
492 00:23:09,380 --> 00:23:12,190 Most are based on the very different ways 493 00:23:12,190 --> 00:23:15,200 that vegetation interacts with electromagnetic energy 494 00:23:15,200 --> 00:23:17,963 in the red and near-infrared wavelengths. 495 00:23:18,900 --> 00:23:20,650 In the figure at the top left, 496 00:23:20,650 --> 00:23:23,090 we see a generalized spectral response pattern 497 00:23:23,090 --> 00:23:25,550 for green broad-leaf vegetation. 498 00:23:25,550 --> 00:23:28,600 From the graph, reflectance in the red region 499 00:23:28,600 --> 00:23:31,750 of about 0.6 to 0.7 micrometers is low 500 00:23:31,750 --> 00:23:34,690 because of absorption by leaf pigments, 501 00:23:34,690 --> 00:23:36,980 principally chlorophyll. 502 00:23:36,980 --> 00:23:37,970 The near-infrared region, 503 00:23:37,970 --> 00:23:41,470 from about 0.8 to 0.9 micrometers, however, 504 00:23:41,470 --> 00:23:43,570 characteristically shows high reflectance 505 00:23:43,570 --> 00:23:46,880 because of scattering by the cell structure of the leaves. 506 00:23:46,880 --> 00:23:48,930 A very simple vegetation index 507 00:23:48,930 --> 00:23:51,750 can thus be achieved by comparing the measure 508 00:23:51,750 --> 00:23:55,530 of infrared reflectance to that of red reflectance. 509 00:23:55,530 --> 00:23:56,740 Although a number of variants 510 00:23:56,740 --> 00:23:58,540 of this basic logic have been developed, 511 00:23:58,540 --> 00:24:01,080 the one which has received most attention 512 00:24:01,080 --> 00:24:05,350 is the normalized difference vegetation index, or NDVI. 513 00:24:05,350 --> 00:24:08,720 It's calculated by subtracting the red 514 00:24:08,720 --> 00:24:09,700 from the near infrared 515 00:24:09,700 --> 00:24:13,920 and dividing that difference by the near infrared plus the red band. 516 00:24:13,920 --> 00:24:15,280 So the image on the bottom left 517 00:24:15,280 --> 00:24:19,940 displays a Landsat 8 surface reflectance image on the left panel. 
518 00:24:19,940 --> 00:24:20,773 And on the right, 519 00:24:20,773 --> 00:24:23,030 the surface reflectance-derived 520 00:24:24,130 --> 00:24:28,490 normalized difference vegetation index for that same image. 521 00:24:28,490 --> 00:24:32,190 As you can see, different vegetation areas 522 00:24:32,190 --> 00:24:33,550 and different vegetation types 523 00:24:33,550 --> 00:24:35,290 have different spectral signatures 524 00:24:35,290 --> 00:24:37,500 that are being shown in green, 525 00:24:37,500 --> 00:24:41,880 with high NDVI values being darker green, 526 00:24:41,880 --> 00:24:46,690 and low NDVI values being blue or whitish. 527 00:24:49,090 --> 00:24:51,210 On the right, I'm showing another type 528 00:24:51,210 --> 00:24:53,240 of image transformation technique, 529 00:24:53,240 --> 00:24:55,970 which uses a spatial principal components analysis 530 00:24:55,970 --> 00:24:58,520 to look at the spatial variation of pixel values 531 00:24:58,520 --> 00:25:00,793 across the North American Tundra Biome. 532 00:25:03,040 --> 00:25:07,930 This map shows an RGB image of principal components one, two, and three, 533 00:25:07,930 --> 00:25:10,970 where red shows areas where the majority of variance 534 00:25:10,970 --> 00:25:13,490 is explained by the first component. 535 00:25:13,490 --> 00:25:17,330 Green is the second component, and blue is the third. 536 00:25:17,330 --> 00:25:19,390 Pinks, purples, and browns are areas 537 00:25:19,390 --> 00:25:22,653 where more than one component explains the variance. 538 00:25:24,530 --> 00:25:27,100 And this image that I'm showing 539 00:25:28,379 --> 00:25:30,560 was actually calculated by me 540 00:25:30,560 --> 00:25:34,773 using a MODIS composite of images, 541 00:25:36,670 --> 00:25:39,783 and it was done in Google Earth Engine, also. 542 00:25:41,510 --> 00:25:43,640 Finally, I'd like to spend a few minutes talking 543 00:25:43,640 --> 00:25:45,910 about image classification. 
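Before turning to classification, the NDVI calculation described above, (NIR - red) / (NIR + red), is worth sketching once with numbers. The surface-reflectance values below are hypothetical, chosen to contrast a vegetated pixel with a bare one.

```python
import numpy as np

# Hypothetical surface-reflectance values for a 2x2 area.
red = np.array([[0.05, 0.40], [0.08, 0.30]])   # red band (~0.6-0.7 um)
nir = np.array([[0.45, 0.42], [0.50, 0.31]])   # near-infrared band (~0.8-0.9 um)

# NDVI = (NIR - red) / (NIR + red); it ranges from -1 to 1,
# with dense green vegetation pushing values toward 1.
ndvi = (nir - red) / (nir + red)
```

The top-left pixel (low red, high NIR) comes out near 0.8, a strong vegetation signal, while the top-right pixel (red and NIR nearly equal) comes out near zero, as bare ground typically does.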
544 00:25:45,910 --> 00:25:47,470 Image classification refers 545 00:25:47,470 --> 00:25:49,460 to the computer-assisted interpretation 546 00:25:49,460 --> 00:25:51,620 of remotely sensed images. 547 00:25:51,620 --> 00:25:53,000 Though we will go over methods 548 00:25:53,000 --> 00:25:54,770 for accomplishing image classifications 549 00:25:54,770 --> 00:25:56,590 using Google Earth Engine in more detail 550 00:25:56,590 --> 00:25:57,800 in the coming weeks, 551 00:25:57,800 --> 00:25:59,840 today, I will provide you with a brief overview 552 00:25:59,840 --> 00:26:02,610 of different types of classification methods. 553 00:26:02,610 --> 00:26:05,920 Although some procedures are able to incorporate information 554 00:26:05,920 --> 00:26:09,200 about such characteristics as texture and context, 555 00:26:09,200 --> 00:26:12,010 the majority of image classification is based solely 556 00:26:12,010 --> 00:26:14,550 on the detection of the spectral signatures 557 00:26:14,550 --> 00:26:16,490 of land cover classes. 558 00:26:16,490 --> 00:26:18,680 The success with which this can be done 559 00:26:18,680 --> 00:26:20,780 will depend on two things. 560 00:26:20,780 --> 00:26:23,240 First, the presence of distinctive signatures 561 00:26:23,240 --> 00:26:25,540 for the land cover classes of interest 562 00:26:25,540 --> 00:26:27,730 in the band set being used, 563 00:26:27,730 --> 00:26:30,630 and second, the ability to reliably distinguish 564 00:26:30,630 --> 00:26:33,380 these signatures from other spectral response patterns 565 00:26:33,380 --> 00:26:34,830 that may be present, as well. 566 00:26:37,620 --> 00:26:40,790 There are two general approaches to image classification, 567 00:26:40,790 --> 00:26:42,940 supervised and unsupervised. 568 00:26:42,940 --> 00:26:46,240 They differ in how the classification is performed. 
569 00:26:46,240 --> 00:26:48,630 In the case of supervised classification, 570 00:26:48,630 --> 00:26:51,780 Google Earth Engine, or the program being used, 571 00:26:51,780 --> 00:26:54,270 delineates specific land cover types based 572 00:26:54,270 --> 00:26:56,890 on statistical characterization data drawn 573 00:26:56,890 --> 00:26:59,330 from known examples in the image. 574 00:26:59,330 --> 00:27:01,900 With unsupervised classification, however, 575 00:27:01,900 --> 00:27:04,540 instead, clustering algorithms are used 576 00:27:04,540 --> 00:27:07,260 to uncover the commonly occurring land cover types 577 00:27:07,260 --> 00:27:09,560 with the analyst providing interpretations 578 00:27:09,560 --> 00:27:11,853 of those land cover types at a later stage. 579 00:27:13,290 --> 00:27:15,570 For a supervised classification, 580 00:27:15,570 --> 00:27:18,390 like what I'm showing here on the left panel, 581 00:27:18,390 --> 00:27:20,410 the first step is to identify examples 582 00:27:20,410 --> 00:27:21,820 of the information classes 583 00:27:21,820 --> 00:27:24,940 or the land cover types of interest in the image. 584 00:27:24,940 --> 00:27:27,060 These are called training sites. 585 00:27:27,060 --> 00:27:29,660 Embedded applications in Google Earth Engine 586 00:27:29,660 --> 00:27:32,090 will then develop a statistical characterization 587 00:27:32,090 --> 00:27:35,113 of the reflectances for each information class. 588 00:27:35,980 --> 00:27:38,760 This stage is often called signature analysis 589 00:27:38,760 --> 00:27:40,860 and may involve developing a characterization 590 00:27:40,860 --> 00:27:42,090 as simple as the mean 591 00:27:42,090 --> 00:27:45,330 or the range of reflectances on each band, 592 00:27:45,330 --> 00:27:48,810 or as complex as detailed analyses of the mean variances 593 00:27:48,810 --> 00:27:51,850 and covariances over all the bands. 
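The signature-analysis step just described, and the minimum-distance-to-means classifier named later in this lecture, can be sketched together as a toy example. This is not the Earth Engine implementation; the two-band reflectances and class names below are hypothetical.

```python
import numpy as np

# Hypothetical training-site reflectances (rows = pixels, columns = two
# bands, e.g. red and NIR) for three information classes.
training = {
    "water":  np.array([[0.02, 0.01], [0.03, 0.02], [0.02, 0.02]]),
    "forest": np.array([[0.04, 0.45], [0.05, 0.50], [0.06, 0.48]]),
    "bare":   np.array([[0.30, 0.35], [0.28, 0.33], [0.32, 0.36]]),
}

# Signature analysis at its simplest: the mean reflectance of each
# class on each band.
signatures = {name: sites.mean(axis=0) for name, sites in training.items()}

def classify(pixel):
    # Minimum-distance-to-means: assign the pixel to the class whose
    # mean signature it resembles most (smallest Euclidean distance).
    return min(signatures, key=lambda c: np.linalg.norm(pixel - signatures[c]))

label = classify(np.array([0.05, 0.47]))   # lands nearest the forest signature
```

A richer characterization would replace the per-band means with full covariance statistics, which is the direction maximum likelihood takes.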
594 00:27:51,850 --> 00:27:53,620 Once a statistical characterization 595 00:27:53,620 --> 00:27:56,530 has been achieved for each information class, 596 00:27:56,530 --> 00:27:58,080 the image is then classified 597 00:27:59,259 --> 00:28:02,700 by examining the reflectances for each pixel 598 00:28:02,700 --> 00:28:05,140 and making a decision about which of the signatures 599 00:28:05,140 --> 00:28:06,323 it resembles most. 600 00:28:07,290 --> 00:28:09,420 There are several techniques 601 00:28:09,420 --> 00:28:11,963 for making these decisions, called classifiers, 602 00:28:13,100 --> 00:28:16,683 which have three types, hard, soft, and hyperspectral. 603 00:28:17,990 --> 00:28:19,300 With hard classifiers, 604 00:28:19,300 --> 00:28:21,440 the distinguishing characteristic 605 00:28:21,440 --> 00:28:24,170 is that they all make a definitive decision 606 00:28:24,170 --> 00:28:28,030 about the land cover class to which any pixel belongs, 607 00:28:28,030 --> 00:28:32,760 using a classifier method like parallelepiped, 608 00:28:32,760 --> 00:28:35,920 minimum distance to means, or maximum likelihood, 609 00:28:35,920 --> 00:28:38,043 the latter of which is the most popular. 610 00:28:39,460 --> 00:28:41,660 Contrary to hard classifiers, 611 00:28:41,660 --> 00:28:44,130 soft classifiers do not make a definitive decision 612 00:28:44,130 --> 00:28:47,380 about the land cover class to which each pixel belongs. 613 00:28:47,380 --> 00:28:51,430 Rather, they develop statements of the degree 614 00:28:51,430 --> 00:28:52,930 to which each pixel belongs 615 00:28:52,930 --> 00:28:55,970 to each of the land cover classes being considered. 616 00:28:55,970 --> 00:28:59,600 Thus, for example, a soft classifier might indicate 617 00:28:59,600 --> 00:29:04,600 that a pixel has a 72% probability of being forest, 618 00:29:05,750 --> 00:29:08,050 a 24% probability of being pasture, 619 00:29:08,050 --> 00:29:11,100 and a 4% probability of being bare ground. 
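That forest/pasture/bare-ground example can be written out directly, as a small sketch of the difference between a soft classifier's output and the hard decision derived from it (the numbers are just the example probabilities above):

```python
import numpy as np

classes = ["forest", "pasture", "bare ground"]

# Soft-classifier output for the example pixel: a degree of membership
# in each class rather than a single committed label.
soft = np.array([0.72, 0.24, 0.04])

# Hardening the decision: a hard classifier commits to the most
# probable class.
hard = classes[int(np.argmax(soft))]

# Subpixel reading: treat the same numbers as relative proportions of
# each cover type inside the pixel.
proportions = dict(zip(classes, soft))
```

The argmax step is exactly the uncertainty-discarding move that distinguishes a hard classifier from a soft one.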
620 00:29:11,100 --> 00:29:13,770 A hard classifier would resolve this uncertainty 621 00:29:13,770 --> 00:29:16,840 by concluding that the pixel was forest. 622 00:29:16,840 --> 00:29:19,410 However, a soft classifier makes this uncertainty 623 00:29:19,410 --> 00:29:23,090 explicitly available, which can be useful for a variety of reasons. 624 00:29:23,090 --> 00:29:25,460 For example, the analyst might conclude 625 00:29:26,380 --> 00:29:29,170 that the uncertainty arises because the pixel contains 626 00:29:29,170 --> 00:29:30,590 more than one cover type 627 00:29:30,590 --> 00:29:34,470 and could use the probabilities as indications 628 00:29:34,470 --> 00:29:37,060 of the relative proportion of each. 629 00:29:37,060 --> 00:29:40,000 This is known as subpixel classification. 630 00:29:40,000 --> 00:29:42,050 Alternatively, the analyst may conclude 631 00:29:42,050 --> 00:29:43,310 that the uncertainty arises 632 00:29:43,310 --> 00:29:45,780 because of unrepresentative training site data, 633 00:29:45,780 --> 00:29:48,510 and therefore may wish to combine these probabilities 634 00:29:48,510 --> 00:29:50,690 with other evidence before hardening the decision 635 00:29:50,690 --> 00:29:51,993 to a final conclusion. 636 00:29:53,250 --> 00:29:56,560 And finally, there are hyperspectral classifiers. 637 00:29:56,560 --> 00:29:58,810 All of the classifiers mentioned above operate 638 00:29:58,810 --> 00:30:02,320 on multi-spectral imagery, as we know, 639 00:30:02,320 --> 00:30:04,480 images where several spectral bands 640 00:30:04,480 --> 00:30:06,920 have been captured simultaneously 641 00:30:06,920 --> 00:30:09,633 as independently accessible image components. 642 00:30:10,560 --> 00:30:13,510 And extending this logic to many bands 643 00:30:13,510 --> 00:30:18,100 produces what has come to be known as hyperspectral imagery. 
644 00:30:18,100 --> 00:30:19,690 Although there is essentially no difference 645 00:30:19,690 --> 00:30:22,510 between hyperspectral and multi-spectral imagery, 646 00:30:22,510 --> 00:30:24,900 i.e., they differ only in degree, 647 00:30:24,900 --> 00:30:28,370 the volume of data and high spectral resolution 648 00:30:28,370 --> 00:30:32,050 of hyperspectral images does lead to differences in the way 649 00:30:32,050 --> 00:30:36,553 that we must go about handling classification methodologies. 650 00:30:38,130 --> 00:30:41,440 So on the other hand, and shown in the panel on the right, 651 00:30:41,440 --> 00:30:43,860 in contrast to supervised classification, 652 00:30:43,860 --> 00:30:46,200 where we tell the system about the character, 653 00:30:46,200 --> 00:30:50,240 i.e., the spectral signature of the information classes 654 00:30:50,240 --> 00:30:51,650 we are looking for, 655 00:30:51,650 --> 00:30:55,070 unsupervised classification requires no advance information 656 00:30:55,070 --> 00:30:57,310 about the classes of interest. 657 00:30:57,310 --> 00:31:00,560 It examines the data and breaks it into the most prevalent, 658 00:31:00,560 --> 00:31:04,580 natural spectral groupings or clusters present in the data. 659 00:31:04,580 --> 00:31:07,160 The analyst then identifies these clusters 660 00:31:07,160 --> 00:31:09,860 as land cover classes through a combination 661 00:31:09,860 --> 00:31:13,173 of familiarity with the region and ground truth visits. 662 00:31:14,020 --> 00:31:15,190 The mathematical logic 663 00:31:15,190 --> 00:31:17,780 by which unsupervised classification works 664 00:31:17,780 --> 00:31:19,760 is known as cluster analysis. 665 00:31:19,760 --> 00:31:20,830 And in Google Earth Engine, 666 00:31:20,830 --> 00:31:23,400 there are a number of different clustering methods 667 00:31:23,400 --> 00:31:25,360 that can be tried to investigate 668 00:31:25,360 --> 00:31:28,840 which gives the best outcome for a given site. 
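Cluster analysis of the kind just described can be sketched as a toy k-means on hypothetical two-band pixel values (pure NumPy here, not one of Earth Engine's built-in clusterers): repeatedly assign each pixel to its nearest cluster center, then recompute the centers.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical pixels (2 bands) drawn from two spectral groupings.
pixels = np.vstack([
    rng.normal(loc=[0.05, 0.45], scale=0.02, size=(50, 2)),  # vegetation-like
    rng.normal(loc=[0.30, 0.35], scale=0.02, size=(50, 2)),  # soil-like
])

# A few iterations of k-means with k = 2, initialized from two pixels.
centers = pixels[rng.choice(len(pixels), size=2, replace=False)]
for _ in range(10):
    # Assign every pixel to its nearest center.
    dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # Recompute each center as the mean of its assigned pixels
    # (keeping the old center if a cluster ever comes up empty).
    centers = np.array([pixels[labels == k].mean(axis=0)
                        if np.any(labels == k) else centers[k]
                        for k in range(2)])
```

The resulting clusters are spectral classes only; it is still up to the analyst to label them as vegetation, soil, or anything else.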
669 00:31:28,840 --> 00:31:30,320 It is important to recognize, however, 670 00:31:30,320 --> 00:31:34,380 that the clusters unsupervised classification produces 671 00:31:34,380 --> 00:31:37,920 are not information classes, but spectral classes, 672 00:31:37,920 --> 00:31:40,690 meaning they group together features 673 00:31:41,600 --> 00:31:43,733 with similar reflectance patterns. 674 00:31:46,270 --> 00:31:48,640 It's usually the case that the analyst needs 675 00:31:48,640 --> 00:31:52,590 to reclassify spectral classes into information classes. 676 00:31:52,590 --> 00:31:56,420 For example, the system might identify classes differently 677 00:31:56,420 --> 00:31:58,053 for asphalt and cement. 678 00:31:59,050 --> 00:32:02,020 But the analyst might want to later group those together, 679 00:32:02,020 --> 00:32:04,643 creating an information class called pavement. 680 00:32:08,530 --> 00:32:10,460 So finally, I'd like to spend a minute here 681 00:32:10,460 --> 00:32:14,360 mentioning masking because masking pixels 682 00:32:14,360 --> 00:32:17,120 is also an important component of processing 683 00:32:17,120 --> 00:32:20,240 or pre-processing prior to interpretation 684 00:32:20,240 --> 00:32:22,840 of an image in remote sensing. 685 00:32:22,840 --> 00:32:27,330 So masking pixels basically renders those pixels transparent 686 00:32:27,330 --> 00:32:29,960 and excludes them from an analysis. 687 00:32:29,960 --> 00:32:32,960 A lot of the images that we will use in Google Earth Engine 688 00:32:32,960 --> 00:32:36,110 have already had things like clouds 689 00:32:36,110 --> 00:32:38,240 masked out of them for us. 690 00:32:38,240 --> 00:32:40,920 However, if you were working with raw imagery, 691 00:32:40,920 --> 00:32:44,970 you would want to apply a cloud mask to that image 692 00:32:44,970 --> 00:32:47,010 to remove the clouds. 
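Masking as just described, rendering flagged pixels transparent so they drop out of the analysis, can be sketched with a hypothetical boolean cloud mask (in practice the mask usually comes from a QA band shipped with the product):

```python
import numpy as np

# Hypothetical NIR reflectances for a 3x3 area, plus a boolean mask
# marking pixels flagged as cloudy.
nir = np.array([[0.45, 0.47, 0.90],
                [0.44, 0.88, 0.91],
                [0.46, 0.43, 0.45]])
cloudy = np.array([[False, False, True],
                   [False, True, True],
                   [False, False, False]])

# Masking replaces the flagged pixels with NaN, so that NaN-aware
# statistics exclude them from any later per-pixel analysis.
masked = np.where(cloudy, np.nan, nir)
clear_mean = np.nanmean(masked)
```

The same pattern applies to the shadow and water masks discussed next; only the rule that builds the boolean mask changes.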
693 00:32:47,010 --> 00:32:49,860 Other things you might wanna remove would be shadows 694 00:32:49,860 --> 00:32:54,350 from things like mountains or buildings 695 00:32:54,350 --> 00:32:56,450 or other topographical features. 696 00:32:56,450 --> 00:33:01,450 And then also, for terrestrial image analysis, 697 00:33:01,640 --> 00:33:03,900 we typically wanna remove water. 698 00:33:03,900 --> 00:33:07,470 And that water can be in the form 699 00:33:07,470 --> 00:33:10,340 of rivers and lakes and oceans. 700 00:33:10,340 --> 00:33:12,810 It can also be in the form of snow. 701 00:33:12,810 --> 00:33:15,410 But typically, if we are trying to do some sort 702 00:33:15,410 --> 00:33:16,910 of spectral analysis, 703 00:33:16,910 --> 00:33:21,510 it's important to remove those pixels from our analysis. 704 00:33:21,510 --> 00:33:23,290 And the way that we do that is either 705 00:33:23,290 --> 00:33:25,390 by using an existing mask, 706 00:33:25,390 --> 00:33:28,010 or by creating a classified image 707 00:33:28,010 --> 00:33:31,140 where we classify water 708 00:33:32,550 --> 00:33:35,760 and then remove it. 709 00:33:35,760 --> 00:33:37,630 So we'll show you how to do all that 710 00:33:37,630 --> 00:33:39,690 in Google Earth Engine in the future. 711 00:33:39,690 --> 00:33:43,830 But because I include masking in the list of things 712 00:33:43,830 --> 00:33:46,720 that we do to pre-process or process an image 713 00:33:47,580 --> 00:33:52,580 prior to interpretation, I wanted to mention it here today. 714 00:33:56,210 --> 00:33:58,350 Okay, so we went over a lot today. 715 00:33:58,350 --> 00:34:00,120 We talked a lot about image processing, 716 00:34:00,120 --> 00:34:01,950 and four areas that we focused on 717 00:34:01,950 --> 00:34:04,860 were restoration, enhancement, 718 00:34:04,860 --> 00:34:06,770 classification, and transformation. 
719 00:34:06,770 --> 00:34:08,880 And we talked about subcategories, 720 00:34:08,880 --> 00:34:11,550 including radiometric versus geometric, 721 00:34:11,550 --> 00:34:14,170 contrast stretching, composite generation, 722 00:34:14,170 --> 00:34:16,590 supervised versus unsupervised, 723 00:34:16,590 --> 00:34:19,040 and then also vegetation indices, 724 00:34:19,040 --> 00:34:21,560 principal components analysis, and masking. 725 00:34:21,560 --> 00:34:24,990 And though some of these things are accomplished for us 726 00:34:24,990 --> 00:34:26,770 when we're using Google Earth Engine, 727 00:34:26,770 --> 00:34:29,380 I think it's important to know 728 00:34:29,380 --> 00:34:31,270 what is going on behind the scenes 729 00:34:31,270 --> 00:34:34,260 before we get to that processed image 730 00:34:34,260 --> 00:34:36,493 to use for our interpretation. 731 00:34:39,123 --> 00:34:41,890 Some of these steps we do ourselves. 732 00:34:41,890 --> 00:34:44,940 And remember that with image enhancement, a lot of times, 733 00:34:44,940 --> 00:34:49,520 the focus is on rendering an image 734 00:34:51,300 --> 00:34:56,300 more visually interpretable for analysis, 735 00:34:59,790 --> 00:35:01,250 as opposed to some of the other techniques, 736 00:35:01,250 --> 00:35:04,600 which involve using 737 00:35:04,600 --> 00:35:07,180 different mathematical algorithms 738 00:35:07,180 --> 00:35:11,790 to actually transform the spectral signatures 739 00:35:11,790 --> 00:35:13,023 of each pixel. 
740 00:35:14,120 --> 00:35:16,340 And I put this schematic here at the right 741 00:35:16,340 --> 00:35:19,370 just as an example to show you 742 00:35:19,370 --> 00:35:22,190 that there are a lot of different processes 743 00:35:23,960 --> 00:35:27,190 behind any sort of model creation 744 00:35:27,190 --> 00:35:32,190 for an image analysis or an interpretation analysis 745 00:35:33,130 --> 00:35:34,750 in remote sensing. 746 00:35:34,750 --> 00:35:39,750 This was just a figure from a scientific article 747 00:35:40,520 --> 00:35:42,940 about using object-based 748 00:35:42,940 --> 00:35:47,592 and pixel-based classification approaches 749 00:35:47,592 --> 00:35:52,592 to create wetland maps. 750 00:35:52,870 --> 00:35:56,450 So as you can see, a lot of different steps and processes 751 00:35:56,450 --> 00:36:01,450 go into creating these maps and doing this analysis. 752 00:36:01,730 --> 00:36:05,720 And some of the things that we have listed here at the left 753 00:36:05,720 --> 00:36:10,693 are involved in the process stream over there at the right. 754 00:36:11,880 --> 00:36:14,760 Okay, so in terms of the readings this week 755 00:36:14,760 --> 00:36:17,700 and how they apply to what we've talked about today, 756 00:36:17,700 --> 00:36:19,480 you're gonna be doing two readings. 757 00:36:19,480 --> 00:36:21,620 And within those readings, 758 00:36:21,620 --> 00:36:23,470 there are some tutorials to follow 759 00:36:23,470 --> 00:36:24,940 that are going to help you do things 760 00:36:24,940 --> 00:36:28,990 like explore images, filter an image collection, 761 00:36:28,990 --> 00:36:32,190 explore temporal features, 762 00:36:32,190 --> 00:36:35,270 and examine pixel quality and image quality, 763 00:36:35,270 --> 00:36:37,380 all using metadata. 
764 00:36:37,380 --> 00:36:41,080 And so please let us know if you have any questions, 765 00:36:41,080 --> 00:36:45,030 and I look forward to seeing you all virtually next week. 766 00:36:45,030 --> 00:36:45,863 Take care.