Tuesday, April 29, 2014

Remote Sensing Lab 7: Photogrammetry

Goal

The goal of Lab 7 is to develop skills for working with aerial photographs and satellite images. This lab focuses on the mathematics involved with these methods. We will look at how to find scales, perimeters, and measurements of certain features using both printed photographs and satellite images in Erdas Imagine.

Methods
  
Part 1
In part 1 of the lab we are given multiple images and asked to find distances, scales, or elevations from them. The first image was of a section of an Eau Claire highway; this stretch is 8,822.47 feet long in the real world and measures 3.05 inches on the image. After converting the ground distance to inches (8,822.47 ft × 12 = 105,869.6 in) and dividing by the image distance, I found the scale to be roughly 1:34,700.

For the second image of part one we were given the focal length of the camera (152 mm), the altitude of the plane (20,000 ft), and the elevation of Eau Claire County (796 ft). From these values we had to find the scale of the photo. Using the formula Scale = focal length / (altitude − ground elevation), I found the scale to be approximately 1:38,500.
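Both scale calculations come down to a single division, as long as everything is first put into the same units. A quick check in Python, using the numbers from this lab:

```python
# Method 1: scale from a measured photo distance and a known ground distance.
photo_in = 3.05             # distance measured on the photo, inches
ground_ft = 8822.47         # corresponding real-world distance, feet
ground_in = ground_ft * 12  # convert to the same units as the photo
scale_1 = ground_in / photo_in
print(f"Method 1 scale: 1:{scale_1:,.0f}")  # about 1:34,700

# Method 2: scale = focal length / (flying height - terrain elevation).
focal_mm = 152.0            # camera focal length, mm
flying_ft = 20000.0         # altitude of the plane, feet
terrain_ft = 796.0          # elevation of Eau Claire County, feet
height_mm = (flying_ft - terrain_ft) * 304.8  # feet -> mm
scale_2 = height_mm / focal_mm
print(f"Method 2 scale: 1:{scale_2:,.0f}")  # about 1:38,500
```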

For the third image we used Erdas Imagine to digitize points around a lagoon and determine its perimeter and area. To access this tool we opened the Measure tool and selected the polygon option, then placed points around the lagoon until the outline was closed. We found that the lagoon covers 37.7382 hectares (93.25 acres) and has a perimeter of 4,122.28 meters, or just over 2.5 miles.

The last image for this section was a zoomed-in aerial image of Eau Claire. For this image we had to find the relief displacement of the smokestack on the University of Wisconsin-Eau Claire upper campus, using the equation d = (h × r) / H, where d = relief displacement, h = real-world height of the object, r = radial distance from the principal point to the top of the displaced object, and H = height of the camera above the local datum. Since we did not know the actual height of the object, we rearranged the formula to h = (d × H) / r. With d = 0.5 in, H = 3,209 ft, and r = 11.4 in, h = (0.5 in × 3,209 ft) / 11.4 in, which works out to 140.75 feet.
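The rearranged relief-displacement formula can be checked the same way, with the values from this lab:

```python
# Relief displacement: d = (h * r) / H, rearranged for the object height h.
d = 0.5     # relief displacement measured on the photo, inches
H = 3209.0  # height of the camera above the local datum, feet
r = 11.4    # radial distance from principal point to the object's top, inches
h = (d * H) / r  # the inches cancel, so h comes out in feet
print(f"Smokestack height: {h:.2f} ft")  # 140.75 ft
```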

Part 2
For part 2 we created an anaglyph image that gives the terrain a 3D appearance, using two images of Eau Claire: ec_city.img and ec_dem2.img, a digital elevation model of the city. We used the Terrain > Anaglyph tool, entering the DEM image under Input DEM and the city image as the other input. After saving the output to our own folders, the elevation changes become visible when viewed through 3D glasses (the red and blue lens kind).
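Outside of Erdas, the anaglyph idea can be sketched by shifting each pixel horizontally in proportion to its elevation and placing the shifted view in the red channel. This is a simplified illustration with synthetic data, not the actual Terrain > Anaglyph algorithm:

```python
import numpy as np

def make_anaglyph(image, dem, max_shift=5):
    """Crude red/cyan anaglyph: the red channel samples the image at a
    horizontal offset proportional to elevation (a fake parallax)."""
    rows, cols = image.shape
    span = dem.max() - dem.min()
    shift = ((dem - dem.min()) / (span if span else 1) * max_shift).astype(int)
    red = np.zeros_like(image)
    for i in range(rows):
        for j in range(cols):
            src = j + shift[i, j]  # sample from the parallax-shifted column
            if src < cols:
                red[i, j] = image[i, src]
    # Red channel = shifted view; green and blue = original (cyan view).
    return np.dstack([red, image, image])

# Synthetic stand-ins for ec_city.img and ec_dem2.img: random texture
# plus a DEM with a hill in the middle.
rng = np.random.default_rng(0)
img = rng.integers(0, 255, (100, 100)).astype(np.uint8)
y, x = np.mgrid[0:100, 0:100]
dem = np.exp(-((x - 50.0) ** 2 + (y - 50.0) ** 2) / 500.0) * 100.0
anaglyph = make_anaglyph(img, dem)
print(anaglyph.shape)  # (100, 100, 3)
```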

Zoomed-in portion of the 3D-looking image
Part 3
This final part of the lab is broken into several sections. Overall, we will take two photos and use orthorectification to correct for relief displacement using elevation data.

The first section is to create a new project using SPOT satellite images of Palm Springs, California. The next step is to create a new Sat_Ortho block file in our output folders. When going through the settings, make sure to pick UTM as the projection type, Clarke 1866 as the spheroid name, NAD27 (CONUS) as the datum name, and UTM Zone 11, and lastly select North.

The second section is to add imagery to the block and define the sensor model. Here we add frames such as spot_pan.img. After bringing in the frame, click Show and Edit Frame Properties; this changes the Int. tab to green.

The third section is to activate the point measurement tool and collect ground control points. We bring in another image to use as a reference and create nine ground control points by selecting the same features in both images. The next two points use a different reference image. After adding all the ground control points, we reset the vertical reference source to palm_springs_dem.img, which adds a vertical (Z) reference to all of our points.
We then repeat the same updating on the other image. Because not all the points fall on that image, only a few ground control points need to be collected.

The next section covers automatic tie point collection, triangulation, and ortho resampling. Here we use the Automatic Tie Point Generation Properties icon. After clicking the icon, make sure the All Available button is activated, as well as the Exterior button. Then select the Distribution tab and change the intended number of points to 40. Lastly, make sure the ground point type and standard deviation defaults are all changed to 10.

Lastly, we perform the ortho resampling, which combines the two images using all of the ground control points and tie points. After this long process the images are finally ready to be viewed, and the result is an almost seamless combination.

Wednesday, April 16, 2014

Remote Sensing Lab 6: Geometric Correction

Goal

The goal of this lab was to introduce geometric correction of satellite images. The lab focuses on the two main ways to correct a satellite image, which are introduced in the Method section of this blog post.

Method

In the first part of the lab we work with two Chicago images. One is a topographic map of Chicago and the surrounding area, while the other is a remotely sensed image of a smaller portion of that area. The two images are opened in separate viewers in Erdas Imagine 2013. The next step is to select the Multispectral tab and click the Control Points tool. In this lab we use the Polynomial geometric model and leave all other options at their defaults. After working through the next few pop-up boxes, we select the reference image, which will be Chicago_2000.img. Since we are using a 1st-order polynomial, at least three ground control points are needed before a solution is possible; after that, the fourth point is placed automatically. After placing the ground control points, we move them around to minimize the root mean square (RMS) error. Ideally the RMS error should be less than 2.0. Once this is done with all four ground control points, we hit the Windows-looking logo button to finalize the image.
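The 1st-order polynomial behind this tool is an affine transformation with six coefficients, which is why at least three GCPs are needed for a solution. A sketch with NumPy, using made-up GCP coordinates for illustration:

```python
import numpy as np

# Hypothetical ground control points: (x, y) in the uncorrected image
# and the matching (x, y) in the reference map.
src = np.array([[10.0, 12.0], [200.0, 15.0], [105.0, 190.0], [20.0, 180.0]])
ref = np.array([[11.5, 10.0], [201.0, 18.0], [108.0, 188.0], [22.0, 177.0]])

# First-order polynomial: x' = a0 + a1*x + a2*y (and likewise for y'),
# so we solve two least-squares systems with a [1, x, y] design matrix.
A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
coeff_x, *_ = np.linalg.lstsq(A, ref[:, 0], rcond=None)
coeff_y, *_ = np.linalg.lstsq(A, ref[:, 1], rcond=None)

# The residuals at each GCP combine into the total RMS error.
pred = np.column_stack([A @ coeff_x, A @ coeff_y])
rms = np.sqrt(np.mean(np.sum((pred - ref) ** 2, axis=1)))
print(f"Total RMS error: {rms:.3f} pixels")
```

A 3rd-order polynomial works the same way with extra terms (x², xy, y², x³, ...), which is why it needs at least 10 GCPs.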


Geometrically Corrected Image
Part two of this lab works with two images of Sierra Leone, instead of one map and one image. The process is similar to the first part, but instead of a 1st-order polynomial we select a 3rd-order polynomial, which requires at least 10 ground control points before a solution can be found. After moving the ground control points around until the RMS error is below 2, we hit the Windows-looking logo button again to finalize the image. Unlike the first part, where Nearest Neighbor was the default resampling method, here we use bilinear interpolation to fix these images.


Corrected Sierra Leone images with RMS errors present
Results

For part one the result was a geometrically corrected satellite image of the Chicago area; when zoomed in, features lined up well between the two images.
For part two the images matched much more closely than in the first part, and the output was noticeably more geometrically correct.

Sources

United States Geological Survey (USGS) 7.5 minute digital raster graphic (DRG)
Images provided by Dr. Wilson

Friday, April 11, 2014

Remote Sensing Lab 5: Image Mosaic and Miscellaneous Image Functions 2

Goal
The goal of this lab was to gain a beginner's grasp of several remote sensing tools. The lab introduced us to image mosaicking, spatial and spectral image enhancement, band ratios, and binary change detection.

Methods
The first part of the lab introduced image mosaicking. In this section we take two images of the surrounding Eau Claire area and join them together. To start, we overlapped the images correctly and used the Mosaic Express tool under the Raster tab to begin joining the two images. Since this is not an advanced class, we left everything at the defaults, and the result was a single image built from the two satellite scenes.

Left: Before joining images
Right: After joining the images
After joining the two images without blending the colors, you can easily tell where the boundary is; the next section works on fixing this. Still in the mosaic tool, we switched from Mosaic Express to MosaicPro. Using the same two images along with the color correction and histogram matching options, we joined the two images once again. Leaving the remaining defaults unchanged, we got a much cleaner final product.


Left: Before joining the images
Right: After joining the images
After using MosaicPro the color transition looked much more natural; apart from the black line down the middle, the images look like one.

The next section dealt with band ratios. Here we used the NDVI tool, found under the Raster tab in the Unsupervised menu. This tool highlights where vegetation occurs in the image. After loading the Eau Claire area image and setting an output file in our own folders, we ran the tool. The end product was a black-and-white image.


Left: Original Image
Right: Image after running NDVI tool to show land use
In the right image, the darker areas (excluding black, which represents the river) correspond to more urbanized, built-up areas, while the lighter areas correspond to farmland.
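Behind the tool, NDVI is just the band ratio (NIR − Red) / (NIR + Red). A minimal sketch with toy arrays standing in for the red and near-infrared bands (not the actual Eau Claire data):

```python
import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    red = red.astype(float)   # cast first so uint8 math can't wrap around
    nir = nir.astype(float)
    denom = nir + red
    denom[denom == 0] = 1.0   # avoid division by zero on empty pixels
    return (nir - red) / denom

# Toy 2x2 bands: vegetation reflects strongly in NIR, water absorbs it.
red = np.array([[50, 30], [90, 10]], dtype=np.uint8)
nir = np.array([[200, 180], [95, 5]], dtype=np.uint8)
print(np.round(ndvi(red, nir), 2))
```

High positive values indicate vegetation; values near zero or below indicate bare ground, built-up areas, or water.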

The next section dealt with spatial and spectral image enhancement. Here we distinguish high-frequency images, which have sharp borders between colors, from low-frequency images, whose borders look more "blurred." The first part of this section was to apply a 5x5 Low Pass Convolution filter to an image of the Chicago area.


Left: Original Image
Right: Image with a 5x5 Low Pass Convolution Filter

The 5x5 Low Pass Convolution filter makes the new image appear smoother than the original. The next part of this section was to apply a 5x5 High Pass Convolution filter to an image of Sierra Leone, done the same way as the Chicago image: Raster tab > Spatial tool > Convolution.


Left: Original Image
Right: Image with a 5x5 High Pass Convolution Filter
The new image of Sierra Leone now has much sharper borders between colors and appears crisper overall. The next part of this section was to take a different image of Sierra Leone and apply a 3x3 Laplacian Edge Detection filter. This filter detects rapid changes in an image, which visually appear as abrupt changes of color.
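All three of these filters are convolutions with different kernels. A sketch using SciPy and standard textbook kernels, which may differ from Imagine's exact defaults:

```python
import numpy as np
from scipy.ndimage import convolve

img = np.arange(49, dtype=float).reshape(7, 7)  # stand-in for one band

# 5x5 low-pass (mean) kernel: smooths, blurring sharp borders.
low_pass = np.ones((5, 5)) / 25.0

# 5x5 high-pass kernel: the original minus its local mean, sharpening edges.
high_pass = -np.ones((5, 5)) / 25.0
high_pass[2, 2] += 1.0

# 3x3 Laplacian edge-detection kernel: responds to rapid value changes.
laplacian = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]], dtype=float)

smoothed = convolve(img, low_pass, mode="nearest")
sharpened = convolve(img, high_pass, mode="nearest")
edges = convolve(img, laplacian, mode="nearest")
print(smoothed.shape, sharpened.shape, edges.shape)
```

On this linear ramp the Laplacian is zero in the interior, since there are no rapid changes to detect; on real imagery it lights up along edges.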

Left: Original Image
Right: New Laplacian Edge Detection

Left: Original zoomed in
Right: New Laplacian Edge Detection filter zoomed in
When zoomed in you can see how different the Laplacian Edge Detection image is from the original. The next section of part 3 dealt with spectral enhancement. Here we stretch the color histograms to make the images appear to use a wider range of values. For this section we used the Panchromatic tab and the General Contrast tool. After experimenting with the variables to get the contrast we wanted, we produced a final product that is much easier to interpret.
The tool used and what the image looked like before applying the tool
After adjusting the contrast and applying it to the same image above
The last section of part 3 dealt with histogram equalization. As with the previous images, we expand the range of the histogram to add contrast to the image. The tool is found under the Raster tab > Radiometric > Histogram Equalization. Running it produced a much brighter, higher-contrast picture.
Left: Original
Right: New Histogram adjusted image
Even someone who has never seen an image like this, and has no idea what processing has been done to it, can tell that the two images look drastically different; some might not even think they are of the same place. Compared to the original, the histogram of the adjusted image spans a much wider range of values.
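Both spectral enhancements reshape the image histogram. A minimal NumPy sketch of a linear contrast stretch and a histogram equalization, using a synthetic low-contrast band rather than the lab imagery:

```python
import numpy as np

def linear_stretch(band, out_min=0, out_max=255):
    """Stretch pixel values linearly to fill the full output range."""
    b = band.astype(float)
    lo, hi = b.min(), b.max()
    return ((b - lo) / (hi - lo) * (out_max - out_min) + out_min).astype(np.uint8)

def hist_equalize(band):
    """Remap values so the cumulative histogram becomes roughly linear."""
    hist, _ = np.histogram(band, bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(float)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255
    return cdf[band].astype(np.uint8)

# Low-contrast band: all values squeezed into the 100-139 range.
band = np.random.default_rng(1).integers(100, 140, (50, 50)).astype(np.uint8)
stretched = linear_stretch(band)
equalized = hist_equalize(band)
print(stretched.min(), stretched.max())  # 0 255
```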

The final part of the lab works with binary change detection, also called image differencing. The first step was to create a difference image. This was done by bringing in two images, one from 1991 and the other from 2011, and using the Two Image Functions tool found under the Raster tab. After inputting the two images, changing the operation from + to −, and selecting only layer 4, we saved the result to our folder.

Left: Original 1991 Image
Right: Pixels that have changed between 1991 and 2011
After creating the difference image we looked at its metadata and viewed the histogram. Given the mean and standard deviation, we could determine what portion of the image fell outside the upper and lower limits. The second section of this part was to map the changed pixels in the difference image using the Spatial Modeler, based on the equation
ΔBVijk = BVijk(1) − BVijk(2) + c

where
        ΔBVijk = change in pixel value
        BVijk(1) = brightness value of the 2011 image
        BVijk(2) = brightness value of the 1991 image
        c = a constant

To find the difference we first used Model Maker. Using just the basic functions we created two models. The first subtracted the 1991 near-infrared band from the 2011 near-infrared band and added the constant; its output was then used in the second model, which detects the change/no-change threshold using the conditional EITHER ... IF ... OR ... OTHERWISE function. After running the model we got an image showing where the change occurred. We later brought this image into ArcMap and overlaid it on the 1991 near-infrared band to see where the changes have occurred.
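The two-model workflow can be mirrored in a few lines of Python: difference the NIR bands, add a constant, then flag pixels falling outside mean ± 1.5 standard deviations as change. The constant (127) and the 1.5σ threshold here are assumptions for illustration; the lab's exact values may differ.

```python
import numpy as np

def change_mask(nir_new, nir_old, k=1.5):
    """Binary change detection by image differencing.
    Pixels whose difference lies beyond mean +/- k*std are flagged."""
    diff = nir_new.astype(float) - nir_old.astype(float) + 127.0  # constant c
    mean, std = diff.mean(), diff.std()
    lower, upper = mean - k * std, mean + k * std
    return (diff < lower) | (diff > upper)  # the EITHER/OR step of the model

# Synthetic stand-ins for the 1991 and 2011 NIR bands: identical except
# for one block of simulated new development.
rng = np.random.default_rng(2)
nir_1991 = rng.integers(0, 255, (100, 100)).astype(np.uint8)
nir_2011 = nir_1991.copy()
nir_2011[40:60, 40:60] = 255
mask = change_mask(nir_2011, nir_1991)
print(mask.sum(), "changed pixels")
```

Overlaying a mask like this on the 1991 band, as done in ArcMap here, shows where the flagged changes fall on the landscape.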

Results
The results from the last section showed that the flagged pixel changes correspond to real changes on the land. Over the past twenty years the area has seen urbanization, road construction, changes in farmland, possible water-level changes, and more.

Sources
Erdas Imagine 2013
ArcMap 10.2
Images provided by Dr. Wilson