Thursday, December 11, 2014

Exercise 8: Raster Modeling

Goals and Objectives
In this exercise we bring together the skills we have developed over the 2014 Fall Semester GIS II class. The goal of this lab was to use multiple geoprocessing tools to combine multiple raster files in order to identify a suitable area for a frac sand mine in lower Trempealeau County. We will build a suitability model using layers for bedrock formations, land cover, proximity to railroads, slope, and water table elevation. Using those five layers we will create a suitability model, which we will overlay with a risk assessment model that includes proximity to streams, prime farmland, residential areas, wildlife areas, and visibility from some of the highest peaks in Trempealeau County.

Methods
To start the lab we first had to know what type of sand formations to look for in the bedrock. Research done throughout the semester shows that the Jordan and Wonewoc formations are the best formations to use for mining purposes.

The image on the right shows the bedrock formations of Western Wisconsin. On the map there is a black outline of lower Trempealeau County. For processing-time purposes we ran our data on just a segment of Trempealeau County.

The next step was to use this information to create a ranked raster file to show which places would have the best bedrock to mine and which would not be the primary choices.






Above, the bright green areas are the ideal mining locations, which include the Wonewoc and Jordan formations. The areas in gray would not be considered ideal locations to mine. Converting this file into a more pleasing image would mean changing the gray to red or black and keeping the green, but that will all be taken care of at the end of the first seven objectives when the layers are added together.

The second step was to import the National Land Cover Data set which we downloaded and used back in Exercise 5. In this section we wanted to select areas of land which would be suitable to mine. Suitable areas include planted and cultivated lands and forested areas; those received the best ranking, while developed land and water areas/marshland received the worst.


The green areas on this image rank best for suitability, black marks the worst areas to conduct a sand mine operation, and blue falls in between.

The third step was to find the proximity to the rail depots. We are using the rail depots because we want to see where trucks can actually load onto the rail line, not just any point along the track.

Just like the first two images, this one is also ranked: the closer you are to a rail depot, the better the ranking. Using a three-category Euclidean Distance we split the county into three separate zones.
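The reclassification logic behind that ranking can be sketched in plain Python; the break distances below are illustrative, not the lab's actual class boundaries:

```python
# Rank a cell's distance to the nearest rail depot into three classes.
# Break values (5 km and 15 km) are invented for illustration.
def rank_distance(distance_m, near=5000, far=15000):
    """3 = best (closest to a depot), 2 = middle, 1 = worst (furthest)."""
    if distance_m <= near:
        return 3
    elif distance_m <= far:
        return 2
    return 1

# Classify a few sample cell distances (meters)
ranks = [rank_distance(d) for d in (1200, 8000, 22000)]
```

In ArcMap the same effect comes from running Euclidean Distance and then Reclassify with three classes.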

The next step of this project was to find relatively flat areas; mines should not be sited in places of high slope, so a good slope is one closer to a flat surface.


The image above, derived from a Digital Elevation Model, shows the slope of the area: green areas are the flattest, blue are medium slopes, and red are the steepest. For this part it was important to keep all units the same; in the image above everything was converted to meters.

Objective 5 required using ground water data. We used a generalized water table elevation map to make our jobs easier. After downloading, we had to import the .e00 file. Since we were unable to use the actual contour lines in the project, we converted it into a raster and added it to the list of ranked rasters which we will use in the next steps.

The image above is a screenshot from the Wisconsin Geological Survey website; the green lines represent individual contour lines of the groundwater table.

Objective six involves using the map algebra tool to add up all the above-mentioned layers to find the most suitable areas to construct a new frac sand mine.
After adding all the categories together we came up with a map like the one above. On the map, black is the worst ranked area, followed by blue, then green, then red, and white areas are the best ranked.
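The map algebra step amounts to a cell-by-cell sum of the five ranked rasters. A toy sketch with 2x2 grids standing in for the real rasters (all values invented):

```python
# Five ranked layers, each cell holding a suitability rank (higher = better).
bedrock   = [[3, 1], [2, 3]]
landcover = [[2, 2], [1, 3]]
rail      = [[1, 3], [2, 2]]
slope     = [[3, 2], [1, 3]]
water     = [[2, 1], [3, 3]]

layers = [bedrock, landcover, rail, slope, water]
rows, cols = len(bedrock), len(bedrock[0])

# Cell-by-cell sum, as the Raster Calculator would do with "a + b + c + d + e"
suitability = [[sum(layer[r][c] for layer in layers) for c in range(cols)]
               for r in range(rows)]
# The highest sums mark the most suitable cells.
```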

In order to make the map look more appealing, I used the block statistics tool to get rid of the salt and pepper look. On the map below, green is the most ideal spot, and the closer to red the worse it gets.
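Block Statistics replaces each neighborhood of cells with a summary value, which is what removes the speckle. A rough stand-in using non-overlapping 2x2 mean blocks (block size and values are illustrative, not the lab's settings):

```python
# Replace each non-overlapping size x size block with its mean value,
# suppressing the salt-and-pepper speckle of the summed raster.
def block_mean(grid, size=2):
    rows, cols = len(grid), len(grid[0])
    out = []
    for r in range(0, rows, size):
        row = []
        for c in range(0, cols, size):
            block = [grid[i][j]
                     for i in range(r, min(r + size, rows))
                     for j in range(c, min(c + size, cols))]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

smoothed = block_mean([[1, 5, 2, 2],
                       [3, 7, 2, 2],
                       [4, 4, 9, 1],
                       [4, 4, 1, 9]])
```

An isolated high or low cell no longer stands out once its block is averaged.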


We also had to find the areas we identified at the beginning as places where we would not want to build, regardless of how suitable the land is.

This map uses the same legend as the map above it, but the black areas are areas we will not consider for development, for the reasons mentioned toward the start of this blog post.

After completing the suitability model, we then moved on to creating a risk model.

In Objective 8 we began to use more of the raster tools that we did not use much in the first part. This step involved importing the river and stream flows for the lower section of the county. Since we had to pick certain streams, I chose all streams with a stream grade greater than or equal to 5, so that the entire county would not be flagged as too close to a stream.

On this map, red shows close proximity to streams; since we do not want to build there, it received the worst ranking. Yellow is the next worst, and green is the best.

Step nine was to select areas where we would not be building on prime farmland, including prime farmland that is sometimes flooded.
The image above shows the proximity to prime farmland: red is either the farmland itself or the area closest to it. As on the previous maps, yellow is ranked third and green second best; white areas rank higher than green, making them the best.

For objective 10, we ran the same tools to find the residential and populated areas. Since we do not want to build a mine there, those areas get the worst rankings. Much like the previous maps, we created this one by selecting the areas of interest, running the Euclidean Distance tool, and lastly reclassifying the data into three categories: best, medium, and worst.

Just like the previous maps, red is the worst (closest proximity to residential or populated areas), yellow is second closest, and green, the furthest, is the best.

The next task was to find the impact on schools. Schools were included in objective 10 as populated areas, but for the sake of this lab I found the school data and had to separate the operational schools from the historical ones.



The last step was to do something of my choice. I chose to measure proximity to the wildlife areas.

Again, just like the other maps, red is the worst area, yellow is in between, and green is the best in terms of proximity to the wildlife areas.

After calculating the risk assessment for each of the five categories, just like in the previous half of this lab we will use the map algebra tool in order to find the areas of most risk.

After running the tool we get the near finished product pictured below. 

The green areas are the best, grading down to dark red as the worst. There are several areas on the map where a frac sand mine would be acceptable with little risk: the northeast corner of the county, as well as several fitting locations in the middle of the county.

It is important to note that this data should not be used to pinpoint an exact location for a frac sand mine, because it only takes this county's data into account. The northeastern section may not be as risk free as the map shows, since there may be other implications from the surrounding counties.

Taking the risk assessment and combining it with the suitability map we created, we can find the areas with the fewest issues.

Again, the areas of red are the worst, up to green being the best.

The last step of this project was to run the visibility tool from prime recreational areas. I chose the eight highest peaks in the county as my viewing sites. Again using the Digital Elevation Model and the visibility tool, we could see which areas were visible, and we overlaid the result on the map above.

The most suitable and risk free areas, with the least amount of visibility are located in the middle as well as the northeastern corner of the county. 



The last part of this lab was to create a data flow model (flow chart) of all the steps we performed. All the steps mentioned above are included in the model, as well as many other steps not covered in the methods portion.



Lastly, as mentioned in the reading regarding the rankings, the table shows the rankings again.


Python Portion
The second part of the lab was to create a Python script that weights one of the five risk assessment layers more heavily than the others. The layer I chose to weight highest was streams.

Using this Python script we came up with a slightly different map than the one above. Note that this map was not combined with the suitability model; it is solely a stream-weighted risk assessment map.
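The weighting idea behind the script can be sketched as follows; the weight values and cell ranks below are assumptions for illustration, not the ones used in the actual script:

```python
# Stream-weighted risk: streams count three times as much as the other
# four layers. All numbers here are invented for the example.
weights = {"streams": 3, "farmland": 1, "residential": 1,
           "wildlife": 1, "visibility": 1}

# One sample cell's rank in each risk layer (1 = low risk, 3 = high risk)
cell_ranks = {"streams": 3, "farmland": 2, "residential": 1,
              "wildlife": 2, "visibility": 1}

# Weighted sum, as the script's map algebra expression would compute per cell
weighted_risk = sum(weights[k] * cell_ranks[k] for k in weights)
```

A cell next to a stream now scores much higher risk than one near farmland alone, which is what produces the "slightly different" map.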



Conclusion
Overall, the data we used can show which areas are the most suitable for the environment with the least amount of risk. As stated above, this data should not be used alone to decide where to build a mine. There are far more factors to consider, such as road usage, proximity to other mines, and cost, and you would have to incorporate other counties' data to get a more complete and accurate map.

Sources
http://wgnhs.uwex.edu/maps-data/gis-data/








Friday, November 21, 2014

GIS II-Exercise 7: Network Analysis

Goals of Exercise 7
In Exercise Seven we split the lab into two parts. In the first part, the goal was to write a Python script (see the blog post below to view it). After running the script we were left with 41 mines to use for part two.

In part two of the lab, we were to use Network Analysis tools to figure out the distance from the mines to the closest rail terminal, using roadways. Finally, after accumulating the routes, we were to find how much it cost each individual county to have these trucks running on the roads (equation further down on blog).

Data sets we will be using include the Wisconsin county map, the ESRI street map of the United States, and the points we found in the previous part.

Lastly, to calculate the cost we assumed 50 truck trips per year at 2.2 cents per mile. The trucks also have to make the return trip from the rail terminal back to the sand mine, so I used an equation of 100 trips times 2.2 cents per mile.
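That equation, worked as a small function (the 30-mile route is hypothetical):

```python
# Annual road cost to a county: 50 loaded trips plus 50 return trips
# (100 truck passes) at 2.2 cents per mile over the route mileage.
def annual_road_cost(route_miles, trips_per_year=50, cents_per_mile=2.2):
    passes = trips_per_year * 2                    # out and back
    return passes * cents_per_mile * route_miles / 100  # dollars

cost = annual_road_cost(30)  # a hypothetical 30-mile route
```

In the lab this same arithmetic was done with the Field Calculator on the miles field of each route.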

Methods

The model above shows the step by step process to figure out the routes needed for the mines to get to the closest rail terminal. 
This involved using the streets layer and the Network Analyst Closest Facility tool. Next we added the mines we found in the previous part of the lab, then added the rail terminals, and solved. Lastly we selected our data, the routes we had made, and copied the features so they could be saved as the final results.


The above image is the final result of this part of the exercise, after a select by attributes was applied to see how many counties were affected by the routes.

From there we would move on to the next part of the exercise, trying to figure out how much money it cost each county.



This model shows the steps taken to calculate the total cost per county. It involved creating several new fields, such as miles and kilometers (the kilometers field made the conversion easier, since the data was stored in meters). The last field was calculated using the equation mentioned above to find the final cost. Using the cost data we were able to create the graphs seen below.


After calculating the cost per county and summarizing the data we were able to create a chart of how much it was costing each county to have trucks run on their roads. 



Of the two images above, the top shows the table used to calculate the cost to each county, and the bottom is a graph made in Excel showing the costs from lowest to highest.

CONCLUSION

Overall, we were able to see which counties received the most truck traffic and cost. The two highest in my data were Chippewa and Wood counties. While Trempealeau County has the most mines on the map, they were all relatively close to the rail terminals, so its cost stayed comparatively low.








Tuesday, November 11, 2014

Exercise 7-Pyscript 2

For this exercise we are using PyScripter to create a script which will select the mine locations that meet the following standards:

-Currently active mines
-Not a rail loading station, and not connected to one
      *at such sites no trucks are needed to get the sand to the trains
-More than 1.5 kilometers away from the nearest rail line


The above script was created, debugged, and run to produce a total of 41 mines that meet the above standards.

Below is a map of the selected mines (in yellow) which meet the above standards.

Wednesday, November 5, 2014

Exercise 6-Data Normalization, Geocoding, and Error Assessment

Goals

The goal of this lab is to use data provided by the DNR to find the exact locations of several different fracking facilities. With the given spreadsheet we are to normalize the data, plot it on a map, and compare our results with classmates' to see how closely they match up.

Objectives

The objectives for this exercise are to obtain the database of mines in Western Wisconsin and geocode them using street addresses. After attempting the geocoding we will find that the data given to us has errors in it, so we will normalize the table and find the addresses for our selected mines, then geocode them again. Once the class has finished geocoding their selected mines, we are to compare them with our own to see how closely they match up. Lastly, we are to make a map showing our mines relative to our colleagues' mines.

Methods

In this lab we are given the locations of several sand mines in Western Wisconsin. After selecting our specific mines, we immediately tried to geocode them in ArcMap, only to find the points would not process. After opening the spreadsheet again, we saw the addresses needed to be normalized, which meant changing the layout of the Excel file.

In the image above you can see the non-normalized Excel spreadsheet. To normalize the data, we went through the given addresses and separated each into the following categories:

  • Street Number
  • Street Prefix
  • Street Name
  • Street Suffix
  • State
  • City
  • Zip Code
  • Town/City/Village
  • County
Other categories included in the Excel spreadsheet to help with normalizing are the unique mine ID (which becomes the most important field later in the lab), facility name, operator, landowner, status, and property size.

Not all of the entries had the data needed to normalize the table, which meant using two different methods. The first was taking the data straight from the spreadsheet and breaking it into the categories listed above. Entries missing data had to be picked out on a map instead; in those cases I used Google Maps, World View, along with Public Land Survey System data, to find exact locations.
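The target structure of the normalization can be illustrated with a toy address splitter; the sample address is made up, and real entries needed hand editing rather than a simple split:

```python
# Split a simple one-line address into the normalized spreadsheet fields.
# Only handles the clean "number prefix name suffix" case; messy DNR
# entries had to be normalized by hand.
def split_address(address):
    number, prefix, name, suffix = address.split()
    return {"Street Number": number, "Street Prefix": prefix,
            "Street Name": name, "Street Suffix": suffix}

fields = split_address("123 N Main St")  # a made-up address
```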

After finalizing the normalization we got a table that would be usable in ArcMap.

The figure above shows the normalized data in the excel spread sheet to be used in the plotting of the mine locations.

After completing the spreadsheet, we imported it into ArcMap and checked the data for errors. When I added my data to ArcMap I did not come across any, but errors that could occur include mistyped data or missing fields.

The next step, after making sure all your data is right, is to use a base map to confirm your points are in their proper locations; if they are not, move them to a more exact spot. After completing this step we imported everyone else's data points and merged them into one shapefile. A few of the shapefiles had errors when merging with the larger shapefile, so I made two separate merged shapefiles to use.

Once all the mines were on the map, we extracted the ones sharing the same unique mine ID as the ones I had mapped. This part was a little tricky, since a few classmates stored the unique mine ID under a different field name, and I had to go through the attribute tables individually to find where the ID was kept. It appeared under four different field names: unique mine ID, mine ID, F1, and mine.

After extracting all the mines I compared my distances (using point distance tool) to other people's distances to the mines. (Results below)

Results/Errors





The image on the left is the final map showing my mine points alongside the mines plotted at the same locations. The points I plotted are the white circles with the crossed-axes mine symbol inside them, while the yellow circles are the mines plotted by classmates who had the same mines.

Looking at the map you can see there are some points that look out of place, and some that look like they are overlapping or slightly away from the actual location of the other mine spots.









Some errors which may have occurred include:
  • Incorrect addresses entered
  • Mistyping the data
  • Entering the wrong data for the mine
  • Map projections
  • Labeling a point with the wrong ID
  • Using the office building instead of the mine area
  • User neglect, such as not actually finding the right location

In this image we can see two separate points which are supposed to be the same one. This is one type of error which sprang up from the data we were given by the DNR: on the spreadsheet we had two different addresses to choose from. One was a residential-looking area, while the other was a forested-looking area. Due to the lack of up-to-date imagery on Google World View, neither site looked like a mine. The yellow dot chosen on the image most likely represents the office/house owned by the operator, while the other represents the actual soon-to-be mine location.



Conclusion


In conclusion, most of the points were fairly close to one another, while others were quite far away. This lab showed how important it is to make sure all your data is normalized and, if you are going to share data with someone else, that everyone knows which coordinate system and projection to use when plotting.

The image on the right shows how far each point was from the other points, the distance is in meters.

Lastly, the data in the image is arranged from the first mine location to each of the other mine locations, hence four zero distances.


Sources
Bing Maps-Provided world/terrain base map
Drake Bortolameolli-Map Creator
ESRI ArcMap-Software Provider
Google Maps World View-Searching Software to find Locations
Wisconsin DNR-Provider of the mining facilities

Monday, October 20, 2014

Exercise 5-Understanding Data Downloading

Goals and Objectives

In Exercise five we are to gain experience downloading data from multiple websites, importing the data into ESRI ArcMap, and joining it with tables. We are also to become more familiar with using Python scripts to project, clip, and load all of our downloaded data into a geodatabase of our choosing. See the previous post for the Python script used in this lab.

Methods

In this lab we visit multiple websites to download different data sets for our labs. The first data set we downloaded was from the United States Department of Transportation (DOT), to gain access to the railways of the United States; we would later clip these railways down to just the ones in Trempealeau County. The second set of data was downloaded from the USGS National Map Viewer: both the National Land Cover Database and the National Elevation Dataset, each covering Trempealeau County. The next data set, the cropland data, was downloaded from the USDA Geospatial Data Gateway. The next set came from the Trempealeau County land records, where we downloaded an entire geodatabase. Lastly, we downloaded the Soil Survey from the USDA NRCS Web Soil Survey.

Part two of this lab was to write a Python script to project, clip, and load our data into a geodatabase. See Exercise 5-Python Script for more information or the script used. 





CONCLUSION

When working with this data, one thing to watch out for is making sure you extract everything you need to extract. I forgot to do the double extraction and found myself wandering around trying to find something I had never actually unpacked.


Websites for gathering data
Railway Network Data
http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/-files/publications/national_trasportation_atlas_database/index.html
Landcover Data
http://nationalmap.gov/viewer.html
Landuse Data
http://datagateway.nrcs.usda.gov/
Geodatabase Data
http://www.tremplocounty.com/landrecords/
Soil Data
http://websoilsurvey.sc.egov.usda.gov/App/HomePage.htm

Exercise 5-Python Script


Python is a programming language used to script various tasks. In our Geographic Information Systems II class we are using Python to help us process tasks in ESRI ArcMap 10.2.2 more quickly, across many different ArcMap functions. We use the tool PyScripter for Python 3.2; it is free software for Windows computers and is written in Object Pascal (per the PyScripter website).


Monday, September 29, 2014

Geographical Information Systems II-Introduction to Sand Mining

What is Sand Frac Mining?

The round sand particles used in frac sand mining have been mined in Wisconsin for over 100 years, but today they have found a different use. Up until a few years ago, the sand was mined for glass manufacturing, foundry molds, and even golf course bunkers. Today the sand is used for hydrofracking, or "frac" for short: sand is pumped into the rock to help extract natural gas and petroleum.

   http://wcwrpc.org/frac-sand-factsheet.pdf
Where can you find this sand in Wisconsin?

The sand in highest demand, due to its good particle size and uniform roundness, is located in the sandstone formations of Western and Central Wisconsin. Similar sands, though not as uniform in roundness and size, can be found in Southern and Eastern Wisconsin, but they are not nearly in as high demand as the sands of Western Wisconsin.

The image on the left shows where the sandstone formations can be found, in the brownish color, while the red squares are some of the mine locations and processing plants (as of December 2011).




What are Some Issues Associated with Sand Frac Mining in Western Wisconsin?

A newspaper article was released by the La Crosse Tribune on September 27, 2014, discussing several environmental issues faced by people living around the mines. Since 2010, Wisconsin has seen its sand mines rise from seven to 145 in just four years. Some 58,000 people across 33 counties (including in Minnesota) live within half a mile of these mines; when the range is extended to one mile, the number rises to 162,000 people. Some farmers have reported developing asthma from the silica dust now present. Silica is a known carcinogen, and constantly breathing it in could harm your body. Although there have been no reports of polyacrylamide leaching into the groundwater, it remains a serious threat; polyacrylamide is a neurotoxin, also found in cigarettes. On average a sand mine uses between half a million and two million gallons of water every day. Aside from the chemical threats, citizens of the local towns have numerous complaints as well, such as the mining lights being on 24/7, the trains and trucks constantly running throughout the night, and even building foundations becoming compromised by the repeated blast waves.

How Will GIS Be Used To Further Explore Some of These Issues as Part of Our Class Project
Since the large boom in frac sand mining, it has been difficult to keep an accurate map showing where all the mines are located. Using GIS in our Geographical Information Systems II class, we will look to update the mine maps in Trempealeau County. We will also use GIS to monitor the mines for potential threats, such as runoff of harmful chemicals into local streams or other bodies of water. One more item we can explore is the possibility of rerouting transportation to minimize public delay and disturbance.

Sources
http://wcwrpc.org/frac-sand-factsheet.pdf
http://dnr.wi.gov/topic/mines/silica.html
http://lacrossetribune.com/news/local/report-frac-sand-industry-affects-lives-of-thousands-who-live/article_5c6716dc-dc4a-5c4e-93a8-4523712785e0.html


Monday, May 5, 2014

Remote Sensing Lab 8: Spectral Signature Analysis

Goal

In the final lab of Remote Sensing of the Environment we will gain experience interpreting and measuring the spectral reflectance of different materials on the Earth's surface. We will learn how to collect spectral signatures from remotely sensed images, graph them, perform analysis on them, and verify whether they pass spectral separability tests.

Methods

For this lab we are going to sample the spectral reflectance of 12 different ground surfaces across the Eau Claire area. The 12 surfaces are:
1-Standing Water
2-Moving Water
3-Vegetation
4-Riparian Vegetation
5-Crops
6-Urban Grass
7-Dry Soil (uncultivated)
8-Moist Soil (uncultivated)
9-Rock
10-Asphalt Highway
11-Airport Runway
12-Concrete Surface (parking lot)

Using the same image, we had to take a sample of each area, starting with Lake Wissota. We used the drawing tool to capture a section of Lake Wissota, then opened the Signature Editor (under the Raster tab, Supervised menu). This tool brings up an empty box into which we can import our Lake Wissota polygon. After bringing in the area we looked at the spectral graph, below.
Here we can see the spectral signature for the standing water of Lake Wissota. Looking at the graph, Band 1 is the highest, while Bands 4 and 6 are the lowest. We then repeated the same process for the next 11 ground surfaces, imported all of them into the same Signature Editor, and compared their spectral signatures. All 12 can be put onto one graph, below.
After bringing in all 12 surface features, we could tell which band would be good to use if you did not know what was at ground level. For this lab I found that Band 5 would be the best layer to use, since it shows the greatest differences between surfaces, while Bands 2, 3, and 4 would be the worst, since their values are so close to each other.
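One way to make that comparison concrete is to compute, for each band, the spread of values across the surface classes; the band with the widest spread separates the classes best. The signature values below are invented for illustration, not measured from the lab image:

```python
# Invented mean reflectance values per band for three surface classes.
signatures = {
    "water":  [60, 30, 25, 10, 8, 5],
    "forest": [55, 40, 35, 90, 70, 30],
    "urban":  [70, 65, 60, 55, 85, 80],
}

def best_band(sigs):
    """Return the 1-based band whose values spread widest across classes."""
    n_bands = len(next(iter(sigs.values())))
    spreads = []
    for b in range(n_bands):
        vals = [s[b] for s in sigs.values()]
        spreads.append(max(vals) - min(vals))
    return spreads.index(max(spreads)) + 1

band = best_band(signatures)
```

A fuller separability test would also account for within-class variance, but the max-minus-min spread captures the "greatest differences" idea used above.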


Results

In Lab 8 we found that everything has a different spectral signature, and if you know the spectral signature you can determine what you are looking at. We found that Bands 5 and 6 would be the best to use and Bands 2, 3, and 4 the worst; Band 1, although not ideal, could be decent.

Resources
Image Provided by Dr. Wilson (University of Wisconsin-Eau Claire-Geography and Anthropology Department)



Tuesday, April 29, 2014

Remote Sensing Lab 7: Photogrammetry

Goal

The goal of Lab 7 is to develop skills for working with aerial photographs and satellite images. This lab focuses on the mathematics involved: how to find scales, perimeters, and measurements of certain features using computer images and ERDAS Imagine satellite images.

Methods
  
Part 1
In part 1 of the lab we are given multiple images and select distances or elevations on them. The first image was of a section of an Eau Claire highway; this length in the real world was 8,822.47 feet, and on the image it was 3.05 inches. Converting the ground distance to inches and dividing by the image distance gives a scale of about 1:34,700 (the units must match before dividing).

For the second image of part one we were given the focal length of the camera used (152 mm), the elevation of the plane (20,000 ft), and the elevation of Eau Claire County (796 ft). With this data we had to find the scale of the photo. Using the formula Scale = focal length / (flying height − ground elevation), with the focal length converted to feet, I found the scale was about 1:38,400.
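Both scale calculations from Part 1 can be checked with a few lines of Python; the unit conversions are the only subtlety:

```python
# Scale from a measured ground distance and its photo distance.
# Units must agree before dividing, so convert the ground distance to inches.
ground_in = 8822.47 * 12        # 8,822.47 ft ground distance, in inches
photo_in = 3.05                 # distance measured on the photo
scale_denominator = ground_in / photo_in   # roughly 34,700, i.e. ~1:34,700

# Scale from camera focal length and flying height above local terrain.
focal_ft = 152 / 304.8          # 152 mm focal length converted to feet
scale = focal_ft / (20000 - 796)    # flying height minus ground elevation
denominator = 1 / scale         # roughly 38,500
```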

For the third image we used Erdas Imagine and plotted points around a lagoon to figure out its perimeter and area. To access this tool we used the Measure tool and selected the polygon option, then placed points around the lagoon until we were all the way around it. We found the lagoon had an area of 37.7382 hectares (93.25 acres) and a perimeter of 4,122.28 meters, or just over 2.5 miles.

The last image we were given for this section was a zoomed-in image of Eau Claire. For this image we had to find the relief displacement of the smokestack on the University of Wisconsin-Eau Claire upper campus, using the equation d = (h × r) / H, where d = relief displacement, h = real-world height of the object, r = radial distance from the principal point to the top of the displaced object, and H = height of the camera above the local datum. Since we did not know the actual height of the object, we rearranged the formula to h = (d × H) / r. With d = 0.5 in, H = 3209 ft, and r = 11.4 in, h = (0.5 × 3209) / 11.4, which works out to about 140.75 feet.
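The rearranged relief displacement formula, checked numerically with the quoted measurements:

```python
# Relief displacement rearranged for object height: h = (d * H) / r.
d = 0.5      # displacement measured on the photo, inches
H = 3209     # camera height above the local datum, feet
r = 11.4     # radial distance from principal point to displaced top, inches
h = d * H / r   # object height in feet (the inch units cancel)
```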

Part 2
For part 2 we are going to create a terrain-looking image, using the stereoscopy images of Eau Claire: ec_city.img and ec_dem2.img, a digital elevation model of the city. We then used the Terrain-Anaglyph tool, with the DEM image as the input DEM and the city image as the other input. After saving the output to our own folder and viewing it with 3D glasses (the red and blue lens kind), you can see the elevation changes.

Zoomed in portion of 3D looking image
Part 3
This final part of the lab is broken into several sections. Overall, we will take two photos and use orthorectification to correct for displacement and elevation.

The first section is to create a new project. We will be using SPOT satellite images of Palm Springs, California. The next step is to create a new Sat_Ortho block file in our output folder. When going through the settings, make sure to pick UTM as the projection type, Clarke 1866 as the spheroid name, NAD27 (CONUS) as the datum name, and UTM Zone 11, and lastly select north.

The second section is to add imagery to the block and define the sensor model. Here we add frames such as spot_pan.img. After bringing in the frame, click Show and Edit Frame Properties; this will turn the Int. tab green.

The third section is to activate the point measurement tool and collect ground control points. Here we brought in a second image to use as a reference and created 9 ground control points by selecting the same features in both images. The next two points were collected from a different reference image. After adding all the ground control points, we reset the vertical reference source to palm_springs_dem.img, which added the vertical Z reference to all of our points.
We then repeated the same updating for the other image. Because not all of the points fall on that image, we only had to collect a few of its ground control points.

The next section covers automatic tie point collection, triangulation, and ortho resampling. Here we used the Automatic Tie Point Generation Properties icon. After clicking the icon, make sure the All Available button is activated as well as the Exterior button. Then select the Distribution tab and change the intended number of points to 40. Lastly, make sure all of the ground point type and standard deviation defaults are changed to 10.

Lastly, we conducted the ortho resampling itself. This combines the two images using all of the ground control points and tie points. The images are then ready to be viewed, and after the long process they fit together almost seamlessly.

Wednesday, April 16, 2014

Remote Sensing Lab 6: Geometric Correction

Goal

The goal of this lab was to introduce us to geometric correction of satellite images. The lab focuses on the two main ways to correct a satellite image, which are introduced in the method section of this post.

Method

In the first part of the lab we worked with two Chicago images: a topographic map of Chicago and the surrounding area, and a remotely sensed image of a smaller portion of that area. With the two images open in separate viewers in Erdas Imagine 2013, the next step was to select the Multispectral tab and click the Control Points tool. In this lab we used the polynomial function and left all the other options at their defaults. After working through the next few pop-up boxes, we had to select the reference image, Chicago_2000.img. Since we used a 1st-order polynomial, at least three points are needed before a solution is possible; when placing ground control points, the fourth one is placed automatically. After placing the ground control points, we moved them around to minimize the Root Mean Square (RMS) error. Ideally the RMS error should be less than 2.0. Once this was done for all four ground control points, we hit the Windows-looking logo button to finalize the image.
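The RMS error the tool reports is just the root mean square of the distances between each reference point and its transformed counterpart. A small sketch in Python, with hypothetical coordinates standing in for the points placed in Erdas:

```python
import math

def rms_error(gcps):
    """Root Mean Square error over ground control points, each given as
    ((x_ref, y_ref), (x_fit, y_fit)): reference vs. transformed position."""
    residuals = [(xr - xf) ** 2 + (yr - yf) ** 2
                 for (xr, yr), (xf, yf) in gcps]
    return math.sqrt(sum(residuals) / len(residuals))

# Hypothetical residuals for four control points (units are pixels)
gcps = [((100.0, 200.0), (100.8, 199.4)),
        ((350.0, 120.0), (349.2, 121.1)),
        ((410.0, 400.0), (410.9, 400.5)),
        ((150.0, 380.0), (149.5, 379.1))]
print(f"total RMS = {rms_error(gcps):.3f}")  # well under the 2.0 target
```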


Geometrically Corrected Image
Part two of this lab works with two images of Sierra Leone, instead of one map and one image. The process is similar to the first part, but instead of a 1st-order polynomial we selected a 3rd-order polynomial, which requires at least 10 ground control points before a solution can be found. After moving the ground control points around to get the RMS error below 2.0, we hit the Windows-looking logo button again to finalize the image. Unlike the first part, where Nearest Neighbor was the default resampling method, we used bilinear interpolation for these images.
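The jump from 3 points to 10 follows from the number of coefficients in the polynomial: an order-t 2D polynomial needs (t + 1)(t + 2) / 2 control points before a solution exists. A quick check:

```python
def min_gcps(order):
    """Minimum ground control points for a 2D polynomial transform of a
    given order: one per coefficient pair, (t + 1)(t + 2) / 2 in total."""
    return (order + 1) * (order + 2) // 2

for t in (1, 2, 3):
    print(f"order {t}: at least {min_gcps(t)} GCPs")
```

This reproduces the lab's numbers: 3 points for the 1st-order Chicago correction and 10 for the 3rd-order Sierra Leone correction.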


Corrected Sierra Leone images with RMS errors present
Results

For part one the result was a geometrically corrected satellite image of the Chicago area; zooming in and comparing the two images showed that features lined up correctly.
For part two the images matched much more closely, and they were much more geometrically correct than the original images.

Sources

United States Geological Survey (USGS) 7.5 minute digital raster graphic (DRG)
Images provided by Dr. Wilson

Friday, April 11, 2014

Remote Sensing Lab 5: Image Mosaic and Miscellaneous Image Functions 2

Goal
The goal of this lab was to get a beginner's grasp of several remote sensing tools. It introduced us to image mosaicking, spatial and spectral image enhancement, band ratios, and binary change detection.

Methods
The first part of the lab introduced us to image mosaicking. In this section we took two images of the surrounding Eau Claire area and joined them together. To start, we had to overlap the images correctly; then, using the Mosaic Express tool under the Raster tab, we could begin joining the two images. Since this is not an advanced class, we left everything at the defaults, and after joining we got a single image built from the two satellite scenes.

Left: Before joining images
Right: After joining the images
After joining the two images without blending the colors, you can easily tell where the boundaries are. The next section works on fixing this. Still in the mosaic tool, we switched from Mosaic Express to MosaicPro. Using the same two images, along with the color correction and histogram matching options, we joined the two images again. Leaving the defaults unchanged, we got a fairly simple final product.


Left: Before joining the images
Right: After joining the images
After using MosaicPro the color transition looks much more natural; apart from the black line down the middle, the images look like one.

The next section dealt with band ratios. Here we used the NDVI tool, found under the Raster tab's Unsupervised menu. This tool helps show where vegetation is on the image. After inserting the Eau Claire area image and setting the output to our own folders, we ran the tool. The end product is a black-and-white-looking image.


Left: Original Image
Right: Image after running NDVI tool to show land use
In the right image, the darker areas (not the black, which is the river) are where you can expect to find more urbanized, built-up areas, while the lighter areas are more farmland.
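Under the hood the NDVI tool computes (NIR - Red) / (NIR + Red) pixel by pixel. A minimal NumPy sketch on toy bands (the band values are made up for illustration):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), guarding against divide-by-zero."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Toy 2x2 bands: vegetation reflects strongly in NIR, water absorbs it
nir_band = np.array([[200, 50], [180, 10]], dtype=np.uint8)
red_band = np.array([[ 50, 40], [ 60, 30]], dtype=np.uint8)
print(ndvi(nir_band, red_band))  # high values = vegetation, negative = water
```

High NDVI values render bright (vegetation/farmland) and low or negative values render dark (built-up areas, water), which matches what the output image shows.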

The next section dealt with spatial and spectral image enhancement. Here we worked with high-frequency images (sharp borders between colors) and low-frequency images (more "blurred"-looking borders between colors). The first task was to apply a 5x5 Low Pass Convolution filter to an image of the Chicago area.


Left: Original Image
Right: Image with a 5x5 Low Pass Convolution Filter

The 5x5 Low Pass Convolution filter makes the new image appear smoother than the original. The next task was to apply a 5x5 High Pass Convolution filter to an image of Sierra Leone, done the same way as the Chicago image: Raster tab > Spatial tool > Convolution.
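A 5x5 low-pass kernel is just a moving average: every output pixel is the mean of its 5x5 neighborhood, which washes out high-frequency detail. A self-contained NumPy sketch (loop-based for clarity rather than speed; Erdas' exact edge handling may differ):

```python
import numpy as np

def low_pass_5x5(img):
    """Convolve with a 5x5 averaging kernel (all weights 1/25); each output
    pixel becomes the mean of its 5x5 neighborhood, smoothing the image."""
    padded = np.pad(img.astype(np.float64), 2, mode="edge")
    out = np.zeros_like(img, dtype=np.float64)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            out[i, j] = padded[i:i + 5, j:j + 5].mean()
    return out

# A toy image with one bright pixel: the filter spreads it over neighbors
img = np.zeros((7, 7))
img[3, 3] = 25.0
smoothed = low_pass_5x5(img)
print(smoothed[3, 3])  # center drops from 25.0 to 1.0 (25 / 25 weights)
```

A high-pass filter is the complement: its kernel has a large positive center weight and negative neighbors, so it amplifies local differences instead of averaging them away.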


Left: Original Image
Right: Image with a 5x5 High Pass Convolution Filter
The new image of Sierra Leone now has much sharper borders between colors and appears much crisper overall. The next part of this section was to apply a 3x3 Laplacian Edge Detection filter to a different image of Sierra Leone. This filter detects rapid change in an image; visually, it responds to quick changes of color.

Left: Original Image
Right: New Laplacian Edge Detection

Left: Original zoomed in
Right: New Laplacian Edge Detection filter zoomed in
When zoomed in you can see how different the Laplacian Edge Detection image is from the original. The next section of part 3 dealt with spectral enhancement. Here we stretched the color histograms to make the images appear to use a wider range of values. For this section we used the Panchromatic tab and the General Contrast tool; after adjusting the variables to get the contrast we wanted, we created a final product that is much easier to interpret.
The tool used and what the image looked like before applying the tool
After adjusting the contrast and applying it to the same image above
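The simplest form of this stretch is min-max linear contrast: remap the band's observed range onto the full 0-255 display range. A sketch (the band values are made up; the General Contrast tool offers more elaborate options than this):

```python
import numpy as np

def linear_stretch(band, out_min=0.0, out_max=255.0):
    """Min-max linear contrast stretch: remap the band's observed range
    onto the full display range, widening the histogram."""
    band = band.astype(np.float64)
    lo, hi = band.min(), band.max()
    if hi == lo:                      # flat band: nothing to stretch
        return np.full_like(band, out_min)
    return (band - lo) / (hi - lo) * (out_max - out_min) + out_min

# A low-contrast band squeezed into the 100..140 range
band = np.array([[100, 110], [120, 140]], dtype=np.uint8)
print(linear_stretch(band))  # now spans the full 0..255 range
```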
The last section of part 3 dealt with histogram equalization. As with the previous images, we expanded the range of the histogram to add more contrast to the image: Raster tab > Radiometric tool > Histogram Equalization. Running the tool produced a much brighter, higher-contrast picture.
Left: Original
Right: New Histogram adjusted image
Even someone who has never seen an image like this and has no idea what processing has been done could tell that the two images look drastically different; some might even think they are not of the same place. Comparing the adjusted image's histogram to the original's, you can see a much wider range of values being used.
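Unlike a linear stretch, histogram equalization maps each gray level through the band's normalized cumulative distribution, so output levels end up used roughly evenly. A compact NumPy sketch of the classic algorithm (Erdas' implementation may differ in details like bin handling):

```python
import numpy as np

def histogram_equalize(band, levels=256):
    """Classic histogram equalization: build a lookup table from the
    normalized cumulative distribution and remap every pixel through it."""
    hist = np.bincount(band.ravel(), minlength=levels)
    cdf = hist.cumsum().astype(np.float64)
    cdf_min = cdf[cdf > 0][0]          # first nonzero bin of the CDF
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * (levels - 1))
    lut = np.clip(lut, 0, levels - 1).astype(np.uint8)
    return lut[band]

# A dark, clustered band: equalization spreads it across 0..255
band = np.array([[52, 52, 60], [60, 60, 180]], dtype=np.uint8)
eq = histogram_equalize(band)
print(eq)
```

This is why the equalized image looks so much brighter and higher-contrast: the crowded dark levels get pushed apart across the whole display range.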

The final part of the lab works with binary change detection, also called image differencing. The first task was to create a difference image. This was done by bringing in two images, one from 1991 and the other from 2011, and using the Two Image Functions tool under the Raster tab. After inputting the two images, changing the operation from + to -, and selecting only layer 4, we saved the image to our folder.

Left: Original 1991 Image
Right: Pixels that have changed between 1991 and 2011
After working with the image we then took a look at the metadata and viewed the histogram. Given the mean and standard deviation, we could figure out what portion of the image fell within the upper and lower limits. The second section of this part was to map the changed pixels in the difference image using the Spatial Modeler. This was done using the equation

ΔBVijk = BVijk(2011) − BVijk(1991) + c

where
        ΔBVijk = change in pixel value
        BVijk(2011) = brightness value in the 2011 image
        BVijk(1991) = brightness value in the 1991 image
        c = a constant

To find the difference we first used Model Maker. Using just the basic functions, we created two models. The first subtracted the 1991 Near Infrared band from the 2011 Near Infrared band and added the constant; its output fed the next model. The second model detected the change/no-change threshold value, using the conditional either/if/otherwise function. Running the model produced an image showing where the change was. We later brought this image into ArcMap and overlaid it on the 1991 Near Infrared band image to see where changes had occurred.
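The two models above can be sketched in a few lines of NumPy: one function builds the difference image from the equation, the other flags pixels whose difference falls outside the mean plus or minus some number of standard deviations. The constant and the 1.5-sigma threshold here are illustrative assumptions, not the exact values used in the lab:

```python
import numpy as np

def difference_image(band_2011, band_1991, c=127.0):
    """Delta BV = BV(2011) - BV(1991) + c; the constant keeps the
    difference values positive for an 8-bit output."""
    return band_2011.astype(np.float64) - band_1991.astype(np.float64) + c

def change_mask(diff, n_std=1.5):
    """Binary change/no-change: flag pixels more than n_std standard
    deviations from the mean difference (threshold choice is an assumption)."""
    mu, sigma = diff.mean(), diff.std()
    return (diff < mu - n_std * sigma) | (diff > mu + n_std * sigma)

nir_1991 = np.array([[100, 100], [100, 100]], dtype=np.uint8)
nir_2011 = np.array([[102,  98], [101, 200]], dtype=np.uint8)  # one big change
diff = difference_image(nir_2011, nir_1991)
print(change_mask(diff))  # True only where the change exceeds the threshold
```

The resulting boolean mask plays the role of the change/no-change image that was later overlaid on the 1991 band in ArcMap.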

Results
The results from the last section showed that the changed pixels correspond to changing land. Over the past twenty years the area has seen urbanization, road creation, changes in farmland, possible water level changes, and many other differences.

Sources
Erdas Imagine 2013
ArcMap 10.2
Images provided by Dr. Wilson