Most wildlands in the US have evolved with fire and depend on periodic blazes. The Fire Monitoring and Assessment Platform (FireMAP) seeks to provide the capability to monitor the severity of wildland fires responsively, while maintaining safety and affordability. FireMAP is composed of unmanned aerial systems (UAS) and software to process and geo-analyze imagery. The UAS performs a post-fire flyover of the affected area and acquires imagery, which is georeferenced and mosaicked to create a composite image. The software then analyzes the imagery, identifying the burn extent and severity.

Image Acquisition - How does FireMAP obtain images?

Image acquisition begins by defining a project area to map. Our UAS flies at 400 feet above ground level, following an efficient route planned over the project area that accounts for topography and other flight conditions. Automating preprocessing and georeferencing allows the collected imagery to be transformed more quickly into actionable knowledge for FireMAP users.
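
Conceptually, the route-planning step can be pictured as a simple serpentine sweep over the project area. The Python sketch below is a minimal illustration under assumed names and parameters, not FireMAP's actual planner; a real plan must also account for topography, wind, and battery limits as noted above.

```python
# Hypothetical sketch of a serpentine ("lawnmower") waypoint pattern over a
# rectangular project area at a fixed altitude; not FireMAP's planner.
def plan_route(x_min, y_min, x_max, y_max, spacing_m, altitude_ft=400):
    """Return (x, y, altitude) waypoints sweeping back and forth over the area."""
    waypoints = []
    y = y_min
    heading_east = True
    while y <= y_max:
        row = [(x_min, y, altitude_ft), (x_max, y, altitude_ft)]
        waypoints.extend(row if heading_east else row[::-1])
        heading_east = not heading_east
        y += spacing_m  # row spacing sets the overlap between image strips
    return waypoints

print(len(plan_route(0, 0, 500, 300, spacing_m=40)))  # 16 waypoints
```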

Our imagery comes from our UAS, specifically a DJI Phantom 4 and a DJI Inspire 1. One reason we use UAS is the increase in resolution: UAS imagery acquired at 200 feet has a resolution of 2 centimeters, far better than manned aerial imagery, which has a resolution of 0.5 meters. Another problem UAS help fix is shadows, which confuse the classifier. Because shadows are smallest when the sun is at its highest, we acquire our images around that time of day; with manned aerial imagery we have no control over when the image is taken.
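
The resolution claim can be sanity-checked with the standard ground-sample-distance (GSD) formula. The sensor numbers below are nominal DJI Phantom 4 camera specifications assumed for illustration (roughly a 6.17 mm wide sensor, 3.61 mm focal length, 4000-pixel image width), not figures from the FireMAP team.

```python
# Approximate ground sample distance (GSD) from assumed camera specs.
SENSOR_WIDTH_MM = 6.17    # nominal 1/2.3" sensor width
FOCAL_LENGTH_MM = 3.61
IMAGE_WIDTH_PX = 4000

def gsd_cm(altitude_ft):
    """Centimeters of ground covered by one pixel at the given altitude."""
    altitude_mm = altitude_ft * 304.8  # feet to millimeters
    ground_width_mm = SENSOR_WIDTH_MM * altitude_mm / FOCAL_LENGTH_MM
    return ground_width_mm / IMAGE_WIDTH_PX / 10  # mm per pixel -> cm

print(round(gsd_cm(200), 1))  # about 2.6 cm per pixel at 200 feet AGL
```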

After acquiring all of the imagery, it is loaded into a computer software package called Pix4D to create an orthomosaic: a collection of photos stitched together to give an accurate representation of the earth's surface. Providing a true top-down perspective is essential for our algorithms to classify the finished orthomosaic. The image below shows a typical flight path our UAS would fly for image acquisition; each red dot represents an image taken.

[Figure: typical UAS flight path over a project area; each red dot represents an image capture location.]

Here is an example of a finished orthomosaic, produced with the computer software Pix4D, from a planned UAS flight for archaeology mapping in the Boise National Forest.

[Figure: finished orthomosaic from an archaeology mapping flight in the Boise National Forest, produced in Pix4D.]
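
Pix4D performs full photogrammetric processing, including the orthorectification that makes the mosaic a true top-down map. As a rough illustration of the stitching idea only, the sketch below uses OpenCV's scan-mode stitcher on hypothetical file names; it aligns and blends overlapping nadir images but does not orthorectify them the way Pix4D does.

```python
import cv2 as cv

# Illustration only: merge overlapping top-down images into a single mosaic.
paths = ["img_001.jpg", "img_002.jpg", "img_003.jpg"]  # hypothetical names
images = [cv.imread(p) for p in paths]

stitcher = cv.Stitcher_create(cv.Stitcher_SCANS)  # SCANS mode suits nadir imagery
status, mosaic = stitcher.stitch(images)

if status == cv.Stitcher_OK:
    cv.imwrite("mosaic.jpg", mosaic)
else:
    print("Stitching failed with status", status)
```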

Adding a near-infrared camera to our sensor payload would also give us more accurate results. Since burned material does not reflect near-infrared light while shaded vegetation does, the problem of shadows being identified as burned areas would be solved.
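
The logic behind the near-infrared fix can be shown with a toy threshold rule, using made-up reflectance values on a 0-1 scale; the thresholds here are illustrative assumptions, not calibrated values.

```python
# Burned material is dark in near-infrared (NIR), while shaded vegetation
# still reflects NIR strongly, so an NIR band separates two cases that can
# look identical in RGB. Thresholds are made up for illustration.
def looks_burned(red, nir, red_thresh=0.15, nir_thresh=0.2):
    """Flag a pixel as burned only if it is dark in both red and NIR."""
    return red < red_thresh and nir < nir_thresh

print(looks_burned(red=0.05, nir=0.05))  # dark in both bands -> True (burned)
print(looks_burned(red=0.05, nir=0.60))  # shadowed vegetation -> False
```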

After all of the processing is done, the imagery is loaded into a cloud-based database, where the machine learning algorithms can be applied to our data.

Analytics Poster - How does FireMAP tell where fire burned?


In order for the computer to understand what we are looking for, we have to train it. We use the Training Data Selector, a program built by a previous computer science graduate, which allows us to select parts of an image to use as training data for our algorithms.
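
A minimal sketch of that selection step (this is not the Training Data Selector itself; the file name and rectangle coordinates are assumed) might collect labeled pixels from user-chosen regions like this:

```python
import numpy as np
import cv2 as cv

image = cv.imread("orthomosaic.tif")  # hypothetical input mosaic

# (x, y, width, height) rectangles the user marked, each with a class label.
regions = [
    ((120, 340, 40, 40), 1),  # 1 = burned
    ((900, 75, 40, 40), 0),   # 0 = unburned
]

samples, labels = [], []
for (rx, ry, rw, rh), cls in regions:
    patch = image[ry:ry + rh, rx:rx + rw]  # pixels inside the rectangle
    samples.append(patch.reshape(-1, 3))   # one row of B, G, R per pixel
    labels.append(np.full(rw * rh, cls))

X = np.concatenate(samples).astype(np.float32)  # n-by-3 training array
y = np.concatenate(labels).astype(np.int32)     # matching class labels
```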

The analysis of acquired imagery is a vital step in creating actionable knowledge. FireMAP utilizes machine learning classifiers to classify each pixel within an image as burned or unburned based on the pixel's electromagnetic signature. Optionally, the classifiers may also classify unburned pixels by vegetation type.

 

Machine Learning Background

Using classic machine learning algorithms, such as a Support Vector Machine (SVM), is key to identifying burned areas in hyper-spatial aerial imagery based on the spectral signature (reflectance of red, green, and blue light) of individual pixels. The identification of burned surfaces is a critical step in determining the fire boundary, unburned islands, and the severity of the fire's effects on local ecology.

Support Vector Machine classifiers utilize a nonlinear mapping to transform training pixels into a higher-dimensional pattern hyperspace. Within this new hyperspace, the classifier searches for the linear optimal separating hyperplane, which represents the boundary between training pixels of one class (burned) and the other class (unburned) (Han, 2012). Image pixels are then transformed into that same pattern hyperspace and classified according to which side of the hyperplane they fall on.

When mapping wildland fire effects with training data in which each class contains fewer than 1,000 pixels and the classes are balanced to within 5% of one another, the SVM currently achieves 99.59% burn-extent classification accuracy and 98.75% biomass-consumption classification accuracy (Pacheco et al., 2018).
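
As a toy illustration of this idea (the sample values are invented, and the parameters are chosen only so the example runs), OpenCV's SVM with an RBF kernel supplies the nonlinear mapping, treating each pixel's red, green, and blue reflectance as a point in three dimensions:

```python
import numpy as np
import cv2 as cv

# Four invented training pixels: dark (burned) vs. greenish (unburned).
X = np.array([[30, 25, 20], [40, 35, 30],
              [90, 140, 60], [110, 160, 80]], dtype=np.float32)
y = np.array([1, 1, 0, 0], dtype=np.int32)  # 1 = burned, 0 = unburned

svm = cv.ml.SVM_create()
svm.setType(cv.ml.SVM_C_SVC)
svm.setKernel(cv.ml.SVM_RBF)  # nonlinear mapping into the pattern hyperspace
svm.setC(1.0)
svm.setGamma(0.001)
svm.train(X, cv.ml.ROW_SAMPLE, y)

# A new dark pixel lands on the burned side of the separating hyperplane.
_, pred = svm.predict(np.array([[35, 30, 25]], dtype=np.float32))
print(int(pred[0, 0]))  # 1 -> classified as burned
```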

Methods

 

The Airframe and Sensor

The DJI Phantom is the UAS that the FireMAP team uses to collect its imagery. Initial work is done with the DJI Phantom 4 and DJI Inspire 1, both of which are equipped with a multi-spectral camera.

  • Spatial resolution - A UAS altitude of 200 feet above ground level (AGL) yields a resolution of three centimeters, compared with a minimum resolution of 0.5 meters from manned aircraft.
  • Temporal resolution - We acquire imagery when we want it, and we minimize shadows by taking images when the sun is at its highest.


 

Image Classification Utilizing Machine Learning

Imagery of a burn was acquired using an unmanned aircraft system (UAS). The imagery is georeferenced based on GPS data from the UAS and then run through machine learning classifiers in order to determine the extent of the fire.

  • Training examples are selected representing pixels in burned and unburned areas of an image. Each pixel with a known classification of burned or unburned is loaded into a known-pixel array (X), storing the pixel's reflectance values for the red, green, and blue bands of the image. This results in an n-by-3 array, where n is the number of training pixels.
  • These training samples, along with their respective classifications, are used to train the classifier, building a model from which to classify additional pixels as burned or unburned.
  • The image is processed pixel by pixel with the OpenCV implementation of the SVM classifier. Each pixel is attributed with the result from the machine learning classifier.
  • The resulting classified raster may be post-processed to remove salt-and-pepper noise, reclassify outliers, or resample the raster to a resolution more suitable to the user's needs (a sketch of these steps appears below).
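
The following sketch strings these steps together with OpenCV's SVM, under hypothetical file names for the orthomosaic and training arrays; the post-processing shown is a simple median filter for salt-and-pepper noise.

```python
import numpy as np
import cv2 as cv

image = cv.imread("orthomosaic.tif")  # hypothetical mosaicked, georeferenced image
h, w, _ = image.shape

X = np.load("training_pixels.npy").astype(np.float32)  # n-by-3 known-pixel array
y = np.load("training_labels.npy").astype(np.int32)    # 0 = unburned, 1 = burned

svm = cv.ml.SVM_create()
svm.setType(cv.ml.SVM_C_SVC)
svm.setKernel(cv.ml.SVM_RBF)
svm.trainAuto(X, cv.ml.ROW_SAMPLE, y)  # cross-validated search for C and gamma

# Classify every pixel by flattening the image into an (h*w)-by-3 sample array.
pixels = image.reshape(-1, 3).astype(np.float32)
_, labels = svm.predict(pixels)
classified = labels.reshape(h, w).astype(np.uint8)

# Post-process: a median filter suppresses salt-and-pepper noise.
cleaned = cv.medianBlur(classified * 255, 5)
cv.imwrite("burn_extent.png", cleaned)
```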

Results

 
  • Current work is done utilizing the DJI Phantom 4 and DJI Inspire 1; the imagery is of higher resolution than what is otherwise currently available.
  • Burned surfaces are identified with considerable accuracy (99.59% burn-extent classification accuracy; Pacheco et al., 2018).
  • Vegetation types are also classified with significant accuracy.
  • The tools and processes from FireMAP can not only be used for post-fire mapping, but have also proven successful in applying machine learning to archaeology and prostate cancer.

References
Pacheco, R., Peltzer, B., Hamilton, D., & Myers, B. (2018, May). KNN vs SVM: A comparison of algorithms. Fire Continuum Conference, Missoula, MT.