There was a time when satellite imagery was stored on magnetic tape. That time is no longer. Google Earth Engine has created a simple API to access open imagery, including imagery from Landsat and the MODIS sensor. We use the GEE API to check alerts of forest clearing activity. The alerts could be user-submitted stories from the ground or moderate-resolution alerts from satellite imagery, like the FORMA alerts based on MODIS data. When an alert registers, we immediately go to the highest resolution imagery available to get some context. Consider, for example, the set of recent Landsat 8 images for a location flagged by the FORMA alerting system:
There are clearly man-made patterns carved into the landscape. The FORMA algorithm is based on machine interpretation of spectral signals, searching for indications that are historically associated with forest cover loss. But the machine learning algorithm is a blunt instrument for identifying forest cover loss; most remote sensing researchers trust their own human interpretation over the machine-derived results. The machine requires -- at the very least -- explicit direction on which dimensions to measure. Greenness is one dimension; the size and shape of irregularities in the forest may be others. Humans can infer along unspecified dimensions. I count myself as part of this group. It is much easier for me to interpret contextual information or the textural patterns in the images.
This is why it is valuable to have humans double-check the machine output. We can do this ourselves. Or, better yet, we can ask you to help us classify satellite images. You, the crowd. Crowdsourcing. We are building a mobile application to facilitate this exercise. The mobile application rides on our GFW API, which underlies the Global Forest Watch site. Specifically, we utilize the truth branch of the API.
How do we grab the imagery?
We called this branch of the API truth because we use it to validate the alerts, whether they come from untrustworthy machines or from humans. The alerts are evaluated against the best available imagery, which, for now, is from Landsat 8. The process would be roughly the same for higher-resolution imagery from, say, Planet Labs or DigitalGlobe.
The API accepts a few parameters, listed here:
Location (required): Latitude and longitude, in degrees, of the centroid of the returned rectangular images.
Dimensions (required): Height and width, in meters, of the returned imagery.
Asset (required): The source of the imagery. For now, the only option is l8, for Landsat 8.
Date (required): The target date for the image in YYYY-MM-DD format. The returned image set will start with the asset image closest to the supplied date, followed by the images closest to 1, 2, and 3 months prior.
Format (optional): Defaults to thumbnail, which is the only option for now. We are actively working on an additional option that will supply the raw information on each of the eight Landsat bands. This is intended to solicit help from developers who can work directly with the band data.
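The Date parameter's windowing can be sketched in a few lines of Python. This is an illustration of the described behavior, not code from the API itself: given a supplied target date, the image set covers that date and the three months prior.

```python
from datetime import date

def months_prior(d: date, n: int) -> date:
    """Same calendar day n months before d, clamping the day to 28
    to avoid month-length overflow (good enough for a target date)."""
    month_index = d.year * 12 + (d.month - 1) - n
    year, month = divmod(month_index, 12)
    return date(year, month + 1, min(d.day, 28))

# Target dates for a query on December 14, 2013, oldest first:
targets = [months_prior(date(2013, 12, 14), n) for n in (3, 2, 1, 0)]
# September 14 through December 14, 2013, one per month
```

The actual API matches each target date to the nearest available asset image, so the returned scenes will only approximate these dates.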
The set of images above was acquired with a single such query. The images are centered at 1.5N, 101E, and each is 1.2 km high and 0.8 km wide. The first image is from September 15, 2013 and the last is from the queried date, December 14, 2013 -- one per month.
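Assembling such a query is straightforward. The sketch below builds a query URL for the example above; note that the endpoint path and the parameter names (lat, lon, height, and so on) are illustrative assumptions, not the API's documented spec.

```python
from urllib.parse import urlencode

# Hypothetical endpoint for the truth branch of the GFW API.
BASE = "http://example.com/api/truth"

def truth_query(lat, lon, height_m, width_m, date_str,
                asset="l8", fmt="thumbnail"):
    """Build a query URL for the truth branch (parameter names assumed)."""
    params = {
        "lat": lat, "lon": lon,               # centroid, in degrees
        "height": height_m, "width": width_m, # dimensions, in meters
        "date": date_str,                     # target date, YYYY-MM-DD
        "asset": asset,                       # only "l8" for now
        "format": fmt,                        # only "thumbnail" for now
    }
    return BASE + "?" + urlencode(params)

# The example from the text: 1.5N, 101E, 1.2 km by 0.8 km, December 14, 2013
url = truth_query(1.5, 101.0, 1200, 800, "2013-12-14")
```

Fetching the URL would then return the thumbnail set described above.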
We use this query as the basis for our mobile application. It is clear that clouds and cloud shadows pose a problem in identifying deforestation in this area. But it's not impossible to distinguish the outlines of progressively cleared forests.
So far, the crowdsourcing app seems promising. We'll keep you updated -- always in the most open and transparent way we know how.