Traditionally, to determine the accuracy, every selected point would have to be
visited in the field. This approach is costly and time-consuming, and the fields can be
difficult to access. However, another way to perform this analysis would be to create a database
with the selected points and corresponding Landsat images for analysis by an expert
photointerpreter. Since the points cover a wide area, a large database is required to store the
associated images used for this mapping. For each photointerpreter, it would then
be necessary to copy this large database onto their computer, install programs
to access the data, and create a way for them to register their classification of each point.
Given all this work, we instead chose to create a web classification tool in order to simplify and
centralize photointerpreter access to the database. This web tool allows the photointerpreter to simultaneously
view the selected points overlain on a Google Maps basemap (these images have a better spatial resolution than
Landsat images, but their acquisition dates are unknown) and view each point's corresponding EVI2-MODIS
time series, which helps the photointerpreter gain a better understanding of the land use of the selected point over time.
The photointerpreter also has access to the sugarcane map supplied by the SEMA (Environmental Secretariat of São Paulo).
This map is a mosaic of areas mapped by the São Paulo sugarcane agribusiness sector, created to determine the
amount of sugarcane area harvested through burning for the 2009/10 crop year. However, this map does not cover the
entire area of sugarcane in the state of São Paulo.
Figure 1 shows a screenshot of the web tool used by the photointerpreters. When creating the web tool, the objective
was to configure the layout and features so that the photointerpreter could efficiently classify the allocated points.
The project coordinator is the only person who has access to the saved classifications (Figure 2) of the photointerpreters.
Thus, each photointerpreter does not know the classification (sugarcane or no sugarcane) determined by the other three
photointerpreters, reducing bias in each classification decision.
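The blinding scheme described above can be sketched in code. The following is a minimal, hypothetical illustration (the class and method names are our own, not part of the actual web tool): each photointerpreter's labels are stored separately, an interpreter can retrieve only their own entries, and only the coordinator can view the full set.

```python
from collections import defaultdict


class ClassificationStore:
    """Hypothetical sketch of blind classification storage: each
    photointerpreter's sugarcane / no-sugarcane labels are kept
    separately, so no interpreter can see another's decisions."""

    VALID_LABELS = ("sugarcane", "no sugarcane")

    def __init__(self):
        # {interpreter_id: {point_id: label}}
        self._labels = defaultdict(dict)

    def register(self, interpreter, point_id, label):
        # Record one interpreter's classification of one selected point.
        if label not in self.VALID_LABELS:
            raise ValueError(f"label must be one of {self.VALID_LABELS}")
        self._labels[interpreter][point_id] = label

    def my_labels(self, interpreter):
        # An interpreter may retrieve only their own classifications.
        return dict(self._labels[interpreter])

    def all_labels(self, requester):
        # Only the project coordinator may view every interpreter's labels.
        if requester != "coordinator":
            raise PermissionError("only the coordinator can view all labels")
        return {who: dict(pts) for who, pts in self._labels.items()}
```

In this sketch the access check is a simple role comparison; a real deployment would tie it to the web tool's login system, but the essential point is the same: labels are written independently and read jointly only by the coordinator.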