Human visual perception almost always outperforms computer image processing algorithms. For example, your brain knows a river when it sees one, but a computer cannot distinguish rivers from lakes, roads, or sewage treatment plants.
Especially with UAV image data, which combines extremely high spatial resolution with minimal spectral resolution, a change in thinking is required. It makes more sense to think in terms of objects or entities to be identified than to classify individual pixels. The basic principle of object-based image analysis (OBIA) is: segment first, then classify.
The segmentation process is algorithm-dependent, but it iteratively looks for similarities in the spatial, structural and spectral dimensions in order to group neighboring, similar pixels into objects. These segments are then classified in a subsequent step using supervised training data.
The OBIA classification example below is typical of how such operations are performed manually with a software package. It consists of the steps described below.
In software-based processing, additional steps often have to be performed for technical reasons. Even more often, the processing of the individual steps is not strictly linear, since intermediate results are reused in different steps. The following figure shows the process described step by step below as a graphical model, which you can integrate into QGIS as a tool in the Processing Toolbox.
OBIA classification Workflow for Orthoimages
For reference you may download the basic data. In addition, you may download the OBIA workflow above as a QGIS model. You can add it to your QGIS project by clicking the first icon `Models` in the Processing Toolbox and choosing `Add Model to Toolbox`. Please note that the model runs with fixed default values; to modify it, right-click on the model and choose `Edit Model`.
In the following step-by-step guide, an OBIA approach with QGIS and the OTB Toolbox is carried out as a template example. There are many segmentation algorithms and integrated classification methods. The mean-shift method used here, followed by training a Support Vector Machine, is a robust and common approach. The extent of the feature space (here called `Range Radius`), the search space (`Spatial Radius`) and the size of the segmented objects (`Minimum Region Size`) are crucial for a satisfying result. The principle is transferable to other forms of OBIA, and despite abundant literature and some good instructions, finding suitable values remains largely an empirical exercise.
If you need to learn how to digitize with QGIS you may follow this tutorial. However, we will only digitize points, not polygons. Open `Main Menu->Settings->Digitize` and check `Reuse last entered attribute values`. This makes it much more comfortable to digitize training points of one class in series.
Create a point vector file and digitize the following classes:
class | CLASS_ID |
---|---|
water | 1 |
meadows | 2 |
meadows-rich | 3 |
bare-soil-dry | 4 |
crop | 5 |
green-trees-shrubs | 6 |
dead-wood | 7 |
other | 8 |
Provide a minimum of 10 widely spread sampling points.
Save this file as `sample.gpkg`.
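If you prefer to prepare the sampling layer programmatically, the following PyQGIS sketch creates an empty point layer with a `CLASS_ID` field and writes it to a GeoPackage. The CRS and the output path are assumptions you will need to adapt; the points themselves are still digitized by hand.

```python
# Minimal PyQGIS sketch (QGIS Python console): create an empty point layer
# with a CLASS_ID field and save it as sample.gpkg.
# Assumption: EPSG:25832 and the relative output path - adapt to your project.
from qgis.core import QgsField, QgsProject, QgsVectorFileWriter, QgsVectorLayer
from qgis.PyQt.QtCore import QVariant

layer = QgsVectorLayer("Point?crs=EPSG:25832", "sample", "memory")
layer.dataProvider().addAttributes([QgsField("CLASS_ID", QVariant.Int)])
layer.updateFields()

options = QgsVectorFileWriter.SaveVectorOptions()
options.driverName = "GPKG"
QgsVectorFileWriter.writeAsVectorFormatV3(
    layer, "sample.gpkg", QgsProject.instance().transformContext(), options)

# Load the saved GeoPackage so the points can be digitized into it
QgsProject.instance().addMapLayer(QgsVectorLayer("sample.gpkg", "sample", "ogr"))
```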
In the search field of the `Processing Toolbox`, type `segmentation` and double-click `Segmentation`. Set the following parameters:

* `Input image`: `example-5.tif`
* `Segmentation algorithm`: choose `meanshift` from the drop-down list
* `Spatial Radius`: 25. This determines the spatial search range of the segmentation and is also experimental; try to identify the scale of your major classes in pixels.
* `Range Radius`: 25. We are dealing with RGB images with a value range of 0-255. The optimal value depends on the data type and the dynamic range of the input image and requires experimental trials for the specific classification objective.
* `Minimum Region Size`: 25. The minimum size of a region (in pixels) in the segmentation; smaller clusters will be merged into the neighboring cluster with the closest radiometry.
* `Processing mode`: `Vector`
* check `8-neighborhood connectivity`
* `Minimum object size` in pixels: 200
* `Output vector file`: `segments-meanshift.shp`
* `Run`

To display the segment outlines over the image, open `Properties->Symbology->Simple Fill` and set `Fill Style`: `No Brush` and `Stroke color`: `white`.
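The same step can also be scripted from the QGIS Python console. This is only a sketch: it assumes the OTB provider is installed, that the algorithm id is `otb:Segmentation`, and that the parameter keys mirror the `otbcli_Segmentation` command-line names, which may differ slightly between OTB/QGIS versions.

```python
# Hedged sketch of the mean-shift segmentation step via the OTB provider.
# Parameter keys are assumed to follow the OTB CLI naming - verify them in
# your installation before relying on this.
import processing

processing.run("otb:Segmentation", {
    "in": "example-5.tif",                 # input image
    "filter": "meanshift",                 # segmentation algorithm
    "filter.meanshift.spatialr": 25,       # Spatial Radius
    "filter.meanshift.ranger": 25,         # Range Radius
    "filter.meanshift.minsize": 25,        # Minimum Region Size
    "mode": "vector",                      # Processing mode: vector output
    "mode.vector.neighbor": True,          # 8-neighborhood connectivity
    "mode.vector.minsize": 200,            # Minimum object size
    "mode.vector.out": "segments-meanshift.shp",
})
```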
Type `zonalstats` in the search field of the `Processing Toolbox` and open the tool `ZonalStatistics`. You will find it in the image manipulation section of OTB.

* `Input vector data`: the vector file with the segments from the segmentation above, `segments-meanshift.shp`
* `Output vector`: `segments-meanshift-zonal.shp`
* `Run`
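A scripted equivalent might look like the sketch below. The parameter keys are assumptions based on the `otbcli_ZonalStatistics` interface (zones supplied as a vector layer, statistics written back to a vector file); check them against your OTB version before use. The input image is assumed to be the orthoimage used for segmentation.

```python
# Hedged sketch: per-segment band statistics (mean_*, stdev_*) are appended
# to the segment polygons. Keys below are assumptions - confirm with
# otbcli_ZonalStatistics -help or the QGIS dialog log.
import processing

processing.run("otb:ZonalStatistics", {
    "in": "example-5.tif",                          # image the statistics are computed from (assumption)
    "inzone": "vector",                             # zones given as vector polygons
    "inzone.vector.in": "segments-meanshift.shp",   # input vector data (segments)
    "out": "vector",
    "out.vector.filename": "segments-meanshift-zonal.shp",
})
```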
Type `join` in the search field of the `Processing Toolbox` and double-click `Join Attributes by Location`.

* `Base Layer`: `segments-meanshift-zonal.shp`
* `Join Layer`: `sample.gpkg`
* `Join Type`: choose `Take attributes of the first matching…`
* check `Discard records which could not be joined`

If the join fails because of invalid geometries, type `fix` in the search field of the `Processing Toolbox` and open `Fix Geometries`, which will in most cases do the job.
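Both of these are standard QGIS algorithms, so they can also be chained in the Python console. The sketch below repairs the geometries first and then runs the spatial join; the output file name is hypothetical (the tutorial itself continues with the joined attributes available in the training file used in the next step).

```python
# Hedged sketch: repair invalid geometries, then attach the CLASS_ID of the
# sample points to the segment polygons they fall into.
import processing

fixed = processing.run("native:fixgeometries", {
    "INPUT": "segments-meanshift-zonal.shp",
    "OUTPUT": "memory:",
})["OUTPUT"]

processing.run("native:joinattributesbylocation", {
    "INPUT": fixed,                      # Base Layer: segments with zonal statistics
    "JOIN": "sample.gpkg",               # Join Layer: digitized training points
    "PREDICATE": [0],                    # intersects
    "METHOD": 1,                         # take attributes of the first matching feature only
    "DISCARD_NONMATCHING": True,         # discard records which could not be joined
    "OUTPUT": "training-segments.shp",   # hypothetical output name
})
```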
Type `train` in the search field of the `Processing Toolbox` and open `TrainVectorClassifier`.

* `Input Vector Data List`: select the correct vector file by clicking `…` and browsing directly to the file containing the training polygons, `segments-meanshift-zonal.shp`
* `Output model filename`: `lahn-gi-spann-obia.model`
* `Field names for training features`: copy and paste `mean_0 stdev_0 mean_1 stdev_1 mean_3 stdev_3 mean_2 stdev_2`
* `Field containing the class id for supervision`: `CLASS_ID`
* choose `libsvm` as classifier. Usually the straightforward Support Vector Machine does a good job.
* set the kernel type to `linear`
* set the SVM model type to `csvc`
* set `Parameters optimization` to `ON`
* `Run`
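As a scripted sketch, the training step could look like this. It assumes the algorithm id `otb:TrainVectorClassifier`, parameter keys mirroring the `otbcli_TrainVectorClassifier` names, and that the input file is the joined segment layer that carries both the zonal statistics and the `CLASS_ID` labels (here the hypothetical `training-segments.shp` from the join sketch above; the tutorial dialog points at `segments-meanshift-zonal.shp`).

```python
# Hedged sketch of SVM training on the labeled segments. The exact form of
# the feature list (space-separated string vs. Python list) may depend on
# the provider version.
import processing

processing.run("otb:TrainVectorClassifier", {
    "io.vd": ["training-segments.shp"],      # Input Vector Data List (hypothetical name)
    "io.out": "lahn-gi-spann-obia.model",    # Output model filename
    "feat": "mean_0 stdev_0 mean_1 stdev_1 mean_3 stdev_3 mean_2 stdev_2",
    "cfield": "CLASS_ID",                    # field containing the class id for supervision
    "classifier": "libsvm",                  # Support Vector Machine
    "classifier.libsvm.k": "linear",         # kernel type
    "classifier.libsvm.m": "csvc",           # SVM model type
    "classifier.libsvm.opt": True,           # parameters optimization ON
})
```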
Type `class` in the search field of the `Processing Toolbox` and open `VectorClassifier`.

* `Vector Data`: select the correct vector file manually by clicking `…` and browsing directly to the file containing the segments and features for the whole image, `lahn-gi-spann-segments-meanshift-zonal.shp`
* `Input model` file: `lahn-gi-spann-obia.model`
* set the class field to `CLASS_ID`
* `Field names to be calculated`: the same attributes as above, `mean_0 stdev_0 mean_1 stdev_1 mean_3 stdev_3 mean_2 stdev_2`
* `Output vector` data: `lahn-gi-spann-classified_obia.shp`
* `Run`
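The corresponding scripted sketch applies the trained model to every segment. Algorithm id and keys are again assumed to mirror `otbcli_VectorClassifier`; file names follow the tutorial.

```python
# Hedged sketch: classify all segments with the trained SVM model and write
# the predicted class into the CLASS_ID field of the output file.
import processing

processing.run("otb:VectorClassifier", {
    "in": "lahn-gi-spann-segments-meanshift-zonal.shp",  # segments + features for the whole image
    "model": "lahn-gi-spann-obia.model",                 # trained SVM model
    "cfield": "CLASS_ID",                                # field written with the predicted class
    "feat": "mean_0 stdev_0 mean_1 stdev_1 mean_3 stdev_3 mean_2 stdev_2",
    "out": "lahn-gi-spann-classified_obia.shp",          # output vector data
})
```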
To color the result, open `Layer->Layer properties->Symbology->Style->Load style...` and load a suitable style. The classification is predominantly excellent in places; however, there are also significant misclassifications.
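If you keep a prepared style file next to your project, it can also be applied from the Python console. The style file name below is an assumption.

```python
# Apply a saved QML style to the currently selected (classified) layer.
# "obia-classes.qml" is a hypothetical file name - replace it with your own.
from qgis.utils import iface

layer = iface.activeLayer()
layer.loadNamedStyle("obia-classes.qml")
layer.triggerRepaint()
iface.layerTreeView().refreshLayerSymbology(layer.id())
```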