OASVMTRAIN

Object-based SVM training


Environments: PYTHON :: EASI

Description


OASVMTRAIN uses a set of training samples and object attributes, stored in a segmentation attribute table, to create a series of hyperplanes, and then writes them to an output file containing a Support Vector Machine (SVM) training model.

Parameters


oasvmtrain (filv, dbvs, trnfld, tfile, kernel, svmnorm, trnmodel)

Name       Type       Caption                                Length   Value range
FILV*      str        Input vector file                      1 -
DBVS*      List[int]  Segment number of vector layer         1 - 1    1 -
TRNFLD     str        Name of training field                 0 -      Default: Training
TFILE*     str        Text file of field names               1 -
KERNEL*    str        Kernel function                        3 - 6    LINEAR | POLY | RBF | SIGM
                                                                      Default: RBF
SVMNORM    str        SVM attribute normalization            0 - 3    YES | NO
                                                                      Default: YES
TRNMODEL*  str        Output file containing training model  1 -

* Required parameter

Parameter descriptions

FILV

The name of the vector file that contains the segmentation layer and training field.

DBVS

The segment number of the vector layer that contains the segmentation and training field.

TRNFLD

The name of the field containing the training samples.

The default name is Training.

TFILE

The name of a text file that contains the names of the fields to use for training. The file name extension must be .txt.

Typically, this is a file generated by running OAFLDNMEXP.

KERNEL

The type of kernel function for the SVM classification.

You can specify the value as one of the following:
  • LINEAR
  • POLY
  • RBF
  • SIGM

For more information about kernels, see Details.

The default value is RBF.

SVMNORM

Whether to apply attribute normalization before SVM classification.

You can specify the value as one of the following:
  • YES
  • NO

This parameter is optional. The default value is YES.

For more information about normalization, see Details.

TRNMODEL

A text file to create and to which to write the parameters of the SVM training model.

A file of the same name must not exist in the output folder.

The file name extension must be .txt.


Details

SVM workflow

A typical workflow starts by running the OASEG algorithm to segment your image into a series of object polygons. Next, you calculate a set of attributes (statistical, geometrical, textural, and so on) by running the OACALCATT algorithm; when you are working with SAR data, use OASEGSAR and OACALCATTSAR instead. You can then manually collect or import training samples for land-cover or land-use classes in Focus Object Analyst, or use OAGTIMPORT for this task. The training samples are stored in a field of the segmentation attribute table with a default name of Training.

To train an SVM model with OASVMTRAIN, the following is required as input:
  • A segmentation with a field containing training samples
  • A list of attributes

You can create the list of attributes by running OAFLDNMEXP. Alternatively, the list can be read directly from the table of segmentation attributes using field metadata that was created by OACALCATT or OACALCATTSAR.

Figure 1. Workflow of SVM training


SVM classification

Support Vector Machine (SVM) is a supervised machine-learning method, grounded in statistical-learning theory, that is used to classify high-dimensional data. The objective is to find the optimal separating hyperplane (decision surface, boundary) by maximizing the margin between classes, which is achieved by analyzing the training samples that lie at the edges of the class distributions.

These training samples are referred to as support vectors. The algorithm largely discards the other training samples, so the optimal hyperplane is fitted using effectively fewer of them. As a result, SVM can achieve good classification results even with comparatively small training sets.

In its simplest form, SVM is a linear binary classifier. To use SVM for multiclass applications, two main approaches have been suggested; the basic idea of both is to reduce the multiclass problem to a set of binary problems.
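As a minimal sketch of the binary case, a trained linear SVM classifies a sample by which side of the hyperplane it falls on. The weight vector w and bias b below are made-up values standing in for a trained model:

```python
import numpy as np

# Hypothetical trained parameters of a linear binary SVM: w is the
# normal of the separating hyperplane and b is the bias; real values
# would come out of training.
w = np.array([2.0, -1.0])
b = -0.5

def linear_svm_predict(x, w, b):
    """Classify a sample by the side of the hyperplane it falls on."""
    return 1 if np.dot(w, x) + b >= 0 else -1

print(linear_svm_predict(np.array([1.0, 0.0]), w, b))   # -> 1
print(linear_svm_predict(np.array([0.0, 2.0]), w, b))   # -> -1
```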

The first approach, which is the one used by PCI technology, is called one against all. It generates n classifiers, where n is the number of classes, each trained to separate one class from all the remaining classes; this requires solving n quadratic-programming (QP) optimization problems and interpreting n hyperplanes. The output is the class that corresponds to the SVM with the largest margin.
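The one-against-all decision step reduces to taking the class whose SVM reports the largest margin. In this sketch, the per-class margins are hypothetical numbers standing in for the outputs of n trained SVMs:

```python
import numpy as np

def one_vs_all_predict(decision_values):
    """decision_values[i] is the margin reported by the 'class i vs. rest'
    SVM; the winner is the class whose SVM reports the largest margin."""
    return int(np.argmax(decision_values))

# Three hypothetical per-class margins for one object:
margins = [-0.4, 1.3, 0.2]
print(one_vs_all_predict(margins))  # -> 1 (second class wins)
```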

The second approach, which is not used by PCI technology, is one against one. It combines several binary classifiers by performing pair-wise comparisons between all n classes: each classifier is trained on only two of the n classes, so all possible two-class classifiers are evaluated, giving a total of n(n–1)/2 classifiers.

Applying each classifier to the test-data vectors gives one vote to the winning class. The data is assigned the label of the class with the most votes.
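The voting scheme can be sketched as follows; the pairwise "classifier" here is a toy stand-in, not a trained SVM:

```python
from itertools import combinations

def one_vs_one_predict(classes, pairwise_winner):
    """pairwise_winner(a, b) returns the winner of the binary classifier
    trained on classes a and b; the final label is the class that
    collects the most votes over all n(n-1)/2 pairings."""
    votes = {c: 0 for c in classes}
    for a, b in combinations(classes, 2):   # n(n-1)/2 classifiers
        votes[pairwise_winner(a, b)] += 1
    return max(votes, key=votes.get)

# Toy pairwise rule standing in for trained classifiers: 'water' beats
# everything, otherwise the alphabetically smaller class wins.
def toy_winner(a, b):
    if "water" in (a, b):
        return "water"
    return min(a, b)

print(one_vs_one_predict(["forest", "urban", "water"], toy_winner))  # -> water
```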

SVM kernels

When two classes are not linearly separable in the original feature space, they might become separable in a higher-dimensional space. The kernel is a mathematical function the SVM classifier uses to map the support vectors derived from the training data into that higher-dimensional space.

There are four basic kernels:
  • Radial-basis function (RBF)
  • Linear
  • Polynomial
  • Sigmoid

Typically, the RBF kernel provides the best results.

The polynomial kernel is fixed to the third order.
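As a sketch, the four kernels in their common (LIBSVM-style) form can be written as plain functions. The gamma and coef0 values below are illustrative defaults, not the parameter values the algorithm actually selects during training:

```python
import numpy as np

def k_linear(u, v):
    """Linear kernel: the plain dot product."""
    return np.dot(u, v)

def k_poly(u, v, gamma=1.0, coef0=0.0):
    """Polynomial kernel, fixed to the third order as noted above."""
    return (gamma * np.dot(u, v) + coef0) ** 3

def k_rbf(u, v, gamma=1.0):
    """Radial-basis function kernel."""
    return np.exp(-gamma * np.sum((u - v) ** 2))

def k_sigmoid(u, v, gamma=1.0, coef0=0.0):
    """Sigmoid kernel."""
    return np.tanh(gamma * np.dot(u, v) + coef0)

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(k_linear(u, v))   # -> 0.0
print(k_rbf(u, v))      # -> exp(-2), about 0.135
```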

Optimization and cross-validation

Each SVM kernel has its own set of parameters that affect its behavior. For example, each includes a penalty constant (C) that discourages the model from over-fitting.

A specific optimization procedure, based on the concept of cross-validation, calculates appropriate values for these parameters (such as C) during model training. The calculated values generally achieve the best accuracy on the training samples while reducing the possibility of over-fitting the model.

Normalization of data

It is recommended to normalize the data so that each attribute contributes equally to discriminating the classes. Normalization is particularly necessary when the attributes mix various types; for example, spectral values with geometrical ones, or various SAR parameters with texture features. Attributes are normalized by linear scaling to the range zero through one; that is, the minimum value is mapped to zero, the maximum to one, and the values in between are scaled linearly.
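The linear scaling described above can be sketched as:

```python
def minmax_normalize(values):
    """Linearly scale attribute values to the range [0, 1]: the minimum
    maps to 0, the maximum to 1, and values in between scale linearly."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # constant attribute: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

print(minmax_normalize([10, 20, 40]))  # -> [0.0, 0.333..., 1.0]
```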


Example

from pci.oasvmtrain import oasvmtrain

filv = "l7_ms_seg25_0.5_0.5.pix"
dbvs = [2]
trnfld = "Training_set1"                               # Field containing the training samples
tfile = "l7_ms_seg25_0.5_0.5_attributes.txt"           # List of attributes to use to train the SVM model
kernel = "RBF"
svmnorm = "YES"
trnmodel = "l7_ms_seg25_0.5_0.5_svmtrain.txt"          # Output SVM model

oasvmtrain(filv, dbvs, trnfld, tfile, kernel, svmnorm, trnmodel)

References

The core SVM algorithm described herein is based on the open-source LIBSVM code contributed by C. C. Chang and C. J. Lin. For more information about the SVM algorithm and LIBSVM, and to download a PDF copy of the accompanying technical report, see csie.ntu.edu.tw.

© PCI Geomatics Enterprises, Inc.®, 2024. All rights reserved.