OASEG

Segment an image


Environments: PYTHON :: EASI


Description


OASEG applies a hierarchical region-growing segmentation to image data and writes the resulting objects to a vector layer. Each object represents a locally homogeneous area, as determined by the input channels and by homogeneity criteria based on scale, shape, and compactness.

Parameters


oaseg(fili, dbic, maskfile, mask, filo, ftype, dbsd, segscale, segshape, segcomp)

Name      Type         Caption                      Length   Value range
FILI*     str          Input file name              1 -
DBIC      List[int]    List of input channels       0 -
MASKFILE  str          Name of input AOI file       0 -
MASK      List[int]    Layer number of AOI          0 - 1
FILO      str          Output file name             0 -
FTYPE     str          Output file type             0 - 3    PIX | SHP (default: PIX)
DBSD      str          Output segment description   0 - 64   Default: Segmented Layer
SEGSCALE  List[float]  Scale of objects             0 - 1    5 - 1000
SEGSHAPE  List[float]  Shape of objects             0 - 1    0.1 - 0.9
SEGCOMP   List[float]  Compactness of objects       0 - 1    0.1 - 0.9

* Required parameter
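As a quick illustration of how the positional signature maps to the table above, the following minimal sketch supplies only the required input file plus explicit segmentation parameters. It assumes the usual PCI Python convention that empty strings and lists select a parameter's documented default; the file name is hypothetical.

from pci.oaseg import oaseg

# Minimal sketch (hypothetical file name); empty values are assumed to
# select the documented defaults.
oaseg("image.pix",          # FILI: input file (required)
      [],                   # DBIC: default = all channels
      "", [],               # MASKFILE, MASK: no AOI
      "",                   # FILO: default = write to the input file
      "PIX",                # FTYPE
      "Segmented Layer",    # DBSD
      [25],                 # SEGSCALE
      [0.5],                # SEGSHAPE
      [0.5])                # SEGCOMP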

Parameter descriptions

FILI

The name of a GDB-supported file that contains the image channels to use for segmentation.

DBIC

A list of channels in the raster image to process.

By default, all channels are processed.

MASKFILE

The file that has the vector layer containing the area of interest (AOI) to process.

This parameter is optional.

MASK

The number of the vector layer that contains the AOI to process.

A vector layer can contain multiple AOIs. Each must be a closed polygon.

If you specify a value for MASKFILE, you must specify a value for this parameter.

FILO

The name of the output file to which to write the segmentation.

If you do not specify a value, a new vector layer containing the segmentation is written to the input file you specified.

FTYPE

The format of the output file.

The following formats are supported:
  • PIX: PCIDSK
  • SHP: ESRI Shapefile

The default is PIX.

DBSD

Describes (in up to 64 characters) the contents or origins of the output layer.

SEGSCALE

The scale parameter of the multiresolution segmentation algorithm.

Scale is a unitless parameter that controls the average object size during the segmentation process. A small scale results in a greater number of objects at the expense of generalization. A large scale forces the segmentation to create fewer objects, but various types of land cover, for example, might be merged together.

SEGSHAPE

The shape parameter of the multiresolution segmentation algorithm.

A low shape value, such as 0.1, places a high emphasis on color (that is, pixel intensity), which is typically the most important aspect of creating meaningful objects.

SEGCOMP

The compactness parameter of the multiresolution segmentation algorithm.

A high compactness weighting, such as 0.9, produces more compact object boundaries, such as with crop fields or buildings.


Details

OASEG uses an open-source segmentation method, known as a multiresolution segmentation algorithm.

The first step in object-based image classification is segmentation: calculating discrete regions (image objects) by stratifying the image. These image objects are then used as the basic unit of analysis for developing image-analysis strategies, including classification and change detection.

To date, various image-segmentation techniques have been developed with mixed results. The earlier developments in segmentation techniques, however, were in the machine-vision domain and aimed at the analysis of patterns, the delineation of discontinuities on materials or artificial surfaces, and the quality control of products. Later, these concepts were used in remote sensing (RS) for attribute identification.

Functionally, the segmentation process partitions an image into statistically homogeneous regions or objects (segments) that are more uniform within themselves and differ from their adjacent neighbors. Another conceptually important aspect of image segmentation is its relevance to spatial-scale theory in RS, which describes how the local variance of image data in relation to the spatial resolution can be used to select the appropriate scale for mapping individual land-use classes. Image segmentation defines the size and the shape of image objects and influences the quality of the follow-on analysis: classification.

However, a single segmentation algorithm does not yet exist that can reproduce the same objects as a human can identify intuitively. To attempt to overcome this issue, image objects are created that are as homogeneous as possible and grouped together by using a classification process to create objects that can come close to human perception. This can be an iterative process in which various segmentation algorithms are used, or a specific algorithm is used with various parameters, to achieve a suitable result.
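One way to run such an iteration with OASEG itself is to call it repeatedly with different SEGSCALE values and compare the resulting layers. The sketch below uses hypothetical file names and assumes that an empty MASKFILE and MASK select the default (no AOI).

from pci.oaseg import oaseg

fili = "input.pix"                          # hypothetical input image
dbic = [1, 2, 3, 4]                         # channels to segment

for scale in (15, 25, 50):
    filo = "seg_scale_%d.pix" % scale       # one output file per trial
    dbsd = "Segmentation at scale %d" % scale
    oaseg(fili, dbic, "", [], filo, "PIX", dbsd, [scale], [0.5], [0.5])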

A data set of input imagery contains multiple channels. The underlying algorithm is a hierarchical, step-wise region growing that starts from random "seeds" spread over the entire image. This method can be classified as a "bottom-up" optimization routine that starts with a pixel and ends with segments that are groups of like pixels.

The criterion that defines the growth of a region can be based on the difference between a pixel's intensity and the mean of the region. The algorithm assesses local homogeneity based on spectral and spatial characteristics. The size of the objects is controlled by the scale value that you specify: the larger the scale, the larger the output objects. Other homogeneity criteria are based on shape and compactness.
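The following is a deliberately simplified, single-band sketch of this bottom-up idea: a pixel joins a growing region while its intensity stays close to the region's running mean, with the threshold standing in for the scale-driven homogeneity criterion. It is illustrative only and does not reproduce OASEG's multiresolution algorithm, which also weighs shape and compactness across multiple channels.

import numpy as np
from collections import deque

def toy_region_grow(img, seeds, max_diff):
    """Grow one region per seed; a pixel joins a region when the absolute
    difference between its intensity and the region's running mean is at
    most max_diff (a stand-in for the homogeneity criterion)."""
    labels = np.zeros(img.shape, dtype=int)
    for label, seed in enumerate(seeds, start=1):
        queue = deque([seed])
        total, count = 0.0, 0
        while queue:
            r, c = queue.popleft()
            if labels[r, c] != 0:
                continue                      # already assigned to a region
            mean = total / count if count else img[r, c]
            if abs(img[r, c] - mean) > max_diff:
                continue                      # too different from the region
            labels[r, c] = label
            total += img[r, c]
            count += 1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1] and labels[rr, cc] == 0:
                    queue.append((rr, cc))
    return labels

if __name__ == "__main__":
    img = np.array([[10, 11, 50, 52],
                    [12, 10, 51, 49],
                    [11, 12, 50, 50],
                    [10, 11, 49, 51]], dtype=float)
    # Two seeds, one in each homogeneous half of the toy image
    print(toy_region_grow(img, seeds=[(0, 0), (0, 3)], max_diff=5.0))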

The result of the segmentation process is as follows:
  • An initial abstraction of the original data
  • Creation of a vector (polygon) representation of the image objects

The polygons, at this level of abstraction, can be considered image objects in the image domain and do not necessarily represent objects in the real world.

Required available memory
Segmentation is computationally intensive and, as such, relies on the amount of memory (RAM) on your computer. The amount of memory used depends on the size (in pixels and channels) of the input raster. To calculate the amount of RAM (in gigabytes) that will be required by Object Analyst to process a file, you can use the following formula (a worked sketch follows the list):
  • nGB = nX × nY × (4 × nC + 29) ÷ 795364314.1
  • where:
  • nGB is the amount of required memory (RAM in gigabytes) to perform the segmentation
  • nX is the number of pixels in the x-dimension (pixels)
  • nY is the number of pixels in the y-dimension (lines)
  • nC is the number of channels
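For convenience, the formula can be evaluated directly; the helper function and the image size below are only illustrative.

def oaseg_required_ram_gb(n_x, n_y, n_c):
    # RAM estimate, in gigabytes, from the formula above
    return n_x * n_y * (4 * n_c + 29) / 795364314.1

# For example, a 10000 x 10000 pixel, 4-channel image:
print(round(oaseg_required_ram_gb(10000, 10000, 4), 2))   # about 5.66 GB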
Note: The segmentation algorithm respects NoData pixels in the source image. That is, a pixel defined as NoData by the NO_DATA_VALUE metadata tag will be excluded from the segmentation process. The file-level metadata tag is used as the default for each channel; however, channel-level tags, when available, will override the default. When metadata does not exist, each pixel in the input is considered valid.

Example

In the following example, the green (channel 2), red (channel 3), and infrared (channels 4 and 5) spectral bands of a Landsat-7 image are used. The segmentation is performed only on the part of the image corresponding to the AOI specified for the MASKFILE parameter.

from pci.oaseg import oaseg

fili = "l7_ms.pix"                            # input Landsat-7 image
dbic = [2, 3, 4, 5]                           # input channels: green, red, infrared
maskfile = "l7_ms_OA_Area_Of_Interest.pix"    # file containing the AOI vector layer
mask = [2]                                    # vector layer number of the AOI
filo = "l7_ms_seg_25_0.5_0.5.pix"             # output file
ftype = ""                                    # output format; default is PIX
dbsd = "Segmented layer"                      # output layer description
segscale = [25]                               # scale
segshape = [0.5]                              # shape
segcomp = [0.5]                               # compactness

oaseg(fili, dbic, maskfile, mask, filo, ftype, dbsd, segscale, segshape, segcomp)
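In this example, the output file name encodes the parameter values used (scale 25, shape 0.5, compactness 0.5), which makes it easier to keep track of, and compare, several trial segmentations.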

References

For more information about region-growing segmentation, scale, shape, and compactness parameters, see the following published works:

© PCI Geomatics Enterprises, Inc.®, 2024. All rights reserved.