Environments | PYTHON :: EASI |
oaseg(fili, dbic, maskfile, mask, filo, ftype, dbsd, segscale, segshape, segcomp)
| Name | Type | Caption | Length | Value range |
|---|---|---|---|---|
| FILI* | str | Input file name | 1 - | |
| DBIC | List[int] | List of input channels | 0 - | |
| MASKFILE | str | Name of input AOI file | 0 - | |
| MASK | List[int] | Layer number of AOI | 0 - 1 | |
| FILO | str | Output file name | 0 - | |
| FTYPE | str | Output file type | 0 - 3 | PIX or SHP (default: PIX) |
| DBSD | str | Output segment description | 0 - 64 | Default: Segmented Layer |
| SEGSCALE | List[float] | Scale of objects | 0 - 1 | 5 - 1000 |
| SEGSHAPE | List[float] | Shape of objects | 0 - 1 | 0.1 - 0.9 |
| SEGCOMP | List[float] | Compactness of objects | 0 - 1 | 0.1 - 0.9 |
FILI
The name of a GDB-supported file that contains the image channels to use for segmentation.
DBIC
A list of channels in the raster image to process.
By default, all channels are processed.
MASKFILE
The file that has the vector layer containing the area of interest (AOI) to process.
This parameter is optional.
MASK
The number of the vector layer that contains the AOI to process.
A vector layer can contain multiple AOIs. Each must be a closed polygon.
If you specify a value for MASKFILE, you must specify a value for this parameter.
FILO
The name of the output file to which to write the segmentation.
If you do not specify a value, a new vector layer containing the segmentation is written to the input file you specified.
FTYPE
The format of the output file.
The default is PIX.
DBSD
Describes (in up to 64 characters) the contents or origins of the output layer.
SEGSCALE
The scale parameter of the multiresolution segmentation algorithm.
Scale is a unitless parameter that controls the average object size during the segmentation process. A small scale produces a greater number of objects at the expense of generalization. A large scale forces the segmentation to create fewer objects, but different types of land cover, for example, might be merged together.
SEGSHAPE
The shape parameter of the multiresolution segmentation algorithm.
A low shape value, such as 0.1, places a high emphasis on color (that is, pixel intensity), which is typically the most important aspect of creating meaningful objects.
SEGCOMP
The compactness parameter of the multiresolution segmentation algorithm.
A high compactness weighting, such as 0.9, produces more compact object boundaries, such as with crop fields or buildings.
OASEG uses an open-source segmentation method known as multiresolution segmentation.
The first step in object-based image classification is segmentation: partitioning an image into discrete regions, or image objects. These image objects then serve as the basic unit of analysis for developing image-analysis strategies, including classification and change detection.
To date, various image-segmentation techniques have been developed with mixed results. The earlier developments in segmentation techniques, however, were in the machine-vision domain and aimed at the analysis of patterns, the delineation of discontinuities on materials or artificial surfaces, and the quality control of products. Later, these concepts were used in remote sensing (RS) for attribute identification.
Functionally, the segmentation process partitions an image into statistically homogenous regions or objects (segments) that are more uniform within themselves and differ from their adjacent neighbors. Another conceptually important aspect of image segmentation is its relevance to spatial-scale theory in RS, which describes how local variance of image data in relation to the spatial resolution can be used to select the appropriate scale for mapping individual land-use classes. Image segmentation defines the size and the shape of image objects and influences the quality of the follow-on analysis: classification.
However, no single segmentation algorithm yet exists that can reproduce the objects a human identifies intuitively. To work around this, image objects are created that are as homogeneous as possible and are then grouped by a classification process so that the result approaches human perception. This can be an iterative process in which various segmentation algorithms are tried, or a single algorithm is run with various parameters, until a suitable result is achieved.
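Such iterative tuning can be sketched as a simple parameter sweep over SEGSCALE, SEGSHAPE, and SEGCOMP. The grid values and the output-file naming below are illustrative assumptions, not recommendations:

```python
from itertools import product

# Illustrative candidate values (assumptions, not recommendations)
scales = [15, 25, 50]   # SEGSCALE candidates
shapes = [0.3, 0.5]     # SEGSHAPE candidates
comps = [0.5]           # SEGCOMP candidates

combos = list(product(scales, shapes, comps))
for scale, shape, comp in combos:
    # Encode the parameters in the output name so runs can be compared
    filo = f"l7_ms_seg_{scale}_{shape}_{comp}.pix"
    # In the PCI Python environment, each combination would be run as:
    # oaseg("l7_ms.pix", [2, 3, 4, 5], "l7_ms_OA_Area_Of_Interest.pix",
    #       [2], filo, "PIX", "Segmented layer", [scale], [shape], [comp])

print(len(combos))  # 6 candidate segmentations to compare
```

Comparing the resulting layers visually against known land-cover boundaries is one practical way to settle on a final parameter set.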
A data set of input imagery contains multiple channels. The underlying algorithm is a hierarchical, step-wise region-growing method that uses random "seeds" spread over the entire image. It can be classified as a "bottom-up" optimization routine that starts with individual pixels and ends with segments: groups of like pixels.
The criterion that governs the growth of a region can be based on the difference between a pixel's intensity and the mean of the region. The algorithm assesses local homogeneity based on spectral and spatial characteristics. The size of the objects is controlled by the scale value that you specify: the larger the scale, the larger the output objects. Other homogeneity criteria are based on shape and compactness.
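The merge decision at the heart of multiresolution segmentation is commonly formulated as a weighted increase in heterogeneity (after Baatz and Schäpe). The sketch below is a minimal illustration of how a shape weight (the role SEGSHAPE plays) and a compactness weight (the role SEGCOMP plays) trade color against shape; it is a textbook formulation, not OASEG's actual implementation:

```python
import math

def merge_cost(a, b, merged, shape_w=0.5, cmpct_w=0.5):
    """Heterogeneity increase caused by merging segments a and b.

    Each segment is a dict: n (pixel count), std (per-channel std. dev.),
    perim (perimeter), bbox (bounding-box perimeter). shape_w plays the
    role of SEGSHAPE and cmpct_w the role of SEGCOMP; this is a sketch
    of the standard criterion, not the OASEG source.
    """
    def color(s):   # spectral heterogeneity: area-weighted std. dev.
        return s["n"] * sum(s["std"])

    def cmpct(s):   # compactness: perimeter relative to sqrt(area)
        return s["n"] * s["perim"] / math.sqrt(s["n"])

    def smooth(s):  # smoothness: perimeter relative to bounding box
        return s["n"] * s["perim"] / s["bbox"]

    def delta(f):   # increase of a criterion caused by the merge
        return f(merged) - (f(a) + f(b))

    h_shape = cmpct_w * delta(cmpct) + (1 - cmpct_w) * delta(smooth)
    return (1 - shape_w) * delta(color) + shape_w * h_shape

# Two identical 2x2 squares merging into a 2x4 rectangle
a = {"n": 4, "std": [1.0], "perim": 8, "bbox": 8}
b = {"n": 4, "std": [1.0], "perim": 8, "bbox": 8}
m = {"n": 8, "std": [1.0], "perim": 12, "bbox": 12}

# With shape_w=0 only color counts; identical spectra give zero cost
print(merge_cost(a, b, m, shape_w=0.0))  # 0.0
```

In this formulation a merge is typically accepted while the cost stays below the square of the scale parameter, which is why larger SEGSCALE values yield larger objects.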
The polygons, at this level of abstraction, can be considered image objects in the image domain and do not necessarily represent objects in the real world.
In the following example, the green (channel 2), red (channel 3), and infrared (channels 4 and 5) spectral bands of a Landsat-7 image are used. The segmentation is performed only on the part of the image corresponding to the AOI specified for the MASKFILE parameter.
from pci.oaseg import oaseg

fili = "l7_ms.pix"
dbic = [2, 3, 4, 5]   # Input channels
maskfile = "l7_ms_OA_Area_Of_Interest.pix"
mask = [2]            # Layer number of the AOI
filo = "l7_ms_seg_25_0.5_0.5.pix"
ftype = ""            # Defaults to PIX
dbsd = "Segmented layer"
segscale = [25]
segshape = [0.5]
segcomp = [0.5]

oaseg(fili, dbic, maskfile, mask, filo, ftype, dbsd, segscale, segshape, segcomp)
© PCI Geomatics Enterprises, Inc.®, 2024. All rights reserved.