
Package com.luciad.imaging

Provides a domain model for working with pixel data and a framework for performing image processing on such data.


Domain model

The class ALcdBasicImage provides an opaque representation of a two-dimensional pixel grid and is the basic building block for modeling pixel data. A number of higher-level concepts are layered on top of ALcdBasicImage. All of these classes derive from ALcdImage, which defines properties common to all image types, such as the image's geographical reference.

You can obtain an image from one of the model decoders or by using one of the available builders.

Image operators

The operator subpackage defines image processing operators that can be applied to an image. An ALcdImageOperator is a function which creates an image based on a given set of input parameters. As an example, the convolve operator takes an image and a convolution kernel as input, and produces the convolved version of the image as output.

An ALcdImageOperatorChain can combine one or more ALcdImageOperators. Like an operator, a chain produces a processed image as output. However, it only takes a single image as input, which is automatically propagated through the operators that comprise the chain. This allows the same processing steps to be applied to multiple input images with minimal code.
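The chain concept can be illustrated with a small self-contained sketch. Note that this is not the Luciad API: the Image record and the scale/offset/chain helpers below are hypothetical stand-ins that only demonstrate how a single input image is propagated through a sequence of image-to-image operators.

```java
import java.util.function.UnaryOperator;

// Illustrative sketch of the operator-chain idea, NOT the Luciad API.
public class ChainSketch {

    // Hypothetical stand-in for an image: a single band of float samples.
    record Image(float[] samples) {}

    // An "operator" in this sketch is simply a function Image -> Image.
    static UnaryOperator<Image> scale(float factor) {
        return img -> {
            float[] out = new float[img.samples().length];
            for (int i = 0; i < out.length; i++) {
                out[i] = img.samples()[i] * factor;
            }
            return new Image(out);
        };
    }

    static UnaryOperator<Image> offset(float delta) {
        return img -> {
            float[] out = new float[img.samples().length];
            for (int i = 0; i < out.length; i++) {
                out[i] = img.samples()[i] + delta;
            }
            return new Image(out);
        };
    }

    // A chain composes its operators: one input image in, one processed image out.
    @SafeVarargs
    static UnaryOperator<Image> chain(UnaryOperator<Image>... ops) {
        return img -> {
            Image result = img;
            for (UnaryOperator<Image> op : ops) {
                result = op.apply(result);
            }
            return result;
        };
    }

    public static void main(String[] args) {
        // Build the chain once, then reuse it for any number of input images.
        UnaryOperator<Image> processing = chain(scale(2.0f), offset(1.0f));
        Image a = processing.apply(new Image(new float[] {1.0f, 2.0f}));
        System.out.println(a.samples()[0] + " " + a.samples()[1]); // 3.0 5.0
    }
}
```

Because the chain itself carries no reference to a particular input, the same `processing` object can be applied to many images, which is exactly the reuse benefit described above.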

Working with processed images

To read pixel values out of an image, an ALcdImagingEngine is required. This class abstracts away the details of the internal representation of images and the implementation of the various operators. It provides factory methods to create engine instances.

To visualize ALcdImage objects in a GXY view, see TLcdGXYImagePainter. For Lightspeed views, use TLspRasterLayerBuilder, optionally combined with a TLspImageProcessingStyle.

Image processing model

Image processing applies a number of operations to pixels in an image. A single pixel is represented by an array of samples, one for each of the image's bands.

During image processing, samples are always represented as 32-bit floating-point values. The mapping from data samples to these floating-point samples is called normalization. After processing, the normalized values are converted back to data samples; this is called de-normalization and is simply the inverse of normalization. Conceptually, the steps for processing an image are:

  1. Read pixel(s) from input image
  2. Normalize pixel(s)
  3. Apply image operations to the normalized pixel(s)
  4. De-normalize pixel(s)
  5. Write the de-normalized pixel(s) to the output raster
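The five steps above can be sketched as a small self-contained program. The names, the per-band 'no data' value, and the 8-bit sample range are all assumptions for illustration; the real engine (ALcdImagingEngine) hides these details.

```java
// Illustrative sketch of the five processing steps; all names are hypothetical.
public class PipelineSketch {

    static final float NO_DATA = -9999f;   // assumed per-band 'no data' value

    // Step 2: map 'no data' samples to NaN, rescale 8-bit data to [0, 1].
    static float normalize(float sample) {
        if (sample == NO_DATA) return Float.NaN;
        return sample / 255f;
    }

    // Step 4: the inverse of normalization.
    static float denormalize(float normalized) {
        if (Float.isNaN(normalized)) return NO_DATA;
        return normalized * 255f;
    }

    public static void main(String[] args) {
        float[] input = {0f, 51f, 255f, NO_DATA};   // step 1: read pixels
        float[] output = new float[input.length];
        for (int i = 0; i < input.length; i++) {
            float n = normalize(input[i]);          // step 2
            float processed = 1f - n;               // step 3: example op (invert); NaN stays NaN
            output[i] = denormalize(processed);     // step 4
        }
        // step 5: 'output' now holds the de-normalized raster
        System.out.println(java.util.Arrays.toString(output));
    }
}
```

Note how NaN propagates unchanged through the processing step, so 'no data' pixels survive the round trip without being mistaken for real values.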


The normalization of a pixel consists of the following steps: handle 'no data' values and re-scale samples.

1. The 'no data' (i.e. unknown) samples are mapped to NaN. Each band defines how its 'no data' values are represented.

2. The samples are rescaled based on the normalized range of their band. This is equivalent to the following pseudo-code:

   float normalize(sample) {
     if (normalized) {
       return sample;
     } else {
       return (sample - type_min) / (type_max - type_min) * (normalized_range_max - normalized_range_min) + normalized_range_min;
     }
   }

Here, type_min and type_max depend on the data type and the number of significant bits. In practice, samples in measurement bands are typically left unchanged (so the kernel works directly on the measurement values), while color bands are typically converted to [0, 1].
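A runnable version of this rescaling step is sketched below. The method name and parameters are illustrative, and the concrete values in the example (an unsigned 8-bit color band rescaled to [0, 1]) are assumptions, not values prescribed by the library.

```java
// Runnable sketch of the rescaling pseudo-code; names and values are assumed.
public class RescaleSketch {

    static float normalize(float sample, boolean normalized,
                           float typeMin, float typeMax,
                           float rangeMin, float rangeMax) {
        if (normalized) {
            // Measurement bands: samples typically pass through unchanged.
            return sample;
        }
        // Linear rescale from the data type's range to the band's normalized range.
        return (sample - typeMin) / (typeMax - typeMin)
               * (rangeMax - rangeMin) + rangeMin;
    }

    public static void main(String[] args) {
        // Color band: 8-bit data mapped to [0, 1].
        System.out.println(normalize(255f, false, 0f, 255f, 0f, 1f)); // 1.0
        // Measurement band: the value passes through unchanged.
        System.out.println(normalize(21.5f, true, 0f, 0f, 0f, 0f));   // 21.5
    }
}
```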

Note that if the image contains color values (see ALcdBandColorSemantics for details), the bands should usually be ordered according to the ColorSpace. This is relevant for operations such as TLcdColorConvertOp, which work only on color images.
