Some panoramic sensors provide not just static imagery but can also record video. Security cameras and drones, for example, can capture video footage. These videos typically don’t span the full 360 degrees around the sensor but are oriented in a specific direction, and the sensor itself may move during recording. You can visualize this data in much the same way as regular panoramic imagery, but you must take care of some important details.

Figure 1. Panoramic video footage of the Luciad Leuven office area visualized in LuciadRIA.

Providing the video

Similarly to regular panoramas, you provide the imagery through a PanoramaModel. You must make the model aware that the provided imagery consists of videos. To do so, set the imageryType property of the PanoramaDescriptor to PanoramaImageryType.VIDEO. To provide the actual video, you supply an HTMLVideoElement which serves as the source of the video footage in the model’s getPanoramicImage function.

If you have a single-level, single-tile panoramic video recorded in pinhole projection, you can use the convenience function in VideoPanoramaModel to create the model. You only need to provide the HTMLVideoElement.

Program: Provide the video
import {createVideoPanoramaModel} from "@luciad/ria/model/tileset/VideoPanoramaModel.js";

// ...

const panoramaModel = await createVideoPanoramaModel(video);

Orienting the video

When drawing a panorama using a painter, you can use a PanoramaStyle to style the panorama in various ways. For panoramic video footage, setting the orientation property of PanoramaStyle is key to correct visualization. To orient the video as required, construct a PanoramaOrientation literal with field-of-view, yaw, pitch, and roll values.

These values may be embedded in a data stream in some format, or they may come from a file hosted next to the video, for example. The source data may also express the orientation in a different convention, in which case you must convert it. Regardless of the input form, the final result must end up in a PanoramaOrientation literal that you supply when you call drawPanorama on a GeoCanvas instance.
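As a sketch of such a conversion, the following function maps hypothetical source metadata to a PanoramaOrientation literal. The input field names (horizontalFovDeg, headingDeg, and so on) are assumptions about your data, not a LuciadRIA API; the output fields match the literal used in the painter example below. For a pinhole camera, the vertical field of view follows from the horizontal one and the image aspect ratio.

```typescript
// Hypothetical shape of the source metadata; adapt to your actual data.
interface SourceMetadata {
  horizontalFovDeg: number; // horizontal field of view, in degrees
  aspectRatio: number;      // image width divided by image height
  headingDeg: number;       // compass heading of the camera
  pitchDeg: number;
  rollDeg: number;
}

const toRad = (deg: number): number => (deg * Math.PI) / 180;
const toDeg = (rad: number): number => (rad * 180) / Math.PI;

function toPanoramaOrientation(meta: SourceMetadata) {
  // Pinhole relation between horizontal and vertical FOV:
  // fovY = 2 * atan(tan(fovX / 2) / aspectRatio)
  const fovY = toDeg(
    2 * Math.atan(Math.tan(toRad(meta.horizontalFovDeg) / 2) / meta.aspectRatio)
  );
  // Return a plain object matching the PanoramaOrientation literal.
  return {
    fovX: meta.horizontalFovDeg,
    fovY,
    yaw: meta.headingDeg,
    pitch: meta.pitchDeg,
    roll: meta.rollDeg
  };
}
```

You can then assign the returned literal to the orientation property of the PanoramaStyle you pass to drawPanorama.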

Program: Draw a panorama using a FeaturePainter
const painter = new FeaturePainter();
painter.paintBody = function(geoCanvas, feature, shape, map, layer, state) {
  const panoramaStyle = {
    orientation: {
      fovX: 50,
      fovY: 28,
      yaw: 5,
      pitch: -60,
      roll: 0
    }
  };
  geoCanvas.drawPanorama(shape.focusPoint!, panoramaStyle);
};

Moving the sensor

The panorama model works in tandem with the feature model to provide the model data to the layer. This means that you can move or update the feature shape in the feature model like you would any regular feature.
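The pattern can be illustrated with a minimal, self-contained stand-in for a feature store. Note that SensorStore and SensorFeature below are illustrative types, not LuciadRIA API: in an actual application you would give the Feature a new shape, for example a Point created with ShapeFactory, and pass it to the feature model's put function so that the layer is notified and the sensor moves on the map.

```typescript
// Illustrative stand-in for a feature, keyed by id like a regular feature.
interface SensorFeature {
  id: string;
  position: { lon: number; lat: number; height: number };
}

// Illustrative stand-in for a feature model with put-style updates:
// putting a feature with an existing id replaces it in place.
class SensorStore {
  private features = new Map<string, SensorFeature>();

  put(feature: SensorFeature): string {
    this.features.set(feature.id, feature);
    return feature.id;
  }

  get(id: string): SensorFeature | undefined {
    return this.features.get(id);
  }
}

// Moving the sensor is just a put with an updated position.
const store = new SensorStore();
store.put({ id: "sensor", position: { lon: 4.70, lat: 50.88, height: 30 } });
store.put({ id: "sensor", position: { lon: 4.71, lat: 50.89, height: 32 } });
```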

Synchronizing the sensor metadata with the video frames

If the sensor is moving and changing orientation often, it’s important to keep the metadata synchronized with the video frames. In other words, make sure that the video frames are shown or projected in the correct location at the right time. You can take a look at how the LuciadRIA Video Panorama sample does this for inspiration, or implement your own approach.

The Video Panorama sample collects the metadata and indexes it by timestamp in a class called TimedProperties. When the video is playing, its currentTime property advances and is used to update an instance of TimedProperties to the latest timestamp. This means looking up the corresponding metadata and signaling to any listeners that the current properties have been updated. A listener then updates the location and properties of the sensor to align with the new metadata. Another listener could for example update some UI elements with the new information. The new properties can then also be used in the painter, so that the panorama visualization follows accordingly.
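The lookup-and-notify mechanism can be sketched as follows. This TimedProperties class is a simplified stand-in for the one in the Video Panorama sample: the metadata fields and the listener API are assumptions for illustration. You would call update with the video's currentTime, typically from a timeupdate event handler, and register listeners that move the sensor feature and restyle the panorama.

```typescript
// Hypothetical per-timestamp metadata record; adapt to your actual data.
interface SensorMetadata {
  time: number; // seconds from the start of the video
  lon: number;
  lat: number;
  height: number;
  orientation: { fovX: number; fovY: number; yaw: number; pitch: number; roll: number };
}

type Listener = (props: SensorMetadata) => void;

// Simplified stand-in for the sample's TimedProperties class.
class TimedProperties {
  private readonly entries: SensorMetadata[];
  private listeners: Listener[] = [];
  private currentIndex = -1;

  constructor(entries: SensorMetadata[]) {
    // Sort by timestamp so lookups can scan in order.
    this.entries = [...entries].sort((a, b) => a.time - b.time);
  }

  onChange(listener: Listener): void {
    this.listeners.push(listener);
  }

  // Call with video.currentTime as playback advances. Looks up the latest
  // entry at or before that time and notifies listeners once per entry.
  update(currentTime: number): void {
    let index = -1;
    for (let i = 0; i < this.entries.length && this.entries[i].time <= currentTime; i++) {
      index = i;
    }
    if (index >= 0 && index !== this.currentIndex) {
      this.currentIndex = index;
      const props = this.entries[index];
      this.listeners.forEach((listener) => listener(props));
    }
  }
}
```

A listener registered with onChange would then update the sensor feature's location, while the painter uses the same properties to build the PanoramaOrientation literal, so the projection follows the video.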