This article explains how you can use LuciadFusion in the Amazon Web Services (AWS) cloud environment. It helps you understand:

  • Running LuciadFusion Platform

  • Accessing data on S3

Running LuciadFusion Platform

All guidelines from Deploying the LuciadFusion Platform also apply in an environment like the AWS cloud. As described in that article, the LuciadFusion Platform is a server application that provides access to published data. Because it’s a long-running application, the most natural fit is to run LuciadFusion Platform on one or more EC2 instances.

For scaling, the guidelines in Installing, configuring, and running LuciadFusion on multiple nodes apply as well.

Still, you may want to reorganize the LuciadFusion Platform responsibilities to make better use of different aspects of the environment.

Adding data

Normally, LuciadFusion Platform assumes that the data to serve is available in the local file system. That includes network locations mounted locally. You configure the directories containing your data as data roots in LuciadFusion Studio, and LuciadFusion Platform then recursively crawls those directories to find what data is available. You can repeat the crawling periodically to keep the list of available data up-to-date.

You can store the data to serve on the attached storage of an EC2 instance, but that severely limits the amount of data that you can serve. When handling large amounts of data, you typically want to store the data on S3 instead. Because S3 is an object store, it is fundamentally different from a file system: it has no concept of directories. Objects are stored in buckets and are identified by an opaque key. Objects only relate to each other through conventions that you encode in the keys.

The properties of S3 make it impractical to perform the normal LuciadFusion crawling process on buckets. Instead, we recommend using the LuciadFusion Studio REST API to add Data items one by one.
Your approach could be to list the objects in your bucket once, and to call the REST API for each object. After this initial seeding, use S3 events to trigger additional REST API calls.
Of course, any other method of triggering the right REST API calls is fine too.
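As a sketch of the seeding step, the following Python snippet builds one "add data" REST call per S3 object key. The endpoint path (`/api/data`), the payload shape, and the Studio address are assumptions for illustration; consult the LuciadFusion Studio REST API documentation for the exact request format. In practice you would obtain the keys with the AWS SDK (for example boto3's `list_objects_v2` paginator) and send each request with `urllib.request.urlopen`.

```python
# Sketch: seed LuciadFusion Studio with the objects in an S3 bucket,
# one REST call per object. Endpoint path and payload are assumptions.
import json
import urllib.request

STUDIO_URL = "http://localhost:8080"  # assumed LuciadFusion Studio address


def data_request(bucket: str, key: str) -> urllib.request.Request:
    """Build a hypothetical 'add data' request for one S3 object."""
    payload = json.dumps({"filePath": f"s3://{bucket}/{key}"}).encode()
    return urllib.request.Request(
        f"{STUDIO_URL}/api/data",  # hypothetical endpoint path
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


def seed(bucket: str, keys: list[str]) -> list[urllib.request.Request]:
    """One request per object key; send each with urllib.request.urlopen."""
    return [data_request(bucket, key) for key in keys]


reqs = seed("my-bucket", ["imagery/a.tif", "imagery/b.tif"])
print(reqs[0].full_url)       # http://localhost:8080/api/data
print(reqs[0].data.decode())  # {"filePath": "s3://my-bucket/imagery/a.tif"}
```

The same `data_request` helper can back an S3-event-triggered function for the incremental updates described above.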

To make direct data serving from S3 possible, you must set up LuciadFusion Platform correctly. Accessing data on S3 describes how to do that.

Pre-processing

When you add data of certain types or create specific types of services, the application pre-processes the data. It does so to improve data serving performance. The pre-processing happens on the instance that’s running the LuciadFusion Platform application.

It’s usually preferable to separate the pre-processing workload from the data serving, though. To this end, LuciadFusion comes with standalone components that can perform an individual pre-processing task outside the LuciadFusion Platform application. LuciadFusion Platform can then serve the data produced by these tools without additional pre-processing.

Because the pre-processing tools are standalone components, it’s possible to run them on distinct instances. Pre-processing becomes a task that you can scale separately, according to the resources available. Just add the pre-processed data to LuciadFusion Studio when it’s ready.

Table 1, “Standalone preprocessing tools” shows an overview of some tools that come with LuciadFusion. LuciadFusion provides the tools as sample code so that you can adapt them to your needs.

Table 1. Standalone preprocessing tools

  Tool                               Purpose
  fusion.engine                      Pre-render vector and raster data into tile pyramids
  fusion.pointcloud                  Organize point cloud data into tile pyramids
  panorama.converter                 Extract panoramic imagery into tile pyramids
  meshup 3D Tiles Processing Engine  Optimize 3D mesh data into tile pyramids

For recommendations on pre-processing various formats, see Pre-process data or serve it on-the-fly?

Avoid automatic pre-processing in this setup, because it stores its results in the local file system of the instance.

Accessing data on S3

For LuciadFusion Platform to serve data, two things must work:

  • LuciadFusion model decoders must be able to read the data

  • LuciadFusion Platform must be able to index the data

To work with data on S3, you must configure both the model decoders and the platform.

Data decoding support

Decoding data from AWS S3 describes how to configure the model decoders to access S3.

In short, you must add an appropriate input stream factory to the classpath and configure the environment to provide credentials and the region settings for the AWS SDK’s S3 client.
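For example, the standard environment variables that the AWS SDK recognizes look like the fragment below (the values are placeholders). On EC2, prefer attaching an IAM role to the instance over configuring long-lived access keys; the SDK then picks up temporary credentials automatically, and you only need to set the region.

```
AWS_ACCESS_KEY_ID=AKIA...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION=eu-west-1
```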

Data indexing support

LuciadFusion Platform can index data from any location supported by a resource connector. A resource connector interprets the file location of added data, checks whether the file exists, and provides file metadata for indexing.

By default, LuciadFusion Platform only has an active resource connector that understands file paths in the local file system. LuciadFusion Platform also provides a connector that supports objects stored in S3, but it’s turned off by default. To turn on this resource connector, you must opt in by activating the fusion.aws Spring profile. Note that this connector expects to find a correctly configured input stream factory on the classpath, as described in Data decoding support.
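Because LuciadFusion Platform is a Spring Boot application, you can activate the profile through any of the standard Spring Boot mechanisms. For example, as a sketch (the exact launch command depends on your deployment):

```
# As a command-line argument:
#   java -jar <fusion-platform>.jar --spring.profiles.active=fusion.aws
# Or as an environment variable:
#   SPRING_PROFILES_ACTIVE=fusion.aws
# Or in application.properties:
spring.profiles.active=fusion.aws
```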

The entire S3 resource connector is made available as sample code, in the package samples.fusion.platform.resource.aws. If needed, you can change the sample code to your liking, and replace the provided S3 resource connector with your own version.