3D Delivery Solution

Umbra Composit enables massive 3D datasets to be used in real-time rendering. Built as a true cloud-scale platform, Composit places no limits on input data size or resolution, and Umbra's proprietary spatial representation of 3D data effortlessly delivers the content to any platform, engine, or rendering pipeline. Written entirely in low-level C++, Composit is easy to integrate and readily available today.




Better real-world performance for autonomous vehicles

Photorealistic 3D digital twins of real-world environments are ideal for simulating LiDAR and camera sensor feedback.

Single data repository for all simulation agents

All simulation agents access the required 3D content on demand from Umbra Composit. This significantly reduces data transfer and performance overhead.

Scalable content pipeline for 3D reality captures

Umbra Composit includes a perfectly parallel computation infrastructure for handling large inputs, delivering fast throughput, and storing massive 3D datasets efficiently.

Real-time visualization made easy

Once Umbrafied, the same 3D data can be delivered to any real-time visualization application regardless of platform. Visualizing a failed simulation scenario in a mobile browser has never been easier.

Simple to integrate anywhere

C++ APIs provide data ingestion and on-demand delivery to all 3D authoring tools, rendering engines, and end-user applications.

About the Demo

The demo data was captured during a 30-minute window on a sunny March day in San Francisco, using a hand-held Canon 6D Mark II with a 14mm wide-angle lens.

The 1,900 26-megapixel photos were used as input to construct a dense point cloud with third-party photogrammetry software. This point cloud was then Umbrafied.


1) Change the camera by clicking on the camera windows in the top-left corner of the demo.

2) Choose which types of information to display from the settings menu in the bottom-right corner of the demo window.

Visualization Guide:


Point Classifications

Umbra bakes point classifications from the input data into a separate texture that can be queried and visualized in real time.

Simulated Sensor Input (Point Cloud)

You can efficiently simulate LiDAR and other scanning technologies by performing raycasts against our collision meshes.

Collision Meshes

In addition to querying the currently visible objects at the appropriate level of detail, you can also query the collision meshes within a given area. Here the car queries the collision meshes in its immediate surroundings.

Car Point-of-view

Visualizes the 3D data streamed in from the simulation agent's point of view.


Wireframe

Shows the wireframe of the Umbrafied mesh.

Want to know more?

Fill out the form and we will be in touch within 48 hours.