Version 3.4.26

Highlights
Cloud Features
  • The cloud features are experimental.

    This version adds optional support for performing Umbra computations in the cloud on AWS using AWS Deadline Cloud. Deadline Cloud is a service for creating virtual render farms in AWS that is easy to set up, use, and scale.

    Umbra tome computations can be submitted to AWS Deadline Cloud using the SDK API, the command line utility (umbracli), and the Umbra Debugger GUI application.

    See doc/articles/computing_in_aws_using_deadline_cloud.md for details on how to set up and try the feature.

  • The new executable CloudWorker allows easy integration and distribution of Umbra computations, for example to a local or cloud build farm.
  • Deadline Cloud and CloudWorker computations are able to produce optimized job graphs that adapt to available computation resources and limitations.
  • Deadline Cloud and CloudWorker computations support an optional automatic shared computation cache in AWS S3. Sharing a remote cache speeds up content iteration by reusing partial computation results.
General
  • Includes support for iOS.
  • Includes support for macOS. Both 64-bit x86 and 64-bit ARM architectures are supported.
  • The command line utility, Umbra Debugger and other tools are now built for Windows using Visual Studio 2019. The path to these tools in the release package is bin\win64_vs2019.
  • Add builds for Amazon Linux 2023, which is the Linux distribution used by AWS Deadline Cloud.
CloudWorker
  • CloudWorker is a new executable for performing distributed computations. It allows easy integration and distribution of Umbra computations, for example to a local or cloud build farm.

    CloudWorker can be used to generate a number of jobs for an Umbra computation, which can then be processed in a distributed fashion, for example by a build farm. CloudWorker can be configured to adapt the generated job count and job graph layout to the restrictions and available resources of the build farm or service.

    CloudWorker has optional support for a shared computation cache in AWS S3.

    See doc/articles/build_farm_integration_using_cloudworker.md for details.

Optimizer
  • Add a new computation class, DeadlineCloudComputation, which is similar to and compatible with LocalComputation. It allows performing Umbra tome computations using AWS Deadline Cloud.
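
    A minimal usage sketch, assuming that DeadlineCloudComputation is constructed and queried like LocalComputation; the exact constructor and parameter handling below are assumptions rather than the documented interface.

      // Hypothetical sketch: the constructor shape and parameter handling are
      // assumptions based on compatibility with LocalComputation.
      Umbra::Computation::Params params;
      // ... fill in the scene input and computation parameters as for
      //     LocalComputation, plus the Deadline Cloud farm/queue settings ...
      Umbra::DeadlineCloudComputation computation(params);     // assumed constructor
      const Umbra::Tome* tome = computation.waitForResult();   // same result API as LocalComputation
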
  • Add optional Deadline Cloud support to Umbra::Task computation API.
  • Add a new class DeadlineCloudComputationOptionEnumerator, which can be used to query lists of various Deadline Cloud AWS resources and settings. This is intended for implementing an optional UI autofill feature for Deadline Cloud settings.
  • Add support for serializing full or partial TomeBuildGraphs. See TomeBuildGraph::serialize and TomeBuildGraph::deserialize.
  • TomeBuildGraph now generates a cost estimate for each BuildGraphNode.
  • Update the internal computation cache implementation to work better with a remote distributed cache.

    This version is not compatible with cache files from older versions; such files are ignored if present. You can delete old cache files, but the computation will also overwrite them as needed and delete the oldest ones when the total size grows over the configured limit. The number and size of generated cache files will be slightly larger than with previous versions.

  • Fix cache cleanup on Linux. Previously, the oldest cache files above the size threshold were not automatically deleted on Linux.
  • Umbra::Scene can now be partially serialized with an AABB filter, so that only objects that overlap with the AABB are included.
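
    A rough sketch of how the AABB-filtered serialization could be used; the function name, stream type and AABB representation below are assumptions for illustration, not the documented Umbra::Scene interface.

      // Hypothetical sketch: serializeRegion, Umbra::OutputStream and the AABB
      // parameters are assumed names, not the actual API.
      void serializeRegion(Umbra::Scene& scene, Umbra::OutputStream& out)
      {
          const float regionMin[3] = { 0.f, 0.f, 0.f };
          const float regionMax[3] = { 128.f, 64.f, 128.f };
          // Only objects whose bounds overlap [regionMin, regionMax] are written,
          // e.g. to run a separate computation for a single region of the world.
          scene.serialize(out, regionMin, regionMax);   // assumed AABB-filtered overload
      }
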
  • The command line utility (umbracli) can optionally be used to perform Deadline Cloud computations. It can also now query some Deadline Cloud AWS resources and settings.
Umbra Debugger GUI tool
  • Add support for Deadline Cloud computations. From the tools menu, select Deadline Cloud to configure your Deadline Cloud settings and enable the feature. When Deadline Cloud computations are enabled, the Compute button in the computation tab will submit a Deadline Cloud computation.
  • Fix UI scaling issues with high DPIs.
Unreal Engine plugin
  • Add UE5 support.
  • A new variant of the plugin supports submitting Umbra builds to AWS Deadline Cloud. The variant is based on the non-streaming version of the plugin. Please see the plugin README.md for details.
  • Polish the plugin UI.
  • Add a button for clearing the visibility data to the non-streaming version of the plugin.

    Note that the Unreal Engine plugin is not included in the SDK release package; it is delivered as a separate package.


Version 3.4.25

Highlights
Other
  • Add 64-bit ARM Linux libraries of both the Runtime and the Optimizer. This includes both generic ARM64/AArch64 Linux builds and dedicated Graviton Linux builds. Graviton is a fast 64-bit ARM-based processor available for cloud use in Amazon Web Services (AWS).
  • Implemented ARM optimizations specific to Graviton.
Runtime
  • Volume-to-volume query improvements for queryAABBVisibility, querySphereVisibility and queryStaticObjectVisibility. Optimized the volume-to-volume queries especially with server-side uses in mind.
    • Generic algorithm optimizations for all platforms.
    • Optimizations specific to X86, ARM and Graviton platforms.
    • The queries use more memory if an increased amount is provided using QueryExt::setWorkMem. More memory improves query performance with complicated inputs and data.
    • Fix intersection bug which could cause the AABB and object queries to be too conservative in certain situations.
    • Improve behavior near exit portals.
    • Fix cases where the query returns different results when the ends are swapped.
  • Optimize portal access on ARM64 NEON and X86 SSE3, which has a minor effect on all queries.
Optimizer
  • Fix an issue where the license reminder dialog was sometimes displayed too often if storing its status failed. The license reminder dialog was introduced in the previous release: if the user chooses to be reminded on the first dialog, it is displayed a few times shortly before the license expires (see the previous change log entry). Setting the environment variable UMBRA_SUPPRESS_LICENSE_DIALOG disables both the reminder and the expiration dialog.

Version 3.4.24

Highlights
Runtime
  • Add two new variants of QueryExt::queryLocalLights for conical spotlights and frustums. These new queries can be used to intersect these volumes against a cluster list, for example a list of visible clusters.

    The queries perform a "flood fill" to find connected clusters inside each shape, and test if these clusters are found in the input cluster list.

    Note that this is not the same as occlusion culling from the light's point of view: the light won't be reported visible from clearly disconnected space, but it is reported visible from around a corner that is inside the light volume. However, this implementation is faster than a more accurate operation would be. QueryPortalVisibility can generate a more accurate occlusion-culled cluster list for a camera.
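
    A rough sketch of the intended data flow, using assumed type names and an assumed cone overload; the real signatures are in the QueryExt header, so treat everything below as illustrative only.

      // Hypothetical sketch: the cone overload's parameters and the use of
      // Umbra::IndexList are assumptions; only the data flow reflects this entry.
      struct SpotLight { float position[3]; float direction[3]; float angle; float range; };

      void cullSpotlights(Umbra::QueryExt&        query,
                          const Umbra::IndexList& visibleClusters,      // e.g. from QueryPortalVisibility
                          const SpotLight*        lights,
                          int                     lightCount,
                          bool*                   affectsVisibleSpace)  // one output flag per light
      {
          for (int i = 0; i < lightCount; i++)
          {
              // Flood-fills the connected clusters inside the cone and checks whether
              // any of them appear in the input cluster list. This is not occlusion
              // culling from the light's own point of view.
              affectsVisibleSpace[i] = query.queryLocalLights(visibleClusters,
                                                              lights[i].position,
                                                              lights[i].direction,
                                                              lights[i].angle,
                                                              lights[i].range) != 0;  // assumed cone variant
          }
      }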

  • Fix rare false negatives with QueryExt::querySphereVisibility. The query can now take somewhat longer due to the more accurate algorithm.
  • Query::queryFrustumVisibility now also supports the atomic output mode of Visibility::setOutputObjectMask.
  • Fix a rare TomeCollection crash when loading tomes computed using Umbra 3.3.x.
Optimizer
  • Improve culling behavior close to occluder geometry. In rare cases the system could assign the camera to the wrong cell, leading to invalid culling results. While Umbra doesn't guarantee correct behavior very close to occluders (i.e. closer than collision radius when that feature is used), this issue would happen slightly farther than that. Reviewed the viewtree simplification logic to avoid such situations.
  • Review portal optimization behavior so that certain cell and portal configurations do not result in invalid culling results. Also improve the algorithm to produce fewer portals in some cases. The net effect on total portal count can be slightly positive or negative.
  • Initializing the computation is now multithreaded. While the actual tome computation has always been multithreaded, the initialization step has not been until now. Multithreaded initialization helps with computation times, especially when iterating on content with a large amount of geometry. With only local changes in a scene, most computation results can simply be reused from the cache, which means that initialization can become a relatively large part of the total computation effort, especially with a large amount of geometry.

    When using the LocalComputation or Task APIs to compute, they now automatically use multiple threads for this purpose.

    If you're instead using the low-level TomeBuildGraph API, see the new numThreads parameter of TomeBuildGraph::create. This value sets how many additional threads the function is allowed to create: -1 means automatic, which is now the default, and 0 makes the function single-threaded, behaving like before. Note that with multiple threads, the SceneDataProvider implementation used must support creating multiple GeometryInterface and MarkupInterface instances, each of which is accessed from a single separate thread.
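
    A minimal sketch of the new parameter under an assumed, simplified create() signature; only the numThreads semantics come from this change log entry.

      // Hypothetical sketch: the other parameters of TomeBuildGraph::create are
      // elided and this two-argument call shape is an assumption.
      Umbra::TomeBuildGraph* createBuildGraph(Umbra::SceneDataProvider& provider)
      {
          // numThreads: -1 = automatic thread count (the new default),
          //              0 = single-threaded, behaves like previous versions,
          //          N > 0 = the function may create up to N additional threads.
          // With multiple threads the provider must support creating multiple
          // GeometryInterface and MarkupInterface instances, each used from its
          // own thread.
          return Umbra::TomeBuildGraph::create(&provider, /* numThreads = */ -1);
      }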

  • New Umbra::Scene API function insertObjectCollection allows inserting multiple SceneObjects sharing the same ID. These "object collections" share the same ID and flags, but have multiple SceneModels. Each SceneModel has its own transformation. Objects inserted this way are considered a single entity by the tome and the runtime.

    The purpose of this feature is to make Umbra::Scene require less memory when parts of identical geometry are reused in several objects. Each object collection (with a single object ID) can consist of multiple SceneModels, which can be shared among different object collections. If a large part of your content is set up this way, consider using the new insertObjectCollection function.

    For example, you might have a wall with many identical windows, which are rendered in groups of a varying number of windows. In this example Umbra also culls entire groups rather than individual windows. With this update you're able to have a single window SceneModel, which participates in all groups. If you only have a few such objects, this might not have a significant effect on Umbra::Scene size.
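
    A minimal sketch of the window-group example above; the argument order and the model, transform and flag types of insertObjectCollection are assumptions, and only the concept of one object ID with shared SceneModels and per-model transforms comes from this entry.

      #include <vector>

      // Hypothetical sketch: the argument order and the Matrix4x4/flags types are
      // assumptions, not the documented signature.
      void insertWindowGroup(Umbra::Scene&            scene,
                             const Umbra::SceneModel* windowModel,      // shared by every group
                             const Umbra::Matrix4x4*  windowTransforms, // one transform per window
                             int                      windowCount,
                             unsigned int             objectId,
                             unsigned int             objectFlags)
      {
          // Every window in this group references the same SceneModel.
          std::vector<const Umbra::SceneModel*> models(windowCount, windowModel);

          // The group shares one ID and one set of flags, so the tome and the
          // runtime cull it as a single entity.
          scene.insertObjectCollection(models.data(), windowTransforms, windowCount,
                                       objectId, objectFlags);
      }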

  • Change tome allocation behavior of Computation/LocalComputation API function waitForResult.

    Firstly, the allocator used to allocate the tome is now a member of the Computation::Params class instead of a function parameter.

    Secondly, calling waitForResult repeatedly will always return the same tome instance once the computation is finished, instead of a new one each time. The tome memory is automatically deallocated when the LocalComputation is released. If you want the tome memory to outlive the LocalComputation, use a tome allocator implementation whose deallocate function does nothing.
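
    A minimal sketch of such an allocator, assuming an Allocator interface with allocate/deallocate methods and an assumed Params member name; only the "deallocate does nothing" idea comes from this entry.

      #include <cstdlib>

      // Hypothetical sketch: the Umbra::Allocator method signatures and the
      // params.tomeAllocator member name are assumptions.
      class NonFreeingTomeAllocator : public Umbra::Allocator
      {
      public:
          void* allocate(size_t size, const char* info = NULL)
          {
              (void)info;
              return ::malloc(size);   // the application now owns the tome memory
          }
          void deallocate(void*) {}    // intentionally a no-op
      };

      // Usage: keeps the tome alive after the LocalComputation is released.
      //   NonFreeingTomeAllocator tomeAllocator;
      //   params.tomeAllocator = &tomeAllocator;   // assumed member name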

  • Adjust cell optimization logic slightly, which can slightly improve computation times.
  • Fix a bug which caused computation to be non-deterministic. Repeated computations with identical inputs are expected to generate identical results and an identical computation cache.
  • Report the correct error code when the license is expired.
  • On Windows, the Optimizer will display a dialog reminding you that your license is about to expire in two weeks, one week or one day. The dialog does not block the computation or process exit while it is visible. You can choose in the dialog not to be reminded again.
  • Add a note to the documentation on how to suppress both the license expired and the new reminder dialogs: setting the environment variable UMBRA_SUPPRESS_LICENSE_DIALOG will do this.
  • Fix a crash that could happen with very large scale computations.
Debugger
  • A new recording feature in the test positions dialog allows easier recording of periodic camera positions along the main camera's path.
  • Fix the voxel size visualization pattern on some GPUs. The pattern is displayed both when an object is selected and by the "Viewcell boundary" visualization.
Cluster PVS sample
  • PVS volumes now restrict the computation more strictly to the space inside the volume. This can speed up the cluster PVS computation, especially where very large clusters are generated in non-occluding space. The generated PVS is only valid inside the volume.
  • The cluster PVS sample is now split into a static library and a sample application.
  • The ClusterPVSSolver can now use multiple threads to compute. If multiple threads are used, the clusters are computed in an order that likely produces good thread utilization.

For technical issues, licensing inquiries or to get access to release packages, reach out to us via umbra-support.

Stay tuned for future updates from Umbra!