The cloud features are experimental.
This version adds optional support for performing Umbra computations in AWS using AWS Deadline Cloud, a service for creating virtual render farms in AWS that is easy to set up, use, and scale.
Umbra tome computations can be submitted to AWS Deadline Cloud using the SDK API, the command line utility (umbracli), or the Umbra Debugger GUI application.
See doc/articles/computing_in_aws_using_deadline_cloud.md for details on how to set up and try the feature.
CloudWorker is a new executable for performing distributed computations. It makes it easy to integrate and distribute Umbra computations, for example on a local or cloud build farm.
CloudWorker can be used to split an Umbra computation into a number of jobs, which can then be processed in a distributed fashion, for example by a build farm. CloudWorker's behavior can be configured to adapt the generated job count and job graph layout to the restrictions and available resources of the build farm or service.
CloudWorker has optional support for a shared computation cache backed by AWS S3.
See doc/articles/build_farm_integration_using_cloudworker.md for details.
Update the internal computation cache implementation to work better with a remote distributed cache.
This version is not compatible with cache files from older versions; they are ignored if present. You can delete such old cache files manually, but the computation will also overwrite them as needed and delete the oldest ones when the total size grows over the configured limit. The number and size of the generated cache files will be slightly larger than with previous versions.
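The oldest-first eviction described above can be illustrated with a minimal self-contained sketch. The `CacheFile` record and `evictOldest` function are hypothetical stand-ins, not the SDK's internal cache implementation:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical record for a cache file; the real cache is internal to the SDK.
struct CacheFile {
    uint64_t lastUsed;   // timestamp; smaller = older
    uint64_t sizeBytes;
};

// Delete oldest files until the total size fits the configured limit.
static void evictOldest(std::vector<CacheFile>& files, uint64_t limitBytes) {
    // Sort oldest first.
    std::sort(files.begin(), files.end(),
              [](const CacheFile& a, const CacheFile& b) { return a.lastUsed < b.lastUsed; });
    uint64_t total = 0;
    for (const CacheFile& f : files)
        total += f.sizeBytes;
    std::size_t removeCount = 0;
    while (total > limitBytes && removeCount < files.size())
        total -= files[removeCount++].sizeBytes;
    files.erase(files.begin(), files.begin() + removeCount);
}
```

A cache sized over the limit drops its least recently used entries first, so results for recently computed content stay available.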
Add a button for clearing the visibility data to the non-streaming version.
Note that the Unreal Engine plugin is not included in the SDK release package, but comes in a separate one.
Add two new variants of QueryExt::queryLocalLights, for conical spotlights and frustums. The new queries can be used to intersect these volumes against a cluster list, for example a list of visible clusters.
The queries perform a "flood fill" to find connected clusters inside each shape, and test if these clusters are found in the input cluster list.
Note that this is not the same as occlusion culling from the light's point of view: the light won't be visible from clearly disconnected space, but it will be visible from around a corner that is inside the light's volume. However, this implementation is faster than a more accurate operation would be. QueryPortalVisibility can generate a more accurate occlusion-culled cluster list for a camera.
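The flood-fill idea can be sketched over a hypothetical cluster adjacency graph. All names here are illustrative assumptions, not the SDK's internal representation: the sketch starts from the clusters containing the light, spreads only through connected clusters inside the light's shape, and checks whether any reached cluster is in the input (visible) list:

```cpp
#include <unordered_set>
#include <vector>

// Hypothetical cluster graph: adjacency[i] lists clusters connected to cluster i.
static bool lightAffectsVisibleClusters(
    const std::vector<std::vector<int>>& adjacency,
    const std::unordered_set<int>& insideShape,  // clusters the light volume touches
    const std::vector<int>& seeds,               // clusters containing the light source
    const std::unordered_set<int>& visible)      // e.g. output of a camera query
{
    std::unordered_set<int> reached;
    std::vector<int> stack;
    for (int s : seeds)
        if (insideShape.count(s)) { stack.push_back(s); reached.insert(s); }
    while (!stack.empty()) {
        int c = stack.back();
        stack.pop_back();
        if (visible.count(c))
            return true;  // light reaches a visible cluster through connected space
        for (int n : adjacency[c])
            if (insideShape.count(n) && !reached.count(n)) {
                reached.insert(n);
                stack.push_back(n);
            }
    }
    return false;
}
```

A cluster that is inside the light's shape but in clearly disconnected space is never reached by the fill, which is why such a light is culled even without full occlusion culling from the light's point of view.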
Initializing the computation is now multithreaded. While the actual tome computation has always been multithreaded, the initialization step has not been until now. Multithreaded initialization helps with computation times especially when iterating on content with a large amount of geometry: with only local changes in a scene, most computation results can simply be reused from the cache, which means that initialization can become a relatively large part of the total computation effort.
When using the LocalComputation or Task APIs to compute, they now automatically use multiple threads for this purpose.
If you're instead using the low-level TomeBuildGraph API, see the new numThreads parameter of TomeBuildGraph::create. This value sets how many additional threads the function is allowed to create: -1 means automatic and is now the default, while 0 makes the function single-threaded, as before. Note that with multiple threads, the SceneDataProvider implementation used must support creating multiple GeometryInterface and MarkupInterface instances, each of which is accessed from a single separate thread.
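One plausible reading of the numThreads convention can be sketched as follows. This is an assumption about how an application might resolve the value, not the SDK's documented internals:

```cpp
#include <thread>

// Resolve the numThreads convention: -1 picks a value automatically,
// 0 means no extra threads (single-threaded, as before), and a positive
// value is used as-is.
static unsigned resolveExtraThreads(int numThreads) {
    if (numThreads < 0) {
        unsigned hw = std::thread::hardware_concurrency();  // may return 0 if unknown
        return hw > 1 ? hw - 1 : 0;  // leave one core for the calling thread
    }
    return static_cast<unsigned>(numThreads);
}
```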
The new Umbra::Scene API function insertObjectCollection allows inserting multiple SceneObjects sharing the same ID. These "object collections" share a single ID and flags but contain multiple SceneModels, each with its own transformation. Objects inserted this way are treated as a single entity by the tome and the runtime.
The purpose of this feature is to reduce Umbra::Scene memory use when identical geometry is reused across several objects. Each object collection (with a single object ID) can consist of multiple SceneModels, and those SceneModels can be shared among different object collections. If a large part of your content is set up this way, consider using the new insertObjectCollection function.
For example, you might have a wall with many identical windows that are rendered in groups of varying sizes; Umbra then also culls entire groups rather than individual windows. With this update you can have a single window SceneModel that participates in all groups. If you only have a few such objects, the effect on Umbra::Scene size may not be significant.
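The memory-sharing idea can be illustrated with hypothetical stand-in types (the real SceneModel and Scene APIs differ): several collections reference the same model data instead of copying it, so the geometry is stored once:

```cpp
#include <cstddef>
#include <memory>
#include <unordered_set>
#include <vector>

// Hypothetical stand-ins: a model holds geometry; an object collection is a
// single culling entity (one ID) referencing several shared models.
struct Model { std::vector<float> vertices; };

struct ObjectCollection {
    int id;
    std::vector<std::shared_ptr<const Model>> models;  // shared, not copied
};

// Count distinct models actually stored, as opposed to model references.
static std::size_t distinctModels(const std::vector<ObjectCollection>& collections) {
    std::unordered_set<const Model*> unique;
    for (const auto& c : collections)
        for (const auto& m : c.models)
            unique.insert(m.get());
    return unique.size();
}
```

In the window example, two wall groups referencing the same window model five times in total still store only one copy of the window geometry.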
Change the tome allocation behavior of the Computation/LocalComputation API function waitForResult.
Firstly, the allocator used to allocate the tome is now a member of the Computation::Params class instead of a function parameter.
Secondly, calling waitForResult repeatedly now always returns the same tome instance once the computation has finished, instead of a new one each time. The tome memory is automatically deallocated when the LocalComputation is released. If you want the tome memory to outlive the LocalComputation, use a tome allocator implementation whose deallocate function does nothing.
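A no-op deallocate can be sketched like this. The allocator interface here is a hypothetical stand-in in the spirit of the tome allocator, not the SDK's actual Allocator class:

```cpp
#include <cstddef>
#include <cstdlib>

// Hypothetical allocator: allocates normally, but deallocate does nothing,
// so tome memory survives the release of the LocalComputation that made it.
// The application is then responsible for freeing the memory itself.
class KeepAliveAllocator {
public:
    void* allocate(std::size_t bytes) { return std::malloc(bytes); }
    void deallocate(void* /*ptr*/) {
        // Intentionally empty: the tome stays valid after release.
    }
};
```

With such an allocator the application must eventually free the tome memory itself, since nothing else will.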
For technical issues, licensing inquiries, or to get access to release packages, reach out to us via umbra-support.
Stay tuned for future updates from Umbra!