Caching Data with Timbr
Timbr includes a sophisticated cache engine that materializes data within its ontology-based semantic layer to improve performance and reduce costly computations.
As part of the virtual knowledge graph, caching lets frequently accessed semantic queries be served from precomputed results while keeping business definitions consistent across SQL, BI tools, and AI systems.
The cache engine in Timbr can materialize data into one of four storage targets, depending on the deployment setup and license tier.
• Local Database: The database used in the data model. This option is available in all Timbr deployments.
• Data Lake: This option requires adding the Data Lake as a secondary database in the model and pointing the materialization to it. The Data Lake option is available on all customer-side deployments and in the advanced SaaS tiers.
• SSD: Requires allocating memory to the Timbr cluster. This option is only available for customer-side deployments.
• In-Memory: Uses Timbr's built-in in-memory database and requires defining a schema for the materializations. This option is only available in customer-side deployments.
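The availability rules above can be summarized programmatically. The sketch below is purely illustrative: the enum names and the `available_tiers` helper are assumptions, not part of any Timbr API; only the availability facts themselves come from the list above.

```python
from enum import Enum

class Deployment(Enum):
    SAAS_STANDARD = "saas-standard"   # basic SaaS tier (illustrative name)
    SAAS_ADVANCED = "saas-advanced"   # advanced SaaS tier (illustrative name)
    CUSTOMER_SIDE = "customer-side"

class CacheTier(Enum):
    LOCAL_DB = "local-database"
    DATA_LAKE = "data-lake"
    SSD = "ssd"
    IN_MEMORY = "in-memory"

# Availability as described above: Local Database everywhere; Data Lake on
# customer-side deployments and advanced SaaS tiers; SSD and In-Memory on
# customer-side deployments only.
AVAILABILITY = {
    CacheTier.LOCAL_DB: {Deployment.SAAS_STANDARD, Deployment.SAAS_ADVANCED,
                         Deployment.CUSTOMER_SIDE},
    CacheTier.DATA_LAKE: {Deployment.SAAS_ADVANCED, Deployment.CUSTOMER_SIDE},
    CacheTier.SSD: {Deployment.CUSTOMER_SIDE},
    CacheTier.IN_MEMORY: {Deployment.CUSTOMER_SIDE},
}

def available_tiers(deployment: Deployment) -> list[CacheTier]:
    """Return the cache tiers a given deployment type can materialize into."""
    return [tier for tier in CacheTier if deployment in AVAILABILITY[tier]]
```

For example, `available_tiers(Deployment.CUSTOMER_SIDE)` yields all four tiers, while a basic SaaS deployment is limited to its local database.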
All materializations can be refreshed automatically on recurring time intervals or in response to specific triggers. Materialization can also be triggered from outside the Timbr platform. Users with more than one cache option can promote or demote materializations between tiers as workloads change.
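Triggering a refresh externally typically means calling an HTTP endpoint from a scheduler or pipeline. The sketch below only builds such a request; the endpoint path, payload shape, and bearer-token header are hypothetical placeholders, not Timbr's documented API.

```python
import json
from urllib.request import Request

def build_refresh_request(base_url: str, materialization: str, token: str) -> Request:
    """Build (but do not send) an HTTP request asking the cache engine to
    refresh one materialization. All endpoint details are illustrative."""
    payload = json.dumps({"materialization": materialization,
                          "action": "refresh"}).encode("utf-8")
    return Request(
        url=f"{base_url}/api/cache/refresh",  # hypothetical endpoint path
        data=payload,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
```

An orchestrator such as a cron job or an Airflow task could send the returned request with `urllib.request.urlopen` after each upstream data load.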
If your license does not include the Scheduled Jobs component, see the guide on using the cache engine to materialize your data.
If your license does include the Scheduled Jobs component, see the guide on creating and managing any type of job.