FAQ: What is the Query Cache?

The objective of the Query Cache is to reuse the intermediate state from previous queries: a new query updates that previous state with the events that have arrived since the previous query ran, instead of re-processing the full set of events that went into the previous result. This saves resources in the cluster and provides faster responses to queries that are submitted repeatedly.
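As an illustration of the idea only (not the actual implementation), the sketch below shows a count-per-key aggregation that reuses a cached partial result and processes just the events that arrived after the cache entry was written. The `Event` and `CacheEntry` structures and the `run_count_query` function are hypothetical names introduced for this example.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical event: a timestamp plus a key to count by.
@dataclass
class Event:
    timestamp: int
    key: str

# Hypothetical cache entry: aggregate state plus the point in time it covers.
@dataclass
class CacheEntry:
    covered_until: int
    counts: Counter = field(default_factory=Counter)

def run_count_query(events, cache_entry=None):
    """Count events per key, reusing cached intermediate state when available."""
    if cache_entry is None:
        # Cold run: process the full set of events.
        new_events = events
        counts = Counter()
        covered_until = 0
    else:
        # Warm run: only process events newer than the cached state.
        new_events = [e for e in events if e.timestamp > cache_entry.covered_until]
        counts = Counter(cache_entry.counts)
        covered_until = cache_entry.covered_until

    for e in new_events:
        counts[e.key] += 1
        covered_until = max(covered_until, e.timestamp)

    # Return the result together with an updated cache entry for the next run.
    return counts, CacheEntry(covered_until=covered_until, counts=counts)
```

On the second run only the tail of new events is scanned and merged into the cached counts, which is where the resource savings come from.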

When a query completes, the cluster decides whether the result is worth caching. Part of that decision is whether the cost of the query was sufficiently high, so that very simple queries do not fill the cache and push out the entries worth keeping.

Caching is fully automatic, with configuration options for setting the threshold below which a query is considered too cheap (in terms of workload) to cache, and for the amount of disk space allowed for the cache on the nodes.
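A rough sketch of how such a decision might work is shown below. The `MIN_COST_TO_CACHE` and `CACHE_DISK_BUDGET` values and the evict-lowest-cost policy are illustrative assumptions for this example, not the product's actual configuration names or behaviour.

```python
from dataclasses import dataclass

# Illustrative settings; the real configuration names and defaults differ.
MIN_COST_TO_CACHE = 5_000        # queries cheaper than this are not cached
CACHE_DISK_BUDGET = 10 * 2**30   # bytes of disk allowed for cache entries

@dataclass
class CompletedQuery:
    query_id: str
    cost: int        # e.g. work units spent producing the result
    state_size: int  # bytes needed to store the intermediate state

def maybe_cache(query, cache):
    """Decide whether a completed query's state should be kept in the cache."""
    # Very cheap queries are skipped so they do not push out entries worth keeping.
    if query.cost < MIN_COST_TO_CACHE:
        return False

    # Evict the least valuable entries until the new state fits within the budget.
    used = sum(e.state_size for e in cache.values())
    while used + query.state_size > CACHE_DISK_BUDGET and cache:
        victim_id = min(cache, key=lambda qid: cache[qid].cost)
        used -= cache.pop(victim_id).state_size

    if used + query.state_size <= CACHE_DISK_BUDGET:
        cache[query.query_id] = query
        return True
    return False
```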

Users typically do not notice caching affecting their responses, but a very observant user may notice that the oldest part of a timechart arrives fully populated right away when a query is submitted, while the most recent part (the span since the cache entry was written) is still in progress.
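This effect can be pictured as splitting the query's time range at the point covered by the cache entry. The function below is a hypothetical sketch of that split, not an actual API.

```python
def split_time_range(query_start, query_end, cache_covered_until):
    """Split a query's time range into a part served from cache and a live part.

    Returns (cached_range, live_range); either may be None. The arguments are
    hypothetical epoch timestamps, not actual API parameters.
    """
    if cache_covered_until is None or cache_covered_until <= query_start:
        return None, (query_start, query_end)       # nothing usable in the cache
    if cache_covered_until >= query_end:
        return (query_start, query_end), None       # fully answered from cache
    # The older part is returned immediately from the cached state,
    # while the tail since the cache entry is still being computed.
    return (query_start, cache_covered_until), (cache_covered_until, query_end)
```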

Dashboards are a prime example of queries that are submitted repeatedly and therefore benefit from the cache.