Class LRUQueryCache

java.lang.Object
org.apache.lucene.search.LRUQueryCache
All Implemented Interfaces:
QueryCache, Accountable

public class LRUQueryCache extends Object implements QueryCache, Accountable
A QueryCache that evicts queries using an LRU (least-recently-used) eviction policy in order to remain under a given maximum number of cached queries and maximum number of bytes used.

This class is thread-safe.

Note that query eviction runs in linear time in the total number of segments that have cache entries, so this cache works best with caching policies that only cache on "large" segments. It is also advised not to share this cache across too many indices.
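The LRU bookkeeping itself can be sketched with a plain LinkedHashMap in access order. This is a simplified, self-contained illustration of the eviction policy only, not Lucene's actual implementation (which caches DocIdSets per segment and also tracks RAM usage); the `LruSketch` class name is made up for this example:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustration of LRU eviction only; the real LRUQueryCache additionally
// enforces a RAM budget and stores cached doc-id sets per leaf.
class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int maxSize;

    LruSketch(int maxSize) {
        super(16, 0.75f, true); // accessOrder = true: iteration order is LRU
        this.maxSize = maxSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Called after each put/putAll; returning true evicts the
        // least-recently-used entry.
        return size() > maxSize;
    }
}
```

With a capacity of 2, putting a third entry evicts whichever of the first two was touched least recently.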

A default query cache and policy instance is used in IndexSearcher. If you want to replace those defaults it is typically done like this:

   final int maxNumberOfCachedQueries = 256;
   final long maxRamBytesUsed = 50 * 1024L * 1024L; // 50MB
   // these cache and policy instances can be shared across several queries and readers
   // it is fine, e.g., to store them in static variables
   final QueryCache queryCache = new LRUQueryCache(maxNumberOfCachedQueries, maxRamBytesUsed);
   final QueryCachingPolicy defaultCachingPolicy = new UsageTrackingQueryCachingPolicy();
   indexSearcher.setQueryCache(queryCache);
   indexSearcher.setQueryCachingPolicy(defaultCachingPolicy);
 
This cache exposes some global statistics (hit count, miss count, number of cache entries, total number of DocIdSets that have ever been cached, number of evicted entries). For more fine-grained statistics, such as per-index or per-query-class statistics, override one or more of the following callbacks:
  • onHit(java.lang.Object, org.apache.lucene.search.Query)
  • onMiss(java.lang.Object, org.apache.lucene.search.Query)
  • onQueryCache(org.apache.lucene.search.Query, long)
  • onQueryEviction(org.apache.lucene.search.Query, long)
  • onDocIdSetCache(java.lang.Object, long)
  • onDocIdSetEviction(java.lang.Object, int, long)
  • onClear()
Avoid heavy computations in these methods, since they are called synchronously and under a lock.
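The bookkeeping that an onHit/onMiss override might perform can be sketched without any Lucene dependencies. The `PerClassStats` class below is hypothetical (in a real subclass these methods would be `@Override`s of the LRUQueryCache callbacks); the point is that the work stays cheap, since the real callbacks run synchronously under the cache's lock:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.LongAdder;

// Sketch of per-query-class hit/miss counting, the kind of cheap
// bookkeeping suitable for the onHit/onMiss callbacks.
class PerClassStats {
    private final Map<Class<?>, LongAdder> hits = new ConcurrentHashMap<>();
    private final Map<Class<?>, LongAdder> misses = new ConcurrentHashMap<>();

    void onHit(Object query) {
        hits.computeIfAbsent(query.getClass(), k -> new LongAdder()).increment();
    }

    void onMiss(Object query) {
        misses.computeIfAbsent(query.getClass(), k -> new LongAdder()).increment();
    }

    long hitCount(Class<?> queryClass) {
        LongAdder a = hits.get(queryClass);
        return a == null ? 0 : a.sum();
    }

    long missCount(Class<?> queryClass) {
        LongAdder a = misses.get(queryClass);
        return a == null ? 0 : a.sum();
    }
}
```

LongAdder increments and ConcurrentHashMap lookups are cheap enough not to noticeably extend the time spent under the lock.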
  • Field Details

    • maxSize

      private final int maxSize
    • maxRamBytesUsed

      private final long maxRamBytesUsed
    • leavesToCache

      private final Predicate<LeafReaderContext> leavesToCache
    • uniqueQueries

      private final Map<Query,Query> uniqueQueries
    • mostRecentlyUsedQueries

      private final Set<Query> mostRecentlyUsedQueries
    • cache

    • lock

      private final ReentrantLock lock
    • skipCacheFactor

      private final float skipCacheFactor
    • ramBytesUsed

      private volatile long ramBytesUsed
    • hitCount

      private volatile long hitCount
    • missCount

      private volatile long missCount
    • cacheCount

      private volatile long cacheCount
    • cacheSize

      private volatile long cacheSize
  • Constructor Details

    • LRUQueryCache

      public LRUQueryCache(int maxSize, long maxRamBytesUsed, Predicate<LeafReaderContext> leavesToCache, float skipCacheFactor)
      Expert: Create a new instance that will cache at most maxSize queries with at most maxRamBytesUsed bytes of memory, only on leaves that satisfy leavesToCache.

      Also, clauses whose cost is more than skipCacheFactor times the cost of the top-level query will not be cached, in order not to slow queries down too much.
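      The skipCacheFactor check described above amounts to a simple cost comparison. The following is a stand-alone sketch of that rule (the `SkipCheck` class and its parameters are illustrative, not Lucene's internal code):

```java
// Sketch of the skipCacheFactor rule: skip caching a clause whose
// estimated cost exceeds skipCacheFactor times the top-level query's cost.
class SkipCheck {
    static boolean skipCaching(long clauseCost, long topLevelCost, float skipCacheFactor) {
        return clauseCost > skipCacheFactor * topLevelCost;
    }
}
```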

    • LRUQueryCache

      public LRUQueryCache(int maxSize, long maxRamBytesUsed)
      Create a new instance that will cache at most maxSize queries with at most maxRamBytesUsed bytes of memory. Queries will only be cached on leaves that have more than 10k documents and more than half of the average number of documents per leaf of the index. This should guarantee that all leaves from the upper tier will be cached. Only clauses whose cost is at most 100x the cost of the top-level query will be cached, in order not to hurt latency too much because of caching.
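      The default leaf-selection heuristic described above (more than 10k documents, and more than half the average leaf size) can be sketched as a plain predicate. The `DefaultLeafPolicy` class below is a simplified stand-in for the actual Predicate<LeafReaderContext>; the thresholds come from the paragraph above:

```java
// Stand-in for the default leavesToCache heuristic: cache a leaf only if it
// holds more than 10k documents AND more than half the average leaf size.
class DefaultLeafPolicy {
    static boolean shouldCache(int leafDocCount, int totalDocs, int leafCount) {
        double avgDocsPerLeaf = (double) totalDocs / leafCount;
        return leafDocCount > 10_000 && leafDocCount > avgDocsPerLeaf / 2;
    }
}
```

Since eviction cost grows with the number of segments holding cache entries, restricting caching to the large "upper tier" leaves like this keeps the per-eviction work small.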
  • Method Details