
A brief analysis of 8 elimination strategies in Redis cache

青灯夜游 · 2021-11-08

This article walks through the 8 eviction policies in Redis and how to choose among them. I hope it helps!


Redis keeps cached data in memory, but memory is finite. As the amount of cached data grows, the available cache space will eventually fill up; at that point an eviction policy is needed to decide which data to delete.

Redis Cache Eviction Policies

Redis's eviction policies fall into two categories, depending on whether they evict data at all:

  • One policy never evicts data: noeviction.
  • The other 7 policies do evict data.

The 7 evicting policies can be further divided into two groups by the scope of the candidate data set:

  • Policies that evict only among keys with an expiration time set: volatile-random, volatile-ttl, volatile-lru, and volatile-lfu (added in Redis 4.0).

  • Policies that evict among all keys: allkeys-lru, allkeys-random, and allkeys-lfu (added in Redis 4.0).
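For reference, the policy is chosen with the maxmemory-policy configuration directive together with a maxmemory limit. A minimal redis.conf sketch (the memory value here is only an example):

```
# redis.conf — cap memory usage and pick one of the 8 policies
maxmemory 100mb
# one of: noeviction, volatile-ttl, volatile-random, volatile-lru,
#         volatile-lfu, allkeys-random, allkeys-lru, allkeys-lfu
maxmemory-policy allkeys-lru
```

The same settings can also be changed at runtime with `CONFIG SET maxmemory-policy allkeys-lru`.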


Before Redis 3.0 the default policy was volatile-lru; from Redis 3.0 onward the default eviction policy is noeviction.

The noeviction policy

noeviction means no data is evicted. When the cache is full and a new write request comes in, Redis no longer serves that write and returns an error directly (reads are still served).
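The behavior can be illustrated with a toy model (this is an illustrative sketch, not real Redis code; the class name and error message are made up for the example):

```python
class NoEvictionCache:
    """Toy model of Redis's noeviction policy: reads always work,
    but writes of new keys fail once the cache is full."""

    def __init__(self, maxkeys):
        self.maxkeys = maxkeys
        self.store = {}

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        # Overwriting an existing key needs no extra space, so it is allowed.
        if key not in self.store and len(self.store) >= self.maxkeys:
            raise MemoryError("OOM: cache full and policy is noeviction")
        self.store[key] = value


cache = NoEvictionCache(maxkeys=2)
cache.set("a", 1)
cache.set("b", 2)
print(cache.get("a"))   # reads still succeed: prints 1
try:
    cache.set("c", 3)   # a third distinct key: rejected
except MemoryError:
    print("write rejected")
```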

Eviction policies based on expiration time

The four policies volatile-random, volatile-ttl, volatile-lru, and volatile-lfu only consider key-value pairs that have an expiration time set. When Redis's memory usage reaches the maxmemory threshold, Redis evicts key-value pairs from this set according to the chosen policy:

  • volatile-ttl evicts keys with an expiration time set in order of their expiry: the sooner a key expires, the sooner it is evicted.
  • volatile-random, as the name suggests, randomly evicts keys that have an expiration time set.
  • volatile-lru uses the LRU algorithm to choose among keys with an expiration time set.
  • volatile-lfu uses the LFU algorithm to choose among keys with an expiration time set.
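The volatile-ttl rule is the simplest to sketch: among keys that have an expiry, pick the one whose expiry time is nearest. A minimal illustration (the function and key names are invented for this example):

```python
import time


def pick_volatile_ttl_victim(expires):
    """Toy illustration of volatile-ttl: given a mapping of
    key -> absolute expiry timestamp (only keys with a TTL appear
    here), return the key that expires soonest, or None if no key
    has an expiry set."""
    if not expires:
        return None
    return min(expires, key=expires.get)


now = time.time()
expires = {
    "session:1": now + 60,
    "session:2": now + 5,    # expires soonest -> evicted first
    "session:3": now + 300,
}
print(pick_volatile_ttl_victim(expires))  # prints session:2
```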

Eviction policies over all keys

For allkeys-lru, allkeys-random, and allkeys-lfu, the candidate data set is expanded to all key-value pairs, whether or not they have an expiration time set. The selection rules are:

  • allkeys-random randomly selects data to delete;

  • allkeys-lru uses the LRU algorithm to select across all data;

  • allkeys-lfu uses the LFU algorithm to select across all data.
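The textbook LRU structure behind allkeys-lru can be sketched with an ordered dictionary. Note this is the idealized algorithm only; as the next section explains, real Redis approximates LRU by sampling rather than keeping a full ordering:

```python
from collections import OrderedDict


class LRUCache:
    """Textbook LRU over all keys (an allkeys-lru analogue):
    every access moves the key to the 'most recent' end, and
    eviction removes the key at the 'least recent' end."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        elif len(self.data) >= self.capacity:
            self.data.popitem(last=False)   # evict the least recently used key
        self.data[key] = value


c = LRUCache(2)
c.set("a", 1)
c.set("b", 2)
c.get("a")              # touch "a", so "b" becomes least recently used
c.set("c", 3)           # full: evicts "b"
print(sorted(c.data))   # prints ['a', 'c']
```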

About the LRU algorithm

LRU stands for Least Recently Used. A strict LRU implementation maintains a linked list of entries ordered by access time; with a lot of data, moving list nodes on every access costs time and would inevitably slow down the Redis main thread. For this reason, Redis uses a simplified, approximate version of LRU.

The core idea of the LRU policy: if a piece of data has just been accessed, it is likely hot data and will probably be accessed again.

Following this idea, Redis's LRU implementation records an access timestamp in the lru field of each data item's redisObject structure. When evicting, Redis removes the candidate with the smallest lru value, that is, the least recently accessed data.

So in workloads where some data is accessed frequently, the LRU policy effectively retains the most recently accessed data; and because that retained data is likely to be accessed again, it speeds up the application's reads.

Concretely, whenever a key-value pair is accessed, Redis records its most recent access timestamp. When Redis decides to evict, it randomly samples N keys as a candidate set and evicts the one with the smallest timestamp. On subsequent evictions, only keys whose timestamps are smaller than the smallest timestamp in the existing candidate set may enter it; once the candidate set reaches maxmemory-samples entries, the key with the smallest timestamp is evicted.
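A single sampling round can be sketched as follows. This is a simplified toy: real Redis keeps a shared eviction pool across rounds and stores the timestamp in a compact 24-bit lru field, neither of which is modeled here; the function name and key names are invented for the example:

```python
import random


def approx_lru_evict(lru_clock, samples=5):
    """One round of sampled (approximate) LRU: instead of scanning
    every key, draw `samples` random keys and evict the one with
    the oldest recorded access time. `lru_clock` maps
    key -> last-access timestamp (a stand-in for the lru field
    in redisObject)."""
    candidates = random.sample(list(lru_clock), min(samples, len(lru_clock)))
    victim = min(candidates, key=lru_clock.get)
    del lru_clock[victim]
    return victim


random.seed(0)  # deterministic sampling for the demo
# key:0 was accessed longest ago, key:99 most recently
lru_clock = {f"key:{i}": i for i in range(100)}
victim = approx_lru_evict(lru_clock, samples=10)
print("evicted:", victim)  # an old-ish key, not necessarily the global oldest
```

Raising the sample size makes the approximation closer to true LRU at the cost of more work per eviction.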

You can set the candidate set size with the command `CONFIG SET maxmemory-samples N`.

Usage recommendations

Based on these characteristics, you can choose different policies for different scenarios:

  • When the cached data has no obvious hot/cold skew, that is, access frequencies do not differ much, the allkeys-random policy is recommended;
  • When there is a clear hot/cold split, allkeys-lru or volatile-lru is recommended, so that the most recently accessed data stays in the cache;
  • When the business has "pinned" data that must never be removed, leave those keys without an expiration time and use the volatile-lru policy. Those keys will then never be evicted, while the other keys (with expiration times set) are evicted by LRU rules.



Statement:
This article is reproduced from juejin.cn. In case of infringement, please contact admin@php.cn for removal.