
An article explaining the LRU algorithm in Redis in detail


This article will introduce you to LRU (Least Recently Used) in Redis. I hope it will be helpful to you!


1. Memory and swap

Redis is a key-value database based on in-memory storage. Memory is fast, but its space is small. When physical memory reaches its upper limit, the system becomes very slow, because the swap mechanism moves part of the data in memory to the swap partition; swapping keeps the system running, but swap lives on disk, and disk speed is far slower than memory. For a service with very high QPS such as Redis, this situation is unacceptable. (Note that if the swap partition also fills up, the system will report errors!)

On Linux, you can check the swap size with free -m:

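For illustration only (the numbers below are invented, not taken from a real host), the output looks roughly like this; the Swap line shows the total, used and free swap space in MB:

              total        used        free      shared  buff/cache   available
Mem:           7821        3512        1204         310        3104        3720
Swap:          2047           0        2047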

So preventing this situation from happening in Redis is very important (this is a point interviewers almost always raise when asking about Redis).

2. maxmemory configuration

Redis provides the maxmemory configuration for this problem. It specifies the maximum amount of memory Redis may use for its data set. It is usually set in the redis.conf file, and it can also be configured at runtime with the CONFIG SET command.
(Figure: schematic of the maxmemory configuration item in redis.conf)
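As a minimal illustration (the limit below is an arbitrary example value, not a recommendation), the directive in redis.conf and the equivalent runtime commands look like this:

# redis.conf
maxmemory 100mb

# at runtime, via redis-cli
CONFIG SET maxmemory 100mb
CONFIG GET maxmemory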

The maxmemory configuration item is not enabled by default. According to the Redis documentation, 64-bit systems have no memory limit by default, while 32-bit systems have an implicit limit of 3GB. If maxmemory is 0, memory usage is not limited.

Therefore, when building a cache architecture, we must configure maxmemory appropriately based on hardware resources and business requirements.

3. What to do if the memory reaches maxmemory

Obviously, once the maximum memory is configured and usage reaches the maxmemory limit, Redis cannot simply stop working. So how does Redis deal with this problem? That is the focus of this article. Redis provides the maxmemory-policy eviction strategies (this article only covers LRU and does not involve LFU; LFU will be described in the next article), which delete keys that meet certain conditions in order to make room for new data.
maxmemory-policy eviction policies:

  • noeviction: when the memory limit is reached and a client tries to execute a command that may use more memory, an error is returned. Simply put, reads are still allowed, but new data cannot be written; del (delete) requests are still allowed.
  • allkeys-lru: evict from all keys using the LRU (Least Recently Used) algorithm
  • allkeys-random: evict randomly from all keys
  • volatile-lru: evict from keys with an expiration time set, using the LRU (Least Recently Used) algorithm. This ensures that data without an expiration time, which needs to be persisted, will not be selected for eviction
  • volatile-random: evict randomly from keys with an expiration time set
  • volatile-ttl: from keys with an expiration time set, compare each key's remaining TTL; the smaller the TTL, the sooner the key is evicted

There are also volatile-lfu/allkeys-lfu policies, which will be discussed in the next article; the LFU and LRU algorithms are different! A configuration example follows below.
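For example (the policy names below are just illustrative choices; pick one according to your business), the eviction policy can be set in redis.conf or switched at runtime:

# redis.conf
maxmemory-policy allkeys-lru

# at runtime, via redis-cli
CONFIG SET maxmemory-policy volatile-lru
CONFIG GET maxmemory-policy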

Random eviction simply selects some keys at random and deletes them to free memory; ttl eviction compares keys' remaining TTL values and deletes the keys with the smallest TTL first to free memory.
So how is LRU implemented? How does Redis know which key has been used recently and which key has not been used recently?

4. LRU algorithm implementation

Let's first implement a simple LRU algorithm with Java containers: a ConcurrentHashMap stores the key-value mappings, and a ConcurrentLinkedDeque maintains the access order of the keys.
LRU implementation code:

package com.lizba.redis.lru;

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedDeque;

/**
 * <p>
 *      Simple LRU implementation
 * </p>
 *
 * @Author: Liziba
 * @Date: 2021/9/17 23:47
 */
public class SimpleLru {

    /** Cached data */
    private ConcurrentHashMap<String, Object> cacheData;
    /** Access-order record */
    private ConcurrentLinkedDeque<String> sequence;
    /** Cache capacity */
    private int capacity;

    public SimpleLru(int capacity) {
        this.capacity = capacity;
        cacheData = new ConcurrentHashMap<>(capacity);
        sequence = new ConcurrentLinkedDeque<>();
    }


    /**
     * Set a value
     *
     * @param key
     * @param value
     * @return
     */
    public Object setValue(String key, Object value) {
        // Check whether LRU eviction is needed
        this.maxMemoryHandle();
        // If the key is already recorded, remove it; the most recently accessed key always stays at the front of the deque
        if (sequence.contains(key)) {
            sequence.remove(key);
        }
        sequence.addFirst(key);
        cacheData.put(key, value);
        return value;
    }


    /**
     * Maximum capacity reached: evict the least recently used key
     */
    private void maxMemoryHandle() {
        while (sequence.size() >= capacity) {
            String lruKey = sequence.removeLast();
            cacheData.remove(lruKey);
            System.out.println("key: " + lruKey + " evicted!");
        }
    }


    /**
     * Get the keys in LRU access order
     *
     * @return
     */
    public List<String> getAll() {
        return Arrays.asList(sequence.toArray(new String[] {}));
    }
}

Test code:

package com.lizba.redis.lru;

/**
 * <p>
 *      Test least recently used (LRU)
 * </p>
 *
 * @Author: Liziba
 * @Date: 2021/9/18 0:00
 */
public class TestSimpleLru {

    public static void main(String[] args) {
        SimpleLru lru = new SimpleLru(8);
        for (int i = 0; i < 10; i++) {
            lru.setValue("key" + i, i);
        }
        System.out.println(lru.getAll());
    }
}

Test result:

(Figure: console output of the test run)

It can be seen from the test results that key0 and key1, which were added first, were evicted, and the most recently added key is kept at the head of the sequence.
With this solution, the LRU algorithm is easy to implement; but the drawback is also obvious: it needs an extra data structure to record the access order of the keys, which increases memory consumption. A solution that is meant to optimize memory but itself consumes a lot of memory is clearly not feasible.

5. Approximate LRU of Redis

For this reason, Redis uses an approximate LRU algorithm. It does not evict the least recently used key with complete accuracy, but its overall accuracy is still guaranteed.
The approximate LRU algorithm is quite simple. Each Redis key object gains an extra 24 bits that store the system timestamp of the most recent access. When a client sends a write-related request and the Redis server finds that memory has reached maxmemory, lazy eviction is triggered: the server randomly samples 5 keys that meet the conditions (note that allkeys-lru samples from all keys, while volatile-lru samples only from keys with an expiration time set), compares the most recent access timestamps recorded in those key objects, and evicts the oldest of the 5; if memory is still insufficient, this step is repeated.

Note that 5 is Redis's default sample size, which can be configured through maxmemory-samples in redis.conf:

(Figure: the maxmemory-samples configuration item in redis.conf)

Regarding this approximate LRU algorithm, Redis officially provides charts comparing its accuracy:
  • The light gray band at the top represents evicted keys; the first chart is a schematic of eviction under the standard (exact) LRU algorithm
  • The dark gray band in the middle represents old keys that have not been evicted
  • The light green band at the bottom represents recently accessed keys

(Figure: Redis official comparison of approximate LRU against exact LRU across versions and sample sizes)

In Redis 3.0, when maxmemory-samples is set to 10, the approximate LRU algorithm is very close to a true LRU algorithm; but obviously setting maxmemory-samples to 10 costs more CPU time than setting it to 5, because more keys are sampled on each eviction and the computation takes longer.
The LRU algorithm in Redis 3.0 is also more accurate than in Redis 2.8, because Redis 3.0 adds an eviction pool (a small buffer of candidate keys). Each time a key is to be evicted, it is first compared with the candidates already waiting in the eviction pool, and the oldest key is finally evicted. In effect, the candidate keys selected across samplings are pooled together and compared, and the oldest among them is evicted.
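To make the sampling idea more concrete, here is a minimal Java sketch of sampling-based eviction with a small eviction pool, in the same style as the SimpleLru example above. It is only an illustration under the assumptions just described (per-key last-access timestamps, a fixed sample size, a small candidate pool); the class name, fields and parameters are invented for this sketch, and it is not Redis's actual implementation.

package com.lizba.redis.lru;

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

/**
 * Illustrative sampling-based ("approximate LRU") eviction with an eviction pool.
 * A sketch of the idea described above, not Redis's real implementation.
 */
public class ApproxLruSketch {

    /** Cached data */
    private final Map<String, Object> data = new HashMap<>();
    /** Last-access timestamps (stands in for the 24-bit clock in Redis key objects) */
    private final Map<String, Long> lastAccess = new HashMap<>();
    /** Candidate keys kept between evictions */
    private final List<String> evictionPool = new ArrayList<>();

    private final int capacity;
    private final int sampleSize;
    private final int poolSize;
    private final Random random = new Random();

    public ApproxLruSketch(int capacity, int sampleSize, int poolSize) {
        this.capacity = capacity;
        this.sampleSize = sampleSize;
        this.poolSize = poolSize;
    }

    public void set(String key, Object value) {
        // evict only when inserting a brand-new key while over capacity
        while (!data.containsKey(key) && data.size() >= capacity) {
            evictOne();
        }
        data.put(key, value);
        lastAccess.put(key, System.currentTimeMillis());
    }

    public Object get(String key) {
        if (data.containsKey(key)) {
            // record the most recent access time
            lastAccess.put(key, System.currentTimeMillis());
        }
        return data.get(key);
    }

    /** Randomly sample keys, merge them into the eviction pool, then evict the oldest candidate. */
    private void evictOne() {
        List<String> keys = new ArrayList<>(data.keySet());
        for (int i = 0; i < sampleSize && !keys.isEmpty(); i++) {
            String candidate = keys.get(random.nextInt(keys.size()));
            if (!evictionPool.contains(candidate)) {
                evictionPool.add(candidate);
            }
        }
        // drop stale candidates, then order the rest from oldest to newest access
        evictionPool.removeIf(k -> !data.containsKey(k));
        evictionPool.sort(Comparator.comparingLong(lastAccess::get));
        // keep only the oldest poolSize candidates for the next round
        while (evictionPool.size() > poolSize) {
            evictionPool.remove(evictionPool.size() - 1);
        }
        if (!evictionPool.isEmpty()) {
            String victim = evictionPool.remove(0);
            data.remove(victim);
            lastAccess.remove(victim);
            System.out.println("key: " + victim + " evicted!");
        }
    }
}

The real Redis implementation works on its internal dictionary and the 24-bit per-key clock, but the flow is the same: sample a few keys, pool the oldest candidates, evict the oldest one, and repeat until memory falls below the limit.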

6. Problems

The LRU algorithm seems to work well, but it also has unreasonable aspects. For example, suppose two keys A and B were added to Redis at the same time, one hour before an eviction. A was accessed 1,000 times in the first 49 minutes but not at all in the last 11 minutes; B was accessed only once, in the 59th minute of that hour. Under LRU, if both A and B are picked by Redis's sampling, A will be evicted, which is obviously unreasonable.
For this situation, Redis 4.0 added the LFU (Least Frequently Used) algorithm, which is more reasonable than LRU. We will study this eviction algorithm in the next article; if you are interested, please follow my column.



Statement: This article is reproduced from juejin.cn. If there is any infringement, please contact admin@php.cn to delete it.