
Elements are incorrectly evicted from eBPF LRU hashmap


Question content

I observed that elements in an eBPF LRU hash map (BPF_MAP_TYPE_LRU_HASH) were being evicted incorrectly. In the code below, I insert into an LRU hash map of size 8 and print its contents every second:

package main

import (
    "fmt"
    "log"
    "time"

    "github.com/cilium/ebpf"
)

func main() {
    spec := ebpf.MapSpec{
        Name:       "test_map",
        Type:       ebpf.LRUHash,
        KeySize:    4,
        ValueSize:  8,
        MaxEntries: 8,
    }

    hashMap, err := ebpf.NewMap(&spec)
    if err != nil {
        log.Fatalln("Could not create map:", err)
    }

    var insertKey uint32

    for range time.Tick(time.Second) {
        // Insert the next key; on a full LRU map this may trigger eviction.
        err = hashMap.Update(insertKey, uint64(insertKey), ebpf.UpdateAny)
        if err != nil {
            log.Printf("Update failed. insertKey=%d|value=%d|err=%s", insertKey, insertKey, err)
        }

        // Walk the map and print every element currently present.
        var key uint32
        var value uint64
        count := 0
        elementsStr := ""

        iter := hashMap.Iterate()

        for iter.Next(&key, &value) {
            elementsStr += fmt.Sprintf("(%d, %d) ", key, value)
            count++
        }

        log.Printf("Total elements: %d, elements: %s", count, elementsStr)

        insertKey++
    }
}

When I run the above program, I see this:

2023/03/29 17:32:29 Total elements: 1, elements: (0, 0)
2023/03/29 17:32:30 Total elements: 2, elements: (1, 1) (0, 0)
2023/03/29 17:32:31 Total elements: 3, elements: (1, 1) (0, 0) (2, 2)
2023/03/29 17:32:32 Total elements: 3, elements: (3, 3) (0, 0) (2, 2)
...

Since the map holds eight entries, I expected the fourth line to show four values, but it shows only three because entry (1, 1) was evicted.

If I change max_entries to 1024, the problem usually appears after inserting the 200th element, though sometimes it happens later; it is inconsistent.

This issue is not limited to creating and updating the map from user space: I originally observed it in an XDP program that creates the map and inserts into it; the program above reproduces the issue I saw there. In my real program, which also has 1024 entries, the problem occurred after inserting 16 elements.

I tested this on a production server running Linux kernel 5.16.7.

I also tested on a Linux VM upgraded to kernel 6.2.8 and noticed a different eviction policy. For example, with max_entries set to 8, I observe:

2023/03/29 20:38:02 Total elements: 1, elements: (0, 0)
2023/03/29 20:38:03 Total elements: 2, elements: (0, 0) (1, 1)
2023/03/29 20:38:04 Total elements: 3, elements: (0, 0) (2, 2) (1, 1)
2023/03/29 20:38:05 Total elements: 4, elements: (0, 0) (2, 2) (1, 1) (3, 3)
2023/03/29 20:38:06 Total elements: 5, elements: (4, 4) (0, 0) (2, 2) (1, 1) (3, 3)
2023/03/29 20:38:07 Total elements: 6, elements: (4, 4) (0, 0) (2, 2) (1, 1) (5, 5) (3, 3)
2023/03/29 20:38:08 Total elements: 7, elements: (4, 4) (0, 0) (2, 2) (1, 1) (6, 6) (5, 5) (3, 3)
2023/03/29 20:38:09 Total elements: 8, elements: (7, 7) (4, 4) (0, 0) (2, 2) (1, 1) (6, 6) (5, 5) (3, 3)
2023/03/29 20:38:10 Total elements: 1, elements: (8, 8)
...

When max_entries is 1024, I noticed that after adding the 1025th element the total drops to 897 elements. I was unable to test kernel 6.2.8 on our production server.


Correct answer


The LRU hash map does not guarantee that it will hold exactly the maximum number of items, and the implementation is clearly geared toward providing good performance for maps with far more than 8 entries. From a quick look at the code, I see this:

  1. The LRU is split into two parts: an "active list" and an "inactive list", with a task that periodically moves elements from one to the other based on whether they have been accessed recently. It is not a true LRU: items do not move to the head on every access.

  2. When the map is full and something needs to be evicted to make room for a new item, the code evicts up to 128 items from the inactive list in a single pass; only if the inactive list is empty does it evict a single item from the active list (a sketch of the practical consequence follows this list).

  3. There is also a per-CPU "local free list" of allocated items waiting to be filled with data; when it runs empty, it tries to refill from the global free list, and if that is also empty, it enters the eviction path. The target size of the local free list is 4 entries.
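
One practical consequence of the batch eviction in point 2 is that you cannot rely on the map actually holding max_entries live elements. A common workaround (my suggestion, not something this answer or the kernel documents as a guarantee) is to over-provision the map so the eviction slack does not eat into the working set. A minimal sketch using cilium/ebpf; the map name and the headroom constant are assumptions for illustration:

package main

import (
    "log"

    "github.com/cilium/ebpf"
)

const (
    neededEntries = 1024    // entries the workload actually relies on
    headroom      = 2 * 128 // assumed slack; 128 is the eviction batch size noted above
)

func main() {
    spec := ebpf.MapSpec{
        Name:       "padded_lru", // hypothetical map name for this sketch
        Type:       ebpf.LRUHash,
        KeySize:    4,
        ValueSize:  8,
        MaxEntries: neededEntries + headroom, // over-provision so batch evictions
        // do not cut into the entries the workload cares about
    }

    m, err := ebpf.NewMap(&spec)
    if err != nil {
        log.Fatalln("could not create map:", err)
    }
    defer m.Close()
}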

So the behavior on 6.2.8 looks simple and consistent: presumably all of your keys were on the inactive list (not too surprising for a scan-type access pattern, or perhaps none of them ever had a chance to be promoted), and a whole batch of them was evicted at once. This also matches your 1024-entry run: evicting a batch of 128 from a full map and then inserting one item leaves 1024 - 128 + 1 = 897 elements. I know less about 5.16, but it probably has something to do with the local free list and all of the updates running on the same CPU.
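
To test the free-list hypothesis on 5.16, one experiment (my sketch, not part of the original answer) is to pin the updating goroutine to a single CPU, rerun the loop from the question, and compare when evictions start against runs where updates spread across CPUs. In Go this takes both runtime.LockOSThread and an affinity call, since the scheduler otherwise migrates goroutines between OS threads:

package main

import (
    "log"
    "runtime"

    "golang.org/x/sys/unix"
)

// pinToCPU binds the calling goroutine to one OS thread and that thread to a
// single CPU, so every subsequent map update goes through the same per-CPU
// local free list. Error handling is minimal; this is a sketch.
func pinToCPU(cpu int) error {
    runtime.LockOSThread() // keep this goroutine on the current OS thread

    var set unix.CPUSet
    set.Zero()
    set.Set(cpu)
    return unix.SchedSetaffinity(0, &set) // pid 0 means the calling thread
}

func main() {
    if err := pinToCPU(0); err != nil {
        log.Fatalln("could not pin to CPU 0:", err)
    }
    // ...run the insert/iterate loop from the question here...
}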

Basically, I think this data type is not meant to be used the way you are using it, and the behavior you are seeing is expected rather than a bug. If you disagree, I think you will have to take it up with the kernel developers.
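
If the application genuinely needs a hard capacity with no surprise evictions, one alternative direction (my illustration, not something the answer prescribes) is a plain hash map, which rejects inserts of new keys once it is full instead of evicting, leaving the eviction policy to your own code:

package main

import (
    "log"

    "github.com/cilium/ebpf"
)

func main() {
    spec := ebpf.MapSpec{
        Name:       "plain_hash", // hypothetical name for this sketch
        Type:       ebpf.Hash,    // plain hash map: no automatic eviction
        KeySize:    4,
        ValueSize:  8,
        MaxEntries: 8,
    }

    m, err := ebpf.NewMap(&spec)
    if err != nil {
        log.Fatalln("could not create map:", err)
    }
    defer m.Close()

    // Once 8 entries are in place, inserting further new keys fails (the
    // kernel returns E2BIG) rather than evicting an old entry; the caller
    // decides what to delete to make room.
    for i := uint32(0); i < 10; i++ {
        if err := m.Update(i, uint64(i), ebpf.UpdateAny); err != nil {
            log.Printf("insert of key %d rejected: %s", i, err)
        }
    }
}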


Source: stackoverflow.com