One article to learn about the cache library freecache in Golang

青灯夜游
Release: 2022-05-05 21:14:36

This article walks you through caching in Golang and introduces the freecache cache library in a straightforward way. I hope it helps!


Caching in Go development is generally done with a map or a cache framework; where thread safety is needed, sync.Map or a thread-safe cache framework is used.

When the cached data set reaches the million-entry level, you need to think carefully about how the data types affect GC (note that a string is backed by a pointer plus a length under the hood, so it also counts as a pointer type). If both the cache keys and values are non-pointer types, there is nothing to worry about.

In real application scenarios, however, it is very common for keys and values to be (or to contain) pointer-typed data, so when choosing a cache framework you must pay close attention to its GC impact. From the perspective of whether they affect GC, cache frameworks fall roughly into two categories:

  • Zero GC overhead: e.g. freecache or bigcache, which are built on a ring buffer underneath and keep the number of pointers to a minimum;
  • With GC overhead: cache frameworks implemented directly on top of a map.

For a map, the GC scans every key/value pair during marking; but if both the key and value types contain no pointers, the GC skips scanning the map's contents entirely (the short sketch below illustrates the difference).
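To make that concrete, here is a small illustrative sketch (not from the original article; absolute timings depend on your machine and Go version). It fills one map whose key/value types contain no pointers and one whose keys/values are strings, then times a forced GC after each:

package main

import (
   "fmt"
   "runtime"
   "strconv"
   "time"
)

const N = 1_000_000 // one million entries, matching the "million level" scenario

func main() {
   // Key/value contain no pointers: the GC can skip the map contents entirely.
   intMap := make(map[int]int, N)
   for i := 0; i < N; i++ {
      intMap[i] = i
   }
   start := time.Now()
   runtime.GC()
   fmt.Printf("map[int]int,       GC took: %s\n", time.Since(start))

   // Keys and values are strings (pointer-backed): every entry must be scanned.
   strMap := make(map[string]string, N)
   for i := 0; i < N; i++ {
      s := strconv.Itoa(i)
      strMap[s] = s
   }
   start = time.Now()
   runtime.GC()
   fmt.Printf("map[string]string, GC took: %s\n", time.Since(start))

   runtime.KeepAlive(intMap)
   runtime.KeepAlive(strMap)
}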

The following uses freecache as an example to analyze its implementation principle. The code example is as follows:

package main

import (
   "fmt"
   "runtime"
   "strconv"
   "time"

   "github.com/coocood/freecache"
)

// N is the number of entries to write; assume one million for this test.
const N = 1_000_000

func main() {
   cacheSize := 100 * 1024 * 1024
   cache := freecache.NewCache(cacheSize)

   for i := 0; i < N; i++ {
      str := strconv.Itoa(i)
      _ = cache.Set([]byte(str), []byte(str), 1)
   }

   now := time.Now()
   runtime.GC()
   fmt.Printf("freecache, GC took: %s\n", time.Since(now))
   _, _ = cache.Get([]byte("aa"))

   now = time.Now()
   for i := 0; i < N; i++ {
      str := strconv.Itoa(i)
      _, _ = cache.Get([]byte(str))
   }
   fmt.Printf("freecache, Get took: %s\n\n", time.Since(now))
}

1 Initialization

freecache.NewCache initializes the local cache, with size specifying the total storage space. freecache creates 256 segments, each an independent storage unit; locking in freecache is also per segment. Each segment owns a ring buffer (ringbuf) whose initial size is size/256. The reason freecache can claim "zero GC" is that its pointer count is fixed at 512: 2 per segment, namely rb and slotsData (note that a slice header is a pointer type).

type segment struct {
   rb            RingBuf // ring buffer that stores data
   segId         int
   _             uint32  // padding / placeholder
   missCount     int64
   hitCount      int64
   entryCount    int64
   totalCount    int64      // number of entries in ring buffer, including deleted entries.
   totalTime     int64      // used to calculate least recent used entry.
   timer         Timer      // Timer giving current time
   totalEvacuate int64      // used for debug
   totalExpired  int64      // used for debug
   overwrites    int64      // used for debug
   touched       int64      // used for debug
   vacuumLen     int64      // up to vacuumLen, new data can be written without overwriting old data.
   slotLens      [256]int32 // The actual length for every slot.
   slotCap       int32      // max number of entry pointers a slot can hold.
   slotsData     []entryPtr // index entries for this segment
}

func NewCacheCustomTimer(size int, timer Timer) (cache *Cache) {
    cache = new(Cache)
    for i := 0; i < segmentCount; i++ {
        cache.segments[i] = newSegment(size/segmentCount, i, timer)
    }
    return
}
func newSegment(bufSize int, segId int, timer Timer) (seg segment) {
    seg.rb = NewRingBuf(bufSize, 0)
    seg.segId = segId
    seg.timer = timer
    seg.vacuumLen = int64(bufSize)
    seg.slotCap = 1
    seg.slotsData = make([]entryPtr, 256*seg.slotCap) // each slotsData starts with 256 slots of capacity 1
    return
}
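One practical note on initialization: because NewCache pre-allocates the whole buffer up front, a large cache immediately enlarges the live heap. The freecache docs suggest pairing a large cache with a lower GC percentage via runtime/debug.SetGCPercent; a minimal usage sketch, where the 20% value is only an illustrative choice, not a recommendation from this article:

package main

import (
   "fmt"
   "runtime/debug"

   "github.com/coocood/freecache"
)

func main() {
   // 100 MB cache: pre-allocated and split into 256 segments,
   // so each segment owns a ring buffer of roughly 400 KB.
   cacheSize := 100 * 1024 * 1024
   cache := freecache.NewCache(cacheSize)

   // With a large, long-lived buffer the default GOGC=100 lets the heap grow
   // substantially between collections; a lower value trades CPU for a tighter heap.
   debug.SetGCPercent(20)

   fmt.Println("entries:", cache.EntryCount())
}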

2 Reading and writing process

freecache keys and values are both byte slices ([]byte), so you have to serialize and deserialize them yourself; if you cache complex objects, the cost of serialization and deserialization cannot be ignored. Let's look at the Set flow first:

_ = cache.Set([]byte(str), []byte(str), 1)

The Set flow first hashes the key into a uint64 hashVal. Its low 8 bits (segID) index into the segment array, bits 8-15 (slotId) index into slotsData, and bits 16-31 (hash16) are used to locate the concrete entry inside that slot's []entryPtr region, which requires a lookup. Note that each slot's []entryPtr region has size slotCap (initially 1), and slotCap is doubled whenever the slot needs to grow.

Each segment has its own lock (sync.Mutex), so freecache can sustain a high level of concurrent access, unlike sync.Map, which has a single internal lock.
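A small hedged sketch of what this buys the caller (hypothetical usage, not from the article): several goroutines writing and reading concurrently mostly contend on different segment mutexes rather than on one global lock:

package main

import (
   "fmt"
   "strconv"
   "sync"

   "github.com/coocood/freecache"
)

func main() {
   cache := freecache.NewCache(10 * 1024 * 1024)

   var wg sync.WaitGroup
   for w := 0; w < 8; w++ {
      wg.Add(1)
      go func(w int) {
         defer wg.Done()
         for i := 0; i < 1000; i++ {
            // Keys hash to different segments, so these Set/Get calls
            // rarely block each other.
            key := []byte(strconv.Itoa(w*1000 + i))
            _ = cache.Set(key, key, 60)
            _, _ = cache.Get(key)
         }
      }(w)
   }
   wg.Wait()
   fmt.Println("entries:", cache.EntryCount())
}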

func (cache *Cache) Set(key, value []byte, expireSeconds int) (err error) {
   hashVal := hashFunc(key)
   segID := hashVal & segmentAndOpVal // low 8 bits
   cache.locks[segID].Lock()          // per-segment lock
   err = cache.segments[segID].set(key, value, hashVal, expireSeconds)
   cache.locks[segID].Unlock()
   return
}

func (seg *segment) set(key, value []byte, hashVal uint64, expireSeconds int) (err error) {
   // (the length checks and the computation of `now` and `expireAt` from seg.timer
   // and expireSeconds are omitted from this excerpt)
   slotId := uint8(hashVal >> 8)
   hash16 := uint16(hashVal >> 16)
   slot := seg.getSlot(slotId)
   idx, match := seg.lookup(slot, hash16, key)

   var hdrBuf [ENTRY_HDR_SIZE]byte
   hdr := (*entryHdr)(unsafe.Pointer(&hdrBuf[0]))
   if match { // key already exists: update it in place
      matchedPtr := &slot[idx]
      seg.rb.ReadAt(hdrBuf[:], matchedPtr.offset)
      hdr.slotId = slotId
      hdr.hash16 = hash16
      hdr.keyLen = uint16(len(key))
      originAccessTime := hdr.accessTime
      hdr.accessTime = now
      hdr.expireAt = expireAt
      hdr.valLen = uint32(len(value))
      if hdr.valCap >= hdr.valLen {
         // the existing value slot is large enough to hold the new value
         atomic.AddInt64(&seg.totalTime, int64(hdr.accessTime)-int64(originAccessTime))
         seg.rb.WriteAt(hdrBuf[:], matchedPtr.offset)
         seg.rb.WriteAt(value, matchedPtr.offset+ENTRY_HDR_SIZE+int64(hdr.keyLen))
         atomic.AddInt64(&seg.overwrites, 1)
         return
      }
      // delete the matching entryPtr (this memmoves part of slotsData); in the ringbuf the entry is only marked as deleted
      seg.delEntryPtr(slotId, slot, idx)
      match = false
      // increase capacity and limit entry len.
      for hdr.valCap < hdr.valLen {
         hdr.valCap *= 2
      }
      if hdr.valCap > uint32(maxKeyValLen-len(key)) {
         hdr.valCap = uint32(maxKeyValLen - len(key))
      }
   } else { // key not found: prepare a new entry header
      hdr.slotId = slotId
      hdr.hash16 = hash16
      hdr.keyLen = uint16(len(key))
      hdr.accessTime = now
      hdr.expireAt = expireAt
      hdr.valLen = uint32(len(value))
      hdr.valCap = uint32(len(value))
      if hdr.valCap == 0 { // avoid infinite loop when increasing capacity.
         hdr.valCap = 1
      }
   }
   
   // actual entry length = ENTRY_HDR_SIZE (24 bytes) + len(key) + value capacity
   entryLen := ENTRY_HDR_SIZE + int64(len(key)) + int64(hdr.valCap)
   slotModified := seg.evacuate(entryLen, slotId, now)
   if slotModified {
      // the slot has been modified during evacuation, so we need to look up 'idx' again;
      // otherwise there would be an index-out-of-bound error.
      slot = seg.getSlot(slotId)
      idx, match = seg.lookup(slot, hash16, key)
      // assert(match == false)
   }
   newOff := seg.rb.End()
   seg.insertEntryPtr(slotId, hash16, newOff, idx, hdr.keyLen)
   seg.rb.Write(hdrBuf[:])
   seg.rb.Write(key)
   seg.rb.Write(value)
   seg.rb.Skip(int64(hdr.valCap - hdr.valLen))
   atomic.AddInt64(&seg.totalTime, int64(now))
   atomic.AddInt64(&seg.totalCount, 1)
   seg.vacuumLen -= entryLen
   return
}

seg.evacuate checks whether the ringbuf has enough free space to store the key/value. If not, it starts scanning from the end of the free space, i.e. the beginning of the data that is about to be evicted (oldOff := seg.rb.End() + seg.vacuumLen - seg.rb.Size()). If the entry there has already been logically deleted, has expired, or counts as least recently used, that block of memory is reclaimed directly. Otherwise the entry is moved from the head of the ring to its tail and its index updated. If this moving happens more than 5 times in a row without freeing space, the current entry is forcibly reclaimed to satisfy the memory requirement.

After seg.evacuate runs, the required space is guaranteed to be available, and the index and data are then written. insertEntryPtr performs the index write; when the number of elements in a slot's []entryPtr exceeds seg.slotCap (initially 1), the slot has to be expanded, see seg.expand, which is not shown here (a simplified sketch of the idea follows below).
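As a rough illustration of that doubling step, here is a simplified sketch of the idea, not the library's actual seg.expand (entryPtr below is a stand-in for freecache's index-entry type): all 256 slots live in one flat slice of length 256*slotCap, so doubling slotCap means allocating a wider flat slice and copying each slot's used entries into its new region.

// entryPtr stands in for freecache's index entry (ring-buffer offset, hash16, key length).
type entryPtr struct {
   offset int64
   hash16 uint16
   keyLen uint16
}

// expandSlots doubles the per-slot capacity: slotsData holds 256 slots of
// slotCap entries each, and slotLens records how many entries each slot uses.
func expandSlots(slotsData []entryPtr, slotLens *[256]int32, slotCap int32) ([]entryPtr, int32) {
   newCap := slotCap * 2
   newData := make([]entryPtr, 256*int(newCap))
   for slot := int32(0); slot < 256; slot++ {
      oldOff := slot * slotCap
      newOff := slot * newCap
      copy(newData[newOff:newOff+slotLens[slot]], slotsData[oldOff:oldOff+slotLens[slot]])
   }
   return newData, newCap
}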

To write to ringbuf, just execute rb.Write.

func (seg *segment) evacuate(entryLen int64, slotId uint8, now uint32) (slotModified bool) {
   var oldHdrBuf [ENTRY_HDR_SIZE]byte
   consecutiveEvacuate := 0
   for seg.vacuumLen < entryLen {
      oldOff := seg.rb.End() + seg.vacuumLen - seg.rb.Size()
      seg.rb.ReadAt(oldHdrBuf[:], oldOff)
      oldHdr := (*entryHdr)(unsafe.Pointer(&oldHdrBuf[0]))
      oldEntryLen := ENTRY_HDR_SIZE + int64(oldHdr.keyLen) + int64(oldHdr.valCap)
      if oldHdr.deleted { // already marked as deleted
         consecutiveEvacuate = 0
         atomic.AddInt64(&seg.totalTime, -int64(oldHdr.accessTime))
         atomic.AddInt64(&seg.totalCount, -1)
         seg.vacuumLen += oldEntryLen
         continue
      }
      expired := oldHdr.expireAt != 0 && oldHdr.expireAt < now
      leastRecentUsed := int64(oldHdr.accessTime)*atomic.LoadInt64(&seg.totalCount) <= atomic.LoadInt64(&seg.totalTime)
      if expired || leastRecentUsed || consecutiveEvacuate > 5 {
         // the entry can be reclaimed (expired, least recently used, or forced after 5 consecutive moves)
         seg.delEntryPtrByOffset(oldHdr.slotId, oldHdr.hash16, oldOff)
         if oldHdr.slotId == slotId {
            slotModified = true
         }
         consecutiveEvacuate = 0
         atomic.AddInt64(&seg.totalTime, -int64(oldHdr.accessTime))
         atomic.AddInt64(&seg.totalCount, -1)
         seg.vacuumLen += oldEntryLen
         if expired {
            atomic.AddInt64(&seg.totalExpired, 1)
         } else {
            atomic.AddInt64(&seg.totalEvacuate, 1)
         }
      } else {
         // evacuate an old entry that has been accessed recently for better cache hit rate.
         newOff := seg.rb.Evacuate(oldOff, int(oldEntryLen))
         seg.updateEntryPtr(oldHdr.slotId, oldHdr.hash16, oldOff, newOff)
         consecutiveEvacuate++
         atomic.AddInt64(&seg.totalEvacuate, 1)
      }
   }
   return
}

Freecache's Get flow is comparatively simple: hash the key to find the segment, use slotId to find the slot's index region, then binary-search (by hash16) for the matching entry. If nothing matches, ErrNotFound is returned directly; otherwise some timing statistics are updated. The Get flow also updates the cache hit-rate counters.

func (cache *Cache) Get(key []byte) (value []byte, err error) {
   hashVal := hashFunc(key)
   segID := hashVal & segmentAndOpVal
   cache.locks[segID].Lock()
   value, _, err = cache.segments[segID].get(key, nil, hashVal, false)
   cache.locks[segID].Unlock()
   return
}
func (seg *segment) get(key, buf []byte, hashVal uint64, peek bool) (value []byte, expireAt uint32, err error) {
   hdr, ptr, err := seg.locate(key, hashVal, peek) // hash + locate the entry
   if err != nil {
      return
   }
   expireAt = hdr.expireAt
   if cap(buf) >= int(hdr.valLen) {
      value = buf[:hdr.valLen]
   } else {
      value = make([]byte, hdr.valLen)
   }

   seg.rb.ReadAt(value, ptr.offset+ENTRY_HDR_SIZE+int64(hdr.keyLen))
   // (the access-time and hit-count bookkeeping is omitted from this excerpt)
   return
}

Once the entry is located, it is read straight from the ringbuf. Note that the value returned is normally placed in newly allocated memory, so every Get involves a copy of the []byte data (see the end-to-end sketch below).
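Putting Set and Get together from the caller's side: since keys and values are raw []byte, caching a struct means serializing it on the way in and deserializing the copied bytes on the way out. A minimal sketch using encoding/json (the User type and the key scheme are made up for illustration):

package main

import (
   "encoding/json"
   "fmt"

   "github.com/coocood/freecache"
)

// User is a made-up value type for the example.
type User struct {
   ID   int    `json:"id"`
   Name string `json:"name"`
}

func main() {
   cache := freecache.NewCache(10 * 1024 * 1024)

   // Serialize before Set: freecache only stores raw bytes.
   u := User{ID: 1, Name: "gopher"}
   buf, _ := json.Marshal(u)
   if err := cache.Set([]byte("user:1"), buf, 60); err != nil {
      panic(err)
   }

   // Get returns a fresh copy of the value bytes from the ring buffer,
   // which then has to be deserialized again.
   raw, err := cache.Get([]byte("user:1"))
   if err != nil { // freecache.ErrNotFound when the key is missing or expired
      panic(err)
   }
   var got User
   _ = json.Unmarshal(raw, &got)
   fmt.Printf("%+v\n", got)
}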

3 Summary

Judging from stress tests of several common cache frameworks, Set performance varies noticeably but not by an order of magnitude, and Get performance differs very little. So for most scenarios you don't need to obsess over Set/Get performance; focus instead on whether the features meet your business needs and on the GC impact. For a performance comparison, see: https://golang2.eddycjy.com/posts/ch5/04-performance/

There is one special caching scenario: keeping the entire data set in memory and replacing it wholesale on update. If freecache is used here and the size is not set properly, some data may be evicted earlier than expected. To avoid this, the size has to be set "large enough", while also keeping an eye on the resulting memory footprint (a rough sizing sketch follows).
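A rough way to reason about "large enough" (a back-of-the-envelope sketch under assumed numbers, not an official formula): each entry occupies ENTRY_HDR_SIZE (24 bytes) plus its key length plus its value capacity inside one of the 256 ring buffers, so estimate the total from the expected entry count and average sizes, then over-provision because entries spread unevenly across segments and value capacities can grow on overwrite.

package main

import (
   "fmt"

   "github.com/coocood/freecache"
)

func main() {
   // All numbers below are assumptions for illustration only.
   const (
      entryHdrSize = 24        // per-entry header size, as noted above
      numEntries   = 1_000_000 // expected number of cached entries
      avgKeyLen    = 32        // average key length in bytes
      avgValLen    = 256       // average value length in bytes
   )

   estimated := numEntries * (entryHdrSize + avgKeyLen + avgValLen)
   // Over-provision (here 2x): segments fill unevenly and overwrites may
   // grow an entry's value capacity, so a tight size can still cause evictions.
   cacheSize := estimated * 2

   cache := freecache.NewCache(cacheSize)
   fmt.Printf("estimated: %d bytes, allocated: %d bytes, entries: %d\n",
      estimated, cacheSize, cache.EntryCount())
}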

Source: juejin.cn