Recently we have wanted to build a multi-level caching mechanism for our system, but I always feel like something is missing.
Environment:
Load balancing, master-slave database separation, a standalone Redis instance (could become multiple machines in the future)
Now the initial idea:
<code>Browser cache -> Local file cache -> Memory cache (Redis) -> DB</code>
When a user first accesses the web application, we set the browser cache for the response, and also populate the local file cache and the memory cache.
When other users visit, I think the lookup steps are as follows (see the sketch after this list):
Check whether there is a browser cache
Check whether the local machine has a file cache
Check the memory cache (Redis)
Query the DB
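Roughly, in pseudocode terms, something like the sketch below (Python is only for illustration; the cache directory, key names, and TTL values are placeholders I made up, not final choices):
<pre><code>import json
import os
import time

import redis  # any Redis client would do; redis-py assumed here

r = redis.Redis(host="localhost", port=6379)
FILE_CACHE_DIR = "/tmp/file_cache"  # placeholder path
FILE_TTL = 60                       # file cache TTL in seconds (example value)
REDIS_TTL = 300                     # Redis TTL in seconds (example value)

def load_from_db(key):
    """Placeholder for the real DB query."""
    return {"key": key, "value": "..."}

def get(key):
    # 1. The browser cache is handled by HTTP headers (Cache-Control / ETag),
    #    so a request that reaches this function starts at the file cache.
    path = os.path.join(FILE_CACHE_DIR, key)

    # 2. Local file cache: valid if the file exists and has not expired.
    if os.path.exists(path) and time.time() - os.path.getmtime(path) < FILE_TTL:
        with open(path) as f:
            return json.load(f)

    # 3. Memory cache (Redis).
    cached = r.get(key)
    if cached is not None:
        data = json.loads(cached)
    else:
        # 4. DB, then fill Redis on the way back up.
        data = load_from_db(key)
        r.setex(key, REDIS_TTL, json.dumps(data))

    # Refill the local file cache for the next request on this machine.
    os.makedirs(FILE_CACHE_DIR, exist_ok=True)
    with open(path, "w") as f:
        json.dump(data, f)
    return data
</code></pre>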
My question is:
I feel that something is missing in one of the steps, and that the expiration times for the (multi-level) caches are hard to choose.
Also, is the local file cache (checking expiration, reading / deleting / regenerating files) worth it compared with going straight to the memory cache (Redis) over a single connection?
So I would like to ask whether this basic caching mechanism is suitable, or whether there are any shortcomings that could be improved. Thank you!
A multi-level cache can reduce the pressure on the system and greatly reduce RT (response time). However, one aspect that has to be considered is how the multiple levels are managed, which is exactly the problem you raise in the question: using a multi-level cache does not make it go away. As for how to invalidate the multi-level cache, you can try using a local timer to refresh the cache at intervals.
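For instance, a minimal sketch of that timer-based refresh, assuming a simple in-process dict as the local cache (the interval and the reload function are placeholders):
<pre><code>import threading

local_cache = {}
REFRESH_INTERVAL = 30  # seconds; choose based on how stale the data may be (example value)

def reload_hot_data():
    """Placeholder: re-read the hot keys from Redis or the DB."""
    return {"hot_key_1": "...", "hot_key_2": "..."}

def refresh_loop():
    # Replace the whole local cache at a fixed interval instead of
    # trying to expire each local entry individually.
    global local_cache
    local_cache = reload_hot_data()
    threading.Timer(REFRESH_INTERVAL, refresh_loop).start()

refresh_loop()  # kick off the periodic refresh when the app starts
</code></pre>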
In fact, the file cache can be replaced by a local in-memory cache. Designing it as a file cache is also possible, but when the volume of local disk I/O gets large, I am afraid it will not hold up. As for which one is more efficient, and what the network overhead looks like, that depends on the actual situation; run a load test and see.
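A minimal sketch of what swapping the file cache for a local in-memory cache with per-entry TTL could look like (the TTL value is only an example):
<pre><code>import time

class LocalTTLCache:
    """Tiny in-process cache with per-entry expiry, as a stand-in for the file cache."""

    def __init__(self, ttl=60):
        self.ttl = ttl
        self.store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None
        expires_at, value = entry
        if time.time() > expires_at:
            del self.store[key]  # lazily drop expired entries
            return None
        return value

    def set(self, key, value):
        self.store[key] = (time.time() + self.ttl, value)

# Usage: check the local cache first, fall back to Redis/DB on a miss.
cache = LocalTTLCache(ttl=60)
cache.set("user:1", {"name": "example"})
print(cache.get("user:1"))
</code></pre>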
Multi-level caching is more about cache penetration and application robustness. When there is a problem with the centralized cache, the application can continue to run; and hot data kept in a local memory cache never needs to touch the centralized cache at all, which reduces the pressure on it. In that respect, file caching has an edge over Redis's centralized caching.
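For example, a rough sketch of that degradation idea: serve hot keys from process memory first, and if the centralized cache is unreachable, fall back to the DB instead of failing (the function names are placeholders):
<pre><code>import redis

r = redis.Redis(host="localhost", port=6379)
hot_cache = {}  # hot data kept in process memory

def fetch_from_db(key):
    """Placeholder for the real DB query."""
    return "db-value"

def get(key):
    # Hot keys never touch the centralized cache at all.
    if key in hot_cache:
        return hot_cache[key]
    try:
        value = r.get(key)
        if value is not None:
            return value
    except redis.exceptions.ConnectionError:
        # Centralized cache is down: keep the application running
        # by going straight to the DB (or the local/file cache).
        pass
    return fetch_from_db(key)
</code></pre>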