
How to achieve persistence in redis

silencement
Release: 2019-06-04 17:05:06


Redis is an indispensable service in current web development. Its strengths are obvious: compared with memcached, it can cache data and survive a restart without losing it, which is very convenient. So the question is, how does it do that?

RDB

RDB is one means of persistence: under certain conditions, it writes the data in memory to disk. So under what conditions does it write? It cannot write blindly. Writing after every single change would hurt performance, but waiting too long between writes means a crash in the meantime loses all the data, at which point you might as well use memcached. The redis configuration contains the following:

save 900 1

save 300 10

save 60 10000

This is a very critical piece of configuration and the core of RDB persistence. It means:

1. If at least 1 key has changed (insert or update) within 900 seconds, synchronize the data to disk

2. If at least 10 keys have changed (insert or update) within 300 seconds, synchronize the data to disk

3. If at least 10,000 keys have changed (insert or update) within 60 seconds, synchronize the data to disk
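The save points above can also be read and changed on a running instance with CONFIG GET and CONFIG SET. Below is a minimal sketch using the redis-py client; the host, port, and replacement thresholds are illustrative assumptions, not values from this article.

import redis  # third-party redis-py client, assumed to be installed

r = redis.Redis(host="localhost", port=6379, decode_responses=True)  # assumed local instance

# Read the current RDB save points, e.g. {'save': '900 1 300 10 60 10000'}
print(r.config_get("save"))

# Change them for this running instance only (illustrative values)
r.config_set("save", "300 1 60 100")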

How does redis know these time points and change counts? Two other critical pieces of state come in here: one is called the dirty counter, the other is called lastsave (the time of the last save). The dirty counter records the number of keys changed since the last save, and lastsave records when the last save was executed. For example, at the initial time time1, dirty is 0. Then 20 keys change, so dirty becomes 20. If the current time time2 satisfies time2 - time1 >= 300, the second condition is met, the data in memory is saved, dirty is reset to 0, and redis waits for the next condition to be triggered.
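The bookkeeping is easier to see as a small sketch. The following is a conceptual model of the dirty counter and lastsave timestamp described above, not Redis's actual C implementation (which runs inside its periodic serverCron job):

import time

SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]  # (seconds, changes), mirroring the config above

dirty = 0                # keys changed since the last save
lastsave = time.time()   # time of the last successful save

def on_write_command():
    # Every successful insert/update bumps the dirty counter
    global dirty
    dirty += 1

def check_save():
    # Called periodically: save and reset the counters when any save point is satisfied
    global dirty, lastsave
    now = time.time()
    for seconds, changes in SAVE_POINTS:
        if now - lastsave >= seconds and dirty >= changes:
            # Real Redis forks a child here and writes the RDB file (see bgsave below)
            dirty = 0
            lastsave = now
            break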

Now suppose 100,000 keys change within 60 seconds. Here comes the problem: a large burst of disk IO would block the redis main process, and no commands would be executed during that time. The answer is bgsave: the main redis process forks a child process that is dedicated to performing the RDB persistence work.
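You can also request a background save yourself and check when the last snapshot finished. A minimal redis-py example, assuming a local instance:

import redis

r = redis.Redis()    # assumed local instance on the default port

r.bgsave()           # fork a child process and write the RDB file in the background
print(r.lastsave())  # time of the last successful save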

The saved file is in a binary format. If the server goes down, recovery needs no human intervention: redis automatically reads the file back from disk when it restarts.
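Where that disk file lives is itself configurable: by default it is dump.rdb in the server's working directory, and redis loads it from there on startup. A quick way to check, again with redis-py and an assumed local instance:

import redis

r = redis.Redis(decode_responses=True)

print(r.config_get("dbfilename"))  # usually {'dbfilename': 'dump.rdb'}
print(r.config_get("dir"))         # the directory the snapshot is written to and read from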

AOF

Unlike RDB, AOF stores the commands you execute. When the AOF feature is turned on, an executed update command is not written directly to the AOF file; it is first written to an AOF buffer (aof_buf). Of course we cannot keep writing only to the buffer, since the buffer is also just memory, so when is it synchronized to disk? Redis has this configuration as well:

appendfsync always

appendfsync everysec

appendfsync no

These mean:

1. always: synchronize the command to disk after every update

2. everysec: if the last synchronization was more than one second ago, synchronize now

3. no: do not force a sync; let the operating system decide when it has time to flush
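To try this out, AOF can be switched on either in redis.conf (appendonly yes) or at runtime. A minimal redis-py sketch, assuming a local instance:

import redis

r = redis.Redis(decode_responses=True)

r.config_set("appendonly", "yes")        # turn AOF on for this running instance
r.config_set("appendfsync", "everysec")  # the middle-ground policy discussed above
print(r.config_get("appendfsync"))

Note that a runtime change like this is lost on restart unless it is also written back to redis.conf (for example with CONFIG REWRITE).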

Comparing the three: the first causes frequent IO and high IO pressure, but has the smallest chance of losing data. The second puts little pressure on IO and loses at most about one second of data. The third puts the least pressure on IO, but the amount of data that can be lost is too unpredictable. All things considered, the second option is generally chosen.

There is still one more question. Suppose I execute INCR num 100 times. Logically num is 100, and the AOF now holds 100 identical commands. Nothing is wrong with that, but what is the difference between executing INCR num 100 times and executing SET num 100 once? The result is the same, yet the former takes 99 extra entries of space, which is very wasteful. This is why AOF rewriting appeared. How is it done? The principle is simple: read the current value from the database and replace the pile of commands with a single record that reproduces it. Rewriting takes time, so it is handled by a child process. What about new commands that arrive while the rewrite is in progress? The old trick again: they are written to a buffer, and after the rewrite completes, the buffered commands are appended to the new AOF file, which then replaces the old one. That completes the rewrite.
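A rewrite can be requested manually, and redis can also trigger one automatically once the AOF grows past configurable thresholds. A redis-py sketch, assuming a local instance; the values shown are the stock redis.conf defaults:

import redis

r = redis.Redis()

# Ask redis to rewrite the AOF in a background child process right now
r.bgrewriteaof()

# Or rely on the automatic trigger: rewrite once the AOF has doubled in size
# since the last rewrite, but only after it is at least 64 MB
r.config_set("auto-aof-rewrite-percentage", "100")
r.config_set("auto-aof-rewrite-min-size", "67108864")  # 64 MB expressed in bytes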

