
Let’s talk about Redis-based distributed locks in distributed systems

青灯夜游 · 2021-10-29 10:52:11

With a lock in place, can concurrency issues still occur? Do you really understand Redis distributed locks? This article walks through Redis-based distributed locks in distributed systems. I hope it helps!


In a newly taken-over project, accounts would occasionally fail to balance. The explanation the previous technical lead left before departing was: "After investigation, the cause was not found. I was too busy afterwards to solve it. It may be a framework issue..."

Now that the project has been delivered, such problems must be solved. After sorting through all of the accounting logic, we finally found the cause: concurrent database operations on hot accounts. Taking this issue as a starting point, let's talk about Redis-based distributed locks in distributed systems, and break down the cause of the problem and its solutions along the way.

Cause analysis

System concurrency is not high, and although there are hot accounts, the problem is not that serious. The root cause lies in the system architecture, which artificially creates concurrency. The scenario is as follows: a merchant imports a batch of data, and the system pre-processes it and increases or decreases the account balance.

At the same time, a scheduled task also scans and updates the same accounts. Moreover, operations on the same account are spread across multiple systems, producing hot accounts.

To solve this problem at the architecture level, the accounting logic can be extracted and centralized in a single system, which coordinates all database transactions and their execution order. At the technical level, access to hot accounts can be serialized through a locking mechanism.

This article explains in detail how to implement distributed locks for hot accounts.

Analysis of locks

In Java's multi-threaded environment, the following kinds of locks are usually available:

  • JVM-level locks, commonly synchronized, Lock, etc.;
  • Database locks, such as optimistic locks and pessimistic locks;
  • Distributed locks;

JVM-level locks can guarantee thread safety within a single service, for example when multiple threads access or modify a global variable. But once the system is deployed as a cluster, JVM-level local locks are powerless.
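For illustration, here is a sketch of the first category: a synchronized block is enough to make a shared counter safe inside one JVM. (JvmLockDemo is a hypothetical demo class, not part of the project.)

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class JvmLockDemo {
  private int amount = 0;
  private final Object lock = new Object();

  // synchronized section: only one thread mutates amount at a time
  public void addAmount(int delta) {
    synchronized (lock) {
      amount += delta;
    }
  }

  public int getAmount() {
    return amount;
  }

  // run n tasks on a pool, each adding 1; with the lock the result is always n
  public static int runConcurrently(int n) throws InterruptedException {
    JvmLockDemo demo = new JvmLockDemo();
    ExecutorService pool = Executors.newFixedThreadPool(10);
    for (int i = 0; i < n; i++) {
      pool.execute(() -> demo.addAmount(1));
    }
    pool.shutdown();
    pool.awaitTermination(10, TimeUnit.SECONDS);
    return demo.getAmount();
  }
}
```

Remove the synchronized block and the count will occasionally come up short, which is exactly the lost-update symptom discussed below. The point of this article is that this guarantee evaporates once a second JVM joins the cluster.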

Pessimistic Lock and Optimistic Lock

As in the case above, a hot account is a shared resource in a distributed system, and we usually use database locks or distributed locks to solve the problem.

Database locks are divided into optimistic locks and pessimistic locks.

Pessimistic locking is implemented on top of the exclusive locks provided by the database (MySQL's InnoDB). When a transaction executes a select ... for update statement, MySQL places an exclusive lock on every row in the result set; other transactions block when trying to update or delete those records. This forces modifications of the shared resource to execute sequentially.

Optimistic locking is the counterpart of pessimistic locking. It assumes that conflicts are rare, so conflicts are only formally detected when the update is committed. If a conflict is found, an error is returned and the caller decides what to do. Optimistic locking suits read-heavy, write-light scenarios and can improve throughput. It is usually implemented by recording a status field or adding a version number.
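The version-number approach can be sketched without a database: the in-memory Row below stands in for a table row, and compareAndSet plays the role of `UPDATE ... SET balance = ?, version = version + 1 WHERE id = ? AND version = ?`. (Class and method names are illustrative.)

```java
import java.util.concurrent.atomic.AtomicReference;

public class OptimisticLockDemo {
  // an in-memory stand-in for a database row: (version, balance)
  static final class Row {
    final int version;
    final int balance;
    Row(int version, int balance) { this.version = version; this.balance = balance; }
  }

  private final AtomicReference<Row> row = new AtomicReference<>(new Row(0, 100));

  // succeeds only if the version the caller read is still current
  public boolean tryDeduct(int expectedVersion, int amount) {
    Row current = row.get();
    if (current.version != expectedVersion || current.balance < amount) {
      return false; // conflict (or insufficient funds) detected at commit time
    }
    return row.compareAndSet(current, new Row(current.version + 1, current.balance - amount));
  }

  public int balance() { return row.get().balance; }
  public int version() { return row.get().version; }
}
```

If two callers both read version 0, the first commit bumps the version to 1 and the second is rejected instead of silently overwriting, which is precisely the failure analyzed in the next section.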

Pessimistic lock failure scenario

A pessimistic lock was used in the project, yet it failed. This is a common misunderstanding when using pessimistic locking; let's analyze it below.

The correct flow for pessimistic locking:

  • Lock the record with select ... for update;
  • Calculate the new balance, modify the amount, and persist it;
  • Release the lock when execution completes;

The frequently mistaken flow:

  • Query the account balance and calculate the new balance;
  • Lock the record with select ... for update;
  • Modify the amount and persist it;
  • Release the lock when execution completes;

In the wrong flow, suppose services A and B both query a balance of 100; A deducts 50 and B deducts 40. A locks the record and updates the database to 50; after A releases the lock, B locks the record and updates the database to 60. B has clearly overwritten A's update. The fix is to widen the scope of the lock: acquire it before calculating the new balance.

Pessimistic locks put considerable pressure on the database, so in practice optimistic locks or distributed locks are usually chosen according to the scenario.

Now to the main topic: implementing distributed locks on Redis.

Redis Distributed Lock Practical Exercise

Spring Boot, Redis, and Lua scripts are used here to demonstrate the implementation of distributed locks. To keep things simple, Redis in the examples plays both the role of the distributed lock and the role of the database.

Scenario construction

In a cluster environment, multiple nodes operate on the balance of the same account. Basic steps:

  • Read the user amount from the database;
  • The program modifies the amount;
  • Then the latest amount is stored in the database;

Below we start with code that has no locking or synchronization at all, and gradually derive the final distributed lock.

Basic integration and class construction

Prepare a basic business environment without locking.

First introduce relevant dependencies into the Spring Boot project:

<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-starter-web</artifactId>
</dependency>

The account entity class UserAccount:

public class UserAccount {

  // user ID
  private String userId;
  // account balance
  private int amount;

  // add to the account balance
  public void addAmount(int amount) {
    this.amount = this.amount + amount;
  }
  // constructor and getters/setters omitted
}

Create a thread implementation class AccountOperationThread:

public class AccountOperationThread implements Runnable {

  private final static Logger logger = LoggerFactory.getLogger(AccountOperationThread.class);

  private static final Long RELEASE_SUCCESS = 1L;

  private String userId;

  private RedisTemplate<Object, Object> redisTemplate;

  public AccountOperationThread(String userId, RedisTemplate<Object, Object> redisTemplate) {
    this.userId = userId;
    this.redisTemplate = redisTemplate;
  }

  @Override
  public void run() {
    noLock();
  }

  /**
   * No locking
   */
  private void noLock() {
    try {
      Random random = new Random();
      // simulate business processing in the thread
      TimeUnit.MILLISECONDS.sleep(random.nextInt(100) + 1);
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
    // simulate fetching the user account from the database
    UserAccount userAccount = (UserAccount) redisTemplate.opsForValue().get(userId);
    // amount + 1
    userAccount.addAmount(1);
    logger.info(Thread.currentThread().getName() + " : user id : " + userId + " amount : " + userAccount.getAmount());
    // simulate saving back to the database
    redisTemplate.opsForValue().set(userId, userAccount);
  }
}

The instantiation of RedisTemplate is handed over to Spring Boot:

@Configuration
public class RedisConfig {

  @Bean
  public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
    RedisTemplate<Object, Object> redisTemplate = new RedisTemplate<>();
    redisTemplate.setConnectionFactory(redisConnectionFactory);
    Jackson2JsonRedisSerializer<Object> jackson2JsonRedisSerializer =
        new Jackson2JsonRedisSerializer<>(Object.class);
    ObjectMapper objectMapper = new ObjectMapper();
    objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
    // note: enableDefaultTyping is deprecated in newer Jackson versions (activateDefaultTyping replaces it)
    objectMapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
    jackson2JsonRedisSerializer.setObjectMapper(objectMapper);
    // set the serialization rules for values and keys
    redisTemplate.setValueSerializer(jackson2JsonRedisSerializer);
    redisTemplate.setKeySerializer(new StringRedisSerializer());
    redisTemplate.afterPropertiesSet();
    return redisTemplate;
  }
}

Finally, prepare a TestController to trigger the multi-threaded operation:

@RestController
public class TestController {

  private final static Logger logger = LoggerFactory.getLogger(TestController.class);

  private static ExecutorService executorService = Executors.newFixedThreadPool(10);

  @Autowired
  private RedisTemplate<Object, Object> redisTemplate;

  @GetMapping("/test")
  public String test() throws InterruptedException {
    // initialize user_001 in Redis with a balance of 0
    redisTemplate.opsForValue().set("user_001", new UserAccount("user_001", 0));
    // start 10 threads; each adds 1 to the account
    for (int i = 0; i < 10; i++) {
      logger.info("creating thread i=" + i);
      executorService.execute(new AccountOperationThread("user_001", redisTemplate));
    }

    // main thread sleeps 1 second to let the worker threads finish
    TimeUnit.MILLISECONDS.sleep(1000);
    // query the user_001 account in Redis
    UserAccount userAccount = (UserAccount) redisTemplate.opsForValue().get("user_001");
    logger.info("user id : " + userAccount.getUserId() + " amount : " + userAccount.getAmount());
    return "success";
  }
}

Run the program. Normally, with 10 threads each adding 1, the result should be 10. But run it a few times and you will find the results vary widely, and are usually less than 10.

[pool-1-thread-5] c.s.redis.thread.AccountOperationThread  : pool-1-thread-5 : user id : user_001 amount : 1
[pool-1-thread-4] c.s.redis.thread.AccountOperationThread  : pool-1-thread-4 : user id : user_001 amount : 1
[pool-1-thread-3] c.s.redis.thread.AccountOperationThread  : pool-1-thread-3 : user id : user_001 amount : 1
[pool-1-thread-1] c.s.redis.thread.AccountOperationThread  : pool-1-thread-1 : user id : user_001 amount : 1
[pool-1-thread-1] c.s.redis.thread.AccountOperationThread  : pool-1-thread-1 : user id : user_001 amount : 2
[pool-1-thread-2] c.s.redis.thread.AccountOperationThread  : pool-1-thread-2 : user id : user_001 amount : 2
[pool-1-thread-5] c.s.redis.thread.AccountOperationThread  : pool-1-thread-5 : user id : user_001 amount : 2
[pool-1-thread-4] c.s.redis.thread.AccountOperationThread  : pool-1-thread-4 : user id : user_001 amount : 3
[pool-1-thread-1] c.s.redis.thread.AccountOperationThread  : pool-1-thread-1 : user id : user_001 amount : 4
[pool-1-thread-3] c.s.redis.thread.AccountOperationThread  : pool-1-thread-3 : user id : user_001 amount : 5
[nio-8080-exec-1] c.s.redis.controller.TestController      : user id : user_001 amount : 5

Taking the log above as an example, the first four threads all set the value to 1, which means the later threads overwrote the earlier modifications, so the final result is 5 instead of 10. Clearly a problem.

Redis synchronization lock implementation

Within a single JVM, the situation above could be fixed with thread locking. But in a distributed environment JVM-level locks do not work; a Redis-based synchronization lock can be used instead.

Basic idea: when the first thread enters, it writes a record to Redis. Subsequent threads check whether that record exists. If it does, the lock is held, so they wait or return; if it does not, they proceed with the business logic.

  /**
   * 1. When competing for the resource, check whether it is locked.
   * 2. If not locked, acquire and set the lock; otherwise wait for release.
   * 3. Release the lock after the business logic completes, yielding to other threads.
   * <p>
   * This scheme does NOT solve the synchronization problem. Reason: checking and
   * setting the lock are not atomic, so thread A may pass the check and, before
   * setting the lock, thread B passes the check as well.
   */
  private void redisLock() {
    Random random = new Random();
    try {
      TimeUnit.MILLISECONDS.sleep(random.nextInt(1000) + 1);
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
    while (true) {
      Object lock = redisTemplate.opsForValue().get(userId + ":syn");
      if (lock == null) {
        // lock is free -> set it -> break out of the loop
        logger.info(Thread.currentThread().getName() + ": lock acquired");
        redisTemplate.opsForValue().set(userId + ":syn", "lock");
        break;
      }
      try {
        // wait 500 ms and retry
        TimeUnit.MILLISECONDS.sleep(500);
      } catch (InterruptedException e) {
        e.printStackTrace();
      }
    }
    try {
      // simulate fetching the user account from the database
      UserAccount userAccount = (UserAccount) redisTemplate.opsForValue().get(userId);
      if (userAccount != null) {
        // update the amount
        userAccount.addAmount(1);
        logger.info(Thread.currentThread().getName() + " : user id : " + userId + " amount : " + userAccount.getAmount());
        // simulate saving back to the database
        redisTemplate.opsForValue().set(userId, userAccount);
      }
    } finally {
      // release the lock
      redisTemplate.delete(userId + ":syn");
      logger.info(Thread.currentThread().getName() + ": lock released");
    }
  }

In the while block, we first check whether the lock key for the user exists in Redis. If it does not, we set it (acquiring the lock) and break out of the loop; if it does, we sleep and retry.

The code above appears to implement locking, but when run it shows the same concurrency problems as the unlocked version. The reason: checking and setting the lock are not atomic. For example, two threads can both find the lock null and both set it; the race remains.

Redis Atomic Synchronization Lock

To fix this, checking and setting the lock must be made one atomic step. This can be done with the atomic API provided by spring-boot-data-redis:

// This method uses the Redis command: SETNX key value
// 1. If the key does not exist, the value is set and setIfAbsent returns true;
// 2. If the key already exists, nothing is set and setIfAbsent returns false;
// 3. The operation is atomic;
Boolean setIfAbsent(K var1, V var2);

The method above wraps Redis's SETNX command, which behaves as follows:

redis> SETNX mykey "Hello"
(integer) 1
redis> SETNX mykey "World"
(integer) 0
redis> GET mykey
"Hello"

The first SETNX on mykey succeeds and returns 1 because the key does not exist; the second returns 0 because the key already exists. Querying mykey shows the value from the first call is still in place. In other words, SETNX guarantees that a given key can be set successfully by only one client.
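The "first writer wins" semantics of SETNX have a convenient in-JVM analogue: ConcurrentHashMap.putIfAbsent. The sketch below is only an analogy for intuition, not the Redis client.

```java
import java.util.concurrent.ConcurrentHashMap;

public class SetNxAnalogy {
  private final ConcurrentHashMap<String, String> store = new ConcurrentHashMap<>();

  // mimics SETNX: returns true iff the key was absent and is now set
  public boolean setIfAbsent(String key, String value) {
    return store.putIfAbsent(key, value) == null;
  }

  public String get(String key) {
    return store.get(key);
  }
}
```

As with SETNX, the check-and-set happens as one atomic step inside putIfAbsent, which is what the earlier get-then-set loop was missing.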

With the API and its underlying behavior understood, the thread's implementation looks like this:

  /**
   * 1. Acquire the lock with an atomic operation.
   * 2. Competing threads loop and retry to acquire the lock.
   * 3. Release the lock when the business logic completes.
   */
  private void atomicityRedisLock() {
    // atomic operation supported by Spring Data Redis
    while (!redisTemplate.opsForValue().setIfAbsent(userId + ":syn", "lock")) {
      try {
        // wait 100 ms and retry
        TimeUnit.MILLISECONDS.sleep(100);
      } catch (InterruptedException e) {
        e.printStackTrace();
      }
    }
    logger.info(Thread.currentThread().getName() + ": lock acquired");
    try {
      // simulate fetching the user account from the database
      UserAccount userAccount = (UserAccount) redisTemplate.opsForValue().get(userId);
      if (userAccount != null) {
        // update the amount
        userAccount.addAmount(1);
        logger.info(Thread.currentThread().getName() + " : user id : " + userId + " amount : " + userAccount.getAmount());
        // simulate saving back to the database
        redisTemplate.opsForValue().set(userId, userAccount);
      }
    } finally {
      // release the lock
      redisTemplate.delete(userId + ":syn");
      logger.info(Thread.currentThread().getName() + ": lock released");
    }
  }

Run the code again and the result is correct, which means the distributed lock now serializes the threads successfully.

Deadlock of Redis distributed lock

Although the code above produces the correct result, if the application crashes it never reaches the finally block that releases the lock, and other threads can never acquire it.

The overloaded method of setIfAbsent can be used at this time:

Boolean setIfAbsent(K var1, V var2, long var3, TimeUnit var5);

Based on this method, the expiration time of the lock can be set. In this way, even if the thread that obtained the lock goes down, other threads can obtain the lock normally after the data in Redis expires.

The sample code is as follows:

private void atomicityAndExRedisLock() {
    try {
      // atomic operation supported by Spring Data Redis, with a 5-second expiry
      while (!redisTemplate.opsForValue().setIfAbsent(userId + ":syn",
          System.currentTimeMillis() + 5000, 5000, TimeUnit.MILLISECONDS)) {
        // wait 1 second and retry
        logger.info(Thread.currentThread().getName() + ": retrying lock acquisition");
        TimeUnit.MILLISECONDS.sleep(1000);
      }
      logger.info(Thread.currentThread().getName() + ": lock acquired --------");
      // if the application crashes here and the process exits, finally never runs;
      Thread.currentThread().interrupt();
      // business logic...
    } catch (InterruptedException e) {
      e.printStackTrace();
    } finally {
      // release the lock
      if (!Thread.currentThread().isInterrupted()) {
        redisTemplate.delete(userId + ":syn");
        logger.info(Thread.currentThread().getName() + ": lock released");
      }
    }
  }

Business timeout and daemon thread

Adding the Redis timeout above seems to solve the problem, but it introduces a new one.

For example, thread A normally finishes its work within 5 seconds, but occasionally takes longer. With a 5-second timeout, thread A acquires the lock, its business logic takes 6 seconds, the lock expires, and thread B acquires it while A is still running. When A finishes, it may then release B's lock.

There are two problems in the above scenario:

  • First, thread A and thread B may be executed at the same time, causing concurrency problems.
  • Second, thread A may release thread B's lock, causing a series of vicious cycles.

Of course, the value stored in Redis can be used to determine whether the lock belongs to thread A or thread B. But careful analysis shows the essence of the problem: thread A's business logic takes longer than the lock timeout.

Then there are two solutions:

  • First, set the timeout long enough that the business code always finishes before the lock expires;
  • Second, add a daemon thread that extends the expiry of a lock that is about to time out but has not yet been released;

The first approach requires estimating, in most cases, how long the business logic takes and setting the timeout accordingly, which is hard to guarantee.

The second approach dynamically extends the lock timeout through the daemon thread shown below.

public class DaemonThread implements Runnable {
  private final static Logger logger = LoggerFactory.getLogger(DaemonThread.class);

  // whether to keep guarding; set to false when the main thread finishes
  private volatile boolean daemon = true;
  // the lock key being guarded
  private String lockKey;

  private RedisTemplate<Object, Object> redisTemplate;

  public DaemonThread(String lockKey, RedisTemplate<Object, Object> redisTemplate) {
    this.lockKey = lockKey;
    this.redisTemplate = redisTemplate;
  }

  @Override
  public void run() {
    try {
      while (daemon) {
        long time = redisTemplate.getExpire(lockKey, TimeUnit.MILLISECONDS);
        // renew the lock if less than 1 second remains
        if (time < 1000) {
          logger.info("daemon thread: " + Thread.currentThread().getName() + " extending lock by 5000 ms");
          redisTemplate.expire(lockKey, 5000, TimeUnit.MILLISECONDS);
        }
        TimeUnit.MILLISECONDS.sleep(300);
      }
      logger.info("daemon thread: " + Thread.currentThread().getName() + " stopped");
    } catch (InterruptedException e) {
      e.printStackTrace();
    }
  }

  // called by the main thread to stop the daemon
  public void stop() {
    daemon = false;
  }
}

The thread above checks the lock's remaining TTL in Redis every 300 milliseconds; if less than 1 second remains, it extends the expiry by 5 seconds. When the main thread calls stop, the daemon thread also exits.

The corresponding code in the main thread:

private void deamonRedisLock() {
    // daemon thread
    DaemonThread daemonThread = null;
    // atomic operation supported by Spring Data Redis, with a 5-second expiry
    String uuid = UUID.randomUUID().toString();
    String value = Thread.currentThread().getId() + ":" + uuid;
    try {
      while (!redisTemplate.opsForValue().setIfAbsent(userId + ":syn", value, 5000, TimeUnit.MILLISECONDS)) {
        // wait 1 second and retry
        logger.info(Thread.currentThread().getName() + ": retrying lock acquisition");
        TimeUnit.MILLISECONDS.sleep(1000);
      }
      logger.info(Thread.currentThread().getName() + ": lock acquired ----");
      // start the daemon thread
      daemonThread = new DaemonThread(userId + ":syn", redisTemplate);
      Thread thread = new Thread(daemonThread);
      thread.start();
      // business logic runs for 10 seconds...
      TimeUnit.MILLISECONDS.sleep(10000);
    } catch (InterruptedException e) {
      e.printStackTrace();
    } finally {
      // releasing the lock also needs to be atomic; covered later with Redis + Lua
      String result = (String) redisTemplate.opsForValue().get(userId + ":syn");
      if (value.equals(result)) {
        redisTemplate.delete(userId + ":syn");
        logger.info(Thread.currentThread().getName() + ": lock released -----");
      }
      // stop the daemon thread
      if (daemonThread != null) {
        daemonThread.stop();
      }
    }
  }

After the lock is acquired, the daemon thread is started; it is shut down in the finally block.

Implementation based on Lua scripts

The logic above relies on the atomic operations provided by spring-boot-data-redis to make the lock check and set atomic. In a non-Spring-Boot project, the same can be achieved with Lua scripts.

First, define the Lua scripts for locking and unlocking and the corresponding DefaultRedisScript beans. Add the following to the RedisConfig configuration class:

@Configuration
public class RedisConfig {

  // lock script
  private static final String LOCK_SCRIPT = " if redis.call('setnx',KEYS[1],ARGV[1]) == 1 " +
      " then redis.call('expire',KEYS[1],ARGV[2]) " +
      " return 1 " +
      " else return 0 end ";
  private static final String UNLOCK_SCRIPT = "if redis.call('get', KEYS[1]) == ARGV[1] then return redis.call" +
      "('del', KEYS[1]) else return 0 end";

  // ... part of the code omitted

  @Bean
  public DefaultRedisScript<Boolean> lockRedisScript() {
    DefaultRedisScript<Boolean> defaultRedisScript = new DefaultRedisScript<>();
    defaultRedisScript.setResultType(Boolean.class);
    defaultRedisScript.setScriptText(LOCK_SCRIPT);
    return defaultRedisScript;
  }

  @Bean
  public DefaultRedisScript<Long> unlockRedisScript() {
    DefaultRedisScript<Long> defaultRedisScript = new DefaultRedisScript<>();
    defaultRedisScript.setResultType(Long.class);
    defaultRedisScript.setScriptText(UNLOCK_SCRIPT);
    return defaultRedisScript;
  }
}

Then add a constructor to AccountOperationThread that takes these two beans (omitted here), and invoke them through RedisTemplate. The reworked code:

  private void deamonRedisLockWithLua() {
    // daemon thread
    DaemonThread daemonThread = null;
    // lock value: thread id + UUID, used to verify ownership on release
    String uuid = UUID.randomUUID().toString();
    String value = Thread.currentThread().getId() + ":" + uuid;
    try {
      while (!redisTemplate.execute(lockRedisScript, Collections.singletonList(userId + ":syn"), value, 5)) {
        // wait 1 second and retry
        logger.info(Thread.currentThread().getName() + ": retrying lock acquisition");
        TimeUnit.MILLISECONDS.sleep(1000);
      }
      logger.info(Thread.currentThread().getName() + ": lock acquired ----");
      // start the daemon thread
      daemonThread = new DaemonThread(userId + ":syn", redisTemplate);
      Thread thread = new Thread(daemonThread);
      thread.start();
      // business logic runs for 10 seconds...
      TimeUnit.MILLISECONDS.sleep(10000);
    } catch (InterruptedException e) {
      logger.error("exception", e);
    } finally {
      // Lua script: check that the lock is our own before deleting it
      // if the key exists and current value == expected value, delete the key; otherwise return 0
      Long result = redisTemplate.execute(unlockRedisScript, Collections.singletonList(userId + ":syn"), value);
      logger.info("redis unlock: {}", RELEASE_SUCCESS.equals(result));
      if (RELEASE_SUCCESS.equals(result)) {
        if (daemonThread != null) {
          // stop the daemon thread
          daemonThread.stop();
          logger.info(Thread.currentThread().getName() + ": lock released ---");
        }
      }
    }
  }

Both the locking in the while loop and the release in finally are now implemented via Lua scripts.

Other considerations for Redis locks

Beyond the examples above, the following situations and options are worth considering when using Redis distributed locks.

Non-reentrancy of Redis locks

A lock is reentrant if the thread holding it can acquire it again. With a non-reentrant lock, a second acquisition by the holder fails because the lock is already held. Reentrancy can be implemented in Redis with a hold counter: increment on each lock, decrement on each unlock, and release the lock when the count reaches 0.

Reentrant locks are efficient but add complexity to the code.
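A minimal in-JVM sketch of the counting idea (owner thread plus counter; a Redis version would typically keep the count in a hash keyed by the client id — this class is illustrative only):

```java
public class ReentrantCountingLock {
  private Thread owner;   // thread currently holding the lock
  private int count;      // reentrancy depth

  // acquire: succeed if the lock is free or already held by the caller
  public synchronized boolean tryLock() {
    Thread current = Thread.currentThread();
    if (owner == null) {
      owner = current;
      count = 1;
      return true;
    }
    if (owner == current) {
      count++;            // re-entering: bump the counter
      return true;
    }
    return false;         // held by another thread
  }

  // release: decrement; free the lock when the count reaches 0
  public synchronized void unlock() {
    if (owner != Thread.currentThread()) {
      throw new IllegalMonitorStateException("not the lock owner");
    }
    if (--count == 0) {
      owner = null;
    }
  }

  public synchronized int holdCount() {
    return count;
  }
}
```

Note that every lock call must be balanced by an unlock, otherwise the count never returns to 0 and the lock is never released.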

Waiting for lock release

In some business scenarios, a client that finds the resource locked simply returns. In others, the client needs to wait for the lock to be released and then compete for it again; the examples above belong to the latter. There are two ways to wait:

  • Client polling: when the lock is not obtained, wait a while and retry until it succeeds. The examples above use this approach. Its drawback is obvious: it wastes server resources, and under high concurrency it hurts server efficiency.
  • Redis pub/sub: when lock acquisition fails, subscribe to a lock-release channel; the client that holds the lock publishes a release message when it unlocks.
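The difference between the two strategies can be shown in plain Java: the class below blocks waiters with wait/notify instead of a sleep-and-retry loop, with notifyAll standing in for the Redis release message (an in-JVM sketch only, not a distributed lock):

```java
public class NotifyingLock {
  private boolean locked = false;

  // block until the lock is free, instead of sleep-and-retry polling
  public synchronized void lock() throws InterruptedException {
    while (locked) {
      wait();            // ~ SUBSCRIBE: sleep until a release notification
    }
    locked = true;
  }

  public synchronized void unlock() {
    locked = false;
    notifyAll();         // ~ PUBLISH: wake the waiting clients
  }
}
```

Waiters consume no CPU while blocked and are woken promptly on release, which is exactly the advantage of the pub/sub approach over polling.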

Failover and split-brain in clusters

In a Redis deployment with master-replica synchronization, if the master dies, a replica is promoted to master. If client A locks successfully on the master but the command has not yet been replicated when the master dies, the newly promoted master has no record of the lock. Client B can then also lock successfully, and the two clients run concurrently.

During a split-brain, the Redis master ends up in a different network partition from the replicas and the sentinel cluster. The sentinels cannot see the master and promote a replica, so two masters exist at the same time, which again leads to concurrency problems. The same applies to Redis Cluster deployments.

Summary

From a production problem, through root-cause analysis and the search for a solution, to an in-depth study of Redis-based distributed locks: that is the learning process.

Likewise, whenever an interview question asks how to protect a distributed shared resource, we blurt out "use a Redis distributed lock". But as this article shows, Redis distributed locks are not a silver bullet: timeouts, deadlocks, releasing someone else's lock, cluster failover, and split-brain all need attention.

Redis is known for high performance, but implementing distributed locks on it still has pitfalls. A Redis-based distributed lock can greatly mitigate concurrency problems, but to rule out concurrency completely, guarantees are still needed at the database level.

Source code: https://github.com/secbr/springboot-all/tree/master/springboot-redis-lock


