
Detailed examples of data asymmetry between MySQL and Elasticsearch

小云云
Release: 2017-12-22 13:42:12

The jdbc-input-plugin can only append to and incrementally write into Elasticsearch from a database, yet the database on the JDBC source side often undergoes delete and update operations as well. As a result, the database and the search engine index drift out of sync. This article walks through a practical solution to this data-asymmetry problem between MySQL and Elasticsearch; friends in need can refer to it, and I hope it helps everyone.

Of course, if you have a development team, you can write a program that propagates every delete or update to the search engine. If that is not an option, you can try the following method.

There is a data table article whose mtime column is defined with ON UPDATE CURRENT_TIMESTAMP, so mtime is refreshed automatically every time the row is updated.


mysql> desc article;
+-------------+---------------+------+-----+-------------------+-----------------------------+
| Field       | Type          | Null | Key | Default           | Extra                       |
+-------------+---------------+------+-----+-------------------+-----------------------------+
| id          | int(11)       | NO   |     | 0                 |                             |
| title       | mediumtext    | NO   |     | NULL              |                             |
| description | mediumtext    | YES  |     | NULL              |                             |
| author      | varchar(100)  | YES  |     | NULL              |                             |
| source      | varchar(100)  | YES  |     | NULL              |                             |
| content     | longtext      | YES  |     | NULL              |                             |
| status      | enum('Y','N') | NO   |     | 'N'               |                             |
| ctime       | timestamp     | NO   |     | CURRENT_TIMESTAMP |                             |
| mtime       | timestamp     | YES  |     | NULL              | on update CURRENT_TIMESTAMP |
+-------------+---------------+------+-----+-------------------+-----------------------------+
9 rows in set (0.00 sec)
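
The article does not show the original CREATE TABLE statement. If your table lacks such a column, it can be added with DDL along these lines (a sketch, not the author's schema; adjust to your own table):

-- Sketch: add an auto-updating modification timestamp to the article table.
ALTER TABLE article
  ADD COLUMN mtime TIMESTAMP NULL DEFAULT NULL ON UPDATE CURRENT_TIMESTAMP;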

Add the mtime tracking rule to the Logstash JDBC query:


jdbc {
  jdbc_driver_library => "/usr/share/java/mysql-connector-java.jar"
  jdbc_driver_class => "com.mysql.jdbc.Driver"
  jdbc_connection_string => "jdbc:mysql://localhost:3306/cms"
  jdbc_user => "cms"
  jdbc_password => "password"
  schedule => "* * * * *" # cron expression for the schedule; here the query runs once per minute
  statement => "select * from article where mtime > :sql_last_value"
  use_column_value => true
  tracking_column => "mtime"
  tracking_column_type => "timestamp" 
  record_last_run => true
  last_run_metadata_path => "/var/tmp/article-mtime.last"
 }

Create a recycle-bin table; it handles records that are deleted from the database or disabled by setting status = 'N'.


CREATE TABLE `elasticsearch_trash` (
 `id` int(11) NOT NULL,
 `ctime` timestamp NULL DEFAULT CURRENT_TIMESTAMP,
 PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8

Create triggers on the article table:


CREATE DEFINER=`dba`@`%` TRIGGER `article_BEFORE_UPDATE` BEFORE UPDATE ON `article` FOR EACH ROW
BEGIN
 -- When an article's status changes to 'N', its document must be removed from the
 -- search engine, so queue the id in the recycle-bin table.
 IF NEW.status = 'N' THEN
 insert into elasticsearch_trash(id) values(OLD.id);
 END IF;
 -- When the status changes back to 'Y', the id may still be sitting in elasticsearch_trash
 -- and would be deleted by mistake, so remove the recycle-bin record.
 IF NEW.status = 'Y' THEN
 delete from elasticsearch_trash where id = OLD.id;
 END IF;
END

CREATE DEFINER=`dba`@`%` TRIGGER `article_BEFORE_DELETE` BEFORE DELETE ON `article` FOR EACH ROW
BEGIN
 -- When an article is deleted, put its id into the search-engine recycle bin as well.
 insert into elasticsearch_trash(id) values(OLD.id);
END

Next, write a simple shell script that runs once a minute: it reads the ids stored in the elasticsearch_trash table and then calls the Elasticsearch RESTful interface with curl to delete the corresponding documents, as sketched below.
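
The following is a minimal sketch of such a script. It assumes Elasticsearch is reachable at http://localhost:9200 and that documents live in the information index under the article type, matching the Spring Boot example further down; the MySQL credentials are taken from the Logstash configuration above. Adjust hosts, credentials, and paths to your environment.

#!/bin/bash
# Sketch: purge documents queued in elasticsearch_trash from Elasticsearch.
# Assumptions: Elasticsearch at http://localhost:9200, index "information", type "article".

ES="http://localhost:9200"
MYSQL="mysql -ucms -ppassword cms -N -B -e"

for id in $($MYSQL "SELECT id FROM elasticsearch_trash"); do
  # Delete the document through the Elasticsearch REST interface.
  status=$(curl -s -o /dev/null -w "%{http_code}" -XDELETE "$ES/information/article/$id")
  # Drop the recycle-bin record once the document is gone (200) or never existed (404).
  if [ "$status" = "200" ] || [ "$status" = "404" ]; then
    $MYSQL "DELETE FROM elasticsearch_trash WHERE id = $id"
  fi
done

A crontab entry such as * * * * * /path/to/clean-elasticsearch-trash.sh (path hypothetical) runs the script once a minute.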

Alternatively, you can implement this as a program. Here is a Spring Boot scheduled-task example.

Entity


package cn.netkiller.api.domain.elasticsearch;

import java.util.Date;

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table
public class ElasticsearchTrash {
 @Id
 private int id;

 @Column(columnDefinition = "TIMESTAMP DEFAULT CURRENT_TIMESTAMP")
 private Date ctime;

 public int getId() {
 return id;
 }

 public void setId(int id) {
 this.id = id;
 }

 public Date getCtime() {
 return ctime;
 }

 public void setCtime(Date ctime) {
 this.ctime = ctime;
 }

}

Repository



package cn.netkiller.api.repository.elasticsearch;

import org.springframework.data.repository.CrudRepository;

import cn.netkiller.api.domain.elasticsearch.ElasticsearchTrash;

public interface ElasticsearchTrashRepository extends CrudRepository<ElasticsearchTrash, Integer>{


}

Scheduled task


package cn.netkiller.api.schedule;

import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.rest.RestStatus;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

import cn.netkiller.api.domain.elasticsearch.ElasticsearchTrash;
import cn.netkiller.api.repository.elasticsearch.ElasticsearchTrashRepository;

@Component
public class ScheduledTasks {
 private static final Logger logger = LoggerFactory.getLogger(ScheduledTasks.class);

 @Autowired
 private TransportClient client;

 @Autowired
 private ElasticsearchTrashRepository elasticsearchTrashRepository;

 public ScheduledTasks() {
 }

 @Scheduled(fixedRate = 1000 * 60) // run this scheduled task once every 60 seconds
 public void cleanTrash() {
 for (ElasticsearchTrash elasticsearchTrash : elasticsearchTrashRepository.findAll()) {
  DeleteResponse response = client.prepareDelete("information", "article", elasticsearchTrash.getId() + "").get();
  RestStatus status = response.status();
  logger.info("delete {} {}", elasticsearchTrash.getId(), status.toString());
  if (status == RestStatus.OK || status == RestStatus.NOT_FOUND) {
  elasticsearchTrashRepository.delete(elasticsearchTrash);
  }
 }
 }
}

Finally, the Spring Boot main class starts the application with scheduling enabled.


package cn.netkiller.api;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.scheduling.annotation.EnableScheduling;

@SpringBootApplication
@EnableScheduling
public class Application {

 public static void main(String[] args) {
 SpringApplication.run(Application.class, args);
 }
}
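
One detail the example leaves out is where the injected TransportClient comes from. The sketch below shows a minimal configuration class for an Elasticsearch 5.x transport client, assuming the cluster is reachable at localhost:9300 with the default cluster name elasticsearch and that the org.elasticsearch.client:transport dependency is on the classpath; the package name is hypothetical. Adjust it to your environment.

package cn.netkiller.api.config;

import java.net.InetAddress;
import java.net.UnknownHostException;

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class ElasticsearchConfiguration {

 // Sketch only: builds the TransportClient that is autowired into ScheduledTasks above.
 @Bean
 public TransportClient transportClient() throws UnknownHostException {
  Settings settings = Settings.builder().put("cluster.name", "elasticsearch").build();
  return new PreBuiltTransportClient(settings)
    .addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName("localhost"), 9300));
 }
}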