
An article to talk about how to quickly migrate MySQL data

青灯夜游 · 2023-01-29

How can you migrate MySQL data quickly? This article walks through the main ways to migrate MySQL data fast. I hope you find it helpful!


We often run into a scenario where data must be moved from one database to a database server with more powerful hardware. What we need then is a way to migrate the data quickly.

So, how can we quickly migrate the data in the database? Today we will talk about this topic.

Broadly speaking, there are three ways to migrate database data: logical migration, file migration, and physical migration.

First, let's generate 50,000 rows of test data. The details are as follows:

-- 1. Prepare the table
create table s1(
  id int,
  name varchar(20),
  gender char(6),
  email varchar(50)
);

-- 2. Create a stored procedure that inserts records in bulk
delimiter $$
create procedure auto_insert1()
BEGIN
    declare i int default 1;
    while(i<50000)do
        insert into s1 values(i,'shanhe','male',concat('shanhe',i,'@helloworld'));
        set i=i+1;
    end while;
END$$
delimiter ;

-- 3. Inspect the stored procedure
show create procedure auto_insert1\G

-- 4. Call the stored procedure
call auto_insert1();

Logical migration

The principle of logical migration is to convert the data and table structure in the MySQL database into SQL statements. The most commonly used migration tool based on this principle is mysqldump.

Let’s test it below:

[root@dxd ~]# mysqldump -h172.17.16.2 -uroot -pTest123!  s1 s1 --result-file=/opt/s1.sql

[root@dxd ~]# ll /opt/
-rw-r--r--  1 root root 2684599 May 10 00:24 s1.sql

We can see that the corresponding SQL file has been generated. Now let's import the generated SQL into another database.

mysql> use s2;
Database changed

mysql> source /opt/s1.sql

A rough timing shows that the import takes about 1 second, but migration time grows with the size of the database. If the table being migrated is large enough (say, tens of millions of rows), mysqldump may exhaust memory and cause the migration to fail. For such tables we can apply a few simple mysqldump optimizations, as follows.

  • --add-locks=0: do not write LOCK TABLES `s1`.`s1` WRITE; statements into the dump, so the table is not locked while the data is imported.
  • --single-transaction: export the data inside a consistent transaction instead of locking the table.
  • --set-gtid-purged=OFF: do not write GTID-related statements into the dump.

Adding these three parameters mainly reduces unnecessary locking and I/O during export and import, as follows:

[root@dxd ~]# mysqldump -h172.17.16.2 -uroot -pTest123! --add-locks=0 --single-transaction --set-gtid-purged=OFF s1 s1 --result-file=/opt/s1.sql

Comparing the results of the two runs, the optimization makes little difference. Therefore, logical migration is not advisable when the data volume is large (more than a few million rows).

File migration

File migration, as the name suggests, exports the data to a file and then loads that file into the target database. Compared with logical migration it performs much better and rarely exhausts memory, so it is the recommended approach when migrating large amounts of data. It works as follows:

mysql> select * from s1 into outfile '/var/lib/mysql-files/1.txt';
Query OK, 55202 rows affected (0.04 sec)

We can see that exporting more than 50,000 rows to a file took only about 0.04 seconds. Compared with mysqldump, it is more than twice as fast.

Note: data exported this way can only be written to the directory configured by the secure_file_priv variable. Otherwise the database reports: ERROR 1290 (HY000): The MySQL server is running with the --secure-file-priv option so it cannot execute this statement.
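Before exporting, it is worth checking where secure_file_priv points on your installation (a quick sanity check; the actual path varies between installations):

```sql
-- Show which directory (if any) SELECT ... INTO OUTFILE may write to.
-- An empty value means no restriction; NULL means file export is disabled entirely.
SHOW VARIABLES LIKE 'secure_file_priv';
```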

After exporting the data, we will import the data in the file into the database and see the effect, as follows:

mysql> load data infile &#39;/var/lib/mysql-files/1.txt&#39; into table s3.s1;
Query OK, 55202 rows affected (0.27 sec)
Records: 55202  Deleted: 0  Skipped: 0  Warnings: 0

Note: INTO OUTFILE does not export the table structure, so before importing the data you need to create the table structure manually.
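One convenient way to recreate the structure on the target is to copy the DDL from the source table (a sketch; the table and database names follow the example above):

```sql
-- On the source, print the CREATE TABLE statement for table s1 ...
SHOW CREATE TABLE s1.s1\G

-- ... then run the printed CREATE TABLE statement against the
-- target database (s3 in this example) before loading the data.
```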

We can see that the total time spent on importing is 0.27 seconds, which is more than twice as fast as mysqldump.

This method works by writing each row directly to the file, one record per line terminated by \n (with columns separated by tabs by default).

When importing, MySQL first checks whether the number of columns in each row of the file matches the columns of the target table. If they match, the rows are imported one by one; if not, an error is raised immediately.
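For reference, the exported file is plain text. Given the test data generated above, the first few lines of 1.txt look roughly like this (tab-separated columns, one row per line):

```text
1	shanhe	male	shanhe1@helloworld
2	shanhe	male	shanhe2@helloworld
3	shanhe	male	shanhe3@helloworld
```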

There is one issue we need to watch out for here: if our database uses a primary/replica architecture, a problem can easily arise. Before discussing it, we need to briefly explain how primary/replica replication works.

Primary/replica replication relies mainly on the binlog. The specific steps are as follows:

  • The primary executes the SQL and records the changes in its binlog;
  • the dump thread on the primary forwards the binlog to the replica;
  • the IO thread on the replica receives the binlog sent by the primary;
  • the binlog events are written into the relay log;
  • the SQL thread on the replica replays the relay log, bringing the replica's data in line with the primary's.
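Since the problem described next depends on the binlog format, it is useful to check which mode the primary is running in (a quick check; note that ROW is the default on modern MySQL versions):

```sql
-- Show the current binary log format: STATEMENT, ROW, or MIXED.
SHOW VARIABLES LIKE 'binlog_format';
```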

During this process, readers who studied the 15th article of this series carefully will surely have a question: when the binlog works in STATEMENT mode, executing the SQL above (load data infile '/var/lib/mysql-files/1.txt' into table s3.s1;) on the primary makes it impossible for the replica to reproduce the result of that SQL, because the file /var/lib/mysql-files/1.txt does not exist on the replica. The specific steps are:

  • The primary executes load data infile '/var/lib/mysql-files/1.txt' into table s3.s1;

  • if the binlog works in STATEMENT mode, the SQL above is recorded in the binlog;

  • the replica then re-executes the SQL recorded in the binlog.

Obviously, the replica fails as soon as it executes that SQL. What do we do then?

At this point we need to introduce the local keyword of the LOAD DATA statement:

  • with the local keyword, the statement looks for /var/lib/mysql-files/1.txt on the local side (where the client runs);
  • without the local keyword, the statement looks for /var/lib/mysql-files/1.txt on the primary's side.

So, in a primary/replica architecture, when using file migration to move data, simply leave out the local keyword.
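As a sketch, the two variants look like this (same file path as in the example above):

```sql
-- Without LOCAL: the server reads the file from its own filesystem
-- (subject to the secure_file_priv restriction).
LOAD DATA INFILE '/var/lib/mysql-files/1.txt' INTO TABLE s3.s1;

-- With LOCAL: the client reads the file and sends it to the server
-- (requires local_infile to be enabled on both client and server).
LOAD DATA LOCAL INFILE '/var/lib/mysql-files/1.txt' INTO TABLE s3.s1;
```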

Physical migration

Physical migration also migrates files; the difference is that it generally migrates MySQL's data files directly. This approach performs very well, but the procedure is cumbersome and error-prone. Let's go through it in detail.

First, the bluntest form of migration: simply pack up and copy the MySQL data files. Here is an example:

-- Migrate all the data in database s1 into database s4
[root@dxd mysql]# pwd
/var/lib/mysql
[root@dxd mysql]# cp -r s1 s4
[root@dxd mysql]# chown -R mysql.mysql s4

-- Restart the database
[root@dxd mysql]# systemctl restart mysqld

-- Query the table
mysql> select count(*) from s4.s1;
ERROR 1146 (42S02): Table 's4.s1' doesn't exist

We can see that the query fails with error 1146. This is because tables in the InnoDB storage engine must be registered in MySQL's data dictionary. When we copy the data files over directly, nothing is registered in the data dictionary; in other words, after copying the data we still need to register it in the data dictionary before the database system can recognize it.

Below we explain how to register the table in the data dictionary, step by step.

Note: physically migrating a table mostly means migrating its tablespace, because for the InnoDB storage engine the data is stored in the table's tablespace, i.e. the .ibd file.

1. On the target database, create a table identical to the one being migrated.

mysql> create database t1;
Query OK, 1 row affected (0.01 sec)

mysql> use t1;
Database changed

mysql> CREATE TABLE s1 (
    ->   `id` int(11) DEFAULT NULL,
    ->   `name` varchar(20) DEFAULT NULL,
    ->   `gender` char(6) DEFAULT NULL,
    ->   `email` varchar(50) DEFAULT NULL
    -> ) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Query OK, 0 rows affected (0.04 sec)

2. Discard the tablespace of the newly created table. The new table's tablespace holds no data and would conflict with the tablespace we are about to bring over, so we remove it in advance:

mysql> alter table t1.s1 discard tablespace;
Query OK, 0 rows affected (0.01 sec)

3. Generate a metadata file for the source table, so that some of its configuration is carried over (note: this step automatically locks the table).

mysql> use s1;
Database changed

mysql> flush table s1 for export;
Query OK, 0 rows affected (0.01 sec)

Check whether the .cfg file has been created:

[root@dxd mysql]# pwd
/var/lib/mysql
[root@dxd mysql]# ll s1/
total 12312
-rw-r----- 1 mysql mysql       65 May 10 00:26 db.opt
-rw-r----- 1 mysql mysql      520 May 10 15:15 s1.cfg
-rw-r----- 1 mysql mysql     8652 May 10 00:27 s1.frm
-rw-r----- 1 mysql mysql 12582912 May 10 00:27 s1.ibd

4. Copy the configuration file and the tablespace file to the new database directory.

The files can be copied in whatever way is most convenient:

[root@dxd mysql]# cp s1/s1.cfg t1/
[root@dxd mysql]# cp s1/s1.ibd t1/

Set the ownership. This is important: if the permissions are wrong, reading the tablespace data will fail.

[root@dxd mysql]# chown -R mysql.mysql t1/

5. Unlock the source table.

mysql> use s1;
Database changed

mysql> unlock tables;
Query OK, 0 rows affected (0.00 sec)

6. Load the new tablespace.

mysql> use t1;

mysql> alter table s1 import tablespace;
Query OK, 0 rows affected (0.09 sec)

7. Test.

mysql> select count(*) from s1;
+----------+
| count(*) |
+----------+
|    55202 |
+----------+
1 row in set (0.03 sec)

We can see that the data migration is now complete.

Although this kind of migration performs very well, the process is very cumbersome and operational mistakes are easy to make.

Summary

Today we introduced three ways to migrate a database: logical migration, file migration, and physical migration.

Logical migration mainly uses the mysqldump command. Its principle is to dump the data and structures in the database into SQL files and then import them. It is mainly suitable for scenarios with relatively small data volumes and capable servers, for example tables with fewer than five million rows.

File migration is really a subcategory of logical migration. It saves the data to a file with a command and then loads the file into the database. This method does not migrate the table structure, so you need to create the table structure manually before importing the data; otherwise the principle is the same as logical migration.

Physical migration is suitable for scenarios with large data volumes. It is unlikely to crash the server through excessive resource usage, but the procedure is cumbersome and the source table is locked during the migration.

In practice, we usually choose mysqldump for data migration. If the data volume is large, the preferred option should be to upgrade the server so it can comfortably handle that volume of data; if migration is unavoidable, consider using a professional third-party data-migration tool.



Statement:
This article is reproduced from juejin.cn. If there is any infringement, please contact admin@php.cn for deletion.