
MySQL's own mysqldump tool is single-threaded: it exports tables one at a time with no parallelism, which makes it impossible to back up a large amount of data quickly.

mydumper, by contrast, is a multi-threaded utility: several threads read table data and write to separate files at the same time, which makes it much faster than the traditional mysqldump. One caveat is that tables have to be locked during the dump, so running a backup during business hours will block DML. In practice, though, most MySQL setups have a master and a slave and backups are taken on the slave, so the locking problem can usually be ignored. Under those conditions mydumper handles the backup task well.

mydumper features

- Multi-threaded backup; being a multi-threaded logical backup, it produces multiple backup files
- Applies FTWRL (FLUSH TABLES WITH READ LOCK) to MyISAM tables during the backup, which blocks DML statements
- Guarantees consistency of the backed-up data
- Supports file compression
- Supports exporting binlogs
- Supports multi-threaded recovery
- Supports working in daemon mode, with timed snapshots and continuous binary logs
- Supports chunking backup files into pieces

An example invocation exercising several of these features follows.
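As an illustration (not from the original article), these features map onto mydumper 0.6.x command-line flags; the flag names below come from mydumper --help, and the connection settings, database name, and paths are placeholder values:

# Sketch: one invocation using several of the features listed above:
#   -c          compress the output files
#   -r 100000   split each table into chunks of roughly 100k rows
#   -b          also dump the binary logs
#   -t 8        use 8 dump threads
#   -B / -o     database to dump and output directory
mydumper -h 127.0.0.1 -u backup -p backup2015 \
         -B mydb -t 8 -c -r 100000 -b -o /data/backup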

mydumper backup mechanism

mydumper workflow

Summary of main steps

1. The main thread issues FLUSH TABLES WITH READ LOCK to take a global read lock, preventing DML writes and guaranteeing data consistency.
2. It reads the binary log file name and write position at that point in time and records them in the metadata file, for use during recovery.
3. N dump threads (the number can be specified; the default is 4) each run START TRANSACTION WITH CONSISTENT SNAPSHOT.
4. The dump threads export the non-InnoDB tables first.
5. Once the non-transactional engines have been backed up, the main thread releases the global read lock.
6. The dump threads then export the InnoDB tables within their open transactions.
7. The transactions end and the generated backup files complete the backup.

A simplified SQL sketch of this sequence follows.
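The statement sequence behind these steps looks roughly like this (reconstructed from the description above, not a verbatim trace of mydumper):

-- Main thread: global read lock, then record the binlog coordinates
-- (these go into the metadata file)
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

-- Each dump thread: open a consistent snapshot and dump the
-- non-transactional (e.g. MyISAM) tables first
START TRANSACTION WITH CONSISTENT SNAPSHOT;
-- ... SELECT data from non-InnoDB tables ...

-- Main thread: release the global lock once the non-transactional
-- tables are done
UNLOCK TABLES;

-- Dump threads: export InnoDB tables inside the open snapshot, then finish
-- ... SELECT data from InnoDB tables ...
COMMIT;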

All backup files are placed in a single directory, which you can specify yourself.

The directory contains a metadata file.

It records the binary log file name of the backed-up database at the backup point in time, and the position the log had been written to.

If the backup is taken from a slave, the binary log file and position of the master that the slave had synchronized to at backup time are also recorded.

There are two backup files for each table:

database.table-schema.sql: table structure file
database.table.sql: table data file

If a table is dumped in chunks, multiple data files are generated for it; the chunk size can be specified by number of rows or by file size.
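For illustration (the database, table names and path below are made up, not taken from the original article), a mydumper output directory might look like this:

# ls /data/backup
metadata                   (binlog file name and position at the backup point)
mydb.t1-schema.sql         (table structure of mydb.t1)
mydb.t1.sql                (table data of mydb.t1, not chunked)
mydb.t2-schema.sql
mydb.t2.00000.sql          (chunked data files of mydb.t2)
mydb.t2.00001.sql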

Installation use case

Suppose there are two existing DB servers, one for business A and one for business B. A hot backup is needed for one database (taeoss, roughly 40 GB of data) belonging to business A, with business B's DB server acting as the standby machine. The servers are distributed as follows:

10.137.143.151 business A

10.137.143.152 business B

Assume the requirements to be met are:

Exporting the taeoss database must not affect business A, and restoring it on business B's DB server at the same time must not have a major impact either; any disruption should be kept within about one minute.

Options evaluated:

1. mysqldump: a logical backup that locks tables; given the large amount of data the lock would be held for a long time, which the business cannot tolerate, so it was ruled out.

2. xtrabackup: a physical backup with no table locking. However, the two DBs use a shared tablespace, so restoring onto business B's database server would, first, take a long time and, second, give no guarantee that the data would be correct; it was ruled out (tested).

3. mydumper: a logical backup; it is a multi-threaded, high-performance logical backup and recovery tool. The lock time is very short (under 10 minutes for the 40 GB of data), and it records the binlog file and position at the same time, which is acceptable to the business.

mydumper has the following features:

(1) Backup speed more than 6 times that of mysqldump;

(2) Consistent snapshots of transactional and non-transactional tables (version 0.2.2 and above);

(3) Fast file compression;

(4) Support for exporting binlogs;

(5) Multi-threaded recovery (version 0.2.1 and above);

(6) Daemon mode, with periodic snapshots and continuous binary logs (version 0.5.0 and above); a daemon-mode sketch follows this list.
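A sketch of feature (6), using the 0.6.x daemon-mode flags -D/--daemon, -I/--snapshot-interval and -L/--logfile; the host, credentials, database and paths are illustrative values, not from the original article:

# Run mydumper as a daemon: take a full snapshot every 60 minutes and keep
# dumping the binary logs continuously between snapshots.
mydumper -h 127.0.0.1 -u backup -p backup2015 -B mydb \
         -D -I 60 -L /var/log/mydumper.log -o /data/backup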

mydumper installation:

https://launchpad.net/mydumper/0.6/0.6.2/+download/mydumper-0.6.2.tar.gz

#yum install glib2-devel mysql-devel zlib-devel pcre-devel
#tar zxvf mydumper-0.6.2.tar.gz
#cd mydumper-0.6.2
#cmake .
#make
#make install
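A quick way to confirm the build is installed and on the PATH (the exact output varies by version):

#mydumper --version
#myloader --version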

The parameters are as follows:
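The full listing is not reproduced here; the options used in this article are roughly the following (flag names from the 0.6.x mydumper --help and myloader --help output; run those commands for the authoritative list):

# mydumper options used below (descriptions paraphrased):
#   -h, --host        host to connect to
#   -u, --user        user with privileges to dump the database
#   -p, --password    password for that user
#   -B, --database    database to dump
#   -t, --threads     number of dump threads (default 4)
#   -o, --outputdir   directory to write the backup files to
#
# myloader options used below (descriptions paraphrased):
#   -d, --directory   directory containing the mydumper backup
#   -o, --overwrite-tables   drop tables that already exist before restoring
#   -B, --database    database to restore into
#   -t, --threads     number of restore threads (default 4)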

Since the DB is deployed on an older SUSE Linux 10 server, installing mydumper there requires pulling in quite a few dependent libraries and is fairly tedious, and a local backup would also consume a lot of disk I/O. We therefore chose another CentOS 6.4 server on the same network segment (10.137.143.156) to run the backup.

Proceed as follows:

1. On 10.137.143.151 and 10.137.143.152, temporarily grant access to 10.137.143.156:

#mysql -uroot -e "grant all privileges on *. * to" backup "@" 10.137.143.156 "identified by" backup2015 ";"
#mysql -uroot -e "flush privileges;"

2. On 10.137.143.156, back up the taeoss database of 10.137.143.151:

#mydumper -h 10.137.143.151 -u backup -p backup2015 -B taeoss -t 8 -o /data/rocketzhang

3. Restore the backup data to 10.137.143.152:

#myloader -h 10.137.143.152 -u backup -p backup2015 -B taeoss -t 8 -o -d /data/rocketzhang

4. Establish the master-slave relationship: 10.137.143.151 (master), 10.137.143.152 (slave)

Create a replication account on 10.137.143.151:

#mysql -uroot -e "grant replication slave on *. * to" repl "@" 10.137.143.152 "identified by" repl123456 ";"
#mysql -uroot -e "flush privileges;"

Check the binlog coordinates recorded by the backup on 10.137.143.156:
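The coordinates are stored in the metadata file at the top of the backup directory; with the values used later in this article it looks roughly like this (timestamps omitted):

# cat /data/rocketzhang/metadata
Started dump at: ...
SHOW MASTER STATUS:
        Log: mysql-bin.002205
        Pos: 456584891

Finished dump at: ...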

Do the following on 10.137.143.152:

#vim /etc/my.cnf
...
replicate-wild-do-table=taeoss.%
...
#service mysqld restart
#mysql -uroot -e "change master to master_host =" 10.137.143.151 ", master_user =" repl ", master_password =" repl123456 ", master_log_file =" mysql-bin.002205 ", master_log_pos=456584891;"
#mysql -uroot -e "start slave;"
#mysql -uroot -e "show slave status \ g;"

Replication then fails: the slave status output shows what looks like a primary key (duplicate entry) conflict, which breaks master-slave replication.

Problem analysis:

Execute on the master DB (10.137.143.151):

#mysqlbinlog --no-defaults -v -v --base64-output=decode-rows mysql-bin.002205 > mysql-bin.002205.txt
#grep -C 8 529864938 mysql-bin.002205.txt

Roughly speaking, an INSERT into the t_evil_detect_uin_blacklist table recorded on the primary DB hits a primary key conflict when it is replayed on the slave, so master-slave synchronization fails.

Workaround:

Export the table taeoss.t_evil_detect_uin_blacklist on the primary side:

#mysqldump -uroot --opt taeoss t_evil_detect_uin_blacklist > taeoss.t_evil_detect_uin_blacklist.sql

Remove the primary key statement in taeoss.t_evil_detect_uin_blacklist.sql:
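One way to do this is a small sed pass over the dump (a hypothetical sketch; inspect the resulting CREATE TABLE by hand, since deleting the PRIMARY KEY line can leave a dangling comma on the preceding column definition):

# Strip the PRIMARY KEY definition from the dumped CREATE TABLE statement.
# Check the DDL afterwards and remove any trailing comma left on the line
# that used to precede the PRIMARY KEY entry.
sed -i '/PRIMARY KEY/d' taeoss.t_evil_detect_uin_blacklist.sql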

Then import it on the slave (10.137.143.152) and restart replication:

#mysql -uroot taeoss < taeoss.t_evil_detect_uin_blacklist.sql
#mysql -uroot -e "stop slave;"
#mysql -uroot -e "start slave;"
#mysql -uroot -e "show slave status \ g;"