When moving table data from one MySQL database to another, you may run into upload size limits. If the output of mysqldump exceeds the maximum file size the destination server accepts, the dump must be split into smaller files.
One approach is to run mysqldump with the --extended-insert=FALSE option, which writes one INSERT statement per row. The resulting .sql file can then be split at arbitrary line boundaries using split(1) with a suitable --lines value. However, this only works if the pieces can be re-joined with cat(1) on the remote server; if that is not possible, a different method is needed.
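The split/re-join round trip can be sketched as follows. The mysqldump call is shown only as a comment since it needs a live server; the file names (dump.sql, part_) and the 300-line chunk size are arbitrary examples, and the dump is simulated with generated single-line INSERT statements:

```shell
# With --extended-insert=FALSE, every row becomes its own INSERT line,
# so the dump can be cut at any line boundary:
#
#   mysqldump --extended-insert=FALSE mydb > dump.sql   # requires a server

# Simulate such a dump: one complete statement per line.
seq 1 1000 | sed 's/.*/INSERT INTO t VALUES (&);/' > dump.sql

# Split into chunks of 300 lines each, named part_aa, part_ab, ...
split -l 300 dump.sql part_

# On the destination server, the pieces are either imported one by one,
# or re-joined with cat (the shell glob sorts them back into order):
cat part_* > rejoined.sql
cmp -s dump.sql rejoined.sql && echo "round-trip OK"
```

Because each chunk boundary falls between complete statements, every part_ file is itself valid SQL and can be imported independently if cat is unavailable but individual uploads are not.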
Alternatively, a bash script can split the dump into one file per table. The script uses csplit to cut the file at the patterns mysqldump writes at table boundaries, such as the "-- Table structure for table" comment that precedes each table definition.
Here is an example script:
#!/bin/bash

START="/-- Table structure for table/"

if [ $# -lt 1 ] || [[ $1 == "--help" ]] || [[ $1 == "-h" ]]; then
    echo "USAGE: extract all tables:"
    echo " $0 DUMP_FILE"
    echo "extract one table:"
    echo " $0 DUMP_FILE [TABLE]"
    exit
fi

if [ $# -ge 2 ]; then
    # extract one table: isolate the section between this table's
    # structure comment and the next one (or the dump footer)
    csplit -s -ftable "$1" \
        "/-- Table structure for table/" \
        "%-- Table structure for table \`$2\`%" \
        "/-- Table structure for table/" \
        "%40103 SET TIME_ZONE=@OLD_TIME_ZONE%1"
else
    # extract all tables: cut at every table-structure comment
    csplit -s -ftable "$1" "$START" {*}
fi

[ $? -eq 0 ] || exit

# the first piece is the dump header
mv table00 head

FILE=$(ls -1 table* | tail -n 1)
if [ $# -ge 2 ]; then
    # cut off all other tables
    mv "$FILE" foot
else
    # cut the dump footer off the last piece
    csplit -b '%d' -s -f"$FILE" "$FILE" "/40103 SET TIME_ZONE=@OLD_TIME_ZONE/" {*}
    mv "${FILE}1" foot
fi

for FILE in $(ls -1 table*); do
    # the table name is the second backtick-delimited field of the first line
    NAME=$(head -n1 "$FILE" | cut -d$'\x60' -f2)
    cat head "$FILE" foot > "$NAME.sql"
done

rm head foot table*
The script assumes the input dumpfile contains both table structures and data. It splits the file at table boundaries, re-attaches the dump's header and footer to each piece so every output is a self-contained dump, and names each file after its table (NAME.sql).
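The csplit mechanism the script relies on can be demonstrated on a tiny synthetic dump. The file mini.sql and the table names below are invented for illustration; the pattern and flags are the same ones the script uses:

```shell
# Build a minimal fake dump: a header, then two table sections, each
# introduced by the comment line mysqldump writes before every table.
printf '%s\n' \
  '-- header' \
  '-- Table structure for table `users`' \
  'CREATE TABLE `users` (id INT);' \
  '-- Table structure for table `orders`' \
  'CREATE TABLE `orders` (id INT);' > mini.sql

# Split at every marker: -s = silent, -ftable = name pieces table00,
# table01, ...; {*} repeats the pattern until the input is exhausted.
csplit -s -ftable mini.sql "/-- Table structure for table/" {*}

# table00 holds everything before the first marker (the header);
# table01 and table02 each start at a marker line, so the table name
# can be read back as the second backtick-delimited field:
head -n1 table01 | cut -d'`' -f2
```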