When the volume of data grows sharply, a common optimization is to hash rows across multiple database tables to speed up reads and writes. Here is a simple experiment I made: splitting 100 million rows across 100 tables. The implementation is as follows:
First, create the 100 tables:
$i = 0;
while ($i <= 99) {
    echo "creating table code_$i \r\n";
    $sql = "CREATE TABLE `code_".$i."` (
                `full_code` char(10) NOT NULL,
                `create_time` int(10) unsigned NOT NULL,
                PRIMARY KEY (`full_code`)
            ) ENGINE=MyISAM DEFAULT CHARSET=utf8";
    mysql_query($sql);
    $i++;
}
Now for my table-splitting rule: full_code is the primary key, and we hash full_code to decide which table a row belongs to.
The function is as follows:
$table_name = get_hash_table('code', $full_code);

// Map a code to one of $s tables: crc32 gives the hash,
// and the remainder modulo $s picks the table suffix (0..99).
function get_hash_table($table, $code, $s = 100)
{
    $hash = sprintf("%u", crc32($code)); // force an unsigned value on 32-bit builds
    echo $hash;                          // debug: print the raw hash
    $hash1 = intval(fmod($hash, $s));
    return $table . "_" . $hash1;
}
Before inserting a row, call get_hash_table to find out which table it should be stored in, as in the sketch below.
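Here is a minimal sketch of the write path, assuming the same legacy mysql_* connection used above; the $full_code and $create_time values are made up for illustration:

// Minimal write-path sketch (assumes an open mysql_* connection, as in the article).
$full_code   = 'AB12CD34EF';   // hypothetical 10-character code
$create_time = time();

$table_name = get_hash_table('code', $full_code);   // returns one of "code_0" .. "code_99"
$sql = "INSERT INTO `" . $table_name . "`
            (`full_code`, `create_time`)
        VALUES
            ('" . mysql_real_escape_string($full_code) . "', " . intval($create_time) . ")";
mysql_query($sql);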
Finally, we use the MERGE storage engine to stitch the 100 tables back into a single logical code table:
CREATE TABLE IF NOT EXISTS `code` (
    `full_code` char(10) NOT NULL,
    `create_time` int(10) unsigned NOT NULL,
    INDEX(`full_code`)
) ENGINE=MERGE UNION=(code_0,code_1,code_2, ... ,code_99) INSERT_METHOD=LAST;
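Writing out all 100 table names in the UNION list by hand is tedious; one way to generate the statement in PHP, assuming the tables were created as above, is a sketch like this:

// Build the UNION=(...) list for code_0 .. code_99 and create the MERGE table.
$parts = array();
foreach (range(0, 99) as $i) {
    $parts[] = "code_$i";
}
$union = implode(',', $parts);
$sql = "CREATE TABLE IF NOT EXISTS `code` (
            `full_code` char(10) NOT NULL,
            `create_time` int(10) unsigned NOT NULL,
            INDEX(`full_code`)
        ) ENGINE=MERGE UNION=($union) INSERT_METHOD=LAST DEFAULT CHARSET=utf8";
mysql_query($sql);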
In this way, a single SELECT * FROM code returns the full_code data from all 100 underlying tables.
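For single-row lookups, though, it is usually cheaper to go straight to the hashed table rather than through the MERGE table (which has to probe every underlying table's index). A sketch of the read path, reusing get_hash_table and the hypothetical $full_code from above:

// Read-path sketch: point lookups hit the hashed table directly,
// while the MERGE table `code` serves queries that must span all shards.
$table_name = get_hash_table('code', $full_code);
$result = mysql_query(
    "SELECT `full_code`, `create_time`
       FROM `" . $table_name . "`
      WHERE `full_code` = '" . mysql_real_escape_string($full_code) . "'"
);
$row = $result ? mysql_fetch_assoc($result) : false;

// Cross-shard query via the MERGE table, e.g. counting every stored code.
$total = mysql_fetch_row(mysql_query("SELECT COUNT(*) FROM `code`"));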
The above is how to split 100 million rows of data across 100 MySQL tables in PHP.