Suppose you have a uid list with more than one million rows, in the following format:
10001000
10001001
10001002
......
10001000
......
10001111
The code for deduplicating a single uid list file is as follows:
<?php
//Define an array to store the results after deduplication
$result = array();
//Read the uid list file
$fp = fopen('test.txt', 'r');
while(!feof($fp))
{
$uid = fgets($fp);
$uid = trim($uid);
$uid = trim($uid, "\r");
$uid = trim($uid, "\n");
if($uid == '')
{
continue;
}
//Use the uid as the key and check whether it has already been recorded
if(empty($result[$uid]))
{
$result[$uid] = 1;
}
}
fclose($fp);
//Save the result to the file
$content = '';
foreach($result as $k => $v)
{
$content .= $k."\n";
}
$fp = fopen('result.txt', 'w');
fwrite($fp, $content);
fclose($fp);
?>
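As a side note, if the list file still fits comfortably in memory, the same deduplication can also be sketched with PHP's built-in array functions. This is only an illustrative alternative, not part of the original approach; test.txt and result.txt are the same file names assumed above.
<?php
//Alternative sketch: read all lines, trim them, and let array keys collapse duplicates
$uids = file('test.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
$uids = array_map('trim', $uids);
$unique = array_keys(array_flip($uids));
file_put_contents('result.txt', implode("\n", $unique)."\n");
?>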
To deduplicate uids across two list files, the code is as follows:
<?php
//Define an array to store the results after deduplication
$result = array();
//Read the first uid list file and put it into $result_1
$fp = fopen('test_1.txt', 'r');
while(!feof($fp))
{
$uid = fgets($fp);
$uid = trim($uid);
$uid = trim($uid, "\r");
$uid = trim($uid, "\n");
if($uid == '')
{
continue;
}
//Write into $result with the uid as the key; duplicates simply overwrite the same entry
$result[$uid] = 1;
}
fclose($fp);
//Read the second uid list file and perform deduplication operation
$fp = fopen('test_2.txt', 'r');
while(!feof($fp))
{
$uid = fgets($fp);
$uid = trim($uid);
$uid = trim($uid, "\r");
$uid = trim($uid, "\n");
if($uid == '')
{
continue;
}
//Use the uid as the key and check whether it already exists
if(empty($result[$uid]))
{
$result[$uid] = 1;
}
}
fclose($fp);
//The keys of $result are now the deduplicated uids; they can be written to a file as in the first script (the code is omitted here; a sketch follows after this block)
?>
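For completeness, the omitted output step can mirror the end of the first script. The following lines are only a sketch (result.txt is an assumed output file name) and would replace the final comment inside the script above, before the closing ?> tag:
//Save the deduplicated keys of $result to the file
$content = '';
foreach($result as $k => $v)
{
$content .= $k."\n";
}
$fp = fopen('result.txt', 'w');
fwrite($fp, $content);
fclose($fp);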
The above covers the implementation of using PHP arrays to remove duplicate entries from millions of rows of data. I hope it is helpful to readers interested in PHP.