Webmasters who have dabbled in SEO will know the robots protocol (also called the crawler protocol, crawler rules, or robot protocol): the robots.txt file usually placed in a website's root directory. It tells search engines which pages may be crawled and which may not, which helps optimize how the site is indexed and weighted.
## If your website root directory does not yet have a robots.txt, you can create one

Refer to Baidu Encyclopedia for the details of the syntax. The following is a basic robots protocol for WordPress:

```
User-agent: *
Disallow: /feed/
Disallow: /trackback/
Disallow: /wp-admin/
Disallow: /wp-content/
Disallow: /wp-includes/
Disallow: /xmlrpc.php
Disallow: /wp-
Allow: /wp-content/uploads/
Sitemap: http://example.com/sitemap.xml
```
On a WordPress multi-site network, however, all sub-sites share the same document root, so you cannot give each sub-site its own physical robots.txt file. Instead, WordPress generates robots.txt dynamically, and you can modify that output through the `robots_txt` filter by adding the following code to your theme's functions.php. (Note: the dynamically generated robots.txt is only served when no physical robots.txt file exists in the site root.)
```php
/**
 * Add robots.txt rules to your WordPress site
 * https://www.wpdaxue.com/add-robots-txt.html
 */
add_filter( 'robots_txt', 'robots_mod', 10, 2 );
function robots_mod( $output, $public ) {
	$output .= "Disallow: /user/\n"; // block pages whose URL contains /user/ (trailing newline lets further rules follow)
	return $output;
}
```
NOTE: If you want to add more rules, copy line 7 of the code above (the `$output .= ...` line) and modify it, as in the sketch below.
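For instance, here is a minimal sketch of the filter function after duplicating that line twice; the `/private/` and `/tmp/` paths are made-up examples, not from the original article:

```php
function robots_mod( $output, $public ) {
	$output .= "Disallow: /user/\n";    // original rule
	$output .= "Disallow: /private/\n"; // hypothetical extra rule
	$output .= "Disallow: /tmp/\n";     // hypothetical extra rule
	return $output;
}
```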
Visit http://your-domain/robots.txt and you should see something like the following:

```
User-agent: *
Disallow: /wp-admin/
Disallow: /wp-includes/
Disallow: /user/
```
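Because every sub-site of a multisite network answers robots.txt requests through this same filter, you can also vary the rules per sub-site. Here is a minimal sketch, assuming sub-site IDs 1 and 2 and made-up paths; `is_multisite()` and `get_current_blog_id()` are standard WordPress functions:

```php
/**
 * A sketch (not from the original article): emit different robots.txt
 * rules depending on which sub-site serves the current request.
 */
add_filter( 'robots_txt', 'multisite_robots_mod', 10, 2 );
function multisite_robots_mod( $output, $public ) {
	if ( is_multisite() ) {
		// get_current_blog_id() returns the ID of the sub-site
		// handling the current request.
		switch ( get_current_blog_id() ) {
			case 1: // main site
				$output .= "Disallow: /user/\n";
				break;
			case 2: // hypothetical second sub-site
				$output .= "Disallow: /private/\n";
				break;
		}
	}
	return $output;
}
```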