PHP crawler practice: crawling MOOC course information
With the development of the Internet, crawler technology plays an increasingly important role in modern data collection, data analysis, and business decision-making. Learning how to use it can greatly improve the efficiency and accuracy of our data processing. In this article, we will use PHP to write a crawler that scrapes course information from MOOC.
The tools that will be used in this article are as follows:
- The PHP programming language, version 7.0
- The third-party library Guzzle HTTP Client, used to send HTTP requests and receive HTTP responses
- A simple MySQL database, used to store the captured course information
1. Preparation work
First, we need to set up a local environment with PHP 7.0 installed; the installation process itself is omitted here.
Guzzle HTTP Client is a commonly used HTTP client library, and we can install it with Composer. Switch to an empty directory on the command line, create a new composer.json file, and add the following content:
{
    "require": {
        "guzzlehttp/guzzle": "^6.3"
    }
}
Then execute composer install in that same directory. Once it finishes, Guzzle HTTP Client is installed.
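To confirm that the installation worked, a quick sanity check such as the following can be run from the same directory (a minimal sketch; the file name is arbitrary and it only assumes the vendor/ directory that Composer just created):

<?php
// check_install.php - verify that Composer's autoloader can find the Guzzle client class
require 'vendor/autoload.php';

if (class_exists(\GuzzleHttp\Client::class)) {
    echo "Guzzle HTTP Client is installed and autoloadable.\n";
} else {
    echo "Guzzle HTTP Client was not found; re-run composer install.\n";
}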
2. Analyze the structure of the target website
Before we start writing code, we need to analyze the structure of the target website. We chose the Python course listing on MOOC (www.imooc.com). The information we need to capture includes the course name, course ID, course difficulty, course duration, and course link.
After opening the target website and performing an operation such as searching for "Python" courses, we can use the browser's developer tools to inspect the response content and page structure returned by the site.
We can see that the list of Python courses on MOOC is dynamically loaded through AJAX. In order to facilitate data crawling, we can directly look up the URL and parameters of the AJAX request, and then construct our own HTTP request to obtain the data.
By looking at the XHR request of the target website, we can find that the actual requested URL for the Python course is http://www.imooc.com/course/AjaxCourseMore?&page=1.
The page parameter in the request indicates the page number to be accessed. We can send an HTTP GET request to this URL and parse the returned result.
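As a small illustrative sketch (only the URL and the page parameter come from the observation above; the helper function itself is hypothetical), the URL for any page can be assembled like this:

<?php
// Build the AJAX URL for a given page of the course list (page numbers start at 1).
function buildCourseListUrl(int $page): string
{
    return 'http://www.imooc.com/course/AjaxCourseMore?' . http_build_query(['page' => $page]);
}

echo buildCourseListUrl(1) . "\n"; // http://www.imooc.com/course/AjaxCourseMore?page=1
echo buildCourseListUrl(2) . "\n"; // http://www.imooc.com/course/AjaxCourseMore?page=2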
3. Write a crawler program
In the previous step we obtained the list URL for the target website's Python courses. Now we only need to write PHP code that uses Guzzle HTTP Client to send the HTTP request and then parses the returned result.
First, we need to introduce the Guzzle HTTP Client library. Add the following code at the top of the PHP file:
require 'vendor/autoload.php';
Then create a Guzzle HTTP Client object:
$client = new GuzzleHttp\Client();
Next, we can use this object to send an HTTP request:
$response = $client->request('GET', 'http://www.imooc.com/course/AjaxCourseMore?&page=1');
In the above code, we call the request() method of the Guzzle HTTP Client object, specifying GET as the request method and the URL obtained in the previous step as the target.
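Before parsing, it is worth checking the response object. As a brief sketch using standard Guzzle 6 / PSR-7 response methods, the status code and body returned by the request above can be read like this:

// request() returns a PSR-7 response; check the status before parsing.
if ($response->getStatusCode() === 200) {
    $html = (string) $response->getBody(); // cast the body stream to a string
    // ... hand $html to DOMDocument for parsing (shown below)
} else {
    echo 'Request failed with HTTP status ' . $response->getStatusCode() . "\n";
}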
Finally, we need to extract the course information from the HTTP response. By inspecting the response content, we can see that each course's information is contained in an HTML element whose class attribute is course-card-container.
We can use PHP's DOMDocument class together with DOMXPath to traverse the HTML and pick out the elements that match this condition.
The final code implementation is as follows:
require 'vendor/autoload.php';
use GuzzleHttp\Client;
$client = new Client([
'base_uri' => 'http://www.imooc.com'
]);
$response = $client->request('GET', '/course/AjaxCourseMore?&page=1');
if ($response->getStatusCode() == 200) {
    $dom = new DOMDocument();
    @$dom->loadHTML($response->getBody());
    $xpath = new DOMXPath($dom);
    $items = $xpath->query("//div[@class='course-card-container']");
    foreach ($items as $item) {
        $courseName = trim($xpath->query(".//h3[@class='course-card-name']/a", $item)->item(0)->textContent);
        $courseId = trim($xpath->query(".//div[@class='clearfix']/a[@class='course-card']", $item)->item(0)->getAttribute('href'));
        $courseDifficulty = trim($xpath->query(".//p[@class='course-card-desc']", $item)->item(0)->textContent);
        $courseDuration = trim($xpath->query(".//div[@class='course-card-info']/span[@class='course-card-time']", $item)->item(0)->textContent);
        $courseLink = trim($xpath->query(".//h3[@class='course-card-name']/a", $item)->item(0)->getAttribute('href'));

        // Store the captured data in the MySQL database
        // ...

        echo "Course name: " . $courseName . "\n";
        echo "Course ID: " . $courseId . "\n";
        echo "Course difficulty: " . $courseDifficulty . "\n";
        echo "Course duration: " . $courseDuration . "\n";
        echo "Course link: " . $courseLink . "\n";
    }
}
We use DOMDocument to read the HTML response content, then use DOMXPath to traverse the matching tags, and finally print the captured information to the screen.
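One caveat: item(0) returns null when an XPath expression matches nothing, so calls such as getAttribute() on the result would fail at runtime if the page layout changes. A small helper like the following (hypothetical, not part of the original code) guards against that:

// Return the trimmed text content of the first node matched by $expr under $context,
// or an empty string if nothing matches, so we never call methods on null.
function firstText(DOMXPath $xpath, string $expr, DOMNode $context): string
{
    $nodes = $xpath->query($expr, $context);
    return ($nodes !== false && $nodes->length > 0) ? trim($nodes->item(0)->textContent) : '';
}

With such a helper, $courseName = firstText($xpath, ".//h3[@class='course-card-name']/a", $item); simply yields an empty string instead of an error when a selector stops matching.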
4. Store data
Now we have successfully captured the Python course information and printed it to the screen. However, printing the data to the screen is of limited use; we need to save it to the database.
In the MySQL database, we created a table to store information about Python courses. The table structure is as follows:
CREATE TABLE `python_courses` (
    `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
    `course_name` varchar(255) NOT NULL DEFAULT '',
    `course_id` varchar(255) NOT NULL DEFAULT '',
    `course_difficulty` varchar(255) NOT NULL DEFAULT '',
    `course_duration` varchar(255) NOT NULL DEFAULT '',
    `course_link` varchar(255) NOT NULL DEFAULT '',
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
In the code, we use PDO to connect to the MySQL database, and use the prepare() method and execute() method to perform the insertion operation. The final code is as follows:
require 'vendor/autoload.php';
use GuzzleHttp\Client;

$client = new Client([
    'base_uri' => 'http://www.imooc.com'
]);
$response = $client->request('GET', '/course/AjaxCourseMore?&page=1');
if ($response->getStatusCode() == 200) {
    $dom = new DOMDocument();
    @$dom->loadHTML($response->getBody());
    $xpath = new DOMXPath($dom);
    $items = $xpath->query("//div[@class='course-card-container']");

    $dsn = 'mysql:host=localhost;dbname=test';
    $username = 'root';
    $password = '';
    $pdo = new PDO($dsn, $username, $password, [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]);
    $stmt = $pdo->prepare("INSERT INTO `python_courses` (`course_name`, `course_id`, `course_difficulty`, `course_duration`, `course_link`) VALUES (:course_name, :course_id, :course_difficulty, :course_duration, :course_link)");

    foreach ($items as $item) {
        $courseName = trim($xpath->query(".//h3[@class='course-card-name']/a", $item)->item(0)->textContent);
        $courseId = trim($xpath->query(".//div[@class='clearfix']/a[@class='course-card']", $item)->item(0)->getAttribute('href'));
        $courseDifficulty = trim($xpath->query(".//p[@class='course-card-desc']", $item)->item(0)->textContent);
        $courseDuration = trim($xpath->query(".//div[@class='course-card-info']/span[@class='course-card-time']", $item)->item(0)->textContent);
        $courseLink = trim($xpath->query(".//h3[@class='course-card-name']/a", $item)->item(0)->getAttribute('href'));

        $stmt->bindParam(':course_name', $courseName);
        $stmt->bindParam(':course_id', $courseId);
        $stmt->bindParam(':course_difficulty', $courseDifficulty);
        $stmt->bindParam(':course_duration', $courseDuration);
        $stmt->bindParam(':course_link', $courseLink);
        $stmt->execute();
    }
}
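The script above handles only page=1 and a fixed DSN. As a hedged extension sketch (the page limit, the sleep interval, and the charset=utf8 option in the DSN are illustrative choices, not part of the original code), the same logic can be wrapped in a loop over several pages:

<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

$client = new Client(['base_uri' => 'http://www.imooc.com']);

// charset=utf8 keeps Chinese course names intact when inserting into MySQL
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'root', '', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);
$stmt = $pdo->prepare("INSERT INTO `python_courses` (`course_name`, `course_id`, `course_difficulty`, `course_duration`, `course_link`)
                       VALUES (:course_name, :course_id, :course_difficulty, :course_duration, :course_link)");

$maxPages = 5; // illustrative upper bound; adjust to the number of pages actually available
for ($page = 1; $page <= $maxPages; $page++) {
    $response = $client->request('GET', '/course/AjaxCourseMore?&page=' . $page);
    if ($response->getStatusCode() !== 200) {
        break; // stop on the first failed request
    }

    $dom = new DOMDocument();
    @$dom->loadHTML((string) $response->getBody());
    $xpath = new DOMXPath($dom);
    $items = $xpath->query("//div[@class='course-card-container']");
    if ($items->length === 0) {
        break; // no more courses on this page; assume we have reached the end
    }

    foreach ($items as $item) {
        // ... extract $courseName, $courseId, etc. exactly as in the code above ...
        // ... bind the parameters and call $stmt->execute() as before ...
    }

    sleep(1); // be polite to the server between pages
}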
Now we have successfully built a simple PHP crawler for scraping Python course information from MOOC. With this example as a guide, you should be able to write your own crawler in PHP and obtain the data you need.