Scrapy is a Python web crawling framework that makes it easy to collect data from the Internet. Zhihu is a popular social question-and-answer platform, and Scrapy can quickly capture its questions, answers, user information, and other data. This article introduces how to use Scrapy to crawl Zhihu data.
- Installing Scrapy
First you need to install Scrapy. You can use the pip command to install it directly:
pip install scrapy
- Create Scrapy project
In the terminal, change to the directory where you want the Scrapy project to live and run the following command to create it:
scrapy startproject zhihu
This command will create a Scrapy project named "zhihu" in the current directory.
- Create Spider
Use the following command to create a Spider file named "zhihu_spider.py" in the project directory:
scrapy genspider zhihu_spider zhihu.com
This command creates a "zhihu_spider.py" file in the "spiders" subdirectory of the project. The file contains a Spider with zhihu.com as its starting domain.
- Write Spider code
Open the "zhihu_spider.py" file and add the following code:
import scrapy

class ZhihuSpider(scrapy.Spider):
    name = 'zhihu'
    allowed_domains = ['zhihu.com']
    start_urls = ['https://www.zhihu.com/']

    def parse(self, response):
        pass

This code defines a Spider class named "ZhihuSpider". The Spider class needs to define the following attributes:
- name: the Spider's name
- allowed_domains: the domains the Spider is allowed to crawl
- start_urls: the Spider's starting URLs
In this example, the Spider's starting URL is set to zhihu.com. The Spider must also contain a method called "parse" that processes the data returned in the response. In this example, the "parse" method is not implemented yet, so an empty "pass" statement is used as a placeholder.
- Parse page data
After creating the Spider, you need to add code to parse the page data. Replace the "parse" method with the following code:
def parse(self, response):
    questions = response.css('div[data-type="question"]')
    for question in questions:
        yield {
            'question': question.css('h2 a::text').get(),
            'link': question.css('h2 a::attr(href)').get(),
            'answers': question.css('div.zm-item-answer::text').getall(),
        }

This code selects every div element in the page whose "data-type" attribute equals "question", then loops over each one to extract the question title, link, and list of answers.
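Outside of Scrapy, the same extraction logic can be sketched with only the standard library. The HTML snippet and element structure below are assumptions for illustration, not Zhihu's real markup:

```python
import xml.etree.ElementTree as ET

# A tiny, well-formed stand-in for one question block (assumed markup).
html = (
    '<div data-type="question">'
    '<h2><a href="/question/1">What is Scrapy?</a></h2>'
    '</div>'
)

root = ET.fromstring(html)
link = root.find('h2/a')      # same idea as the 'h2 a' CSS selector
item = {
    'question': link.text,    # corresponds to 'h2 a::text'
    'link': link.get('href'), # corresponds to 'h2 a::attr(href)'
}
print(item)
```

In a real Spider, Scrapy's response.css() handles messy real-world HTML; this sketch only illustrates the structure of the extraction.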
In the above code, "yield" is a Python keyword used to create a generator. A generator is an iterator that produces its elements one at a time, pausing execution after each element is returned. In Scrapy, the "yield" keyword is used to hand each item of parsed page data back to the framework.
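To make the generator behavior concrete, here is a minimal standalone sketch of a parse-style function yielding item dicts (the sample data is made up):

```python
def parse(questions):
    # Yield one item dict per question, much like Scrapy's parse() does.
    for title, link in questions:
        yield {'question': title, 'link': link}

gen = parse([('Q1', '/q/1'), ('Q2', '/q/2')])
print(next(gen))   # items are produced lazily, one at a time
print(list(gen))   # the remaining items
```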
- Run the crawler
After you finish writing the code, use the following command to run the crawler in the terminal:
scrapy crawl zhihu
This command starts the Scrapy framework and begins crawling Zhihu data. Scrapy automatically visits the starting URL specified in the Spider and parses the returned page through the "parse" method. The parsed data is output to the terminal; if you need to save it, you can export it to a CSV or JSON file.
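Scrapy's built-in feed exports can write the output directly with `scrapy crawl zhihu -o items.json`. The saving step can also be sketched manually with the standard library; the items below are made-up samples standing in for what the Spider yields:

```python
import csv
import json

# Items as yielded by the parse() method (sample data for illustration).
items = [
    {'question': 'Q1', 'link': '/q/1'},
    {'question': 'Q2', 'link': '/q/2'},
]

# Save as JSON.
with open('items.json', 'w', encoding='utf-8') as f:
    json.dump(items, f, ensure_ascii=False, indent=2)

# Save as CSV.
with open('items.csv', 'w', encoding='utf-8', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['question', 'link'])
    writer.writeheader()
    writer.writerows(items)
```

In practice, prefer the `-o` flag or a FEEDS setting so Scrapy handles serialization for you.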
- Crawling user data
The above code can only crawl questions and answers; it cannot obtain user information. If you need to crawl user data, you need to use Zhihu's API. In the Spider, you can use the following code to request the JSON data returned by the API:
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36'
}
url = f'https://www.zhihu.com/api/v4/members/{user}?include=following_count,follower_count,badge[?(type=best_answerer)].topics&limit=20'
yield scrapy.Request(url, headers=headers, callback=self.parse_user)

This code requests the specified user's information from the API. Here, an f-string is used to insert the username of the user to be fetched into the URL.
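The URL construction itself can be checked in isolation; "example_user" below is a hypothetical username, not a real account:

```python
user = 'example_user'  # hypothetical username of the member to fetch
url = (
    f'https://www.zhihu.com/api/v4/members/{user}'
    '?include=following_count,follower_count,'
    'badge[?(type=best_answerer)].topics&limit=20'
)
print(url)
```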
In the callback function, use the following code to extract the required fields from the JSON data:
import json

def parse_user(self, response):
    data = json.loads(response.body)['data']
    following_count = data['following_count']
    follower_count = data['follower_count']
    best_answerer = data['badge'][0]['topics']
    yield {
        'user_id': data['id'],
        'name': data['name'],
        'headline': data['headline'],
        'following_count': following_count,
        'follower_count': follower_count,
        'best_answerer': best_answerer,
    }

This code extracts the user ID, nickname, headline, number of users followed, number of followers, and best-answerer topics from the JSON data. Note that the "import json" statement must go at the top of the Spider file.
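The field extraction in "parse_user" can be exercised against a handcrafted response body. The JSON below merely mimics the structure the code expects; it is not real API output:

```python
import json

# A made-up response body with the structure parse_user relies on.
body = json.dumps({
    'data': {
        'id': 'abc123',
        'name': 'Example User',
        'headline': 'Just an example',
        'following_count': 10,
        'follower_count': 250,
        'badge': [{'topics': ['python']}],
    }
})

data = json.loads(body)['data']
item = {
    'user_id': data['id'],
    'name': data['name'],
    'headline': data['headline'],
    'following_count': data['following_count'],
    'follower_count': data['follower_count'],
    'best_answerer': data['badge'][0]['topics'],
}
print(item)
```

Note that data['badge'][0] raises an IndexError if the user has no badges, so production code should guard that lookup.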
- Summary
This article introduced how to use Scrapy to crawl Zhihu data. First, create a Scrapy project and a Spider. Then, use CSS selectors to parse the page data and yield the crawled items from the generator. Finally, save the data to CSV or JSON files, or output it directly to the terminal. If you need user data, use the Zhihu API and extract the relevant fields from the returned JSON.
The above is the detailed content of How to use Scrapy to crawl Zhihu data?. For more information, please follow other related articles on the PHP Chinese website!