
How to use Scrapy to crawl Douban books and their ratings and comments?

WBOY | Original | 2023-06-22

As the Internet has grown, people rely on it more and more to find information. For book lovers, Douban Books has become an indispensable platform: it offers a wealth of ratings and reviews that help readers understand a book more fully. Collecting this information by hand, however, is like looking for a needle in a haystack. This is where the Scrapy tool can crawl the data for us.

Scrapy is an open-source web crawler framework written in Python that helps us extract data from websites efficiently. In this article, I will walk through, step by step, how to use Scrapy to crawl Douban books along with their ratings and comments.

Step One: Install Scrapy

First, you need to install Scrapy on your computer. If you have installed pip (Python package management tool), you only need to enter the following command in the terminal or command line:

pip install scrapy

This installs Scrapy on your computer. If an error or warning appears, adjust your environment according to the message (for example, upgrade pip or work inside a virtual environment).
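To confirm the installation worked, you can ask Scrapy for its version:

scrapy version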

Step 2: Create a new Scrapy project

Next, we need to enter the following command in the terminal or command line to create a new Scrapy project:

scrapy startproject douban

This command creates a folder named douban in the current directory, containing Scrapy's basic files and directory structure.
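The generated layout looks roughly like this (file names may vary slightly between Scrapy versions):

douban/
    scrapy.cfg            # deploy/configuration file
    douban/
        __init__.py
        items.py          # item definitions
        middlewares.py    # spider and downloader middlewares
        pipelines.py      # item pipelines
        settings.py       # project settings
        spiders/          # spider modules live here
            __init__.py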

Step 3: Write a crawler program

In Scrapy, we write a spider to tell the framework how to extract data from a website. Create a new file named douban_spider.py inside the douban/douban/spiders/ directory and add the following code:

import scrapy


class DoubanSpider(scrapy.Spider):
    name = 'douban'
    allowed_domains = ['book.douban.com']
    start_urls = ['https://book.douban.com/top250']

    def parse(self, response):
        # Each book on the Top 250 page sits in a <tr class="item"> row.
        books = response.xpath('//tr[@class="item"]')
        for book in books:
            title = book.xpath('td[2]/div[1]/a/@title').extract_first()
            author = book.xpath('td[2]/div[1]/span[1]/text()').extract_first()
            score = book.xpath('td[2]/div[2]/span[@class="rating_nums"]/text()').extract_first()
            comment_count = book.xpath('td[2]/div[2]/span[@class="pl"]/text()').extract_first()
            if comment_count:
                # The review count is wrapped in parentheses; remove them
                # along with any surrounding whitespace.
                comment_count = comment_count.strip().strip('()').strip()
            yield {
                'title': title,
                'author': author,
                'score': score,
                'comment_count': comment_count,
            }

The above code does two things:

  1. Crawls the book title, author, rating and number of reviews for each entry on the Douban Top 250 Books page.
  2. Returns the crawled data as a dictionary for each book.

In this program, we first define a DoubanSpider class and specify the spider's name, the domain it is allowed to crawl, and its starting URL. In the parse method, we query the HTML page with response.xpath() and use XPath expressions to pull out the relevant information for each book.

After obtaining the data, we use the yield keyword to return it as a dictionary. yield turns parse into a generator, so items are produced one at a time rather than collected into a list, which lets Scrapy process and export them efficiently as the crawl proceeds.
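Note that start_urls only covers the first page of 25 books. A minimal sketch of how to reach the remaining pages is to follow the paginator link at the end of parse; the XPath below assumes Douban uses a "next" button inside a span with class next, so verify it against the live page before relying on it:

    def parse(self, response):
        # ... yield the book dictionaries as shown above ...

        # Follow the "next page" link, assuming the paginator is a
        # <span class="next"> element; adjust the XPath if Douban's markup differs.
        next_page = response.xpath('//span[@class="next"]/a/@href').extract_first()
        if next_page:
            yield response.follow(next_page, callback=self.parse)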

Step 4: Run the crawler program

After writing the crawler program, we need to run the following command in the terminal or command line to start it:

scrapy crawl douban -o result.json

This command starts the spider named douban and writes the crawled data to the result.json file in JSON format.
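The book data contains Chinese text, which the JSON exporter may write as escaped \uXXXX sequences by default. If that happens, adding one line to the project's settings.py keeps the output readable:

# douban/settings.py
FEED_EXPORT_ENCODING = 'utf-8'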

Through the above four steps, we can successfully crawl Douban books together with their rating and review information. Of course, to further improve the crawler's efficiency and stability, you will need some additional tuning, such as adding a download delay and dealing with anti-crawling measures, as sketched below.
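For instance, a few commonly used settings can go into settings.py; the values below are only illustrative assumptions and should be tuned for your own crawl:

# douban/settings.py -- illustrative values, adjust as needed
DOWNLOAD_DELAY = 2                  # pause between requests to avoid hammering the site
RANDOMIZE_DOWNLOAD_DELAY = True     # add jitter to the delay
AUTOTHROTTLE_ENABLED = True         # let Scrapy adapt its speed to server responses
USER_AGENT = 'Mozilla/5.0 (compatible; book-crawler-demo)'  # hypothetical UA string; Douban may reject Scrapy's default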

In short, using Scrapy to crawl Douban books and their ratings and reviews is a fairly simple and rewarding exercise. If you are interested in data crawling and Python programming, try applying the same approach to other websites to sharpen your skills.

