How to use Scrapy to crawl Kugou Music songs?

PHPz | Released: 2023-06-22 22:59:21 | Original | 2576 views

With the development of the Internet, the amount of information online keeps growing, and people need to crawl different websites to perform all kinds of analysis and mining. Scrapy is a full-featured Python crawler framework that can automatically crawl website data and output it in structured form. Below I will introduce how to use Scrapy to crawl song information from Kuwo Music (www.kuwo.cn), one of the most popular online music platforms in China. (Although the title mentions Kugou, the site actually crawled throughout this article is Kuwo.)

1. Install Scrapy

Scrapy is a framework based on the Python language, so you first need a working Python environment: install Python and pip before installing Scrapy. Once they are in place, install Scrapy with the following command:

pip install scrapy

2. Create a new Scrapy project

Scrapy provides a set of command-line tools that make it easy to create a new project. Enter the following command:

scrapy startproject kuwo_music

After execution, a Scrapy project named "kuwo_music" will be created in the current directory. In this project, we will create a crawler to fetch song information from Kuwo Music.
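If the command succeeds, the generated project should look roughly like this (the standard layout Scrapy produces; file names can vary slightly between Scrapy versions):

```
kuwo_music/
├── scrapy.cfg           # deployment configuration
└── kuwo_music/
    ├── __init__.py
    ├── items.py         # item definitions (e.g. KuwoMusicItem)
    ├── middlewares.py   # spider/downloader middlewares
    ├── pipelines.py     # item pipelines (storage, export)
    ├── settings.py      # project settings
    └── spiders/         # spider code goes here
        └── __init__.py
```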

3. Create a new crawler

In a Scrapy project, a spider is a program that crawls and parses data from a specific website. In the "kuwo_music" project directory, execute the following command:

scrapy genspider kuwo www.kuwo.cn 

The above command will create a file named "kuwo.py" in the "kuwo_music/spiders" directory; this is our spider code, where we define how the site's data is requested and parsed.

4. Website request and page parsing

In the newly created "kuwo.py" file, you first need to import the necessary modules:

import json

import scrapy
from scrapy.utils.project import get_project_settings
from scrapy_redis.spiders import RedisSpider
from scrapy_redis.connection import get_redis_from_settings

from kuwo_music.items import KuwoMusicItem

The imports above give us access to the tool classes and methods of the Scrapy framework and the scrapy_redis extension, as well as the custom modules of our project (note that `json` is needed by the parsing code below). Before writing the spider itself, we need to analyze the web page where Kuwo Music's song information lives.

Open a browser, visit www.kuwo.cn, type a song name into the search bar, and search: the page jumps to a search results page. There you can see each matching song's information, such as song name, artist, playing time, and so on. We need to send a request through Scrapy and parse this search results page to obtain each song's details.
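Before writing the spider, it helps to see what the search endpoint returns. The response is JSON, and the only part the spider relies on is `data` → `list`. A sketch with a hypothetical, heavily trimmed payload (the real response contains many more fields):

```python
import json

# Hypothetical, heavily trimmed sample of the search API's JSON response.
sample = '''
{
  "data": {
    "list": [
      {"musicrid": "MUSIC_93920384",
       "name": "说散就散",
       "artist": "JC",
       "album": "说散就散"}
    ]
  }
}
'''

data = json.loads(sample)
songs = data['data']['list']          # the list the spider iterates over
for song in songs:
    print(song['name'], '-', song['artist'])
```

The real spider receives this JSON as `response.text` and walks exactly this `data['data']['list']` structure.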

In the crawler code, we need to implement the following two methods:

def start_requests(self):
    ...
    
def parse(self, response):
    ...

The start_requests() method sends the initial web page requests and designates parse() as their callback; parse() then parses each response and extracts the data. The full spider looks like this:

class KuwoSpider(RedisSpider):
    name = 'kuwo'
    allowed_domains = ['kuwo.cn']
    # Redis client built from the project settings (provided by scrapy_redis)
    redis_cli = get_redis_from_settings(get_project_settings())

    def start_requests(self):
        keywords = ['爱情', '妳太善良', '说散就散']
        # URL of the search results page for each keyword
        for keyword in keywords:
            url = f'http://www.kuwo.cn/search/list?key={keyword}&rformat=json&ft=music&encoding=utf8&rn=8&pn=1'
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        data = json.loads(response.text)
        # Extract each song's information from the search results page
        song_list = data['data']['list']
        for song in song_list:
            music_id = song['musicrid'][6:]  # strip the 'MUSIC_' prefix
            song_name = song['name']
            singer_name = song['artist']
            album_name = song['album']

            # Request the song's detailed information (playback URL) by its id
            url = f'http://www.kuwo.cn/url?format=mp3&rid=MUSIC_{music_id}&response=url&type=convert_url3&br=128kmp3&from=web&t=1639056420390&httpsStatus=1&reqId=6be77da1-4325-11ec-b08e-11263642326e'
            meta = {'song_name': song_name, 'singer_name': singer_name, 'album_name': album_name}
            yield scrapy.Request(url=url, callback=self.parse_song, meta=meta)

    def parse_song(self, response):
        item = KuwoMusicItem()
        item['song_name'] = response.meta.get('song_name')
        item['singer_name'] = response.meta.get('singer_name')
        item['album_name'] = response.meta.get('album_name')
        item['song_url'] = response.text.strip()
        yield item

In the above code, the start_requests() method defines the keywords to search for, builds the URL of each search results page, and sends the requests. In parse(), we parse the search results page and extract each song's information, including song name, artist, and album. Then, from each song's id, we build the URL that returns the song's playback address, passing the song name, singer, and album along via Scrapy's request metadata (meta) mechanism. Finally, in parse_song() we read the playback address from the response and emit it in a custom KuwoMusicItem object.
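To make the id handling concrete: 'musicrid' values have the form 'MUSIC_&lt;digits&gt;', so song['musicrid'][6:] drops the fixed six-character 'MUSIC_' prefix. A small sketch (the helper name and sample id are made up for illustration; the long t= and reqId= query parameters in the spider's URL, which look copied from a browser session, are omitted here):

```python
def song_detail_url(music_id: str) -> str:
    # Hypothetical helper mirroring how parse() builds the playback-URL
    # request (time-stamp and request-id query parameters omitted).
    return (f'http://www.kuwo.cn/url?format=mp3&rid=MUSIC_{music_id}'
            '&response=url&type=convert_url3&br=128kmp3&from=web')

rid = 'MUSIC_93920384'   # example 'musicrid' value
music_id = rid[6:]       # drops 'MUSIC_' (len('MUSIC_') == 6)
print(song_detail_url(music_id))
```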

5. Data storage and use

In the above code, the crawled song information is stored in our custom KuwoMusicItem objects. We can use the RedisPipeline class provided by scrapy_redis to store the crawled data in a Redis database:

ITEM_PIPELINES = {
    'scrapy_redis.pipelines.RedisPipeline': 300,
}
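RedisPipeline also needs to know how to reach Redis. A minimal sketch for settings.py, assuming a local Redis instance on the default port:

```python
# settings.py -- connection details scrapy_redis reads (assumed local Redis)
REDIS_HOST = 'localhost'
REDIS_PORT = 6379
# Alternatively, a single connection URL:
# REDIS_URL = 'redis://localhost:6379/0'
```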

Alternatively, we can write a small custom pipeline that stores the data in a local CSV file using Python's built-in csv module:

import csv

class CsvPipeline(object):
    # Store crawled items in a local CSV file
    def __init__(self):
        self.file = open('kuwo_music.csv', 'w', encoding='utf-8', newline='')
        self.writer = csv.writer(self.file)
        # Header row
        self.writer.writerow(['song_name', 'singer_name', 'album_name', 'song_url'])

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        self.writer.writerow([item['song_name'], item['singer_name'], item['album_name'], item['song_url']])
        return item
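To activate the CSV pipeline (instead of, or alongside, the Redis one), register it in settings.py. A sketch, assuming the class above lives in kuwo_music/pipelines.py:

```python
# settings.py -- register the CSV pipeline; lower numbers run earlier
ITEM_PIPELINES = {
    'kuwo_music.pipelines.CsvPipeline': 400,
}
```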

Finally, execute the following command on the command line to start the Scrapy crawler:

scrapy crawl kuwo

The above is a detailed walkthrough of using the Scrapy framework to crawl song information from Kuwo Music. I hope it provides some reference and help.
