
Python crawler audio data example

PHP中文网 · Original · 2017-06-21 17:16:24

1: Foreword

This time we crawl the information of every channel of every radio station under Himalaya's popular column, along with the details of each audio file in those channels, and save the crawled data to MongoDB for later use. The data set comes to around 700,000 records; each audio record includes the audio download address, channel information, an introduction, and more.
I had my first interview yesterday, with an artificial intelligence and big data company, for an internship during the summer vacation of my sophomore year. They asked whether I had ever crawled audio data, so I went and crawled Himalaya's audio data for analysis. At the moment I am still waiting for the third interview, or for news of the final round. (Because I received some recognition, I am happy regardless of success or failure.)
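
Since the point of saving to MongoDB is later reuse, here is a minimal sketch of reading the stored records back with pymongo, assuming the database and collection names ('XiMaLaYa', 'album', 'detaile') used in the full code later in this article:

import pymongo

# Same connection, database, and collection names as the full code below.
client = pymongo.MongoClient('localhost')
db = client['XiMaLaYa']

print(db['album'].count())       # number of channels saved
print(db['detaile'].find_one())  # one audio record, as a spot check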


2: Running environment

  • IDE: Pycharm 2017

  • Python3.6

  • pymongo 3.4.0

  • requests 2.14.2

  • lxml 3.7.2

  • BeautifulSoup 4.5.3

3: Example Analysis

1. First, open the main page for this crawl. Each page shows 12 channels, each channel holds a lot of audio, and some channels span many pages. The crawl plan: loop through all 84 pages, parse each one, grab each channel's name, image link, and channel link, and save them to MongoDB.

[Screenshot: popular channels]
2. Open developer mode and analyze the page to quickly locate the data you want. The following code grabs the information of all the popular channels; the full version at the end also writes it to MongoDB.
import requests
from bs4 import BeautifulSoup

# headers1 is defined in the full code at the end of the article
start_urls = ['http://www.ximalaya.com/dq/all/{}'.format(num) for num in range(1, 85)]
for start_url in start_urls:
    html = requests.get(start_url, headers=headers1).text
    soup = BeautifulSoup(html, 'lxml')
    for item in soup.find_all(class_="albumfaceOutter"):
        content = {
            'href': item.a['href'],
            'title': item.img['alt'],
            'img_url': item.img['src']
        }
        print(content)

[Screenshot: channel analysis]
3. Next, obtain all the audio data in each channel, using the channel links collected by parsing the pages above. For example, enter one of those links and analyze the page structure. You can see that every audio has a specific ID, which can be obtained from an attribute of a div; use split() to separate them and int() to convert each one into an individual numeric ID.

[Screenshot: channel page analysis]
4. Then click an audio link, enter developer mode, refresh the page, and click XHR. Click one of the JSON links and you can see it contains all the details of this audio (a hypothetical field-access sketch follows the code below).
import json
import requests
from lxml import etree

# url is a channel link collected in the previous step
html = requests.get(url, headers=headers2).text
numlist = etree.HTML(html).xpath('//div[@class="personal_body"]/@sound_ids')[0].split(',')
for i in numlist:
    murl = 'http://www.ximalaya.com/tracks/{}.json'.format(i)
    html = requests.get(murl, headers=headers1).text
    dic = json.loads(html)

[Screenshot: audio page analysis]
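
The article does not reproduce the track JSON's schema, so the following is only a hypothetical sketch of picking fields out of the parsed dict dic; the key names ('title', 'play_path') are illustrative assumptions, not confirmed fields of Himalaya's API:

# Hypothetical keys: inspect one record in the XHR panel first and
# substitute the real field names, since these are assumptions.
title = dic.get('title')          # e.g. the audio's display name
audio_url = dic.get('play_path')  # e.g. the audio download address
print(title, audio_url)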
5. The above only parses the audio information on a channel's main page, but in fact most channels' audio lists span many pages.

html = requests.get(url, headers=headers2).text
ifanother = etree.HTML(html).xpath('//div[@class="pagingBar_wrapper"]/a[last()-1]/@data-page')
if len(ifanother):
    num = ifanother[0]
    print('This channel has ' + num + ' pages of resources')
    for n in range(2, int(num) + 1):  # page 1 is the base url, parsed separately
        print('Parsing page {} of {}'.format(n, num))
        url2 = url + '?page={}'.format(n)
        # then just call the audio-page parsing function; see the full code below

[Screenshot: paging]
6. Full code
The full code is available at github.com/rieuse/learnPython

__author__ = '布咯咯_rieuse'

import json
import random  # used for User-Agent rotation (not shown; see the note below)
import time

import pymongo
import requests
from bs4 import BeautifulSoup
from lxml import etree

clients = pymongo.MongoClient('localhost')
db = clients["XiMaLaYa"]
col1 = db["album"]
col2 = db["detaile"]

UA_LIST = []   # many User-Agents, rotated randomly to avoid bans; omitted here for brevity
headers1 = {}  # headers for page requests; omitted here for brevity
headers2 = {}  # headers for page requests; omitted here for brevity


def get_url():
    start_urls = ['http://www.ximalaya.com/dq/all/{}'.format(num) for num in range(1, 85)]
    for start_url in start_urls:
        html = requests.get(start_url, headers=headers1).text
        soup = BeautifulSoup(html, 'lxml')
        for item in soup.find_all(class_="albumfaceOutter"):
            content = {
                'href': item.a['href'],
                'title': item.img['alt'],
                'img_url': item.img['src']
            }
            col1.insert_one(content)
            print('Wrote one channel: ' + item.a['href'])
            print(content)
            another(item.a['href'])
        time.sleep(1)


def another(url):
    html = requests.get(url, headers=headers2).text
    ifanother = etree.HTML(html).xpath('//div[@class="pagingBar_wrapper"]/a[last()-1]/@data-page')
    if len(ifanother):
        num = ifanother[0]
        print('This channel has ' + num + ' pages of resources')
        for n in range(2, int(num) + 1):  # page 1 is the base url, handled by get_m4a(url) below
            print('Parsing page {} of {}'.format(n, num))
            url2 = url + '?page={}'.format(n)
            get_m4a(url2)
    get_m4a(url)


def get_m4a(url):
    time.sleep(1)
    html = requests.get(url, headers=headers2).text
    numlist = etree.HTML(html).xpath('//div[@class="personal_body"]/@sound_ids')[0].split(',')
    for i in numlist:
        murl = 'http://www.ximalaya.com/tracks/{}.json'.format(i)
        html = requests.get(murl, headers=headers1).text
        dic = json.loads(html)
        col2.insert_one(dic)
        print('Data from ' + murl + ' was inserted into MongoDB')


if __name__ == '__main__':
    get_url()
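
The author omits UA_LIST and the headers, and random is imported but never used in the listing above, presumably for User-Agent rotation. A minimal sketch of that pattern, with placeholder UA strings as assumptions:

import random

import requests

# Placeholder User-Agent strings; the author's real UA_LIST is not shown.
UA_LIST = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_5)',
]

def fetch(url):
    # Pick a different User-Agent per request to make bans less likely.
    headers = {'User-Agent': random.choice(UA_LIST)}
    return requests.get(url, headers=headers).text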

7. It can be faster if changed to an asynchronous form; just change it as shown below. I got nearly 100 more records per minute than with the normal version. The async source code is also on GitHub (a minimal sketch of one possible structure follows).

[Screenshot: asynchronous version]
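
The asynchronous source is linked rather than reproduced in the article, so below is only a minimal sketch of one way to parallelize the per-track JSON requests with asyncio and aiohttp; aiohttp is an assumption here, and the original may be structured differently. It reuses headers1, col2, and numlist from the code above:

import asyncio
import json

import aiohttp

async def fetch_track(session, track_id):
    # Fetch one track's JSON details concurrently.
    murl = 'http://www.ximalaya.com/tracks/{}.json'.format(track_id)
    async with session.get(murl) as resp:
        return json.loads(await resp.text())

async def crawl(track_ids):
    # headers1 and col2 are the same objects as in the synchronous code above.
    async with aiohttp.ClientSession(headers=headers1) as session:
        tasks = [fetch_track(session, i) for i in track_ids]
        for dic in await asyncio.gather(*tasks):
            col2.insert_one(dic)

# numlist is the list of sound IDs parsed in get_m4a().
loop = asyncio.get_event_loop()
loop.run_until_complete(crawl(numlist))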

