Python crawler audio data example
This time we crawl the information for every channel of all the stations under Ximalaya's popular column, along with the details of each audio track inside every channel, and save the scraped data to MongoDB for later use. The data set comes to roughly 700,000 records. The per-track data includes the audio download address, channel information, an introduction, and a fair amount more.
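Since the point of the crawl is to keep the data for later use, here is a minimal sketch of reading it back out of MongoDB afterwards (it assumes a local mongod and the database and collection names used by the full script at the end):

```python
import pymongo

# Connect to the same local MongoDB instance the crawler writes to.
client = pymongo.MongoClient('localhost')
db = client['XiMaLaYa']

# 'album' holds the channel listings, 'detaile' the per-track JSON.
for doc in db['album'].find().limit(5):
    print(doc)
```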
I had my first interview yesterday, with an artificial-intelligence and big-data company, for an internship in the summer after my sophomore year. They asked whether I had ever crawled audio data, so I set out to scrape Ximalaya's audio data as practice. At the moment I am still waiting for news of a third or final interview. (Getting some recognition made me happy regardless of how it turns out.)
Environment: PyCharm 2017, Python 3.6. Third-party libraries used: requests, BeautifulSoup (bs4), lxml, and pymongo.
First, collect every channel under the popular column. The listing pages live at http://www.ximalaya.com/dq/all/1 through /84, and each album card (class albumfaceOutter) yields the channel link, title, and cover image:

```python
start_urls = ['http://www.ximalaya.com/dq/all/{}'.format(num) for num in range(1, 85)]
for start_url in start_urls:
    html = requests.get(start_url, headers=headers1).text
    soup = BeautifulSoup(html, 'lxml')
    for item in soup.find_all(class_="albumfaceOutter"):
        content = {
            'href': item.a['href'],
            'title': item.img['alt'],
            'img_url': item.img['src']
        }
        print(content)
```
Each channel page carries the IDs of all its tracks in the sound_ids attribute of the personal_body div, and every ID maps to a JSON endpoint holding that track's full details:

```python
html = requests.get(url, headers=headers2).text
numlist = etree.HTML(html).xpath('//div[@class="personal_body"]/@sound_ids')[0].split(',')
for i in numlist:
    murl = 'http://www.ximalaya.com/tracks/{}.json'.format(i)
    html = requests.get(murl, headers=headers1).text
    dic = json.loads(html)
```
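The parsed `dic` is stored wholesale in MongoDB by the full script, and it is where the audio download address mentioned above lives. A minimal sketch of pulling out a couple of values — the field names `title` and `play_path` are assumptions for illustration, not verified against the endpoint:

```python
# Field names here are assumed; inspect a real response from
# http://www.ximalaya.com/tracks/<id>.json to confirm the actual keys.
track = {
    'title': dic.get('title'),          # track title (assumed key)
    'play_path': dic.get('play_path'),  # audio download address (assumed key)
}
print(track)
```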
Channels with many tracks are paginated; the total page count can be read from the second-to-last link inside the pagingBar_wrapper div:

```python
html = requests.get(url, headers=headers2).text
ifanother = etree.HTML(html).xpath('//div[@class="pagingBar_wrapper"]/a[last()-1]/@data-page')
if len(ifanother):
    num = ifanother[0]
    print('This channel has ' + num + ' pages')
    for n in range(2, int(num) + 1):  # the un-suffixed channel URL already serves page 1
        print('Parsing page {} of {}'.format(n, num))
        url2 = url + '?page={}'.format(n)
        # then hand url2 to the audio-page parser; the full code below ties it together
```
The full script:

```python
__author__ = '布咯咯_rieuse'

import json
import random
import time

import pymongo
import requests
from bs4 import BeautifulSoup
from lxml import etree

clients = pymongo.MongoClient('localhost')
db = clients["XiMaLaYa"]
col1 = db["album"]    # channel listings
col2 = db["detaile"]  # per-track JSON details

UA_LIST = []   # a pool of User-Agents picked at random to avoid bans; omitted in the original post
headers1 = {}  # request headers for listing and track requests; omitted in the original post
headers2 = {}  # request headers for channel-page requests; omitted in the original post


def get_url():
    # Walk the 84 listing pages of the popular column and record every channel.
    start_urls = ['http://www.ximalaya.com/dq/all/{}'.format(num) for num in range(1, 85)]
    for start_url in start_urls:
        html = requests.get(start_url, headers=headers1).text
        soup = BeautifulSoup(html, 'lxml')
        for item in soup.find_all(class_="albumfaceOutter"):
            content = {
                'href': item.a['href'],
                'title': item.img['alt'],
                'img_url': item.img['src']
            }
            col1.insert_one(content)
            print('Saved channel ' + item.a['href'])
            print(content)
            another(item.a['href'])
            time.sleep(1)


def another(url):
    # Detect pagination and crawl every page of the channel.
    html = requests.get(url, headers=headers2).text
    ifanother = etree.HTML(html).xpath('//div[@class="pagingBar_wrapper"]/a[last()-1]/@data-page')
    if len(ifanother):
        num = ifanother[0]
        print('This channel has ' + num + ' pages')
        for n in range(2, int(num) + 1):  # the plain URL below covers page 1
            print('Parsing page {} of {}'.format(n, num))
            url2 = url + '?page={}'.format(n)
            get_m4a(url2)
    get_m4a(url)


def get_m4a(url):
    # Read the track IDs off the page, then fetch and store each track's JSON.
    time.sleep(1)
    html = requests.get(url, headers=headers2).text
    numlist = etree.HTML(html).xpath('//div[@class="personal_body"]/@sound_ids')[0].split(',')
    for i in numlist:
        murl = 'http://www.ximalaya.com/tracks/{}.json'.format(i)
        html = requests.get(murl, headers=headers1).text
        dic = json.loads(html)
        col2.insert_one(dic)
        print(murl + ' has been inserted into MongoDB')


if __name__ == '__main__':
    get_url()
```
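The `UA_LIST` and the two headers dicts are left empty above because the original post omits them. A minimal sketch of the random User-Agent rotation the comments describe — the UA strings below are placeholders, not the author's actual list:

```python
import random

# Placeholder User-Agent strings; substitute a real list of your own.
UA_LIST = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36',
]

def random_headers():
    # Pick a fresh User-Agent for each request to reduce the chance of a ban.
    return {'User-Agent': random.choice(UA_LIST)}
```

Calling `random_headers()` per request, instead of reusing a single static dict, spreads the traffic across identities.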