
Python crawls JD product categories and links

高洛峰
Release: 2017-02-25 10:05:15

Preface

The main knowledge point of this article is to use Python's BeautifulSoup to perform multi-layer traversal.

[Figure: the product category sidebar on the JD.com homepage]

As the picture shows, this is just a simple crawl of the visible category list; it does not fetch the sub-items hidden inside each menu.

Sample code

from bs4 import BeautifulSoup as bs
import requests

# Request headers that mimic a normal browser visit
headers = {
    "Host": "www.jd.com",
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/47.0.2526.80 Safari/537.36 Core/1.47.933.400 QQBrowser/9.4.8699.400",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8"
}

session = requests.session()

def get_url():
    # Fetch the JD.com homepage and parse it with BeautifulSoup
    soup = bs(session.get('http://www.jd.com/', headers=headers).text, 'html.parser')
    # Locate the category container, then walk every link inside it
    for i in soup.find("p", {"class": "dd-inner"}).find_all("a", {"target": "_blank"}):
        print(i.get_text(), ':', i.get('href'))

get_url()

Running this code achieves our goal.

[Figure: output of running the script, listing category names and their links]

Let's walk through this code.

First we need to visit JD.com’s homepage.

Then we parse the fetched homepage with BeautifulSoup.
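In isolation, these two steps look like the following. This is just a minimal sketch restating the relevant part of the sample code above (it reuses the headers dictionary defined there):

from bs4 import BeautifulSoup as bs
import requests

# Fetch the homepage HTML (headers as defined in the sample code above)
html = requests.get('http://www.jd.com/', headers=headers).text
# Turn it into a BeautifulSoup tree that we can search
soup = bs(html, 'html.parser')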

At this point, we need to locate the element that contains what we want.

By pressing F12 in the browser to open the developer tools, we can see the structure shown in the picture below:

[Figure: the category markup as seen in the browser's developer tools (F12)]
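Roughly speaking, the container has a shape like the sketch below. This is a simplified illustration inferred from the selector used in the code; the names and links are placeholders, and the real markup on JD.com may differ and can change over time:

<p class="dd-inner">
  <a target="_blank" href="https://channel1.example">Category one</a>
  <a target="_blank" href="https://channel2.example">Category two</a>
  ...
</p>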

Let's take a look at the following code:

for i in soup.find("p", {"class": "dd-inner"}).find_all("a", {"target": "_blank"}):

This line of code does exactly what we need. First, find() locates the <p> tag with class="dd-inner", and then find_all() collects all the <a> tags underneath it.
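To see the chained find().find_all() pattern on its own, here is a small self-contained sketch. The HTML fragment, class names, and links below are invented for illustration; they only mimic the shape of the structure we are targeting:

from bs4 import BeautifulSoup as bs

# A made-up fragment with the same shape as the structure we are targeting
html = '''
<p class="dd-inner">
  <a target="_blank" href="/books">Books</a>
  <a target="_blank" href="/phones">Phones</a>
  <a href="/hidden">Not matched (no target="_blank")</a>
</p>
'''

soup = bs(html, 'html.parser')
# find() narrows the search to the container, find_all() walks the links inside it
for link in soup.find("p", {"class": "dd-inner"}).find_all("a", {"target": "_blank"}):
    print(link.get_text(), ':', link.get('href'))
# Expected output:
# Books : /books
# Phones : /phones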

Finally, since I wanted to print every product category together with its corresponding link, I used i.get_text() for the category name and i.get('href') for the link.
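If you want to keep the results instead of just printing them, one option (not part of the original script) is to collect them into a dictionary that maps each category name to its link. A sketch, reusing the session and headers defined in the sample code above:

def get_categories():
    # Same request and parsing as before (session and headers as defined above)
    soup = bs(session.get('http://www.jd.com/', headers=headers).text, 'html.parser')
    categories = {}
    for link in soup.find("p", {"class": "dd-inner"}).find_all("a", {"target": "_blank"}):
        # Map each category name to its link
        categories[link.get_text()] = link.get('href')
    return categories

print(get_categories())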

Summary

It's actually not difficult; the main thing is to use the right method. As a beginner I didn't use the right method at first, and it took me almost two days to get this done. The key takeaway is that you can chain find().find_all() to perform multi-layer traversal. The above is my experience using Python to crawl JD.com's product categories and links; I hope it is helpful to everyone learning Python.


