Converting data captured by a Python crawler into PDF

Y2J
Release: 2017-05-08 16:56:05

This article shares the method and code for using a Python crawler to turn "Liao Xuefeng's Python Tutorial" into a PDF. Readers who need it can refer to it.

There is probably no easier language to write a crawler in than Python. The Python community provides so many crawler tools that you will be dazzled, and with the various ready-to-use libraries you can have a working crawler in minutes. Today I am writing a crawler to fetch Liao Xuefeng's Python tutorial and turn it into a PDF e-book for offline reading.

Before writing the crawler, let's analyze the page structure of the site. The left side of each page is the tutorial's table of contents, and each URL corresponds to an article on the right: the article title at the top and the article body in the middle. The body text is what we care about; the data we want to crawl is the text portion of every page. Below the body is the user comment area, which is of no use to us, so we can ignore it.

Tool preparation

Once you have figured out the basic structure of the site, you can prepare the packages the crawler depends on. requests and BeautifulSoup are the two workhorses of crawling: requests handles network requests, and BeautifulSoup manipulates the HTML data. With these two tools we can work quickly; we don't need a crawler framework like Scrapy, which in a small program like this would be killing a chicken with a sledgehammer. In addition, since we are converting HTML files to PDF, we also need library support for that: wkhtmltopdf is an excellent cross-platform tool for converting HTML to PDF, and pdfkit is its Python wrapper. First install the following dependency packages:


pip install requests
pip install beautifulsoup4
pip install pdfkit

Install wkhtmltopdf

On Windows, download the stable build from the wkhtmltopdf official website and install it. After installation, add the program's executable directory to the system $PATH variable; otherwise pdfkit cannot find wkhtmltopdf and you will get the error "No wkhtmltopdf executable found". On Ubuntu and CentOS it can be installed directly from the command line:

$ sudo apt-get install wkhtmltopdf  # Ubuntu
$ sudo yum install wkhtmltopdf      # CentOS
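If you would rather not modify $PATH (or pdfkit still cannot find the binary), pdfkit lets you point at the executable explicitly via its configuration object. A minimal configuration sketch; the install path below is a typical Windows location and an assumption, so adjust it to wherever wkhtmltopdf actually lives on your machine:

```python
import pdfkit

# Hypothetical install path -- adjust to your actual wkhtmltopdf location.
config = pdfkit.configuration(
    wkhtmltopdf=r"C:\Program Files\wkhtmltopdf\bin\wkhtmltopdf.exe")

# Passing the configuration explicitly means pdfkit does not rely on $PATH.
pdfkit.from_file("a.html", "out.pdf", configuration=config)
```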

Crawler implementation

After everything is ready, you can start coding, but sort out your thoughts before writing any code. The goal of the program is to save the HTML body of every URL locally, then use pdfkit to convert these files into one PDF file. Let's split the task: first, save the HTML body of one URL locally; then find all the URLs and perform the same operation on each.
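The final conversion step can be sketched in one call: pdfkit's from_file accepts a list of input files and merges them into a single PDF in order. The file names below are placeholders for whatever the crawler actually saves, and running this requires the wkhtmltopdf binary from the previous section:

```python
import pdfkit

# Placeholder file names -- in the real script these are the saved pages.
htmls = ["0.html", "1.html", "2.html"]

# from_file accepts a list and concatenates the inputs into one PDF, in order.
pdfkit.from_file(htmls, "liaoxuefeng-python-tutorial.pdf")
```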

Open a page in Chrome and press F12 to inspect the body of the article; you will find that the text lives inside an element with the class x-wiki-content, which holds the body content of the web page. After using requests to load the entire page locally, you can use BeautifulSoup to operate on the HTML DOM and extract the text content from that element.


The specific implementation code is as follows: use the soup.find_all function to find the body tag, then save the content of the body part to the file a.html.

import requests
from bs4 import BeautifulSoup

def parse_url_to_html(url):
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html5lib")
    body = soup.find_all(class_="x-wiki-content")[0]
    html = str(body)
    # The file is opened in binary mode, so encode the string before writing.
    with open("a.html", "wb") as f:
        f.write(html.encode("utf-8"))

The second step is to parse out all the URLs in the menu on the left side of the page, using the same method to find the left-hand menu tag.
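The menu is an ordinary list of links, so the same BeautifulSoup approach works. A minimal sketch, assuming the menu is a list element with id "x-wiki-index" containing `<a>` tags with relative hrefs (the id and the sample markup here are assumptions; check the real page with F12 and adjust the selector):

```python
from urllib.parse import urljoin

from bs4 import BeautifulSoup

def get_url_list(index_html, base_url):
    """Collect absolute article URLs from the left-hand menu.

    Assumes the menu element has id "x-wiki-index" (hypothetical --
    verify against the actual page markup).
    """
    soup = BeautifulSoup(index_html, "html.parser")
    menu = soup.find(id="x-wiki-index")
    # Relative hrefs are resolved against the site's base URL.
    return [urljoin(base_url, a["href"]) for a in menu.find_all("a")]

# Tiny inline sample standing in for the real index page:
sample = """
<ul id="x-wiki-index">
  <li><a href="/wiki/001">Intro</a></li>
  <li><a href="/wiki/002">Setup</a></li>
</ul>
"""
print(get_url_list(sample, "https://www.liaoxuefeng.com"))
```

With the URL list in hand, the loop is just `parse_url_to_html` applied to each entry, saving to a numbered file instead of a fixed a.html.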

Source: php.cn