
How to crawl tabular data from PDF files in Python (code example)

不言 | 2018-10-24 17:15:18

This article explains how to extract tabular data from PDF files with Python, with code examples. It should be a useful reference for anyone facing the same problem; I hope it is helpful to you.

This article will show a slightly different crawler.
In the past, our crawlers scraped data from the Internet. Web pages are generally written in HTML, CSS, and JavaScript, so there is a large body of mature techniques for extracting data from them. This time, however, the documents we need to process are PDF files. This article shows how to use Python's camelot module to extract tabular data from PDF files.
In daily life and work, PDF is undoubtedly one of the most commonly used file formats; we see it everywhere, from textbooks and courseware to contracts and planning documents. Extracting tables from a PDF, however, is a hard problem, because the PDF format has no internal representation of a table. That makes tabular data difficult to pull out for analysis. So how do we get table data out of a PDF?
The answer is Python’s camelot module!
camelot is a Python module that lets anyone easily extract tabular data from PDF files. You can install it with the following command (installation takes a while):

pip install camelot-py

The official documentation address of the camelot module is: https://camelot-py.readthedoc....
The following will show how to use the camelot module to crawl tabular data from PDF files.

Example 1

First, let us look at a simple example, eg.pdf. The file has only one page, and that page contains a single table, as shown below:

[Screenshot: the table in eg.pdf]

Use the following Python code to extract the table in the PDF file:

import camelot

# Extract tables from the PDF file
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')

# Table information
print(tables)
print(tables[0])

# Table data
print(tables[0].data)

The output result is:

  [['ID', '姓名', '城市', '性别'], ['1', 'Alex', 'Shanghai', 'M'], ['2', 'Bob', 'Beijing', 'F'], ['3', 'Cook', 'New York', 'M']]

Looking at the code: camelot.read_pdf() is camelot's function for extracting tables from a PDF. Its inputs are the path of the PDF file, the page number(s) (pages), and the table parsing flavor (there are two: stream and lattice). The default flavor is lattice; the stream flavor treats the entire PDF page as one table by default. If you need to restrict parsing to a specific area of the page, you can use the table_area parameter (note that newer camelot releases name this parameter table_areas).
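As a quick, hedged illustration of these parameters (reusing the eg.pdf path from above; the page ranges in the comments are only examples, not from the article), lattice works best for tables drawn with ruling lines, while stream infers columns from whitespace:

import camelot

# lattice (the default flavor): detects tables from the ruling lines drawn on the page
tables_lattice = camelot.read_pdf('E://eg.pdf', pages='1', flavor='lattice')

# stream: infers columns from whitespace; pages also accepts ranges and lists,
# e.g. pages='1-3,5' or pages='all'
tables_stream = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')

print(tables_lattice)
print(tables_stream)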
What makes camelot convenient is that it provides methods to convert the extracted table directly to a pandas DataFrame, CSV, JSON, or HTML, such as tables[0].df, tables[0].to_csv(), and so on. Let's take exporting a CSV file as an example:

import camelot

# Extract tables from the PDF file
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')

# Write the table data to a CSV file
tables[0].to_csv('E://eg.csv')

The obtained csv file is as follows:

[Screenshot: the generated eg.csv file]
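Besides to_csv(), the same Table object exposes the other converters mentioned above. A minimal sketch (the output paths are placeholders, not from the article):

import camelot

tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream')
table = tables[0]

df = table.df                  # pandas DataFrame
table.to_csv('E://eg.csv')     # CSV file
table.to_json('E://eg.json')   # JSON file
table.to_html('E://eg.html')   # HTML file
table.to_excel('E://eg.xlsx')  # Excel file

print(df)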

Example 2

In Example 2, we will extract table data from a specific area of a PDF page. The relevant part of the PDF page looks like this:

[Screenshot: page 53 of Statistics-Fundamentals-Succinctly.pdf, containing the target table]

To extract the only table on the page, we first need to locate where it sits. The coordinate system of a PDF page differs from that of an image: its origin is the bottom-left corner of the page, with the x-axis pointing right and the y-axis pointing up. The coordinates of the text on the whole page can be output with the following Python code:

import camelot

# Extract tables from the PDF
tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream')

# Plot the text coordinates of the PDF page to locate the table
tables[0].plot('text')

The output result is:

UserWarning: No tables found on page-53 [stream.py:292]

The code does not find any table. This is because the stream flavor treats the entire PDF page as one table by default, so no table is detected here. The plot of the page's text coordinates, however, looks like this:

[Plot: text coordinates of page 53]
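As an aside, read_pdf() returns an empty TableList when nothing is detected, so it is safer to check the result before indexing into it. A minimal defensive sketch reusing the call from this example:

import camelot

tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream')

if tables.n == 0:
    print('No tables detected on this page')
else:
    # plotting call as used in this article; recent camelot versions use
    # camelot.plot(tables[0], kind='text') instead
    tables[0].plot('text')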

Comparing this carefully with the PDF page shown earlier, it is easy to see that the table area has its upper-left corner at (50, 620) and its lower-right corner at (500, 540). We add the table_area parameter to read_pdf(). The complete Python code is as follows:

import camelot

# Extract the table from the specified area of the page
tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream', table_area=['50,620,500,540'])

# Convert the table to a pandas DataFrame
table_df = tables[0].df

print(type(table_df))
print(table_df.head(n=6))

The output result is:

         0               1                2           3
0  Student  Pre-test score  Post-test score  Difference
1        1              70               73           3
2        2              64               65           1
3        3              69               63          -6
4        …               …                …           …
5       34              82               88           6
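Since tables[0].df is an ordinary pandas DataFrame, the extracted data can be cleaned and analysed directly. Below is a minimal sketch under the assumption that the table looks exactly like the output above (the column names are taken from its first row); it promotes that row to a header and converts the score columns to numbers:

import camelot
import pandas as pd

tables = camelot.read_pdf('G://Statistics-Fundamentals-Succinctly.pdf', pages='53',
                          flavor='stream', table_area=['50,620,500,540'])
df = tables[0].df

# The first row of the raw DataFrame holds the column names
df.columns = df.iloc[0]
df = df.drop(index=0)

# Convert the score columns to numbers; errors='coerce' turns any stray text into NaN
for col in ['Pre-test score', 'Post-test score', 'Difference']:
    df[col] = pd.to_numeric(df[col], errors='coerce')

print(df['Difference'].mean())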

Summary

When extracting tables from specific PDF pages, besides the parameter for specifying the table area, camelot also offers parameters for handling superscripts and subscripts, text spanning merged cells, and so on. For detailed usage, please refer to the camelot official documentation: https://camelot-py.readthedoc....
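For instance, read_pdf() accepts keyword arguments such as split_text and flag_size for those cases. A hedged sketch (reusing the eg.pdf path from Example 1; whether these options help depends on the particular PDF):

import camelot

# split_text=True splits text that spans column separators into the proper cells;
# flag_size=True marks superscripts/subscripts (detected via font size) with <s></s>
tables = camelot.read_pdf('E://eg.pdf', pages='1', flavor='stream',
                          split_text=True, flag_size=True)
print(tables[0].df)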


Statement:
This article is reproduced from segmentfault.com. If there is any infringement, please contact admin@php.cn to have it removed.