How to read large files using Python

不言
Release: 2023-03-24 19:28:01

This article introduces two methods for reading large files in Python; readers who need them can use it as a reference.

Background

Recently, while processing a text file of about 2 GB, I ran into a MemoryError and found that reading the file was very slow. I later found two faster ways to read large files, and this article introduces both.

Preparation

When we talk about "text processing", we usually mean processing the content. Python makes it very easy to read the contents of a text file into a string variable that can then be manipulated. File objects provide three "read" methods: .read(), .readline(), and .readlines(). Each method can take an argument that limits how much data is read per call, but they are usually called without one. .read() reads the entire file at once and is typically used to put the file contents into a string variable. It produces the most direct string representation of the file's contents, but that is unnecessary for continuous line-oriented processing, and such processing is impossible if the file is larger than the available memory. The following is an example of the read() method:

f = None
try:
    f = open('/path/to/file', 'r')
    print(f.read())
finally:
    if f:
        f.close()

Calling read() reads the entire contents of the file at once; if the file is 10 GB, memory will be exhausted. To be on the safe side, you can call the read(size) method repeatedly, reading at most size bytes each time. In addition, calling readline() reads one line at a time, and calling readlines() reads all the content at once and returns a list of lines. You need to decide which to call according to your needs.
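
For illustration, a repeated read(size) loop might look like this (a minimal sketch; process() is a placeholder for whatever you do with each block, just as in the other examples in this article):

with open('/path/to/file', 'r') as f:
    while True:
        block = f.read(1024)  # read at most 1024 characters per call
        if not block:         # an empty string means end of file
            break
        process(block)        # <do something with block>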

If the file is small, read() is the most convenient way to read it all at once; if the file size cannot be determined, it is safer to call read(size) repeatedly; if it is a configuration file, readlines() is the most convenient:

for line in f.readlines():
    process(line)  # <do something with line>


Read in chunks

It is natural to handle a large file by splitting it into several smaller pieces, processing each piece, and releasing that part of the memory when it is done. A generator (yield) is used here:

def read_in_chunks(filePath, chunk_size=1024*1024):
    """
    Lazy function (generator) to read a file piece by piece.
    Default chunk size: 1M
    You can set your own chunk size
    """
    file_object = open(filePath)
    while True:
        chunk_data = file_object.read(chunk_size)
        if not chunk_data:
            break
        yield chunk_data

if __name__ == "__main__":
    filePath = './path/filename'
    for chunk in read_in_chunks(filePath):
        process(chunk)  # <do something with chunk>
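
The same pattern can be used, for example, to compute a checksum of a file that is far larger than memory (my own illustration, not part of the original article; note that hashing requires the file to be opened in binary mode):

import hashlib

def sha256_of_file(filePath, chunk_size=1024*1024):
    """Compute a SHA-256 checksum without loading the whole file into memory."""
    digest = hashlib.sha256()
    with open(filePath, 'rb') as file_object:
        while True:
            chunk_data = file_object.read(chunk_size)
            if not chunk_data:
                break
            digest.update(chunk_data)
    return digest.hexdigest()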

Use with open()

The with statement handles opening and closing the file, even when an exception is raised inside the block. In for line in f, the file object f is treated as an iterator, which automatically uses buffered I/O and memory management, so you don't have to worry about large files.

The code is as follows:

# If the file is line based
with open(...) as f:
    for line in f:
        process(line)  # <do something with line>
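
One small point worth knowing (not mentioned in the original): each line yielded this way keeps its trailing newline, so it is common to strip it before processing. process() again stands in for your own handling:

with open('/path/to/file', 'r') as f:
    for line in f:
        line = line.rstrip('\n')  # drop the trailing newline before processing
        process(line)             # <do something with line>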

Optimization

There is no problem using with open for large files with hundreds of thousands of rows, but different open-mode parameters also lead to different efficiency. In my tests, reading with the "rb" mode was about six times as fast as with "r"; binary reading is still the fastest mode.

with open(filename, "rb") as f:
    for fLine in f:
        pass

Test results: the rb mode is the fastest, traversing 1 million rows in 2.9 seconds, which basically meets the efficiency needs of processing medium and large files. Changing rb (binary read) to r (text read) makes reading 5-6 times slower.
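
If you want to reproduce the comparison on your own data, a simple timing loop is enough (a minimal sketch; the file path reuses the placeholder from the examples above, and the exact numbers will depend on your file and machine):

import time

def time_traversal(filename, mode):
    """Time one full line-by-line pass over the file in the given mode."""
    start = time.perf_counter()
    with open(filename, mode) as f:
        for _ in f:
            pass
    return time.perf_counter() - start

print('r :', time_traversal('./path/filename', 'r'))
print('rb:', time_traversal('./path/filename', 'rb'))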

Conclusion

When using Python to read large files, let the system handle them in the simplest way: leave the work to the interpreter and only worry about your own processing. At the same time, different reading parameters can be chosen according to your needs to obtain even better performance.

Related recommendations:

Detailed explanation of how python reads text data and converts it into DataFrame format
