Calculating the MD5 Hash of Large Files in Python
Computing the MD5 hash of a file with Python's hashlib module is straightforward for small files, but reading an entire large file into memory before hashing becomes impractical once the file exceeds available RAM. This article presents a practical solution to that problem.
To avoid the memory limit, hashlib must be fed the file in chunks rather than all at once. The following function reads the file in blocks of a specified size and updates a single MD5 object incrementally. With a suitable block_size (default: 2^20 bytes, i.e. 1 MiB), it handles files far larger than the available memory.
<code class="python">def md5_for_file(f, block_size=2**20): md5 = hashlib.md5() while True: data = f.read(block_size) if not data: break md5.update(data) return md5.digest()</code>
To get correct results, the file must be opened in binary mode ('rb'); in text mode, newline translation on some platforms would alter the bytes being read and produce a wrong hash.
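For example, a caller might open the file in binary mode and pass the handle to the function above. The filename here is purely illustrative:

<code class="python"># Hypothetical example file; replace with a real path on your system.
with open("large_video.mp4", "rb") as f:  # "rb" is required: no newline translation
    print(md5_for_file(f))                # prints the raw 16-byte digest</code>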
For a more comprehensive approach, a helper function can encapsulate all necessary steps:
<code class="python">def generate_file_md5(rootdir, filename, blocksize=2**20): m = hashlib.md5() with open(os.path.join(rootdir, filename), "rb") as f: while True: buf = f.read(blocksize) if not buf: break m.update(buf) return m.hexdigest()</code>
Cross-checking the results against an external tool such as jacksum (or md5sum) helps confirm that the computed MD5 hashes are correct.
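As a quick sanity check without external tools, you can also compare the chunked result against a one-shot hashlib.md5 on a small temporary file. This is a minimal sketch that assumes the generate_file_md5 helper above is in scope:

<code class="python">import hashlib
import os
import tempfile

# Write a small throwaway file so both strategies can be compared directly.
with tempfile.TemporaryDirectory() as tmpdir:
    with open(os.path.join(tmpdir, "sample.bin"), "wb") as f:
        f.write(os.urandom(1024 * 1024))  # 1 MiB of random data

    chunked = generate_file_md5(tmpdir, "sample.bin")

    with open(os.path.join(tmpdir, "sample.bin"), "rb") as f:
        one_shot = hashlib.md5(f.read()).hexdigest()

    assert chunked == one_shot  # both strategies must produce the same hash</code>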