
How to speed up frequent writing of files in Python

尚
Original
2019-06-26 14:52:35


Problem background: there is a batch of files to process, and the same time-consuming function must be called on each one.

Is there any way to speed this up? Of course. For example, you can divide the files into several batches and run a separate copy of your Python script on each batch; running several Python processes at the same time gives a speedup (a minimal launcher is sketched below).
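As a sketch of that approach (the script name process_batch.py, its argument, and the batch-list files are all hypothetical stand-ins for your own processing script):

import subprocess
import sys

# Hypothetical batch lists; each names the files one worker should handle.
batches = ['batch_0.txt', 'batch_1.txt', 'batch_2.txt', 'batch_3.txt']
# Launch one Python interpreter per batch, then wait for all of them.
procs = [subprocess.Popen([sys.executable, 'process_batch.py', b]) for b in batches]
for p in procs:
    p.wait()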

Is there an easier way? For example, can a single program split the work across multiple worker processes by itself?

General idea: divide the list of file paths into several parts; how many parts depends on how many CPU cores you have. For example, if your CPU has 32 cores, processing can in theory be sped up by up to 32 times (in practice disk I/O usually keeps the gain somewhat lower). A chunking helper is sketched below.
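The chunking itself is a few lines of standard-library Python; a minimal sketch (os.cpu_count() reports the number of logical cores, which may be higher than the physical core count):

import os

def chunk_list(items, num_chunks=None):
    """Split items into num_chunks slices of roughly equal size (one per core by default)."""
    if num_chunks is None:
        num_chunks = os.cpu_count() or 1
    size = -(-len(items) // num_chunks)  # ceiling division
    return [items[i:i + size] for i in range(0, len(items), size)]

# chunk_list(list(range(10)), 4) -> [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]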

The code is as follows:

# -*- coding: utf-8 -*-
import numpy as np
from glob import glob
import math
import os
import multiprocessing

label_path = '/home/ying/data/shiyongjie/distortion_datasets/new_distortion_dataset/train/label.txt'
file_path = '/home/ying/data/shiyongjie/distortion_datasets/new_distortion_dataset/train/distortion_image'
save_path = '/home/ying/data/shiyongjie/distortion_datasets/new_distortion_dataset/train/flow_field'

r_d_max = 128
eps = 1e-32
H = 256
W = 256

# Read the label file once and build a file-name -> label lookup table
with open(label_path) as txt_file:
    lines = txt_file.readlines()
file_label = {}
for line in lines:
    parts = line.split()
    file_label[parts[0]] = parts[1]

def generate_flow_field(image_list):
    for image_file_path in image_list:
        pixel_flow = np.zeros(shape=(256, 256, 2))  # laid out like the sampling grid in PyTorch
        image_file_name = os.path.basename(image_file_path)
        k = float(file_label[image_file_name]) * (-1) * 1e-7
        r_u_max = r_d_max / (1 + k * r_d_max ** 2)  # theoretical diagonal length after distortion correction
        scale = r_u_max / 128  # scale that fits this length into the 256 output; writing 128*sqrt(2) here may be more intuitive
        for i_u in range(256):
            for j_u in range(256):
                x_u = float(i_u - 128)
                y_u = float(128 - j_u)
                theta = math.atan2(y_u, x_u)
                r = math.sqrt(x_u ** 2 + y_u ** 2)
                r = r * scale  # r before resizing to the 256x256 image size, plugged into the formula below
                r_d = (1.0 - math.sqrt(1 - 4.0 * k * r ** 2)) / (2 * k * r + eps)  # corresponding r in the original (distorted) image
                x_d = int(round(r_d * math.cos(theta)))
                y_d = int(round(r_d * math.sin(theta)))
                i_d = int(x_d + W / 2.0)
                j_d = int(H / 2.0 - y_d)
                if 0 <= i_d < W and 0 <= j_d < H:  # only assign when the distorted point lies inside the original image
                    # the mesh stores normalized coordinates; undistorting is then just a pixel lookup in this map
                    pixel_flow[j_u, i_u, 0] = (i_d - 128.0) / 128.0
                    pixel_flow[j_u, i_u, 1] = (j_d - 128.0) / 128.0
        # save as an .npy array, cast to float16 to save space
        saved_image_file_path = os.path.join(save_path, image_file_name.split('.')[0] + '.npy')
        np.save(saved_image_file_path, pixel_flow.astype('f2'))
    return

if __name__ == '__main__':
    file_list = glob(file_path + '/*.JPEG')
    m = 32
    n = int(math.ceil(len(file_list) / float(m)))  # round up
    result = []
    pool = multiprocessing.Pool(processes=m)  # 32 worker processes
    for i in range(0, len(file_list), n):
        result.append(pool.apply_async(generate_flow_field, (file_list[i: i + n],)))
    pool.close()
    pool.join()

In the code above, the function

generate_flow_field(image_list)

takes a list of file paths, iterates over it, and saves the result of processing each file.

So you only need to divide the files to be processed into lists of roughly equal size and hand each list to its own worker process.
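If you would rather not slice the list yourself, multiprocessing.Pool.map accepts a chunksize argument and does the splitting for you, calling the worker once per file. A minimal sketch, with a placeholder directory and per-file function:

import multiprocessing
from glob import glob

def process_one(path):
    """Stand-in for the per-file work done inside generate_flow_field."""
    ...

if __name__ == '__main__':
    files = glob('/path/to/images/*.JPEG')  # placeholder directory
    with multiprocessing.Pool(processes=32) as pool:
        # map() slices files into chunks of 64 and distributes them across the workers
        pool.map(process_one, files, chunksize=64)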

Looking again at the main function:

if __name__ == '__main__':
    file_list = glob(file_path + '/*.JPEG')  # list every JPEG file in the folder
    m = 32  # assume the CPU has 32 cores
    n = int(math.ceil(len(file_list) / float(m)))  # number of files each core should handle
    result = []
    pool = multiprocessing.Pool(processes=m)  # a pool of 32 worker processes
    for i in range(0, len(file_list), n):
        # process each sub-list with the function defined above
        result.append(pool.apply_async(generate_flow_field, (file_list[i: i + n],)))
    pool.close()  # no more tasks will be submitted
    pool.join()   # wait for all workers to finish

It mainly comes down to two lines of code. One is

pool = multiprocessing.Pool(processes=m)  # a pool of 32 worker processes

which creates the process pool (note that these are processes, not threads; if the processes argument is omitted, multiprocessing uses os.cpu_count() by default).

The other is

result.append(pool.apply_async(generate_flow_field, (file_list[i: i+n],)))  # process each sub-list with the function defined above

which submits generate_flow_field to the pool asynchronously, passing it the slice file_list[i: i+n].

Because apply_async() returns immediately instead of blocking, all the submitted tasks run in the worker processes at the same time, which is where the speedup comes from.
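One caveat: apply_async() returns an AsyncResult, and an exception raised inside a worker only surfaces when you call .get() on that result, so it is worth draining the result list after pool.join(); continuing the main block above:

# After pool.join(), check each AsyncResult; .get() re-raises any
# exception that was thrown inside generate_flow_field in a worker.
for r in result:
    r.get()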


