When crawling data with Python, I run multi-threaded crawling inside a single process; the work is IO-bound, so I don't see the need for multiple processes.
The code is shown below:
import os
import re
import threading

import requests

def get_downloads_url_list(self, pageNum):
    FilePath = 'C:/RMDZY/h' + str(pageNum)
    os.chdir(FilePath)
    # read the saved playlist and collect the ppvodXXXXXXX.ts segment names
    with open(FilePath + '/m3u8.txt', 'r') as f:
        m3u8_txt = f.read()
    download_ts_list = re.findall(r'ppvod\d{7}\.ts', m3u8_txt)
    # url comes from elsewhere in the class
    download_url_list = [url + str(pageNum) + '/1000kb/hls/' + ts_name
                         for ts_name in download_ts_list]
    max_length = len(download_url_list)
    # write a Windows batch file that later merges the segments into new.ts
    dat_list = ['ts' + str(i) + '.ts' for i in range(max_length)]
    dat_str = '+'.join(dat_list)
    ts_command = 'copy /b ' + dat_str + ' new.ts'
    with open('ts.bat', 'w') as f:
        f.write(ts_command)
    return download_url_list
def download_by_m3u8(self, i, pageNum):
    download_list = self.get_downloads_url_list(pageNum)
    # fetch segment i and write it to its own tsN.ts file
    ts_file = requests.get(download_list[i], verify=False)
    with open('ts' + str(i) + '.ts', 'ab') as f:
        f.write(ts_file.content)
def download_threading(self, pageNum):
    download_list = self.get_downloads_url_list(pageNum)
    thread_list = []
    # start one thread per segment
    for i in range(len(download_list)):
        thread = threading.Thread(target=self.download_by_m3u8, args=[i, pageNum])
        thread_list.append(thread)
        thread.start()
    # block until every segment thread finishes
    for thread in thread_list:
        thread.join()
But if a thread's requests.get never returns, that thread waits forever and never writes its file, so the main thread stays blocked on join() and the whole crawl hangs.
How should I handle this? For example, I can set a timeout on requests.get, but what do I do once it expires? After I added the timeout, the thread seems to just die on the exception; the later segments keep downloading, but that one is never fetched. Can I catch the timeout exception and retry the request? Writing that retry logic is the part I'm not good at.
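For reference, here is a minimal sketch of the catch-and-retry idea: pass a timeout to requests.get, catch the exception it raises, and retry the same segment a few times before giving up. The max_retries count and the 10-second timeout are arbitrary example values, not anything from the original code, and 'ab' is changed to 'wb' so a retried segment overwrites any partial write:

import requests

def download_by_m3u8(self, i, pageNum):
    download_list = self.get_downloads_url_list(pageNum)
    max_retries = 3  # arbitrary example value
    for attempt in range(max_retries):
        try:
            # timeout makes a stalled request raise instead of waiting forever
            ts_file = requests.get(download_list[i], verify=False, timeout=10)
            ts_file.raise_for_status()
        except requests.exceptions.RequestException:
            continue  # timed out or failed; retry this segment
        with open('ts' + str(i) + '.ts', 'wb') as f:
            f.write(ts_file.content)
        return  # segment downloaded, stop retrying
    print('segment %d still missing after %d attempts' % (i, max_retries))

With a timeout set, the thread always finishes one way or the other, so the join() calls in download_threading no longer hang, and a segment that times out is retried instead of being skipped.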