
How to apply concurrent programming in Python crawlers


What is concurrent programming

Concurrent programming is a style of program design in which multiple tasks can be in progress during the same period of time: the tasks are started together, their execution overlaps, and they do not block one another. The benefit of concurrent programming is that it can improve a program's throughput and responsiveness.

Application of concurrent programming in crawlers

A crawler is a typical I/O-intensive task. For I/O-intensive tasks, multi-threading and asynchronous I/O are both good choices: while one part of the program is blocked on an I/O operation, other parts can keep running, so the program does not waste most of its time waiting.
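Before looking at the real crawler, here is a minimal sketch (not one of the article's examples; the function name fake_request is made up) of why threads help I/O-bound work: ten simulated one-second requests run through a thread pool overlap and finish in about one second instead of ten.

import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(n):
    time.sleep(1)  # simulated blocking I/O, e.g. waiting for a server response
    return n

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(fake_request, range(10)))
# With 10 worker threads this prints roughly 1 second instead of roughly 10.
print(f'10 simulated requests finished in {time.perf_counter() - start:.2f}s')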

Single-threaded version

Let’s first look at the single-threaded version of the crawler. It uses the requests library to fetch the JSON list of images and saves each image locally with the built-in open function.

"""
example04.py - 单线程版本爬虫
"""
import os
import requests
def download_picture(url):
    filename = url[url.rfind('/') + 1:]
    resp = requests.get(url)
    if resp.status_code == 200:
        with open(f'images/beauty/{filename}', 'wb') as file:
            file.write(resp.content)
def main():
    if not os.path.exists('images/beauty'):
        os.makedirs('images/beauty')
    for page in range(3):
        resp = requests.get(f'{page * 30}')
        if resp.status_code == 200:
            pic_dict_list = resp.json()['list']
            for pic_dict in pic_dict_list:
                download_picture(pic_dict['qhimg_url'])
if __name__ == '__main__':
    main()

On macOS or Linux systems, we can use the time command to understand the execution time of the above code and the CPU utilization, as shown below.

time python3 example04.py
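The time command is not available on Windows. As a rough, portable alternative (the snippet below is not part of the article), the script can also be timed from Python with time.perf_counter; note that this measures only wall-clock time, not CPU utilization.

import subprocess
import time

start = time.perf_counter()
subprocess.run(['python3', 'example04.py'])  # run the crawler as a child process
print(f'total: {time.perf_counter() - start:.3f}s')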

The following is the result of the single-threaded crawler code executed on my computer.

python3 example04.py 2.36s user 0.39s system 12% cpu 21.578 total

Here we only need to pay attention to the total running time of 21.578 seconds and the CPU utilization of 12%. The low CPU utilization shows that the program spends most of its time blocked, waiting for network I/O.

Multi-threaded version

We use the thread pool technique introduced earlier to turn the code above into a multi-threaded version.

"""
example05.py - 多线程版本爬虫
"""
import os
from concurrent.futures import ThreadPoolExecutor
import requests
def download_picture(url):
    filename = url[url.rfind('/') + 1:]
    resp = requests.get(url)
    if resp.status_code == 200:
        with open(f'images/beauty/{filename}', 'wb') as file:
            file.write(resp.content)
def main():
    if not os.path.exists('images/beauty'):
        os.makedirs('images/beauty')
    with ThreadPoolExecutor(max_workers=16) as pool:
        for page in range(3):
            resp = requests.get(f'{page * 30}')
            if resp.status_code == 200:
                pic_dict_list = resp.json()['list']
                for pic_dict in pic_dict_list:
                    pool.submit(download_picture, pic_dict['qhimg_url'])
if __name__ == '__main__':
    main()
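A note on the choice of max_workers=16: for I/O-bound work the pool can safely be larger than the number of CPU cores, because the threads spend most of their time blocked on the network rather than computing.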

Execute the command shown below.

time python3 example05.py

The execution result of the code is as follows:

python3 example05.py 2.65s user 0.40s system 95% cpu 3.193 total
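Compared with the single-threaded version, the total time drops from about 21.6 seconds to about 3.2 seconds, and CPU utilization rises to 95%, because a thread blocked waiting on the network no longer stops the rest of the program from making progress.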

Asynchronous I/O version

We use aiohttp to rewrite the code above as an asynchronous I/O version. To perform both the network requests and the file writes asynchronously, we first need to install the third-party libraries aiohttp and aiofile.

pip install aiohttp aiofile

The following is the asynchronous I/O version of the crawler code.

"""
example06.py - 异步I/O版本爬虫
"""
import asyncio
import json
import os
import aiofile
import aiohttp
async def download_picture(session, url):
    filename = url[url.rfind('/') + 1:]
    async with session.get(url, ssl=False) as resp:
        if resp.status == 200:
            data = await resp.read()
            async with aiofile.async_open(f'images/beauty/{filename}', 'wb') as file:
                await file.write(data)
async def main():
    if not os.path.exists('images/beauty'):
        os.makedirs('images/beauty')
    async with aiohttp.ClientSession() as session:
        tasks = []
        for page in range(3):
            resp = await session.get(f'{page * 30}')
            if resp.status == 200:
                pic_dict_list = (await resp.json())['list']
                for pic_dict in pic_dict_list:
                    tasks.append(asyncio.ensure_future(download_picture(session, pic_dict['qhimg_url'])))
        await asyncio.gather(*tasks)
if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
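A small note on the last two lines: on Python 3.7 and later they can be replaced with the simpler asyncio.run(main()), which creates the event loop, runs the coroutine, and closes the loop for you.

if __name__ == '__main__':
    asyncio.run(main())  # Python 3.7+: manages the event loop automatically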

Execute the command shown below.

time python3 example06.py

The execution result of the code is as follows:

python3 example06.py 0.92s user 0.27s system 290% cpu 0.420 total

Compared with the single-threaded crawler, both the multi-threaded and the asynchronous I/O versions run dramatically faster, and the asynchronous I/O version performs best: total time drops from 21.578 seconds to 3.193 seconds and 0.420 seconds respectively.
