Creating and Managing Tasks with Asyncio

Asyncio allows developers to write asynchronous programs in Python without hassle. The module also provides many ways to create and run asynchronous tasks, and with so many options, it can be confusing to know which one to use.
In this article, we will discuss the many ways you can create and manage tasks with asyncio.
What is an asyncio task?
In asyncio, a task is an object that wraps a coroutine and schedules it to run within the event loop. Simply put, a task is a way to run a coroutine concurrently with other tasks. Once a task is created, the event loop runs it, pausing and resuming it as necessary to allow other tasks to run.
Methods for Creating and Managing Asyncio Tasks
Now, we can discuss methods for creating and managing tasks. First, to create a task in Python using asyncio, you use the asyncio.create_task method which takes the following arguments:
coro (required): The coroutine object to be scheduled. This is the function you want to run asynchronously.
name (optional): A name for the task that can be useful for debugging or logging purposes. You can assign a string to this parameter.
- You can also set or get the name later using Task.set_name(name) and Task.get_name().
context (optional): Introduced in Python 3.11, this lets you pass a custom contextvars.Context for the task to run in, enabling task-local storage. It's similar to thread-local storage but for asyncio tasks.
- This argument is not commonly used unless you're dealing with advanced scenarios that require context management.
Here is an example of the usage of asyncio.create_task:
import asyncio

# Define a coroutine
async def greet(name):
    await asyncio.sleep(1)  # Simulate an I/O-bound operation
    print(f"Hello, {name}!")

async def main():
    # Create tasks
    task1 = asyncio.create_task(greet("Alice"), name="GreetingAlice")
    task2 = asyncio.create_task(greet("Bob"), name="GreetingBob")

    # Check task names
    print(f"Task 1 name: {task1.get_name()}")
    print(f"Task 2 name: {task2.get_name()}")

    # Wait for both tasks to complete
    await task1
    await task2

# Run the main function
asyncio.run(main())
Once you have created a task, you can call several methods on it, such as:
.cancel(): to cancel the task.
.add_done_callback(cb): to add a callback function that runs when the task is done.
.done(): to check if the task is completed.
.result(): to retrieve the result of the task after it’s completed.
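As a quick illustration of .done(), .result(), and .add_done_callback() (a minimal sketch, using a short sleep so it runs fast):

```python
import asyncio

async def work():
    await asyncio.sleep(0.1)
    return 42

async def main():
    task = asyncio.create_task(work())
    # Runs once the task finishes, whatever the outcome
    task.add_done_callback(lambda t: print("callback fired"))
    done_before = task.done()  # False: scheduled but not yet finished
    await task
    done_after = task.done()   # True
    return done_before, done_after, task.result()

done_before, done_after, value = asyncio.run(main())
print(done_before, done_after, value)
```

Note that .result() raises an exception if you call it before the task has finished, so check .done() first or simply await the task.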
Now that we understand how to create a task, let's see how to handle waiting for one task or a multitude of tasks.
Waiting for Task Completion
In this section, we will discuss how to wait for the completion of one or more tasks. Asynchronous programming is built on the idea that the program can keep executing while an asynchronous task runs in the background. Sometimes, though, you want tighter control over the flow and need to be sure you have a result to work with before safely continuing execution.
To wait for a single task to complete, you can use asyncio.wait_for. It takes two arguments:
awaitable (required): This is the coroutine, task, or future that you want to wait for. It can be any object that can be awaited, like a coroutine function call, an asyncio.Task, or an asyncio.Future.
timeout (optional): This specifies the maximum number of seconds to wait for the awaitable to complete. If the timeout is reached and the awaitable has not completed, asyncio.wait_for raises a TimeoutError. If timeout is set to None, the function will wait indefinitely for the awaitable to complete.
Here is an example where this method is used:
import asyncio

async def slow_task():
    print("Task started...")
    await asyncio.sleep(5)  # Simulating a long-running task
    print("Task finished!")
    return "Completed"

async def main():
    try:
        # Wait for slow_task to finish within 2 seconds
        result = await asyncio.wait_for(slow_task(), timeout=2)
        print(result)
    except asyncio.TimeoutError:
        print("The task took too long and was canceled!")

asyncio.run(main())
In the code above, slow_task() is a coroutine that simulates a long-running task by sleeping for 5 seconds. The line asyncio.wait_for(slow_task(), timeout=2) waits for the task to complete but limits the wait to 2 seconds, causing a timeout since the task takes longer. When the timeout is exceeded, a TimeoutError is raised, the task is canceled, and the exception is handled by printing a message indicating the task took too long.
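One subtlety worth knowing: on timeout, asyncio.wait_for cancels the awaitable. If you want the timeout without losing the work, you can wrap the awaitable in asyncio.shield, which protects the inner task from that cancellation. A minimal sketch (using short sleeps for brevity):

```python
import asyncio

async def slow_task():
    await asyncio.sleep(0.3)
    return "Completed"

async def main():
    inner = asyncio.create_task(slow_task())
    try:
        # shield() means the timeout cancels the wait, not the task itself
        return await asyncio.wait_for(asyncio.shield(inner), timeout=0.1)
    except asyncio.TimeoutError:
        print("Timed out, but the task is still running...")
        return await inner  # the shielded task can still be awaited later

result = asyncio.run(main())
print(result)
```

Use this only when abandoning the work would be wasteful; in most cases the default cancellation is what you want.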
We can also wait for multiple or a group of tasks to complete. This is possible using asyncio.wait, asyncio.gather or asyncio.as_completed. Let's explore each method.
asyncio.wait
The asyncio.wait method waits for a collection of tasks and returns two sets: one for completed tasks and one for pending tasks. It takes the following arguments:
aws (required, iterable of awaitables): A collection of coroutine objects, tasks, or futures that you want to wait for.
timeout (float or None, optional): The maximum number of seconds to wait. If not provided, it waits indefinitely.
return_when (constant, optional): Specifies when asyncio.wait should return. Options include:
- asyncio.ALL_COMPLETED (default): Returns when all tasks are complete.
- asyncio.FIRST_COMPLETED: Returns when the first task is completed.
- asyncio.FIRST_EXCEPTION: Returns when the first task raises an exception.
Let's see how it is used in an example.
import asyncio
import random

async def task():
    await asyncio.sleep(random.uniform(1, 3))

async def main():
    tasks = [asyncio.create_task(task()) for _ in range(3)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    print(f"Done tasks: {len(done)}, Pending tasks: {len(pending)}")

asyncio.run(main())
In the code above, asyncio.wait waits for a group of tasks and returns two sets: one with completed tasks and another with those still pending. You can control when it returns, such as after the first task is completed or after all tasks are done. In the example, asyncio.wait returns when the first task is completed, leaving the rest in the pending set.
asyncio.gather
The asyncio.gather method runs multiple awaitable objects concurrently and returns a list of their results, optionally handling exceptions. Let's see the arguments it takes.
*aws (required, multiple awaitables): A variable number of awaitable objects (like coroutines, tasks, or futures) to run concurrently.
return_exceptions (bool, optional): If True, exceptions in the tasks will be returned as part of the results list instead of being raised.
Let's see how it can be used in an example.
import asyncio
import random

async def task(id):
    await asyncio.sleep(random.uniform(1, 3))
    return f"Task {id} done"

async def main():
    results = await asyncio.gather(task(1), task(2), task(3))
    print(results)

asyncio.run(main())
In the code above, asyncio.gather runs multiple awaitable objects concurrently and returns a list of their results in the order they were passed in. It allows you to handle exceptions gracefully if return_exceptions is set to True. In the example, three tasks are run simultaneously, and their results are returned in a list once all tasks are complete.
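To make the return_exceptions behavior concrete, here is a small sketch (the coroutine names and the "boom" error are illustrative): with return_exceptions=True, a failing task shows up as an exception object in the results list, in its original position, instead of propagating out of gather.

```python
import asyncio

async def ok(i):
    await asyncio.sleep(0.05)
    return f"Task {i} done"

async def fails():
    await asyncio.sleep(0.05)
    raise ValueError("boom")

async def main():
    # The failing task becomes a ValueError entry in the list
    # rather than raising out of gather
    return await asyncio.gather(ok(1), fails(), ok(3), return_exceptions=True)

results = asyncio.run(main())
print(results)
```

Without return_exceptions=True, the ValueError would propagate out of the await and the other results would be lost.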
asyncio.as_completed
The asyncio.as_completed method is used to return an iterator that yields tasks as they are completed, allowing results to be processed immediately. It takes the following arguments:
aws (iterable of awaitables): A collection of coroutine objects, tasks, or futures.
timeout (float or None, optional): The maximum number of seconds to wait for tasks to complete. If not provided, it waits indefinitely.
Example
import asyncio
import random

async def task(id):
    await asyncio.sleep(random.uniform(1, 3))
    return f"Task {id} done"

async def main():
    tasks = [task(i) for i in range(3)]
    for coro in asyncio.as_completed(tasks):
        result = await coro
        print(result)

asyncio.run(main())
In the example above, asyncio.as_completed returns an iterator that yields results as each task completes, allowing you to process them immediately. This is useful when you want to handle results as soon as they're available, rather than waiting for all tasks to finish. In the example, the tasks are run simultaneously, and their results are printed as each one finishes, in the order they complete.
To summarize, you use:
asyncio.wait: when you need to handle multiple tasks and want to track which tasks are completed and which are still pending. It's useful when you care about the status of each task separately.
asyncio.gather: when you want to run multiple tasks concurrently and need the results in a list, especially when the order of results matters or you need to handle exceptions gracefully.
asyncio.as_completed: when you want to process results as soon as each task finishes, rather than waiting for all tasks to complete. It’s useful for handling results in the order they become available.
However, these methods don't offer atomic task management with built-in error handling. In the next section, we will look at asyncio.TaskGroup and how to use it to manage a group of tasks.
asyncio.TaskGroup
asyncio.TaskGroup is a context manager introduced in Python 3.11 that simplifies managing multiple tasks as a group. It ensures that if any task within the group fails, all other tasks are canceled, providing a way to handle complex task management with robust error handling. The class has one method, create_task, used to create and add tasks to the task group. You pass a coroutine to this method, and it returns an asyncio.Task object that is managed by the group.
Here is an example of how it is used:
import asyncio

async def task1():
    await asyncio.sleep(1)
    return "Task 1 done"

async def task2():
    await asyncio.sleep(2)
    return "Task 2 done"

async def task_with_error():
    await asyncio.sleep(1)
    raise ValueError("An error occurred")

async def main():
    try:
        async with asyncio.TaskGroup() as tg:
            t1 = tg.create_task(task1())
            t2 = tg.create_task(task2())
            tg.create_task(task_with_error())
    except* ValueError as eg:
        print(f"Error: {eg.exceptions[0]}")

    # Print results from tasks that actually finished;
    # a cancelled task has no result to read
    for t in (t1, t2):
        if t.done() and not t.cancelled():
            print(t.result())

asyncio.run(main())
asyncio.TaskGroup manages multiple tasks and ensures that if any task fails, the remaining tasks in the group are canceled; the failure then propagates as an ExceptionGroup when the async with block exits. In the example, the task that raises causes the still-running task to be canceled, and only the results of tasks that actually finished are printed.
A practical use case is web scraping: you can use asyncio.TaskGroup to handle multiple concurrent API requests and ensure that if any request fails, the other requests are canceled to avoid working with incomplete data.
We are now at the end of the article, having covered the multiple methods asyncio provides to create and manage tasks. Here is a summary of the methods:
asyncio.wait_for: Wait for a task with a timeout.
asyncio.wait: Wait for multiple tasks with flexible completion conditions.
asyncio.gather: Aggregate multiple tasks into a single awaitable.
asyncio.as_completed: Handle tasks as they are completed.
asyncio.TaskGroup: Manage a group of tasks with automatic cancellation on failure.
Conclusion
Asynchronous programming can transform the way you handle concurrent tasks in Python, making your code more efficient and responsive. In this article, we've navigated through the various methods provided by asyncio to create and manage tasks, from simple timeouts to sophisticated task groups. Understanding when and how to use each method—asyncio.wait_for, asyncio.wait, asyncio.gather, asyncio.as_completed, and asyncio.TaskGroup—will help you harness the full potential of asynchronous programming, making your applications more robust and scalable.
For a deeper dive into asynchronous programming and more practical examples, explore our detailed guide here.
If you enjoyed this article, consider subscribing to my newsletter so you don't miss out on future updates.
Happy coding!