Async/Await in Python: A Complete Guide to Coroutines
Daniel Hayes
Full-Stack Engineer · Leapcell

Python Asynchronous Programming
In Python, there are multiple asynchronous approaches available, such as coroutines, multithreading, and multiprocessing, along with some traditional methods and third-party asynchronous libraries. This article focuses mainly on coroutines and briefly introduces multithreading and multiprocessing.
async/await
In Python, a function declared with async def is an asynchronous function, often referred to as a coroutine function. For example:
import asyncio

async def hello():
    await asyncio.sleep(1)
    print("hello leapcell")
Calling Method
The way to call an asynchronous function is a bit different from that of a regular function. For instance, here's how to call a regular function:
def hello():
    print("hello leapcell")

hello()
And here's how to call an asynchronous function:
import asyncio

async def hello():
    await asyncio.sleep(1)
    print("hello leapcell")

h = hello()
asyncio.run(h)
When calling an asynchronous function, h = hello() only returns a coroutine object; the code inside the function does not execute yet. It runs only once the coroutine is passed to asyncio.run(h) or awaited with await h, as shown below:
import asyncio

async def async_function():
    print("This is inside the async function")
    await asyncio.sleep(1)
    return "Async function result"

# Correct usage
async def correct_usage():
    print("Correct usage:")
    result = await async_function()
    print(f"Result: {result}")

# Call without using await
def incorrect_usage():
    print("\nIncorrect usage:")
    coroutine = async_function()
    print(f"Returned object: {coroutine}")
    # Note: "This is inside the async function" won't be printed here

# Handle an unawaited coroutine
async def handle_unawaited_coroutine():
    print("\nHandling unawaited coroutine:")
    coroutine = async_function()
    try:
        # Await the coroutine so it actually runs
        result = await coroutine
        print(f"Result after handling: {result}")
    except RuntimeWarning as e:
        print(f"Caught warning: {e}")

async def main():
    await correct_usage()
    incorrect_usage()
    await handle_unawaited_coroutine()

asyncio.run(main())
Common Methods for Calling Asynchronous Functions
asyncio.gather()
Using gather, you can start multiple tasks at once and execute them concurrently. Once all tasks have completed and returned their results, the subsequent code continues to execute. For example:
import asyncio

async def num01():
    await asyncio.sleep(1)
    return 1

async def num02():
    await asyncio.sleep(1)
    return 2

async def combine():
    results = await asyncio.gather(num01(), num02())
    print(results)

asyncio.run(combine())
Output:
[1, 2]
There are two asynchronous functions above. asyncio.gather runs them concurrently, and await waits for both to finish; the returned values are collected, in order, in the results list.
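As a side note, if any of the gathered coroutines might raise, gather also accepts return_exceptions=True so that exceptions are returned in the result list instead of propagating out of the gather call. The following is a minimal sketch; ok and fails are made-up helper coroutines for illustration.

import asyncio

async def ok():
    await asyncio.sleep(1)
    return "ok"

async def fails():
    await asyncio.sleep(1)
    raise ValueError("something went wrong")

async def combine():
    # With return_exceptions=True, exceptions are returned as results
    # instead of being raised out of gather().
    results = await asyncio.gather(ok(), fails(), return_exceptions=True)
    print(results)  # e.g. ['ok', ValueError('something went wrong')]

asyncio.run(combine())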
Using await Directly
The gather method above collects multiple asynchronous functions and runs them concurrently. Apart from that, you can also use the await keyword directly, as shown below:
import asyncio

async def hello():
    await asyncio.sleep(1)
    return "hello leapcell"

async def example():
    result = await hello()
    print(result)

asyncio.run(example())
Output:
hello leapcell
In the example function above, await is used to wait for the result of the asynchronous function, which is then printed to the console. This approach actually executes sequentially, because the code pauses at the await statement until the result is returned before continuing with the following code.
What if you don't wait here? If you write result = hello(), the code inside hello() won't execute, and result is just a coroutine object.
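To make the sequential behaviour concrete, here is a small timing sketch (the helper names are made up): awaiting two coroutines one after the other takes roughly the sum of their sleep times, while gathering them takes roughly the longest single sleep.

import asyncio
import time

async def task():
    await asyncio.sleep(1)
    return "done"

async def sequential():
    start = time.perf_counter()
    await task()  # waits ~1s
    await task()  # waits another ~1s
    print(f"sequential: {time.perf_counter() - start:.1f}s")  # ~2.0s

async def concurrent():
    start = time.perf_counter()
    await asyncio.gather(task(), task())  # both sleeps overlap
    print(f"concurrent: {time.perf_counter() - start:.1f}s")  # ~1.0s

async def main():
    await sequential()
    await concurrent()

asyncio.run(main())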
asyncio.create_task()
Besides the methods above, there is a more flexible option: asyncio.create_task(). It creates a task and schedules it to run in the background immediately, so the main coroutine can carry on with other work. When you need the result of the task, use await to retrieve it, as shown below:
import asyncio

async def number():
    await asyncio.sleep(1)
    return 1

async def float_num():
    await asyncio.sleep(1)
    return 1.0

async def example():
    n = asyncio.create_task(number())
    f = asyncio.create_task(float_num())
    print("do something...")
    print(await n)
    print(await f)

asyncio.run(example())
Output:
do something...
1
1.0
From the output above, we can see that create_task creates and starts the tasks first. The main coroutine is not blocked and keeps executing the following code; when the result of an asynchronous function is needed, await n retrieves it. In this way, you can hand time-consuming work off to asynchronous tasks and collect their results later.
Note: calling an asynchronous function with create_task is different from calling it like a regular function. A plain call such as number() does not execute the function body, whereas create_task schedules the coroutine to run right away. Even if you never await the result, the task keeps running in the background while the event loop is alive; tasks that are still pending when the loop shuts down may be cancelled.
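The following minimal sketch (with a made-up background coroutine) illustrates this: the task starts running as soon as the main coroutine reaches its next await, without ever being awaited explicitly. If the main coroutine returned before the task finished, asyncio.run() would cancel the still-pending task on shutdown.

import asyncio

async def background():
    print("background task started")
    await asyncio.sleep(1)
    print("background task finished")

async def main():
    # The task is scheduled as soon as it is created...
    asyncio.create_task(background())
    # ...and gets a chance to run whenever the main coroutine awaits.
    await asyncio.sleep(2)
    print("main exiting")

asyncio.run(main())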
Semaphore
asyncio.Semaphore is a synchronization primitive in Python's asyncio library that controls access to shared resources. It is very useful in asynchronous programming because it limits how many coroutines can access a given resource at the same time, as shown in the following code:
import asyncio
import aiohttp

async def fetch(url, session, semaphore):
    async with semaphore:
        print(f"Fetching {url}")
        async with session.get(url) as response:
            return await response.text()

async def main():
    urls = [
        "http://example.com",
        "http://example.org",
        "http://example.net",
        "http://example.edu",
        "http://example.io",
    ]
    semaphore = asyncio.Semaphore(2)  # Limit the number of concurrent requests to 2
    async with aiohttp.ClientSession() as session:
        tasks = [fetch(url, session, semaphore) for url in urls]
        responses = await asyncio.gather(*tasks)
        for url, response in zip(urls, responses):
            print(f"URL: {url}, Response length: {len(response)}")

asyncio.run(main())
In the code above, an asyncio.Semaphore(2) is created to limit the number of concurrent requests to 2. In the asynchronous function fetch, async with semaphore acquires and releases the semaphore: acquire() is called automatically on entering the block, and release() is called on leaving it. Using a Semaphore keeps the number of concurrent requests under control and prevents excessive pressure on the server. It is very useful when dealing with limited resources, such as database connections, and it helps you balance concurrency against system performance.
Semaphore Principle
Internally, a semaphore maintains a counter. When the counter is greater than zero, access is allowed; when it reaches zero, further access is blocked. The counter is decremented with acquire() and incremented with release(). You specify the initial counter value when creating the semaphore, and that value determines how many coroutines may hold it at once.
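As a rough illustration of that counter behaviour, here is a minimal sketch that needs no external libraries (the worker names are made up): with an initial value of 2, only two workers can be inside the async with block at any moment, so four one-second workers finish in about two seconds rather than one.

import asyncio
import time

async def limited_worker(name, semaphore):
    async with semaphore:  # acquire(): counter decreases by 1
        print(f"{time.strftime('%X')} worker {name} acquired the semaphore")
        await asyncio.sleep(1)
    # Leaving the block calls release(): counter increases by 1
    print(f"{time.strftime('%X')} worker {name} released the semaphore")

async def main():
    semaphore = asyncio.Semaphore(2)  # at most 2 workers inside the block at once
    await asyncio.gather(*(limited_worker(i, semaphore) for i in range(4)))

asyncio.run(main())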
Multithreading
Multithreading is a traditional way of running tasks concurrently and is suitable for I/O-bound tasks, as shown in the following example:
import threading
import time

def worker(name):
    print(f"Worker {name} starting")
    time.sleep(2)  # Simulate a time-consuming operation
    print(f"Worker {name} finished")

def main():
    threads = []
    for i in range(3):
        t = threading.Thread(target=worker, args=(i,))
        threads.append(t)
        t.start()
    for t in threads:
        t.join()
    print("All workers finished")

if __name__ == "__main__":
    main()
The t.join() calls wait for the three threads to finish. When join() is called on a thread or process object, the calling thread (usually the main thread) blocks until that thread or process has finished execution.
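To see why the join() calls matter, here is a small sketch (names are made up): without them, the main thread typically reaches its final print before any worker has finished.

import threading
import time

def worker(name):
    time.sleep(1)  # simulate work
    print(f"Worker {name} finished")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()

# Without join(), this line usually prints before any worker finishes,
# because the main thread does not wait for the others.
print("Main thread reached the end")

for t in threads:
    t.join()  # now block until every worker is done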
Multiprocessing
Multiprocessing is suitable for CPU-intensive tasks and can fully utilize multi-core processors, as shown below:
import multiprocessing
import time

def worker(name):
    print(f"Worker {name} starting")
    time.sleep(2)  # Simulate a time-consuming operation
    print(f"Worker {name} finished")

if __name__ == "__main__":
    processes = []
    for i in range(3):
        p = multiprocessing.Process(target=worker, args=(i,))
        processes.append(p)
        p.start()
    for p in processes:
        p.join()
    print("All workers finished")
Conclusion
In addition to the approaches above, Python offers other asynchronous options, such as callback functions or third-party libraries like Gevent. Each method has its own advantages and limitations: threads suit I/O-bound tasks but are constrained by the GIL (Global Interpreter Lock); multiprocessing suits CPU-intensive tasks but has higher memory overhead; third-party libraries provide specialized features and optimizations but may add complexity to a project. In contrast, the async/await syntax offers a more modern and readable style of asynchronous programming and is currently the recommended way to handle asynchronous operations in Python.
Leapcell: The Best of Serverless Web Hosting
Finally, let me introduce a platform well suited to deploying Python services: Leapcell.
Build with Your Favorite Language
Develop effortlessly in JavaScript, Python, Go, or Rust.
Deploy Unlimited Projects for Free
Only pay for what you use: no requests, no charges.
Pay-as-You-Go, No Hidden Costs
No idle fees, just seamless scalability.
Explore Our Documentation
Follow us on Twitter: @LeapcellHQ