Common Async Pitfalls in Rust Concurrency
James Reed
Infrastructure Engineer · Leapcell

Asynchronous programming comes with certain complexities, and it's easy to make mistakes when using async in Rust. This article discusses common pitfalls in Rust asynchronous runtimes.
Unexpected Synchronous Blocking
Accidentally performing synchronous blocking operations in asynchronous code is a major pitfall. It undermines the advantages of async programming and causes performance bottlenecks. Here are some common scenarios:
- Using blocking I/O operations in an async function: for example, directly calling standard blocking functions like `std::fs::File::open` or `std::net::TcpStream::connect` inside an `async fn`.
- Performing CPU-intensive tasks inside async closures: running heavy computations in an async closure can block the current thread and stall the execution of other async tasks.
- Using blocking libraries or functions in async code: some libraries do not offer async interfaces and can only be called synchronously; calling them from async code blocks the executor.
Take a look at the following code to compare the difference between using `std::thread::sleep` and `tokio::time::sleep`:
```rust
use tokio::task;
use tokio::time::Duration;

async fn handle_request() {
    println!("Start processing request");
    // tokio::time::sleep(Duration::from_secs(1)).await; // Correct: use tokio::time::sleep
    std::thread::sleep(Duration::from_secs(1)); // Incorrect: using std::thread::sleep
    println!("Request processing completed");
}

#[tokio::main(flavor = "current_thread")] // Use the tokio::main macro in single-thread mode
async fn main() {
    let start = std::time::Instant::now();

    // Launch multiple concurrent tasks
    let handles = (0..10)
        .map(|_| task::spawn(handle_request()))
        .collect::<Vec<_>>();

    // Wait for all tasks to complete
    for handle in handles {
        handle.await.unwrap();
    }

    println!("All requests completed, elapsed time: {:?}", start.elapsed());
}
```

Because the runtime here is single-threaded, `std::thread::sleep` blocks the only worker thread: the ten requests run one after another and take roughly 10 seconds in total, whereas with `tokio::time::sleep` they overlap and finish in about 1 second.
How to Avoid the Trap of Synchronous Blocking?
- Use asynchronous libraries and functions: prefer libraries that offer async interfaces, such as the async I/O, timers, and networking provided by runtimes like `tokio` or `async-std`.
- Offload CPU-intensive tasks to a dedicated thread pool: if heavy computation is needed in async code, use `tokio::task::spawn_blocking` or `async_std::task::spawn_blocking` to move that work onto a separate thread pool so the async worker threads are not blocked (see the sketch after this list).
- Carefully review dependencies: when using third-party libraries, check whether they provide async interfaces to avoid introducing blocking operations.
- Use tools for analysis: performance analysis tools can help detect blocking operations in async code; for example, the `tokio` ecosystem offers the `tokio-console` tool.
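To illustrate the second point, here is a minimal sketch of offloading CPU-bound work with `tokio::task::spawn_blocking`. The `expensive_hash` function is a made-up stand-in for real heavy computation, not something from the original example:

```rust
use tokio::task;

// A made-up stand-in for a CPU-heavy computation.
fn expensive_hash(seed: u64) -> u64 {
    (0..5_000_000u64).fold(seed, |acc, x| acc.wrapping_mul(31).wrapping_add(x))
}

#[tokio::main]
async fn main() {
    // spawn_blocking moves the closure onto tokio's dedicated blocking thread pool,
    // so the async worker threads stay free to drive other tasks.
    let handle = task::spawn_blocking(|| expensive_hash(42));

    // Await the JoinHandle to get the closure's return value back.
    let result = handle.await.unwrap();
    println!("Result: {}", result);
}
```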
Forgetting .await
An asynchronous function returns a `Future`, and you must use `.await` to actually execute it and retrieve the result. Forgetting `.await` means the `Future` is never executed at all.
Consider the following code:
```rust
async fn my_async_function() -> i32 {
    42
}

#[tokio::main]
async fn main() {
    // Incorrect: forgot `.await`, the function will not execute
    my_async_function();

    // Correct
    let result = my_async_function().await;
    println!("The result of the correct async operation is: {}", result);
}
```
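The underlying reason is that calling an async fn only constructs a `Future`; nothing in its body runs until the value is awaited (or otherwise polled), and the compiler warns about the forgotten call because futures "do nothing unless you `.await` or poll them". A minimal sketch, reusing the same `my_async_function`:

```rust
async fn my_async_function() -> i32 {
    42
}

#[tokio::main]
async fn main() {
    // Calling an async fn only builds a Future; the body has not run yet.
    let future = my_async_function();
    println!("Future created, nothing executed yet");

    // The body runs here, when the Future is awaited.
    let result = future.await;
    println!("Result: {}", result);
}
```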
Overusing spawn
Excessively spawning lightweight tasks introduces overhead from task scheduling and context switching, which can actually reduce performance.
In the example below, we multiply each number by 2, store the results in a `Vec`, and finally print the number of elements in the `Vec`. Both the incorrect and the correct approach are demonstrated:
```rust
use async_std::task;

async fn process_item(item: i32) -> i32 {
    // A very simple operation
    item * 2
}

async fn bad_use_of_spawn() {
    let mut results = Vec::new();
    for i in 0..10000 {
        // Incorrect: spawning a task for each simple operation
        let handle = task::spawn(process_item(i));
        results.push(handle.await);
    }
    println!("{:?}", results.len());
}

async fn good_use_of_spawn() {
    let mut results = Vec::new();
    for i in 0..10000 {
        results.push(process_item(i).await);
    }
    println!("{:?}", results.len());
}

fn main() {
    task::block_on(async {
        bad_use_of_spawn().await;
        good_use_of_spawn().await;
    });
}
```
In the incorrect example above, a new task is spawned for each simple multiplication, leading to massive overhead from task scheduling. The correct approach directly awaits the async function, avoiding extra overhead.
We should only use `spawn` when true concurrency is required. For CPU-intensive or long-running I/O-bound tasks, `spawn` is appropriate; for very lightweight tasks, directly using `.await` is typically more efficient. You can also manage multiple tasks more effectively using `tokio::task::JoinSet`, as shown below.
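As a minimal sketch of that last point (reusing `process_item` from the example above, and assuming a reasonably recent `tokio` version, since `JoinSet` is a tokio API), a `JoinSet` keeps a group of spawned tasks together and yields their results as they complete:

```rust
use tokio::task::JoinSet;

async fn process_item(item: i32) -> i32 {
    item * 2
}

#[tokio::main]
async fn main() {
    // A JoinSet owns a group of spawned tasks and lets us collect results
    // in completion order; remaining tasks are aborted if the set is dropped.
    let mut set = JoinSet::new();
    for i in 0..10 {
        set.spawn(process_item(i));
    }

    let mut results = Vec::new();
    while let Some(res) = set.join_next().await {
        // Each join_next() yields a Result<i32, JoinError>.
        results.push(res.unwrap());
    }
    println!("Collected {} results", results.len());
}
```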
Conclusion
Async Rust is powerful but easy to misuse. Avoid blocking calls, don't forget `.await`, and only spawn when true concurrency is needed. Write with care, and your async code will stay fast and reliable.
We are Leapcell, your top choice for hosting Rust projects.
Leapcell is the Next-Gen Serverless Platform for Web Hosting, Async Tasks, and Redis:
Multi-Language Support
- Develop with Node.js, Python, Go, or Rust.
Deploy unlimited projects for free
- Pay only for usage — no requests, no charges.
Unbeatable Cost Efficiency
- Pay-as-you-go with no idle charges.
- Example: $25 supports 6.94M requests at a 60ms average response time.
Streamlined Developer Experience
- Intuitive UI for effortless setup.
- Fully automated CI/CD pipelines and GitOps integration.
- Real-time metrics and logging for actionable insights.
Effortless Scalability and High Performance
- Auto-scaling to handle high concurrency with ease.
- Zero operational overhead — just focus on building.
Explore more in the Documentation!
Follow us on X: @LeapcellHQ