From Cache Breakdown to Robustness: singleflight in Go
Grace Collins
Solutions Engineer · Leapcell

Preface
When building high-performance services, caching is a key technology for optimizing database load and improving response speed. However, using caching also brings some challenges, among which cache breakdown is a major issue. Cache breakdown can cause a surge in database pressure, degrade database performance, and in severe cases, even bring down the database and render it unavailable.
In Go, the golang.org/x/sync/singleflight package provides a mechanism that ensures concurrent requests for the same key execute the underlying function only once at a time. This mechanism effectively prevents cache breakdown.
This article takes a close look at the singleflight package. Starting from the basics of the cache breakdown problem, it introduces the package in detail and demonstrates how to use it to avoid cache breakdown.
Cache Breakdown
Cache breakdown refers to a situation where, under high concurrency, a hot key suddenly expires, causing a large number of requests to directly access the database, which can overload the database and even cause it to crash.
Common solutions include:
- Setting hot data to never expire: For some well-defined hot data, you can set it to never expire, ensuring that requests do not bypass the cache due to cache expiration and directly access the database.
- Using mutex locks: To prevent all requests from querying the database simultaneously when the cache expires, a locking mechanism can be adopted to ensure that only one request queries the database and updates the cache, while other requests wait until the cache is updated before reading it (see the sketch after this list).
- Proactive updates: Monitor cache usage in the background, and when the cache is about to expire, update it asynchronously to extend its expiration time.
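To make the mutex-lock idea concrete, here is a minimal, self-contained sketch (not from the original article): cacheEntry, getWithLock, and the stubbed queryDB are hypothetical names used purely for illustration, with an in-process map standing in for a real cache.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// cacheEntry is a hypothetical in-process cache item.
type cacheEntry struct {
	value     string
	expiresAt time.Time
}

var (
	mu    sync.Mutex
	cache = map[string]cacheEntry{}
)

// getWithLock returns the cached value for key. On a miss or expiry, only the
// goroutine holding the lock queries the database; the others wait on the lock
// and then read the refreshed entry.
func getWithLock(key string, queryDB func(string) (string, error)) (string, error) {
	mu.Lock()
	defer mu.Unlock()

	if e, ok := cache[key]; ok && time.Now().Before(e.expiresAt) {
		return e.value, nil // cache hit
	}

	v, err := queryDB(key) // cache miss or expired
	if err != nil {
		return "", err
	}
	cache[key] = cacheEntry{value: v, expiresAt: time.Now().Add(time.Minute)}
	return v, nil
}

func main() {
	queryDB := func(key string) (string, error) {
		fmt.Println("query database for", key)
		return "Leapcell", nil
	}

	var wg sync.WaitGroup
	for range 5 {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, _ := getWithLock("user:1234", queryDB)
			fmt.Println("got", v)
		}()
	}
	wg.Wait()
}
```

Only the goroutine that acquires the lock queries the database; by the time the other goroutines acquire it, the cache has already been refreshed, so the database is hit only once per expiry.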
The singleflight Package
Package singleflight provides a duplicate function call suppression mechanism.
This sentence is from the official documentation.
In other words, when multiple goroutines attempt to call the same function (based on a given key) at the same time, singleflight ensures that the function is only executed by the first arriving goroutine. The other goroutines will wait for the result of this call and share the result, instead of initiating multiple calls simultaneously.
In short, singleflight merges multiple requests into a single request, allowing multiple requests to share the same result.
Components
- Group: The core structure of the singleflight package. It manages all in-flight calls and ensures that, at any moment, requests for the same key are executed only once. A Group does not need to be explicitly initialized; you can simply declare one and use it.
- Do: The main method for merging requests. It takes two arguments: a string key that identifies the resource and a function fn that performs the actual work. If a call with the same key is already in progress, Do waits for that call to complete and shares its result; otherwise, it executes fn and returns the result. Do has three return values: the first two are the return values of fn, of type interface{} and error respectively, and the third is a boolean indicating whether the result of Do was shared by multiple callers.
- DoChan: Similar to Do, but instead of blocking it returns a channel on which the result is delivered when the call completes, so the caller can wait for the result in a non-blocking way.
- Forget: Removes a key and its associated call record from the Group, ensuring that the next Do call with the same key executes a new request instead of reusing the previous result (a short sketch illustrating this follows the list).
- Result: The struct type delivered on the channel returned by DoChan. It encapsulates the outcome of a call and contains three fields: Val (interface{}), the value returned by the call; Err (error), any error encountered during the call; and Shared (bool), whether the result was shared with callers other than the current one.
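The snippet below is a small sketch (an illustration, not from the original article) showing Forget together with DoChan and the Result struct: a slow in-flight call is started, Forget detaches its key, and a subsequent Do runs a fresh function instead of waiting for the earlier call. The slowFetch helper and the timing values are hypothetical.

```go
package main

import (
	"fmt"
	"time"

	"golang.org/x/sync/singleflight"
)

func main() {
	var sg singleflight.Group

	slowFetch := func() (interface{}, error) {
		time.Sleep(2 * time.Second) // simulate a slow backend call
		return "stale value", nil
	}

	// Start a long-running call via DoChan; its result will arrive on ch later.
	ch := sg.DoChan("key", slowFetch)

	// Forget tells the Group that future Do/DoChan calls for "key" should start
	// a fresh call instead of waiting for (and sharing) the in-flight one.
	sg.Forget("key")

	v, err, shared := sg.Do("key", func() (interface{}, error) {
		return "fresh value", nil
	})
	fmt.Printf("Do after Forget: v=%v, err=%v, shared=%v\n", v, err, shared)

	// The original call still completes and delivers a Result on its channel.
	res := <-ch
	fmt.Printf("DoChan result: Val=%v, Err=%v, Shared=%v\n", res.Val, res.Err, res.Shared)
}
```

Forget is mainly useful when an in-flight result is known to be stale (for example, after an explicit invalidation) and subsequent callers should not piggyback on it.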
Installation
Install the singleflight dependency in your Go application with the following command:
go get golang.org/x/sync/singleflight
Example Usage
```go
package main

import (
	"errors"
	"fmt"
	"golang.org/x/sync/singleflight"
	"sync"
)

var errRedisKeyNotFound = errors.New("redis: key not found")

func fetchDataFromCache() (any, error) {
	fmt.Println("fetch data from cache")
	return nil, errRedisKeyNotFound
}

func fetchDataFromDataBase() (any, error) {
	fmt.Println("fetch data from database")
	return "Leapcell", nil
}

func fetchData() (any, error) {
	cache, err := fetchDataFromCache()
	if err != nil && errors.Is(err, errRedisKeyNotFound) {
		fmt.Println(errRedisKeyNotFound.Error())
		return fetchDataFromDataBase()
	}
	return cache, err
}

func main() {
	var (
		sg singleflight.Group
		wg sync.WaitGroup
	)
	for range 5 {
		wg.Add(1)
		go func() {
			defer wg.Done()
			v, err, shared := sg.Do("key", fetchData)
			if err != nil {
				panic(err)
			}
			fmt.Printf("v: %v, shared: %v\n", v, shared)
		}()
	}
	wg.Wait()
}
```
This code simulates a typical concurrent access scenario: fetching data from the cache, and if the cache misses, retrieving from the database. During this process, the singleflight library plays a crucial role. It ensures that when multiple concurrent requests try to access the same data at the same time, the actual fetch operation (whether from cache or database) is only performed once. This not only reduces database load but also effectively prevents cache breakdown in high concurrency scenarios.
The output of the code is as follows:
```
fetch data from cache
redis: key not found
fetch data from database
v: Leapcell, shared: true
v: Leapcell, shared: true
v: Leapcell, shared: true
v: Leapcell, shared: true
v: Leapcell, shared: true
```
As shown, when 5 goroutines concurrently fetch the same data, the data fetch operation is actually performed only once by one goroutine. Furthermore, since all returned shared values are true, it means the result was shared with the other 4 goroutines.
Best Practices
Key Design
When generating keys, we should ensure their uniqueness and consistency.
- Uniqueness: Make sure the key passed to the Do method uniquely identifies the resource so that the Group can distinguish between different requests. A structured naming convention such as {type}:{identifier} is recommended. For example, when fetching user information, the key can be user:1234, where user denotes the data type and 1234 is the specific user identifier.
- Consistency: For the same logical request, the generated key should always be the same, no matter when it is called. This allows the Group to properly merge identical requests and prevents unexpected behavior (see the small key-builder sketch after this list).
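As a purely illustrative sketch, the hypothetical userKey helper below builds keys following the {type}:{identifier} convention; it is not part of the singleflight API.

```go
package main

import (
	"fmt"

	"golang.org/x/sync/singleflight"
)

// userKey builds a singleflight key following the {type}:{identifier} convention.
// The helper is hypothetical; any deterministic scheme works as long as the same
// logical request always produces the same key.
func userKey(id int64) string {
	return fmt.Sprintf("user:%d", id)
}

func main() {
	var sg singleflight.Group

	v, err, shared := sg.Do(userKey(1234), func() (interface{}, error) {
		// In a real service this would hit the cache/database for user 1234.
		return "user-1234-profile", nil
	})
	fmt.Println(v, err, shared)
}
```

Because userKey is deterministic, every goroutine requesting user 1234 produces the same key, so the Group can merge their requests.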
Timeout Control
When Group.Do is called, the first arriving goroutine executes fn while subsequent goroutines block until it returns. If that blocking lasts too long, a fallback (degradation) strategy may be needed to keep the system responsive. In such cases, Group.DoChan can be combined with a select statement to implement timeout control.
Below is a simple example demonstrating timeout control:
```go
package main

import (
	"fmt"
	"golang.org/x/sync/singleflight"
	"time"
)

func main() {
	var sg singleflight.Group
	doChan := sg.DoChan("key", func() (interface{}, error) {
		time.Sleep(4 * time.Second)
		return "Leapcell", nil
	})
	select {
	case <-doChan:
		fmt.Println("done")
	case <-time.After(2 * time.Second):
		fmt.Println("timeout")
		// Implement other fallback strategies here
	}
}
```
Summary
- This article first introduced the concept of cache breakdown and its common solutions.
- Then, it explored the singleflight package in depth, covering its basic concepts, components, installation, and usage examples.
- Next, it demonstrated how to use singleflight to prevent cache breakdown in high concurrency scenarios through a simulated concurrent access example.
- Finally, it discussed best practices for designing keys and controlling request timeouts, aiming to help readers better understand and apply singleflight when optimizing concurrent processing logic.
We are Leapcell, your top choice for hosting Go projects.
Leapcell is the Next-Gen Serverless Platform for Web Hosting, Async Tasks, and Redis:
Multi-Language Support
- Develop with Node.js, Python, Go, or Rust.
Deploy unlimited projects for free
- pay only for usage — no requests, no charges.
Unbeatable Cost Efficiency
- Pay-as-you-go with no idle charges.
- Example: $25 supports 6.94M requests at a 60ms average response time.
Streamlined Developer Experience
- Intuitive UI for effortless setup.
- Fully automated CI/CD pipelines and GitOps integration.
- Real-time metrics and logging for actionable insights.
Effortless Scalability and High Performance
- Auto-scaling to handle high concurrency with ease.
- Zero operational overhead — just focus on building.
Explore more in the Documentation!
Follow us on X: @LeapcellHQ