When to Use sync vs. channel in Go
James Reed
Infrastructure Engineer · Leapcell

How to Choose Between sync and channel
When programming in C, we generally use shared memory for communication: when multiple threads operate on a shared piece of data concurrently, we guard it with a mutex, locking and unlocking as needed to keep the data safe and the threads synchronized.
In Go, however, the recommended approach is to share memory by communicating: channels, rather than locks, are used to synchronize access to critical sections.
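As a quick illustration of that idea, here is a minimal sketch of our own (the names are illustrative): instead of locking a shared counter, worker goroutines send their updates over a channel to a single owner goroutine, which is the only one that ever touches the shared state.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	updates := make(chan int) // workers communicate changes instead of locking
	result := make(chan int)

	// A single owner goroutine holds the counter; no mutex is needed
	// because only this goroutine ever touches the shared state.
	go func() {
		counter := 0
		for delta := range updates {
			counter += delta
		}
		result <- counter
	}()

	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			updates <- 1 // communicate; don't share
		}()
	}
	wg.Wait()
	close(updates)
	fmt.Println(<-result) // always prints 4
}
```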
That said, channels in Go are relatively high-level primitives and naturally perform worse than the locking mechanisms in the sync package. If you're interested, you can write a simple benchmark to compare them yourself, and discuss your findings in the comments.
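If you want to try such a comparison, a minimal benchmark sketch might look like the following (the names and the channel-as-lock pattern are our own illustration, not from any library). Put it in a _test.go file and run `go test -bench .`:

```go
package main

import (
	"sync"
	"testing"
)

var (
	bmu  sync.Mutex
	bch  = make(chan struct{}, 1) // buffered channel used as a lock
	bval int
)

// Guarding a shared counter with a plain mutex.
func BenchmarkMutex(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			bmu.Lock()
			bval++
			bmu.Unlock()
		}
	})
}

// Guarding the same counter with a channel acting as the lock.
func BenchmarkChannel(b *testing.B) {
	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			bch <- struct{}{} // acquire
			bval++
			<-bch // release
		}
	})
}
```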
Moreover, when you use the sync package to control synchronization, you do not lose ownership of the struct object, and you can still let multiple goroutines synchronize access to critical-section resources. If your requirements fit this scenario, using the sync package is the more reasonable and efficient choice.
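To make the "you keep ownership of the struct" point concrete, here is a minimal sketch (the Counter type is our own example): the data and its lock live together in one struct, and callers simply share a pointer to it.

```go
package main

import (
	"fmt"
	"sync"
)

// Counter keeps its data and its lock together: callers share the
// *Counter, and the struct itself stays under your control.
type Counter struct {
	mu sync.Mutex
	n  int
}

func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func (c *Counter) Value() int {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.n
}

func main() {
	c := &Counter{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
		}()
	}
	wg.Wait()
	fmt.Println(c.Value()) // always prints 4
}
```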
Why You Should Choose the sync Package for Synchronization:
- When you don't want to lose control of the struct while still allowing multiple goroutines to safely access critical section resources.
- When higher performance is required.
sync's Mutex and RWMutex
Looking at the source code of the sync package, we can see that it provides the following primitives:
- Mutex
- RWMutex
- Once
- Cond
- Pool
- Atomic operations in the sync/atomic subpackage
Among these, Mutex is the most commonly used; especially while you are not yet comfortable with channels, you'll find Mutex quite handy. RWMutex, in contrast, is used less frequently.
Have you ever paid attention to the performance difference between Mutex and RWMutex? Most people reach for a mutex by default, so let's write a simple demo to compare the two.
```go
// bbb_test.go
package main

import (
	"sync"
	"testing"
)

var (
	mu   sync.Mutex
	murw sync.RWMutex
	tt1  = 1
	tt2  = 2
	tt3  = 3
)

// Reading data under a plain Mutex.
func BenchmarkReadMutex(b *testing.B) {
	b.RunParallel(func(pp *testing.PB) {
		for pp.Next() {
			mu.Lock()
			_ = tt1
			mu.Unlock()
		}
	})
}

// Reading data under the read lock of an RWMutex.
func BenchmarkReadRWMutex(b *testing.B) {
	b.RunParallel(func(pp *testing.PB) {
		for pp.Next() {
			murw.RLock()
			_ = tt2
			murw.RUnlock()
		}
	})
}

// Writing data under the write lock of an RWMutex.
func BenchmarkWriteRWMutex(b *testing.B) {
	b.RunParallel(func(pp *testing.PB) {
		for pp.Next() {
			murw.Lock()
			tt3++
			murw.Unlock()
		}
	})
}
```
We have written three simple benchmark tests:
- Reading data with a mutex lock.
- Reading data with the read lock of a read-write lock.
- Writing data with the write lock of a read-write lock.
```
$ go test -bench . bbb_test.go --cpu 2
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz
BenchmarkReadMutex-2        39638757    30.45 ns/op
BenchmarkReadRWMutex-2      43082371    26.97 ns/op
BenchmarkWriteRWMutex-2     16383997    71.35 ns/op

$ go test -bench . bbb_test.go --cpu 4
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz
BenchmarkReadMutex-4        17066666    73.47 ns/op
BenchmarkReadRWMutex-4      43885633    30.33 ns/op
BenchmarkWriteRWMutex-4     10593098    110.3 ns/op

$ go test -bench . bbb_test.go --cpu 8
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz
BenchmarkReadMutex-8         8969340    129.0 ns/op
BenchmarkReadRWMutex-8      36451077    33.46 ns/op
BenchmarkWriteRWMutex-8      7728303    158.5 ns/op

$ go test -bench . bbb_test.go --cpu 16
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz
BenchmarkReadMutex-16        8533333    132.6 ns/op
BenchmarkReadRWMutex-16     39638757    29.98 ns/op
BenchmarkWriteRWMutex-16     6751646    173.9 ns/op

$ go test -bench . bbb_test.go --cpu 128
goos: windows
goarch: amd64
cpu: Intel(R) Core(TM)2 Duo CPU T7700 @ 2.40GHz
BenchmarkReadMutex-128      10155368    116.0 ns/op
BenchmarkReadRWMutex-128    35108558    33.27 ns/op
BenchmarkWriteRWMutex-128    6334021    195.3 ns/op
```
From the results, we can see that when concurrency is low, the mutex and the read lock of the RWMutex perform about the same. As concurrency increases, the cost of the RWMutex read lock stays essentially flat, while both the plain mutex and the RWMutex write lock slow down noticeably.
It’s clear that RWMutex is suitable for read-heavy, write-light scenarios. In scenarios with many concurrent read operations, multiple goroutines can acquire the read lock simultaneously, which reduces lock contention and waiting time.
However, with a regular mutex, only one goroutine can acquire the lock at a time under concurrency. Other goroutines will be blocked and have to wait, which negatively affects performance.
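To see this concretely, here is a small sketch of our own: three readers each hold the read lock for about 100 ms, yet all finish at roughly the same time, because RLock admits them concurrently.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var rw sync.RWMutex
	var wg sync.WaitGroup
	start := time.Now()

	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			rw.RLock()
			defer rw.RUnlock()
			time.Sleep(100 * time.Millisecond) // simulate a slow read
			fmt.Printf("reader %d done after %v\n", id, time.Since(start))
		}(i)
	}
	wg.Wait()

	// All three readers finish around the 100ms mark because they hold
	// the read lock simultaneously; with a plain Mutex they would run
	// one after another, for roughly 300ms in total.
	fmt.Println("total:", time.Since(start))
}
```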
Next, let's look at the kinds of problems that can arise when using a mutex in practice.
Things to Note When Using sync
When using the locks in the sync package, you must not copy a Mutex or RWMutex after it has been used.
Here’s a simple demo:
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

var mu sync.Mutex

// Do not copy a Mutex or RWMutex after it has been used.
// If you need to copy, only do it before first use.
func main() {
	go func(mm sync.Mutex) { // mm is a copy of mu, taken while mu is unlocked
		for {
			mm.Lock()
			time.Sleep(time.Second * 1)
			fmt.Println("g2")
			mm.Unlock()
		}
	}(mu)

	mu.Lock()
	go func(mm sync.Mutex) { // mm is a copy of mu in its locked state
		for {
			mm.Lock() // deadlocks: the copy is already locked
			time.Sleep(time.Second * 1)
			fmt.Println("g3")
			mm.Unlock()
		}
	}(mu)

	time.Sleep(time.Second * 1)
	fmt.Println("g1")
	mu.Unlock()
	time.Sleep(time.Second * 20)
}
```
If you run this code, you'll notice that "g3" never gets printed: the goroutine that prints "g3" has deadlocked and never gets the chance to call Unlock.
The reason for this lies in the internal structure of Mutex. Let’s take a look:
```go
// A Mutex must not be copied after first use.
type Mutex struct {
	state int32
	sema  uint32
}
```
The Mutex struct contains two fields: state, which records the state of the mutex, and sema, a semaphore used to coordinate waiting goroutines. When a Mutex is initialized, both are 0. Once we lock it, however, state records that it is locked. If another goroutine copies the Mutex at this point, the copy carries the locked state with it, and locking the copy in the new context deadlocks, since nothing will ever unlock it. This is a crucial detail to keep in mind.
If multiple goroutines need to use the same Mutex, use a closure, or pass a pointer to the lock (or to the struct that wraps it). That way you avoid surprising results, as in the sketch below.
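For example, the earlier demo behaves correctly once the goroutines receive a *sync.Mutex instead of a copy (this rewrite is our own):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var mu sync.Mutex

	// Every goroutine receives a *sync.Mutex, so they all lock the
	// same underlying state instead of private copies.
	worker := func(name string, mm *sync.Mutex) {
		for {
			mm.Lock()
			time.Sleep(time.Second)
			fmt.Println(name)
			mm.Unlock()
		}
	}

	go worker("g2", &mu)

	mu.Lock()
	go worker("g3", &mu)

	time.Sleep(time.Second)
	fmt.Println("g1")
	mu.Unlock()

	time.Sleep(time.Second * 5) // g2 and g3 now both get their turn
}
```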
sync.Once
How often do you use the other members of the sync package? One of the more frequently used ones is sync.Once. Let's look at how to use it and what to pay attention to.
In C or C++, when we need a singleton (only one instance over the lifetime of the program), we often reach for the Singleton pattern. In Go, sync.Once is a great fit for implementing singletons.
sync.Once ensures that a given function is executed only once during the lifetime of the program. This is more flexible than an init function, which always runs exactly once when its package is loaded, whether or not you need it.
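As a small illustration (loadConfig and its contents are hypothetical), the function below runs its initialization at most once, and only on first use, whereas init would run unconditionally at package load time:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	confOnce sync.Once
	conf     map[string]string
)

// loadConfig performs its initialization lazily, the first time it is
// called; every later call just returns the already-built map.
func loadConfig() map[string]string {
	confOnce.Do(func() {
		fmt.Println("loading config ...") // printed a single time
		conf = map[string]string{"env": "prod"}
	})
	return conf
}

func main() {
	for i := 0; i < 3; i++ {
		fmt.Println(loadConfig()["env"])
	}
}
```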
One thing to note: if the function executed within sync.Once panics, it still counts as having run. Any later attempt to enter the same sync.Once will not execute the function again.
Typically, sync.Once is used for object or resource initialization and cleanup, to avoid repeating the work. Here's a demo:
- The main function starts 3 goroutines and uses sync.WaitGroup to manage them and wait for them to exit.
- After starting all the goroutines, the main function waits 2 seconds and then tries to create and get an instance itself.
- Each goroutine also tries to get the instance.
- As soon as one goroutine enters Once and executes the logic, a panic occurs.
- The goroutine that hits the panic recovers from it. By this point the global instance has already been initialized, and no other goroutine can enter the function inside Once again.
```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type Instance struct {
	Name string
}

var instance *Instance
var on sync.Once

func GetInstance(num int) *Instance {
	defer func() {
		if err := recover(); err != nil {
			fmt.Printf("num %d, get instance and catch error ...\n", num)
		}
	}()
	on.Do(func() {
		instance = &Instance{Name: "Leapcell"}
		fmt.Printf("%d enter once ... \n", num)
		panic("panic....")
	})
	// For the goroutine that panics inside Do, this line is never
	// reached: recover runs in the deferred function and GetInstance
	// returns the zero value, nil.
	return instance
}

func main() {
	var wg sync.WaitGroup
	for i := 0; i < 3; i++ {
		wg.Add(1)
		go func(i int) {
			ins := GetInstance(i)
			fmt.Printf("%d: ins:%+v , p=%p\n", i, ins, ins)
			wg.Done()
		}(i)
	}
	time.Sleep(time.Second * 2)
	ins := GetInstance(9)
	fmt.Printf("9: ins:%+v , p=%p\n", ins, ins)
	wg.Wait()
}
```
From the output, we can see that goroutine 0 enters Once and hits the panic, so GetInstance returns nil for that goroutine. All the other goroutines, including the main one, get the address of instance as expected, and it is the same address every time, which shows that the initialization happens only once globally.
```
$ go run main.go
0 enter once ... 
num 0, get instance and catch error ...
0: ins:<nil> , p=0x0
1: ins:&{Name:Leapcell} , p=0xc000086000
2: ins:&{Name:Leapcell} , p=0xc000086000
9: ins:&{Name:Leapcell} , p=0xc000086000
```