Learning Go Interface Encapsulation from Kubernetes
Daniel Hayes
Full-Stack Engineer · Leapcell

Hiding Input Parameter Details Using Interfaces
When a method takes a concrete struct as a parameter, the implementation can see every exported field and method of that struct, exposing far more detail than it needs. In such cases, you can implicitly convert the input into an interface so that the internal implementation only sees the methods it actually requires.
```go
package main

import "fmt"

// Pod is a simplified stand-in for the Kubernetes Pod type.
type Pod struct {
	Status string
}

type Kubelet struct{}

func (kl *Kubelet) HandlePodAdditions(pods []*Pod) {
	for _, pod := range pods {
		fmt.Printf("create pods : %s\n", pod.Status)
	}
}

func (kl *Kubelet) Run(updates <-chan Pod) {
	fmt.Println("run kubelet")
	go kl.syncLoop(updates, kl)
}

// syncLoop receives the Kubelet itself, but only as a SyncHandler,
// so it cannot see any of Kubelet's other methods.
func (kl *Kubelet) syncLoop(updates <-chan Pod, handler SyncHandler) {
	for {
		pod := <-updates
		handler.HandlePodAdditions([]*Pod{&pod})
	}
}

type SyncHandler interface {
	HandlePodAdditions(pods []*Pod)
}
```
Here, we can see that the Kubelet itself has several methods:

- syncLoop: a loop for syncing state
- Run: used to start the listening loop
- HandlePodAdditions: logic for handling Pod additions
Since syncLoop does not actually need the other methods on Kubelet, we define a SyncHandler interface, have Kubelet implement it, and pass the Kubelet into syncLoop as a SyncHandler. At the call site, the Kubelet is implicitly converted to a SyncHandler.
After this conversion, the other methods on Kubelet are no longer visible through the parameter, allowing you to focus purely on the logic inside syncLoop while coding.
However, this approach can also cause some problems. The initial abstraction may be sufficient for the first set of requirements, but as requirements grow and iterate, if we need to use other methods on kubelet that are not wrapped in the interface, we’ll have to either pass kubelet explicitly or add to the interface, both of which increase coding effort and break the original encapsulation.
Layered encapsulation and information hiding are the ultimate goal of this kind of design: each part of the code focuses only on what it needs to care about.
Interface Encapsulation for Easier Mock Testing
Through abstraction with interfaces, we can directly instantiate a mock struct for the parts we don't care about during testing.
```go
type OrderAPI interface {
	GetOrderId() string
}

type realOrderImpl struct{}

func (r *realOrderImpl) GetOrderId() string { return "" }

type mockOrderImpl struct{}

func (m *mockOrderImpl) GetOrderId() string { return "mock" }
```
Here, if during testing we don't care whether GetOrderId works correctly, we can initialize the OrderAPI with mockOrderImpl, and the logic in the mock can be made as complex as needed.
```go
func TestGetOrderId(t *testing.T) {
	// If we need an order id but it's not the focus of the test,
	// just initialize the interface with the mock struct.
	var orderAPI OrderAPI = &mockOrderImpl{}
	fmt.Println(orderAPI.GetOrderId())
}
```
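To see the payoff, here is a minimal runnable sketch that pulls the types above into one file and adds a hypothetical consumer, `describeOrder` (our own illustrative name, not from Kubernetes), which depends only on the interface, so the real and mock implementations are interchangeable:

```go
package main

import "fmt"

type OrderAPI interface {
	GetOrderId() string
}

type realOrderImpl struct{}

func (r *realOrderImpl) GetOrderId() string { return "" }

type mockOrderImpl struct{}

func (m *mockOrderImpl) GetOrderId() string { return "mock" }

// describeOrder only sees the interface, so a test can pass in
// mockOrderImpl without touching the real implementation.
func describeOrder(api OrderAPI) string {
	return "order: " + api.GetOrderId()
}

func main() {
	fmt.Println(describeOrder(&realOrderImpl{})) // production path
	fmt.Println(describeOrder(&mockOrderImpl{})) // test path
}
```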
gomonkey can also be used for test injection. So even if the existing code wasn't encapsulated behind interfaces, we can still achieve mocking, and this method is even more powerful.
```go
patches := gomonkey.ApplyFunc(GetOrder, func(orderId string) Order {
	return Order{
		OrderId:    orderId,
		OrderState: delivering,
	}
})
return func() {
	patches.Reset()
}
```
Using gomonkey allows for more flexible mocking: it can stub a function's return value directly, whereas interface abstraction can only cover what is instantiated from structs.
Interface Encapsulation for Multiple Underlying Implementations
kube-proxy's iptables and ipvs backends are implemented behind interface abstractions, because every network backend needs to handle changes to both Service and EndpointSlice objects. Kubernetes therefore abstracts ServiceHandler and EndpointSliceHandler:
```go
// ServiceHandler is an abstract interface used for receiving
// notifications about service object changes.
type ServiceHandler interface {
	// OnServiceAdd is called when a new service object is observed
	// to be created.
	OnServiceAdd(service *v1.Service)
	// OnServiceUpdate is called when an existing service object is
	// observed to be modified.
	OnServiceUpdate(oldService, service *v1.Service)
	// OnServiceDelete is called when an existing service object is
	// observed to be deleted.
	OnServiceDelete(service *v1.Service)
	// OnServiceSynced is called once all initial event handlers have
	// been called and the state is fully propagated to the local cache.
	OnServiceSynced()
}

// EndpointSliceHandler is an abstract interface used for receiving
// notifications about endpoint slice object changes.
type EndpointSliceHandler interface {
	// OnEndpointSliceAdd is called when a new endpoint slice object
	// is observed to be created.
	OnEndpointSliceAdd(endpointSlice *discoveryv1.EndpointSlice)
	// OnEndpointSliceUpdate is called when an existing endpoint slice
	// object is observed to be modified.
	OnEndpointSliceUpdate(oldEndpointSlice, newEndpointSlice *discoveryv1.EndpointSlice)
	// OnEndpointSliceDelete is called when an existing endpoint slice
	// object is observed to be deleted.
	OnEndpointSliceDelete(endpointSlice *discoveryv1.EndpointSlice)
	// OnEndpointSlicesSynced is called once all initial event handlers
	// have been called and the state is fully propagated to the local
	// cache.
	OnEndpointSlicesSynced()
}
```
Then they can be injected through a Provider:
```go
type Provider interface {
	config.EndpointSliceHandler
	config.ServiceHandler
}
```
This is also the coding technique I use most when working on components: by abstracting similar operations behind an interface, the upper-layer code doesn't need to change when the underlying implementation is replaced.
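As a minimal self-contained sketch of the same pattern (the type and backend names here are illustrative, not kube-proxy's actual implementations), two backends satisfy one handler interface, and the caller never changes:

```go
package main

import "fmt"

// Service stands in for v1.Service in this sketch.
type Service struct{ Name string }

// ServiceHandler mirrors the shape of kube-proxy's interface,
// reduced to a single method for brevity.
type ServiceHandler interface {
	OnServiceAdd(svc *Service)
}

// iptablesProxier and ipvsProxier are illustrative backends.
type iptablesProxier struct{}

func (p *iptablesProxier) OnServiceAdd(svc *Service) {
	fmt.Printf("iptables: add rules for %s\n", svc.Name)
}

type ipvsProxier struct{}

func (p *ipvsProxier) OnServiceAdd(svc *Service) {
	fmt.Printf("ipvs: add virtual server for %s\n", svc.Name)
}

// dispatch is the upper-layer code: it depends only on the interface,
// so swapping the backend requires no changes here.
func dispatch(h ServiceHandler, svc *Service) {
	h.OnServiceAdd(svc)
}

func main() {
	svc := &Service{Name: "web"}
	dispatch(&iptablesProxier{}, svc)
	dispatch(&ipvsProxier{}, svc)
}
```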
Encapsulating Panic Handling
If we don't recover from panics in the goroutines we launch, a single uncaught panic will crash the entire process. But writing the recover logic by hand every time is not very elegant, so we can use an encapsulated HandleCrash function:
```go
package runtime

var (
	ReallyCrash = true
)

// PanicHandlers is the default set of global panic handlers.
var PanicHandlers = []func(interface{}){logPanic}

// HandleCrash also allows passing in extra custom panic handlers
// from outside.
func HandleCrash(additionalHandlers ...func(interface{})) {
	if r := recover(); r != nil {
		for _, fn := range PanicHandlers {
			fn(r)
		}
		for _, fn := range additionalHandlers {
			fn(r)
		}
		if ReallyCrash {
			panic(r)
		}
	}
}
```
This supports both the built-in panic handlers and externally injected additional ones. If you don't want the process to crash, you can set ReallyCrash to false or otherwise adjust the logic as needed.
```go
package runtime

func Go(fn func()) {
	go func() {
		defer HandleCrash()
		fn()
	}()
}
```
When starting a goroutine, you can use the Go function, which also prevents forgetting to add panic handling.
Encapsulating WaitGroup
```go
import "sync"

type Group struct {
	wg sync.WaitGroup
}

func (g *Group) Wait() {
	g.wg.Wait()
}

func (g *Group) Start(f func()) {
	g.wg.Add(1)
	go func() {
		defer g.wg.Done()
		f()
	}()
}
```
The most important part here is the Start method, which encapsulates Add and Done internally. Although it's just a few lines of code, it ensures that whenever we use a WaitGroup, we never forget to increment or complete the counter.
Encapsulating Logic Triggered by Signals
```go
import (
	"fmt"
	"sync"
	"time"
)

type BoundedFrequencyRunner struct {
	sync.Mutex
	// run receives active triggers
	run chan struct{}
	// timer limits how often fn runs
	timer *time.Timer
	// fn is the actual logic to execute
	fn func()
}

func NewBoundedFrequencyRunner(fn func()) *BoundedFrequencyRunner {
	return &BoundedFrequencyRunner{
		run:   make(chan struct{}, 1),
		fn:    fn,
		timer: time.NewTimer(0),
	}
}

// Run triggers execution. Only one signal can be buffered here;
// additional signals are discarded without blocking. You can
// increase the queue size as needed.
func (b *BoundedFrequencyRunner) Run() {
	select {
	case b.run <- struct{}{}:
		fmt.Println("Signal written successfully")
	default:
		fmt.Println("Signal already triggered once, discarding")
	}
}

func (b *BoundedFrequencyRunner) Loop() {
	b.timer.Reset(time.Second * 1)
	for {
		select {
		case <-b.run:
			fmt.Println("Run signal triggered")
			b.tryRun()
		case <-b.timer.C:
			fmt.Println("Timer triggered execution")
			b.tryRun()
		}
	}
}

func (b *BoundedFrequencyRunner) tryRun() {
	b.Lock()
	defer b.Unlock()
	// Logic such as rate limiting can be added here
	b.timer.Reset(time.Second * 1)
	b.fn()
}
```
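The core trick in Run is the size-1 buffered channel, which coalesces a burst of triggers into a single pending execution. That behavior can be seen in isolation in this sketch (`tryTrigger` is our own illustrative name):

```go
package main

import "fmt"

// tryTrigger attempts to enqueue a run signal without blocking; it
// returns false when a signal is already pending, so a burst of
// triggers coalesces into one run.
func tryTrigger(run chan struct{}) bool {
	select {
	case run <- struct{}{}:
		return true
	default:
		return false
	}
}

func main() {
	run := make(chan struct{}, 1)
	fmt.Println(tryTrigger(run)) // true: signal accepted
	fmt.Println(tryTrigger(run)) // false: one already pending, coalesced
	<-run                        // the worker consumes the single signal
	fmt.Println(tryTrigger(run)) // true: accepted again
}
```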