High-Performance Structured Logging in Go with slog and zerolog
Min-jun Kim
Dev Intern · Leapcell

Unlocking Performance and Clarity with Structured Logging
In the world of software development, logging acts as the lifeline for understanding application behavior, diagnosing issues, and monitoring performance. Traditional unstructured logs, often simple text strings, quickly become a nightmare to parse and analyze as systems grow in complexity and scale. They lack the inherent context crucial for efficient debugging and automated analysis. This is where structured logging shines. By emitting logs as machine-readable data (like JSON), we gain the ability to query, filter, and aggregate log data with unparalleled efficiency. For Go developers, the landscape of structured logging has significantly evolved, especially with the introduction of `slog` in Go 1.21 and the long-standing popularity of `zerolog`. This article will guide you through implementing high-performance structured logging using these powerful tools, transforming your log data into a valuable asset.
Deconstructing Structured Logging and Its Benefits
Before diving into the implementation details, let's clarify some core concepts related to structured logging.
Structured Logging: This refers to the practice of emitting log messages in a consistent, machine-readable format, typically JSON. Instead of a single textual string, a structured log entry consists of key-value pairs, where each pair represents a specific piece of contextual information.
Contextual Information: These are the attributes that provide meaning to a log message. Examples include `request_id`, `user_id`, `service_name`, `elapsed_time`, `error_code`, or `database_query`. Including such context directly in the log entry makes it easier to trace events across different parts of your system.
Log Levels: A categorization of the severity of a log message (e.g., DEBUG, INFO, WARN, ERROR, FATAL). These levels allow you to filter logs based on their importance, crucial for managing log volume in production.
Performance: When discussing high-performance logging, we're primarily concerned with minimizing the overhead introduced by the logging process itself. This includes factors like CPU cycles spent generating logs, memory allocations, and I/O operations. In high-throughput applications, even small inefficiencies can accumulate into significant performance bottlenecks.
The benefits of structured logging are manifold:
- Easier Analysis: Logs can be ingested into centralized logging systems (e.g., ELK Stack, Splunk, Grafana Loki) and queried using field-based filters.
- Automated Monitoring: Thresholds and alerts can be set on specific log fields, enabling proactive incident detection.
- Improved Debugging: Developers can quickly pinpoint the exact context surrounding an error or anomaly.
- Reduced Log Volume (selectively): By filtering based on structured fields and log levels, you can manage the sheer volume of logs more effectively.
High-Performance Structured Logging with slog and zerolog
Both `slog` and `zerolog` are designed with performance in mind, offering low-allocation logging and efficient output. Let's explore each.
Go 1.21's `slog`: The Standardized Approach
`slog` is Go's official structured logging package, introduced in Go 1.21. Its design emphasizes flexibility, performance, and best practices. It aims to provide a robust foundation for logging that can be extended and integrated with various log destinations.
Basic Usage of `slog`
A `slog.Logger` instance is the primary interface for logging. You create a logger with an `slog.Handler` that defines how log records are processed and output.
```go
package main

import (
	"log/slog"
	"os"
	"time"
)

func main() {
	// Create a new logger with a JSON handler
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))

	// Set the default logger for convenience (optional)
	slog.SetDefault(logger)

	// Log an informational message with structured data
	slog.Info("user logged in",
		"user_id", 123,
		"email", "john.doe@example.com",
		"ip_address", "192.168.1.100",
		slog.Duration("login_duration", 250*time.Millisecond), // Example of a typed attribute
	)

	// Log an error message with error details
	err := simulateError()
	slog.Error("failed to process request",
		"request_id", "abc-123",
		"component", "auth_service",
		"error", err, // slog renders Go errors via their Error() string
	)

	// Log a debug message (won't be shown if the default level is INFO)
	slog.Debug("data fetched from cache", "cache_key", "product:456")
}

func simulateError() error {
	return os.ErrPermission
}
```
This code snippet demonstrates logging `Info` and `Error` messages with various key-value pairs. `slog.NewJSONHandler(os.Stdout, nil)` creates a handler that outputs logs as JSON to standard output. `slog` automatically infers the types of most Go primitives.
Adding Context and Attributes
You can add common attributes to a logger that will be included in all subsequent log messages from that logger. This is crucial for adding request-scoped context.
```go
package main

import (
	"context"
	"log/slog"
	"os"
	"time"
)

// RequestIDKey is a custom type for context keys to avoid collisions
type RequestIDKey string

const requestIDKey RequestIDKey = "request_id"

func main() {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	slog.SetDefault(logger)

	// Simulate an incoming request with a unique ID
	reqID := "req-001-xyz"
	ctx := context.WithValue(context.Background(), requestIDKey, reqID)

	// Create a child logger with request-specific attributes
	requestLogger := logger.With(
		"request_id", reqID,
		"handler", "user_profile_api",
		"timestamp", time.Now().Format(time.RFC3339), // Custom timestamp formatting
	)

	processUserRequest(ctx, requestLogger)
}

func processUserRequest(ctx context.Context, logger *slog.Logger) {
	userID := 456
	logger.Info("fetching user data", "user_id", userID)

	// Simulate some work
	time.Sleep(10 * time.Millisecond)

	if userID%2 == 0 {
		logger.Warn("user account might be compromised", "user_id", userID, "risk_score", 7.5)
	} else {
		logger.Info("user data fetched successfully", "user_id", userID, "data_source", "database")
	}

	logger.Debug("finishing request processing") // Won't show if LevelInfo is the default
}
```
In `processUserRequest`, `requestLogger` already contains `request_id`, `handler`, and `timestamp`, so you don't need to add them to every individual log call. This significantly reduces verbosity and ensures consistency.
Performance Considerations for `slog`
`slog` is designed for performance. It uses techniques such as:
- Lazy Evaluation: The logger checks whether the level is enabled before constructing and handling the log record, so disabled messages are cheap. Note that arguments are still evaluated at the call site; truly deferring an expensive computation requires the `slog.LogValuer` interface.
- Pooled Buffers: Handlers can use `sync.Pool` to reuse buffers, reducing allocations; the built-in handlers maintain an internal buffer pool for exactly this purpose.
- Optimized JSON Encoding: The default JSON handler encodes common value types directly, falling back to reflection-based `encoding/json` only for arbitrary values.
For maximum performance, ensure your handlers are efficient and avoid complex, expensive computations within attributes that might be evaluated for every log call.
`zerolog`: The Zero-Allocation Champion
`zerolog` has long been a favorite in the Go community for its extreme performance, achieved through a "zero allocation" philosophy on its primary logging paths. It writes log events directly to a buffer with minimal intermediate allocations, making it incredibly fast.
Basic Usage of `zerolog`
```go
package main

import (
	"os"
	"time"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

func main() {
	// Configure zerolog to output JSON to stdout.
	// By default, zerolog logs at INFO level and above.
	zerolog.SetGlobalLevel(zerolog.InfoLevel)
	log.Logger = zerolog.New(os.Stdout).With().Timestamp().Logger()

	log.Info().
		Int("user_id", 456).
		Str("email", "jane.doe@example.com").
		Time("login_time", time.Now()).
		Msg("user logged in successfully")

	err := simulateProcessingError()
	log.Error().
		Str("request_id", "def-456").
		Str("component", "payment_gateway").
		Err(err). // zerolog's dedicated Err field for logging errors
		Msg("failed to process payment")

	// Debug message (won't be shown due to InfoLevel)
	log.Debug().Str("cache_key", "order:789").Msg("retrieving from cache")
}

func simulateProcessingError() error {
	return os.ErrDeadlineExceeded
}
```
`zerolog` uses a fluent API. You start with a level method such as `log.Info()`, chain methods to add fields (e.g., `Int()`, `Str()`, `Err()`), and finally call `Msg()` to write the log entry. `With().Timestamp().Logger()` adds a timestamp to every entry from this logger.
Adding Context for `zerolog`
Similar to `slog`, `zerolog` allows you to create child loggers with predefined context.
```go
package main

import (
	"context"
	"os"
	"time"

	"github.com/rs/zerolog"
	"github.com/rs/zerolog/log"
)

// Define a context key
type contextKey string

const requestIDKey contextKey = "request_id"

func main() {
	zerolog.SetGlobalLevel(zerolog.InfoLevel)

	// Output to the console for human readability during development.
	// For production, write JSON to os.Stdout directly.
	log.Logger = zerolog.New(zerolog.ConsoleWriter{Out: os.Stderr}).With().Timestamp().Logger()

	reqID := "order-xyz-789"
	ctx := context.WithValue(context.Background(), requestIDKey, reqID)

	// Create a contextual logger
	ctxLogger := log.With().
		Str("request_id", reqID).
		Str("api_path", "/api/v1/orders").
		Logger()

	processOrderHandler(ctx, ctxLogger)
}

func processOrderHandler(ctx context.Context, logger zerolog.Logger) {
	orderID := 12345
	logger.Info().Int("order_id", orderID).Msg("received new order request")

	// Simulate some processing
	time.Sleep(5 * time.Millisecond)

	if orderID%2 != 0 {
		logger.Warn().
			Int("order_id", orderID).
			Str("status", "pending_review").
			Msg("order requires manual review")
	} else {
		logger.Info().
			Int("order_id", orderID).
			Str("status", "processed").
			Dur("processing_time", 10*time.Millisecond). // Duration field
			Msg("order successfully processed")
	}

	logger.Debug().Msg("order processing complete") // Won't show due to InfoLevel
}
```
The `ctxLogger` now carries `request_id` and `api_path` automatically. You can also pass `zerolog.Context` values around if you need to build up context incrementally.
Performance Considerations for `zerolog`
`zerolog` achieves its speed through:
- No Reflection: It avoids Go's comparatively slow reflection API.
- Direct Byte Writing: Log events are appended directly to a byte buffer and written to the `io.Writer` as bytes, minimizing string allocations.
- Pooled Buffers: Event buffers are reused via an internal pool rather than reallocated per entry.
- Fluent API: The chaining API may look verbose, but it lets each field be appended to the buffer in place, with no intermediate maps or slices.
- Cheap Disabled Levels: When a log level is disabled, `zerolog` returns a no-op event, and all chained field methods do no work, making disabled logging paths extremely cheap. (An event can also be explicitly dropped with `Event.Discard()`.)
Choosing Between `slog` and `zerolog`
Both are excellent choices. Here's a quick guide:
- `slog`: Preferred for new Go 1.21+ projects where you want a standardized, future-proof logging solution. It's part of the standard library ecosystem, making it easy to swap out handlers. If you value maintainability and standard-library integration above all else, `slog` is your go-to.
- `zerolog`: Continues to be a top choice for projects where absolute cutting-edge performance and minimal allocations are the paramount concern, or for older projects where Go 1.21 is not an option. Its fluent API is also very popular among its users.
In many high-performance scenarios, the actual I/O operations (writing to disk, network, etc.) dominate the logging overhead, so the difference between `slog`'s and `zerolog`'s internal processing speed may matter less than your choice of log output destination and handler.
Concluding Thoughts
Structured logging is no longer a luxury but a necessity for building observable, maintainable, and highly performant Go applications. By embracing `slog` or `zerolog`, you transform your log files into rich, queryable data streams that offer deep insights into your system's behavior. Both libraries provide battle-tested, high-performance solutions, enabling developers to build resilient applications without sacrificing critical diagnostic capabilities. Ultimately, leveraging these tools effectively empowers you to quickly understand, troubleshoot, and optimize your Go services, turning logging from a chore into a powerful debugging and monitoring asset.