Understanding Node.js Cluster: The Core Concepts
Grace Collins
Solutions Engineer · Leapcell
Preface
If you've used PM2 to manage Node.js processes, you may have noticed it supports a cluster mode. This mode allows Node.js to create multiple processes. When you set the number of instances in cluster mode to max, PM2 will automatically create a number of Node processes corresponding to the CPU cores available on the server.
PM2 achieves this by leveraging Node.js’s Cluster module. The module addresses Node.js's single-threaded nature, which traditionally limits its ability to utilize multiple CPU cores. But how does the Cluster module work internally? How do the processes communicate with each other? How can multiple processes listen on the same port? And how does Node.js distribute requests to these processes? If you’re curious about these questions, read on.
Core Principles
Node.js worker processes are created using the child_process.fork() method. This means there is one parent (master) process and multiple child (worker) processes. The code typically looks like this:
```javascript
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  for (let i = 0, n = os.cpus().length; i < n; i++) {
    cluster.fork();
  }
} else {
  // Start the application
}
```
If you’ve studied operating systems, you’re probably familiar with the fork() system call. The calling process is the parent, while the newly created processes are the children. A child process starts out as a copy of the parent’s address space (data segment, stack, and so on); on modern systems this copy is made lazily via copy-on-write, so parent and child do not actually share writable memory. In a Node.js Cluster, the master process listens on the port and distributes incoming requests to the worker processes. This involves three core topics: inter-process communication (IPC), load balancing strategies, and multi-process port listening.
Inter-Process Communication (IPC)
The master process creates child processes using child_process.fork(). Communication between these processes is handled via an IPC channel. Operating systems provide several mechanisms for inter-process communication, such as:
- Shared Memory: Multiple processes share a single memory region, often coordinated with semaphores for synchronization and mutual exclusion.
- Message Passing: Processes exchange data by sending and receiving messages.
- Semaphores: A semaphore is a system-managed counter. A process that cannot acquire it halts at a checkpoint until it is signaled to proceed. When limited to binary values (0 or 1), this mechanism is known as a "mutex" (mutual exclusion lock).
- Pipes: A pipe connects two processes, allowing the output of one process to serve as the input of another. It is created with the pipe system call; the | operator in shell scripting is a common example of this mechanism.
Node.js uses an event-based mechanism for communication between the parent and child processes. Here’s an example of a parent process sending a TCP server handle to a child process:
```javascript
// parent.js
const subprocess = require('child_process').fork('subprocess.js');

// Create a server and send its handle to the child.
const server = require('net').createServer();
server.on('connection', (socket) => {
  socket.end('Handled by the parent process');
});
server.listen(1337, () => {
  subprocess.send('server', server);
});

// subprocess.js
process.on('message', (m, server) => {
  if (m === 'server') {
    server.on('connection', (socket) => {
      socket.end('Handled by the child process');
    });
  }
});
```
Load Balancing Strategy
As mentioned earlier, all requests are distributed by the master process. Ensuring the server load is evenly spread among worker processes requires a load balancing strategy. Node.js defaults to a round-robin algorithm on all platforms except Windows, where distribution is left to the operating system.
Round-Robin
The round-robin method is a common load balancing algorithm also employed by Nginx. It works by distributing incoming requests to each process sequentially, starting from the first process and looping back after reaching the last. However, this method assumes equal processing capacity across all processes. In scenarios where request handling time varies significantly, load imbalance may occur.
To address this, Nginx often uses Weighted Round-Robin (WRR), where servers are assigned different weights. The server with the highest weight is selected until its weight is reduced to zero, at which point the cycle starts over based on the new weight sequence.
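The weighted variant described above can be sketched in a few lines. This is a toy picker, not Nginx's actual (smooth) implementation: each server starts with its configured weight, the server with the highest remaining weight is chosen and decremented, and all weights reset once they reach zero. The server names and weights are made up for illustration:

```javascript
// Toy weighted round-robin: over one full cycle, each server is
// picked a number of times proportional to its weight.
function makeWrr(servers) {
  let remaining = servers.map((s) => ({ ...s }));
  return function next() {
    // All weights exhausted: start a new cycle.
    if (remaining.every((s) => s.weight === 0)) {
      remaining = servers.map((s) => ({ ...s }));
    }
    // Pick the server with the highest remaining weight.
    const pick = remaining.reduce((a, b) => (b.weight > a.weight ? b : a));
    pick.weight -= 1;
    return pick.name;
  };
}

const next = makeWrr([
  { name: 'A', weight: 3 },
  { name: 'B', weight: 1 },
]);
console.log([next(), next(), next(), next(), next()].join(',')); // A,A,A,B,A
```

Note the non-smooth behavior: server A is picked three times in a row. Nginx's smooth WRR interleaves picks (A,A,B,A) precisely to avoid such bursts.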
You can adjust the load balancing strategy in Node.js by setting the NODE_CLUSTER_SCHED_POLICY environment variable, or by assigning cluster.schedulingPolicy before the first worker is forked. Combining Nginx for multi-machine clusters with Node.js Cluster for single-machine multi-process balancing is a common approach.
Multi-Process Port Listening
In early versions of Node.js, multiple processes listening on the same port competed for incoming connections, leading to uneven load distribution. This was later resolved with the round-robin strategy. The current approach works as follows:
- The master process creates a socket, binds it to an address, and starts listening.
- The socket’s file descriptor (fd) is not passed to the worker processes.
- When the master process accepts a new connection, it determines which worker process should handle the connection and forwards it accordingly.
In essence, the master process listens on the port and distributes connections to worker processes using a defined strategy (e.g., round-robin). This design eliminates competition between workers but requires the master process to be highly stable.
Conclusion
Using PM2’s Cluster Mode as an entry point, this article explored the core principles behind Node.js’s Cluster module for implementing multi-process applications. We focused on three key aspects: inter-process communication, load balancing, and multi-process port listening.
By studying the Cluster module, we can see that many fundamental principles and algorithms are universal. For instance, the round-robin algorithm is used in both operating system process scheduling and server load balancing. The master-worker architecture resembles the multi-process design in Nginx. Similarly, mechanisms like semaphores and pipes are ubiquitous in various programming paradigms.
While new technologies continuously emerge, their foundations remain consistent. Understanding these core concepts enables us to extrapolate and adapt to new challenges with confidence.
We are Leapcell, your top choice for deploying Node.js projects to the cloud.
Leapcell is the Next-Gen Serverless Platform for Web Hosting, Async Tasks, and Redis:
Multi-Language Support
- Develop with Node.js, Python, Go, or Rust.
Deploy Unlimited Projects for Free
- Pay only for usage; no requests, no charges.
Unbeatable Cost Efficiency
- Pay-as-you-go with no idle charges.
- Example: $25 supports 6.94M requests at a 60ms average response time.
Streamlined Developer Experience
- Intuitive UI for effortless setup.
- Fully automated CI/CD pipelines and GitOps integration.
- Real-time metrics and logging for actionable insights.
Effortless Scalability and High Performance
- Auto-scaling to handle high concurrency with ease.
- Zero operational overhead — just focus on building.
Explore more in the Documentation!
Follow us on X: @LeapcellHQ