I'm a bit confused, because all the examples I've read about the Node cluster module seem to apply only to web servers and concurrent requests, while for CPU-intensive applications the worker_threads module is recommended instead.
What about file I/O operations? Imagine I have an array with 1 million filenames: ['1.txt', '2.txt', etc., ..., '1000000.txt'] and for each file I need to do some heavy processing and then write the resulting content.
What would be the right method to use all the cores of the CPU efficiently, spreading the processing of the different filenames across them?
Normally I would use this:
const fs = require('fs')
const async = require('async')
const heavyProcessing = require('./heavyProcessing.js')

const files = ['1.txt', '2.txt', ..., '1000000.txt']

async.each(files, function (file, cb) {
  fs.writeFile(file, heavyProcessing(file), function (err) {
    cb(err)
  })
})
Should I now use cluster or worker_threads? And how should I use it?
Does this work?
const fs = require('fs')
const async = require('async')
const heavyProcessing = require('./heavyProcessing.js')
const cluster = require('node:cluster')
const http = require('node:http')
const numCPUs = require('node:os').cpus().length
const process = require('node:process')

if (cluster.isPrimary) {
  console.log(`Primary ${process.pid} is running`)

  // Fork workers.
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork()
  }

  cluster.on('exit', (worker, code, signal) => {
    console.log(`worker ${worker.process.pid} died`)
  })
} else {
  const files = ['1.txt', '2.txt', ..., '1000000.txt']

  async.each(files, function (file, cb) {
    fs.writeFile(file, heavyProcessing(file), function (err) {
      cb(err)
    })
  })
}
Just for everyone to know, if they are interested: you need to use the npm module piscina. In this gist I explain everything.
Node.js is a powerful tool for backend developers, but you must be aware of multi-core processing in order to get the most out of your CPU. This multi-core capability is mostly used for web servers, and Node.js already ships the cluster module for that out of the box. Node.js also ships the worker_threads module, but it is not as easy to deal with.
Let's create a project that tests single-threaded and multi-threaded CPU-intensive processing and writes some random data to files.
Create the project:
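The gist's setup commands are not included in this copy; a minimal sketch covering this step plus the dependency and dist/ step that follows (the project name is arbitrary, and piscina being the only dependency is my assumption) might be:

```shell
# Create the project directory and initialise a package.json
mkdir multithread-test
cd multithread-test
npm init -y

# Install the assumed single dependency and create the output directory
npm install piscina
mkdir dist
```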
Install the dependencies and create the dist/ directory.
Create the file index.js at the root of the project directory.
Now create worker.js, also at the root of the project directory.
Now run on a single thread and check the time elapsed, for 1000 and for 10000 iterations (one iteration equals one round of data processing plus one file creation).
Now compare with the great advantage of multi-threading.
With the test I did (16-core CPU), the difference is huge: with 1000 iterations it went from 1:27.061 (m:ss.mmm) on a single thread to 8.884s with multi-threading. Check also the files inside dist/ to be sure they were created correctly.