I'm hoping to use the workerpool package in a Node.js app to handle CPU-intensive tasks, but I'm a little confused about CPU usage when multiple routes are involved. The scenario is like this:
```js
// route1.js
const workerpool = require('workerpool');
const pool = workerpool.pool(__dirname + '/job1.js');
pool.exec(/* ... */);

// route2.js
const workerpool = require('workerpool');
const pool = workerpool.pool(__dirname + '/job2.js');
pool.exec(/* ... */);

// route3.js
const workerpool = require('workerpool');
const pool = workerpool.pool(__dirname + '/job3.js');
pool.exec(/* ... */);
```
When Node.js loads these three route files, each one creates its own worker pool. Since the number of worker_threads and their scheduling are handled internally by Node.js, could these separate pools together push past some threshold and cause a problem? And what is the correct way to use workerpool? Thank you very much.
What you want is just a single set of workers. A worker can expose multiple functions, so one worker script can expose job1, job2, and job3 without any problem. If you create one pool per job, you need to consider that the pools may conflict with each other for CPU time...
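For example, a single worker script can register all three jobs with workerpool.worker(). This is a minimal sketch; the job bodies and the worker.js file name are placeholders, not part of the original question:

```js
// worker.js -- one worker script that exposes all three jobs
const workerpool = require('workerpool');

// placeholder implementations of the CPU-intensive jobs
function job1(n) {
  // ... heavy computation ...
  return n * 2;
}

function job2(n) {
  // ... heavy computation ...
  return n * 3;
}

function job3(n) {
  // ... heavy computation ...
  return n * 4;
}

// register the functions so they can be called with pool.exec('job1', [...]), etc.
workerpool.worker({ job1, job2, job3 });
```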
Suppose each pool is allowed to use 100% of the CPU: if all three pools are busy at the same time, they can demand up to 300% of the available resources.
If instead you allocate 33% to each of them, you can never ask for more than 100%, which is fine; but if only job1 is under heavy load at a given moment, it will still only be able to use 33% of the available resources.
By using a single pool, whichever job is busiest can reach 100% of the CPU without the total ever exceeding 100% of the available resources.
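Putting it together, one way to wire this up is a shared pool module that every route requires. This is a sketch under some assumptions: the pool lives in a file called pool.js, the routes are Express-style handlers, and the maxWorkers value of 4 is arbitrary:

```js
// pool.js -- create ONE pool for the whole app and share it
const workerpool = require('workerpool');

// maxWorkers is optional; by default workerpool uses (number of CPUs - 1)
const pool = workerpool.pool(__dirname + '/worker.js', { maxWorkers: 4 });

module.exports = pool;
```

```js
// route1.js -- every route requires the same pool and just calls a different job
const pool = require('./pool');

module.exports = async function handleRoute1(req, res) {
  const result = await pool.exec('job1', [Number(req.query.n)]);
  res.send({ result });
};
```

This way job1, job2, and job3 all draw from the same fixed set of worker threads, so the busiest job can use all of them while the total load never oversubscribes the CPU.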