I see so many guides and posts on how to implement worker pools in Go, but I'm having a hard time understanding how well this pattern scales.
Would it be alright to have a pool of 50 workers, create a slice of 1 million jobs, and just run the worker pool over all of them? Or is it better to split the jobs into smaller batches and start a new worker pool for each batch?

For context, each job performs one GET request and writes some of the result to a database. Right now I run 50 workers over millions of jobs, and I ran into the file descriptor limit; I've since raised it. I want to improve my script/configuration so that the worker pool's HTTP GET requests complete faster, and I need some intuition for what to change.
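For reference, here is a stripped-down sketch of the pattern I mean. The example.com URLs and saveResult are just placeholders standing in for my real request targets and database write:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"sync"
)

// job identifies one unit of work; in my case, a URL to fetch.
type job struct {
	url string
}

// saveResult is a placeholder for the real database write.
func saveResult(url string, body []byte) {
	fmt.Printf("saved %d bytes for %s\n", len(body), url)
}

func worker(jobs <-chan job, wg *sync.WaitGroup) {
	defer wg.Done()
	for j := range jobs {
		resp, err := http.Get(j.url)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close() // close the body so the connection can be reused
		if err != nil {
			fmt.Println("read failed:", err)
			continue
		}
		saveResult(j.url, body)
	}
}

func main() {
	jobs := make(chan job) // unbuffered: jobs are streamed to workers as they're ready

	var wg sync.WaitGroup
	for i := 0; i < 50; i++ {
		wg.Add(1)
		go worker(jobs, &wg)
	}

	// Feed in the million jobs; workers pick them up one at a time.
	for i := 0; i < 1_000_000; i++ {
		jobs <- job{url: fmt.Sprintf("https://example.com/item/%d", i)}
	}
	close(jobs)
	wg.Wait()
}
```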