We have a huge file of distinct URLs (~500K to ~1M of them).
We want to use Grinder 3 to distribute these URLs to the Workers so that every Worker invokes a single, distinct URL.
In the Jython script we could:
Read the file once per Agent
Allocate line-number ranges per Agent
Have every Worker take a line/URL from its Agent's line-number range according to its run-id (sketched below)
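A minimal sketch of that approach in Jython for Grinder 3 (the file name, the agent count, and the `grinder.threads` lookup are illustrative assumptions, not a definitive implementation):

```
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

URL_FILE = "urls.txt"   # assumption: path to the master URL file
AGENTS = 4              # assumption: fixed, known number of agents

# Read the whole file once per Agent -- the memory cost discussed below.
f = open(URL_FILE)
try:
    urls = [line.strip() for line in f if line.strip()]
finally:
    f.close()

# Allocate this Agent's contiguous line-number range.
size = len(urls) // AGENTS
agent = grinder.getAgentNumber()
my_urls = urls[agent * size : (agent + 1) * size]

threads = grinder.getProperties().getInt("grinder.threads", 1)
request = HTTPRequest()
Test(1, "Fetch one distinct URL per run").record(request)

class TestRunner:
    def __call__(self):
        # Each (thread, run) pair maps to a distinct index within this
        # Agent's range (wrapping around if the runs outlast the range).
        i = grinder.getRunNumber() * threads + grinder.getThreadNumber()
        request.GET(my_urls[i % len(my_urls)])
```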
This still means loading a huge file into memory and writing custom code for a problem that is probably common to many people.
Any ideas for a simpler or ready-made solution?
I used Grinder in a similar fashion a while back, and wrote a utility for multi-threaded, one-time ingestion of URLs from a large file.
See https://bitbucket.org/travis_bear/file_util -- in particular, the sequential reader.
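The core idea behind a sequential reader is a single shared handle that hands each line to exactly one Worker thread, so the file is streamed once rather than loaded whole. Conceptually it looks something like this (a simplified sketch of the idea, not the actual file_util interface):

```
import threading

class SequentialReader:
    # Conceptual sketch only -- see file_util for the real thing.
    # One shared handle; each next_line() hands out one line, exactly once.
    def __init__(self, path):
        self._file = open(path)
        self._lock = threading.Lock()

    def next_line(self):
        # Serialize reads so no two Worker threads get the same URL.
        self._lock.acquire()
        try:
            line = self._file.readline()
        finally:
            self._lock.release()
        if not line:
            return None   # end of file: every URL has been handed out
        return line.strip()
```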
I'd recommend using the `split` command-line utility (or similar) to give separate chunks of the master file to each agent prior to executing your Grinder run.
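As a sketch of that workflow (the chunk names, the four-agent count, and the single-thread simplification are illustrative assumptions):

```
# On the machine holding the master file, make one chunk per agent
# (standard split usage; -l sets the number of lines per chunk):
#
#   split -l 250000 urls.txt urls_    # -> urls_aa, urls_ab, urls_ac, ...
#
# Then each Agent opens only its own chunk and streams it line by line,
# so the full 500K-1M line file is never held in memory at once.
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

suffixes = ["aa", "ab", "ac", "ad"]   # assumption: four agents
chunk = open("urls_" + suffixes[grinder.getAgentNumber()])

request = HTTPRequest()
Test(1, "Fetch one URL per run").record(request)

class TestRunner:
    # Assumes grinder.threads = 1; with more Worker threads, guard
    # readline() with a lock as in the reader sketch above.
    def __call__(self):
        url = chunk.readline().strip()
        if url:
            request.GET(url)
```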