Good day,
I have an implementation that looks somewhat like this: essentially a multiprocess web crawler, where each worker tries to submit the next link back to the pool.
from multiprocessing import Pool

pool = Pool()

def worker(link):
    next_link = get_url()                        # my helper that returns the next URL to crawl
    pool.apply_async(worker, args=(next_link,))  # this call is where it crashes

pool.apply_async(worker, args=("www.url.com",))
Unfortunately this does not work: it seems the process the worker runs in can't re-use the original pool object, and the nested apply_async call crashes. Is there any way around this?
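The only workaround I can think of so far is to keep all scheduling in the parent process and have workers just return the links they find, roughly as sketched below (crawl here is a stand-in for my real fetch-and-parse code, and the dedup set is my own addition):

from multiprocessing import Pool

def crawl(link):
    # stand-in for my real fetch-and-parse step;
    # would return the links found on the page
    return []

if __name__ == "__main__":
    pool = Pool()
    seen = {"www.url.com"}
    pending = [pool.apply_async(crawl, args=("www.url.com",))]
    while pending:
        for link in pending.pop(0).get():   # wait for the oldest task
            if link not in seen:            # skip URLs already queued
                seen.add(link)
                pending.append(pool.apply_async(crawl, args=(link,)))
    pool.close()
    pool.join()

Is restructuring like this the intended pattern, or is there a way to submit work to the pool from inside a worker?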