Scrapy CrawlerProcess is throwing "reactor already installed"


I have N workers running in parallel (instantiated from Docker) that each trigger a Scrapy crawl from a script via CrawlerProcess. Why am I getting this error: "reactor already installed"? I simply have a function:

from scrapy.crawler import CrawlerProcess

def foo():
    # settings and kwargs are built elsewhere in the script
    process = CrawlerProcess(settings)
    # crawl() returns a Deferred
    stats = process.crawl(**kwargs, stats_callback=lambda stats: stats)
    process.start()

Shouldn't Scrapy run each crawl in a separate process, where the reactor would be installed fresh each time?
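
If it doesn't, I assume I'd have to spawn the process myself. A rough sketch of what I mean (MySpider, the settings dict, and kwargs are placeholders for my actual spider and config):

import multiprocessing

import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # placeholder for my actual spider
    name = "my_spider"
    start_urls = ["https://example.com"]

    def parse(self, response):
        yield {"url": response.url}

def _run_crawl(settings, kwargs):
    # runs in the child process, so a fresh reactor is installed there
    process = CrawlerProcess(settings)
    process.crawl(MySpider, **kwargs)
    process.start()

def foo(settings, kwargs):
    # "spawn" guarantees a fresh interpreter, even on Linux where the
    # default start method is fork
    ctx = multiprocessing.get_context("spawn")
    p = ctx.Process(target=_run_crawl, args=(settings, kwargs))
    p.start()
    p.join()

if __name__ == "__main__":
    foo({}, {})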

I don't want to use CrawlerRunner. Why is this error occurring?
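
For reference, the CrawlerRunner variant I'm trying to avoid would look roughly like this (same placeholder spider and settings as above):

from twisted.internet import reactor
from scrapy.crawler import CrawlerRunner

def foo_with_runner(settings, kwargs):
    runner = CrawlerRunner(settings)
    d = runner.crawl(MySpider, **kwargs)  # MySpider as in the sketch above
    d.addBoth(lambda _: reactor.stop())
    reactor.run()  # blocks here until the crawl is finished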

My Scrapy version: Scrapy==2.6.1
