Solved: Python multiprocessing imap BrokenPipeError: [Errno 32] Broken pipe pdftoppm


Let me first say that this is not a duplicate of the other similar questions, where people tend to manage the pool of workers more closely.

I have been struggling with the following exception thrown by my code when using multiprocessing.Pool.imap:

  File "/usr/local/bin/homebrew/Cellar/python@2/2.7.17/lib/python2.7/multiprocessing/process.py", line 267, in _bootstrap
    self.run()
  File "/usr/local/bin/homebrew/Cellar/python@2/2.7.17/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/bin/homebrew/Cellar/python@2/2.7.17/lib/python2.7/multiprocessing/pool.py", line 122, in worker
    put((job, i, (False, wrapped)))
  File "/usr/local/bin/homebrew/Cellar/python@2/2.7.17/lib/python2.7/multiprocessing/queues.py", line 390, in put
    return send(obj)
IOError: [Errno 32] Broken pipe
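For context on the errno itself: [Errno 32] means a write hit a pipe whose read end was already closed. In a Pool, this typically happens when the process on the other side of the result queue has died (the parent crashed, or a worker was killed, e.g. by the OOM killer). The raw condition is easy to reproduce with a bare pipe (a minimal sketch, unrelated to the actual Pool internals):

```python
import errno
import os

# Create a pipe, then close the read end before writing.
r, w = os.pipe()
os.close(r)

try:
    os.write(w, b"result")  # the reader is gone
except OSError as e:        # BrokenPipeError on Python 3
    print(e.errno == errno.EPIPE)  # errno 32, "Broken pipe"
finally:
    os.close(w)
```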

This arises at various points while executing the following main program:

    import multiprocessing as mp
    import pandas as pd
    from functools import partial

    pool = mp.Pool(num_workers)
    # Calculate a good chunksize (based on implementation of pool.map)
    chunksize, extra = divmod(lengthData, 4 * num_workers)
    if extra:
        chunksize += 1

    func = partial(pdf_to_txt, input_folder=inputFolder, junk_folder=imageJunkFolder, out_folder=outTextFolder,
                   log_name=log_name, log_folder=None,
                   empty_log=False, input_folder_iterator=None,
                   print_console=True)

    flag_vec = pool.imap(func, (dataFrame['testo accordo'][i] for i in range(lengthData)), chunksize)
    dataFrame['flags_conversion'] = pd.Series(flag_vec)
    dataFrame.to_excel("{0}logs/{1}.xlsx".format(outTextFolder, nameOut))
    pool.close()
    pool.join()
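For what it's worth, the chunksize heuristic in the snippet above mirrors what Pool.map computes internally: split the input into roughly four chunks per worker, rounding up. Pulled out into a standalone helper (the names are mine, not from the original code), it looks like this:

```python
def calc_chunksize(num_items, num_workers, chunks_per_worker=4):
    """Mirror Pool.map's default chunksize: aim for roughly
    chunks_per_worker * num_workers chunks, rounding up."""
    chunksize, extra = divmod(num_items, chunks_per_worker * num_workers)
    if extra:
        chunksize += 1
    return chunksize
```

For example, 1000 items across 8 workers gives divmod(1000, 32) = (31, 8), so a chunksize of 32.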

Just for reference, the partial function takes non-OCR PDF files, splits them into images for each page, and runs OCR using pytesseract.

I am running the code on the following machine:

This is a physical machine (PowerEdge R930) running RedHat 7.7 (Linux 3.10.0).

Processor:  Intel(R) Xeon(R) CPU E7-8880 v3 @ 2.30GHz (x144)
Memory:     1.48 TiB
Swap:       7.81 GiB
Uptime:     21 days

Perhaps I should lower the chunksize? It is really unclear to me. I have noticed that the code seemed to run more reliably when fewer workers were available on the server...


1 Answer

Answer from KantAndr1804:

After a lot of pain, I discovered the problem was with pdftoppm (that is, using pdf2image). It appears that pdftoppm sometimes gets stuck without raising any exception.
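If switching libraries is not an option, one mitigation is to put a hard timeout around each conversion inside the worker, so a wedged pdftoppm call fails loudly instead of stalling the pool. A minimal stdlib-only sketch (Unix only, since it relies on SIGALRM; the names here are illustrative, not from the original code):

```python
import signal

class ConversionTimeout(Exception):
    """Raised when a single conversion exceeds its time budget."""

def _on_alarm(signum, frame):
    raise ConversionTimeout("conversion timed out")

def call_with_timeout(func, args=(), seconds=120):
    """Run func(*args); raise ConversionTimeout after `seconds`.
    SIGALRM only fires in the main thread of a process, which is
    where Pool workers run their tasks."""
    old_handler = signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(seconds)
    try:
        return func(*args)
    finally:
        signal.alarm(0)                        # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

Inside the worker, each pdf2image call could then be wrapped as something like `call_with_timeout(convert_from_path, (pdf_path,), seconds=120)`, catching ConversionTimeout and recording a failure flag rather than hanging.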

If anyone ever runs into this problem, I warmly recommend switching to PyMuPDF to extract images from PDFs. It is faster and more stable!