Any way to multithread PDF mining?


I have code that searches for a particular string sequence across a bunch of PDFs. The problem is that this process is extremely slow (sometimes I get PDFs with over 50,000 pages).

Is there a way to do multithreading? Unfortunately, even though I searched, I couldn't make heads or tails of the threading examples I found.

import os

import slate3k as slate

folder = 'C:/Users/akhan37/Desktop/learning profiles/unzipped/unzipped_files'
idee = "123456789"

os.chdir(folder)
for file in os.listdir('.'):
    print(file)
    with open(file, 'rb') as g:
        # slate.PDF gives a list of per-page strings; join them so the
        # substring test looks inside pages rather than comparing whole pages
        extracted_text = "".join(slate.PDF(g))
    if idee in extracted_text:
        print(file)

The run time is very long. I don't think it's the code's fault but rather the fact that I have to go through over 700 PDFs.
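Since each PDF can be searched independently, one approach is to fan the files out across worker processes with `multiprocessing.Pool` (threads give little benefit here, because text extraction is CPU-bound and Python's GIL serialises it). A minimal sketch under that assumption, using a plain byte-reader as a stand-in for `slate.PDF` so the example runs anywhere:

```python
import os
from multiprocessing import Pool

IDEE = "123456789"

def file_contains(path):
    # Stand-in extractor: read the file as raw text.
    # In the real script, replace this body with
    # "".join(slate.PDF(g)) on the opened file handle.
    with open(path, "rb") as g:
        text = g.read().decode("utf-8", errors="ignore")
    return path if IDEE in text else None

def find_matches(folder):
    paths = [os.path.join(folder, name) for name in os.listdir(folder)]
    with Pool() as pool:  # one worker per CPU core by default
        results = pool.map(file_contains, paths)
    return [p for p in results if p is not None]

if __name__ == "__main__":  # guard required for multiprocessing on Windows
    if os.path.isdir("unzipped_files"):  # hypothetical folder name
        for match in find_matches("unzipped_files"):
            print(match)
```

With ~700 files and a handful of cores, the wall-clock time should drop roughly in proportion to the core count, since the work is embarrassingly parallel per file.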

1 Answer

Bill Chen (accepted answer):

I would suggest using pdfminer: you can convert the document into a list of page objects, which you can then process on different cores with multiprocessing.

    from pdfminer.pdfparser import PDFParser
    from pdfminer.pdfdocument import PDFDocument, PDFTextExtractionNotAllowed
    from pdfminer.pdfpage import PDFPage
    from pdfminer.pdfinterp import PDFResourceManager, PDFPageInterpreter
    from pdfminer.converter import PDFPageAggregator
    from pdfminer.layout import LAParams

    fp = open(pdf_path, "rb")
    parser = PDFParser(fp)
    document = PDFDocument(parser, password)
    if not document.is_extractable:
        raise PDFTextExtractionNotAllowed

    laparams = LAParams()  # layout-analysis parameters
    resource_manager = PDFResourceManager()
    device = PDFPageAggregator(resource_manager, laparams=laparams)
    interpreter = PDFPageInterpreter(resource_manager, device)

    list_of_page_obj = list(PDFPage.create_pages(document))
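One caveat with mapping the page objects directly: pdfminer page objects generally don't pickle, so they can't be sent to worker processes as-is. A common workaround is to send page *numbers* instead and let each worker reopen the file and extract its own page. A sketch of that fan-out pattern, where `page_text` is a hypothetical stand-in for the pdfminer extraction above so the example is self-contained:

```python
from multiprocessing import Pool

idee = "123456789"

def page_text(args):
    pdf_path, page_no = args
    # Stand-in so the sketch runs anywhere: pretend page 2 holds the ID.
    # In the real script, reopen pdf_path here, build the interpreter as
    # above, call interpreter.process_page() on the page with this index,
    # and return the text from device.get_result().
    return idee if page_no == 2 else "other text"

def page_matches(args):
    return idee in page_text(args)

def search_pages(pdf_path, num_pages):
    jobs = [(pdf_path, n) for n in range(num_pages)]
    with Pool() as pool:  # one worker per CPU core by default
        hits = pool.map(page_matches, jobs)
    return [n for n, hit in enumerate(hits) if hit]
```

Each worker pays the cost of reopening and parsing the PDF, so this pays off mainly on very large documents like the 50,000-page ones mentioned in the question.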