How can you use easyocr with multiprocessing?


I'm trying to read text from images with easyocr in Python, and I want to run the OCR in a separate process so it doesn't block the rest of the code. But when I call the function from a multiprocessing worker, I get a NotImplementedError. Here is a minimal example:

import multiprocessing as mp
import easyocr
import cv2

def ocr_test(q, reader):
    while not q.empty():
        q.get()
        img = cv2.imread('unknown.png')
        result = reader.readtext(img)


if __name__ == '__main__':
    q = mp.Queue()
    reader = easyocr.Reader(['en'])

    p = mp.Process(target=ocr_test, args=(q,reader))
    p.start()
    q.put('start')
    p.join()

This is the error I get:

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Program Files\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Program Files\Python310\lib\multiprocessing\spawn.py", line 126, in _main
    self = reduction.pickle.load(from_parent)
  File "C:\Python\venv\lib\site-packages\torch\multiprocessing\reductions.py", line 90, in rebuild_tensor
    t = torch._utils._rebuild_tensor(storage, storage_offset, size, stride)
  File "C:\Python\venv\lib\site-packages\torch\_utils.py", line 134, in _rebuild_tensor
    t = torch.tensor([], dtype=storage.dtype, device=storage._untyped().device)

NotImplementedError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build). If you are a Facebook employee using PyTorch on mobile, please visit https://fburl.com/ptmfixes for possible resolutions. 'aten::empty.memory_format' is only available for these backends: [CPU, Meta, MkldnnCPU, SparseCPU, SparseCsrCPU, BackendSelect, Python, Named, Conjugate, Negative, ZeroTensor, ADInplaceOrView, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradLazy, AutogradXPU, AutogradMLC, AutogradHPU, AutogradNestedTensor, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, AutocastCPU, Autocast, Batched, VmapMode, Functionalize].

Is there a way to solve this problem?
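The traceback shows the failure happens while the spawned child unpickles the arguments: `easyocr.Reader` holds quantized PyTorch tensors, and rebuilding them via pickle hits the `QuantizedCPU` backend error. A common workaround is to construct the reader *inside* the child process, so it is never pickled; only small, picklable work items (e.g. image paths) travel over the queue. The sketch below shows that pattern. It is not verified against easyocr itself, so a cheap stand-in initializer marks where `easyocr.Reader(['en'])` would go:

```python
import multiprocessing as mp

def init_reader():
    # Real code would do:  return easyocr.Reader(['en'])
    # A stand-in is used here so the sketch runs without easyocr installed.
    return lambda path: [("stub-text", path)]

def ocr_worker(jobs, results):
    reader = init_reader()            # built in the child: nothing gets pickled
    for path in iter(jobs.get, None): # None is the shutdown sentinel
        results.put(reader(path))     # real code: reader.readtext(cv2.imread(path))

if __name__ == '__main__':
    jobs, results = mp.Queue(), mp.Queue()
    p = mp.Process(target=ocr_worker, args=(jobs, results))
    p.start()
    jobs.put('unknown.png')
    jobs.put(None)                    # tell the worker to stop
    print(results.get())
    p.join()
```

Note the worker also replaces `while not q.empty()` with a sentinel loop: with the original check, the child can start, find the queue still empty, and exit before `q.put('start')` runs. Blocking on `q.get` until a `None` sentinel arrives avoids that race.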
