I'm trying to run the (Romanian) SyntaxNet model on part of my corpus. I have the unzipped model files mapped into a Docker container, roughly as sketched below.
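For reference, this is roughly how the model files get mounted and the environment variable set (the image name and host path here are placeholders, not my exact setup):

# map the unzipped Romanian model into the container and point
# MODEL_DIRECTORY at it (paths and image name are illustrative)
docker run --rm -it \
  -v /path/to/Romanian:/models/Romanian \
  -e MODEL_DIRECTORY=/models/Romanian \
  my-syntaxnet-image bash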
When I run the tokenizer script:

cat sent.txt | syntaxnet/models/parsey_universal/tokenize.sh $MODEL_DIRECTORY/
the script crashes with a core dump complaining about a missing file, specifically char-map.
Looking inside the unzipped folder, I see that there is indeed no char-map file there, only a char-ngram-map.
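A quick check inside the container confirms this (assuming $MODEL_DIRECTORY points at the unzipped model directory):

ls "$MODEL_DIRECTORY" | grep -i map
# char-ngram-map is listed, but there is no char-map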
What am I missing here?
And if the archive really is missing some files, how did it end up being uploaded in that state?
Thanks