Unable to predict using rasa_nlu in Python


I am trying to replicate the sample restaurant search. I am running it on Windows 64-bit with Python 3.6 (Anaconda 4.4). My config.json looks like this:

    {
      "name": null,
      "pipeline": ["nlp_spacy", "tokenizer_spacy", "intent_entity_featurizer_regex", "intent_featurizer_spacy", "ner_crf", "ner_synonyms",  "intent_classifier_sklearn"],
      "language": "en",
      "num_threads": 4,
      "path": "D:/rasa-nlu-working/models",
      "response_log": "logs",
      "config": "config.json",
      "log_level": "INFO",
      "port": 5000,
      "data": null,
      "emulate": null,
      "log_file": null,
      "mitie_file": "data/total_word_feature_extractor.dat",
      "spacy_model_name": null,
      "server_model_dirs": null,
      "token": null,
      "max_number_of_ngrams": 7,
      "duckling_dimensions": ["time", "number", "money","ordinal","duration"],
      "entity_crf_BILOU_flag": true,
      "entity_crf_features": [
        ["low", "title", "upper", "pos", "pos2"],
        ["bias", "low", "word3", "word2", "upper", "title", "digit", "pos", "pos2", "pattern"],
        ["low", "title", "upper", "pos", "pos2"]]
    }

I am trying to train and predict using a Jupyter notebook. The training step goes smoothly and, as expected, the models are created. But when I try to predict using the following code,

from rasa_nlu.config import RasaNLUConfig
from rasa_nlu.model import Metadata, Interpreter

# where `model_directory` points to the folder the model is persisted in
interpreter = Interpreter.load('D:/rasa-nlu-working/models/model_20170904-132507', RasaNLUConfig("D:/rasa-nlu-working/config.json"))

I am getting the following error.

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-2-9f85d157325d> in <module>()
      4 
      5 # where `model_directory points to the folder the model is persisted in
----> 6 interpreter = Interpreter.load('D:/rasa-nlu-working/models/model_20170904-132507', RasaNLUConfig("D:/rasa-nlu-working/config.json"))

D:\Anaconda3\lib\site-packages\rasa_nlu\model.py in load(model_metadata, config, component_builder, skip_valdation)
    206         # Before instantiating the component classes, lets check if all required packages are available
    207         if not skip_valdation:
--> 208             components.validate_requirements(model_metadata.pipeline)
    209 
    210         for component_name in model_metadata.pipeline:

AttributeError: 'str' object has no attribute 'pipeline'

But the same config works fine when I run it in HTTP server mode. Kindly help me in resolving the issue.


There are 2 answers

Caleb Keller

I've asked for a few clarifications in the comments, but thought I would start writing an answer anyway.

The error that you've posted isn't actually a problem with your config file. It looks like metadata.json isn't being loaded and/or parsed correctly. metadata.json is kind of like a snapshot of the config file at the time the model was trained.

Here's the order of operations:

  1. Whenever you call Interpreter.load, one of the first things it does is load the metadata.json file. See here.
  2. Next, over in Metadata.load, we try to load and parse that file. See here.
  3. Back in Interpreter, we try to get the pipeline from the metadata that was returned. See here.

That's where your error is happening. For some reason the metadata.json file is loaded without errors, but isn't parsed properly.

A few possible causes:

  • metadata.json contains improperly formatted JSON. I'm not certain how this would happen, but can you provide the metadata.json so we can check it out? (A quick check is sketched below.)
  • There's a Windows encoding problem that isn't being handled correctly.
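
As a quick sanity check, you can load metadata.json by hand and confirm it parses into a dict with a pipeline entry rather than a bare string. This is only a minimal sketch using the standard library; the model directory path is taken from the question.

    import io
    import json
    import os

    # Load the trained model's metadata.json by hand, independent of
    # rasa_nlu's own loading code.
    model_dir = "D:/rasa-nlu-working/models/model_20170904-132507"
    metadata_path = os.path.join(model_dir, "metadata.json")

    # Read with an explicit encoding to rule out a Windows encoding problem.
    with io.open(metadata_path, encoding="utf-8") as f:
        metadata = json.load(f)

    # If parsing succeeds, this should be a dict containing a "pipeline"
    # entry, not a plain string (which is what the AttributeError suggests).
    print(type(metadata))
    if isinstance(metadata, dict):
        print(metadata.get("pipeline"))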

Also, you specifically mention the HTTP API. Can the HTTP API load this model and use it to parse? You should be able to run the command below to test it, after you've started the server.

curl -XPOST localhost:5000/parse -d '{"q":"hello there", "model": "model_20170904-132507"}'

If the HTTP server can load the model and parse with it, then we know the problem is likely something in your Python code specifically.

If that works, you should also try training the data via the HTTP API and compare the metadata.json it produces with the one from your Python training to see what is different.
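
For reference, training from Python in the rasa_nlu 0.x line looked roughly like the sketch below. Module paths (e.g. rasa_nlu.converters) shifted between releases and the training-data path here is a placeholder, so treat this as an approximation rather than the exact API.

    from rasa_nlu.config import RasaNLUConfig
    from rasa_nlu.converters import load_data  # rasa_nlu.training_data in later releases
    from rasa_nlu.model import Trainer

    # Train with the same config the question uses, then persist the model so
    # a metadata.json is written alongside it; that file can then be diffed
    # against the metadata.json produced by training through the HTTP server.
    config = RasaNLUConfig("D:/rasa-nlu-working/config.json")
    training_data = load_data("D:/rasa-nlu-working/data/demo-rasa.json")  # placeholder training file

    trainer = Trainer(config)
    trainer.train(training_data)
    model_directory = trainer.persist("D:/rasa-nlu-working/models")
    print(model_directory)  # folder containing metadata.json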

More to come as you provide more info.

Drakodux

I had the same problem while using rasa_nlu from Python and stumbled upon this thread while googling the same error. As mentioned above by @Caleb Keller, changing the rasa_nlu version from 0.9.0 to 0.10.0a5 solved the problem. Thanks for the help.
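
To confirm which version is installed in a given environment before and after upgrading, a minimal check (using pkg_resources, which ships with setuptools) looks like this:

    import pkg_resources

    # Print the installed rasa_nlu version; the fix described above amounts to
    # moving from 0.9.0 to 0.10.0a5.
    print(pkg_resources.get_distribution("rasa_nlu").version)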