I am trying to replicate the sample restaurant search. I am running it on Windows 64-bit with Python 3.6 (Anaconda 4.4). My config.json looks like this:
{
  "name": null,
  "pipeline": ["nlp_spacy", "tokenizer_spacy", "intent_entity_featurizer_regex", "intent_featurizer_spacy", "ner_crf", "ner_synonyms", "intent_classifier_sklearn"],
  "language": "en",
  "num_threads": 4,
  "path": "D:/rasa-nlu-working/models",
  "response_log": "logs",
  "config": "config.json",
  "log_level": "INFO",
  "port": 5000,
  "data": null,
  "emulate": null,
  "log_file": null,
  "mitie_file": "data/total_word_feature_extractor.dat",
  "spacy_model_name": null,
  "server_model_dirs": null,
  "token": null,
  "max_number_of_ngrams": 7,
  "duckling_dimensions": ["time", "number", "money", "ordinal", "duration"],
  "entity_crf_BILOU_flag": true,
  "entity_crf_features": [
    ["low", "title", "upper", "pos", "pos2"],
    ["bias", "low", "word3", "word2", "upper", "title", "digit", "pos", "pos2", "pattern"],
    ["low", "title", "upper", "pos", "pos2"]
  ]
}
I am trying to train and predict using a Jupyter notebook. The training step goes smoothly and the models are created as expected. But when I try to predict using the following code:
from rasa_nlu.config import RasaNLUConfig
from rasa_nlu.model import Metadata, Interpreter

# where `model_directory` points to the folder the model is persisted in
interpreter = Interpreter.load('D:/rasa-nlu-working/models/model_20170904-132507', RasaNLUConfig("D:/rasa-nlu-working/config.json"))
I am getting the following error.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-2-9f85d157325d> in <module>()
4
5 # where `model_directory points to the folder the model is persisted in
----> 6 interpreter = Interpreter.load('D:/rasa-nlu-working/models/model_20170904-132507', RasaNLUConfig("D:/rasa-nlu-working/config.json"))
D:\Anaconda3\lib\site-packages\rasa_nlu\model.py in load(model_metadata, config, component_builder, skip_valdation)
206 # Before instantiating the component classes, lets check if all required packages are available
207 if not skip_valdation:
--> 208 components.validate_requirements(model_metadata.pipeline)
209
210 for component_name in model_metadata.pipeline:
AttributeError: 'str' object has no attribute 'pipeline'
But the same config works fine when I run it in HTTP server mode. Kindly help me resolve the issue.
I've asked for a few clarifications in the comments, but thought I would start writing an answer anyway.
The error that you've posted isn't actually a problem with your config file. It looks like metadata.json isn't being loaded and/or parsed correctly. metadata.json is kind of like a snapshot of the config file at the time the model was trained.
Here's the order of operations when Interpreter.load runs:

1. The metadata.json file is read from the model directory.
2. Its contents are parsed into a Metadata object.
3. components.validate_requirements(model_metadata.pipeline) checks that the packages required by each pipeline component are installed.
4. Each component in the pipeline is then instantiated.

That's where your error is happening: step 3 expects a Metadata object but is getting a plain string, which is why you see 'str' object has no attribute 'pipeline'. For some reason the metadata.json file is loaded without errors, but isn't parsed properly.
A few possible errors:

- The metadata.json inside the model directory could be missing, empty, or malformed.
- The version of rasa_nlu you have installed may expect a Metadata object (rather than the model directory path) as the first argument of Interpreter.load, so the string you pass is never converted; the parameter name model_metadata in the traceback points that way. A workaround for that case is sketched below.
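If it does turn out to be the second case, a minimal sketch of the workaround (untested, reusing the Metadata class you already import and the paths from your question) would be:

from rasa_nlu.config import RasaNLUConfig
from rasa_nlu.model import Metadata, Interpreter

model_dir = 'D:/rasa-nlu-working/models/model_20170904-132507'
config = RasaNLUConfig("D:/rasa-nlu-working/config.json")

# Parse metadata.json ourselves so Interpreter.load receives a Metadata
# object rather than a bare path string.
metadata = Metadata.load(model_dir)
interpreter = Interpreter.load(metadata, config)

print(interpreter.parse("I'm looking for a Mexican restaurant"))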
Also, you specifically mention the HTTP API. Can the HTTP server load this model and use it to parse? You should be able to test it with the call below once you've started the server.
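For example, a minimal check with the requests library, assuming the server is running on port 5000 as set in your config (the query text is just an illustration):

import requests

# Ask the running rasa_nlu server to parse a sample utterance.
response = requests.post(
    "http://localhost:5000/parse",
    json={"q": "show me a mexican restaurant in the centre"},
)
print(response.json())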
If the HTTP server can load this model and parse with it, then we know the problem is likely something in your Python code specifically.

If that works, you should also try training the data through the HTTP API and compare the metadata.json it produces with the one produced by your Python training to see what is different.
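To retrain through the HTTP API for that comparison, something along these lines should do; I'm assuming here that the /train endpoint of your rasa_nlu version accepts the training data JSON as the request body (check the docs for your exact version), and the file path below is just a placeholder:

import requests

# Send the training data to the running rasa_nlu server; the server trains
# with its own config.json and persists a new model under "path".
with open("D:/rasa-nlu-working/data/demo-rasa.json", "rb") as f:
    response = requests.post("http://localhost:5000/train", data=f.read())

print(response.status_code, response.text)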
More to come as you provide more info.