Re-training Approach for NLC or R&R


As we know, the ground truth is used to re-train the NLC or R&R.

The ground truth is question-level training data.

e.g.

"How hot is it today?,temperature"

The question "how hot is it today?" is therefore classified to "temperature" class.

Once the application is up, real user questions will come in. Some are the same (i.e. a question from a real user is identical to a question in the ground truth), some use similar terms, and some are entirely new questions. Assume the application has a feedback loop that tells us whether the class (for NLC) or the answer (for R&R) was relevant.
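For concreteness, such a feedback loop might append one record per user interaction. This is a hedged sketch with hypothetical field names; none of these come from a Watson API:

```python
import csv
from datetime import datetime, timezone

# Hypothetical feedback-log writer: records the user question, the
# predicted class (NLC) or answer id (R&R), and whether the user found
# the result relevant. All names here are illustrative assumptions.
def log_feedback(path, question, predicted, relevant):
    with open(path, "a", newline="", encoding="utf-8") as f:
        csv.writer(f).writerow(
            [datetime.now(timezone.utc).isoformat(), question, predicted, int(relevant)]
        )
```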

For the new questions, the approach seems to be simply adding them to the ground truth, which is then used to re-train the NLC/R&R?
For the questions with similar terms, do we add them just like the new questions, or do we ignore them, given that questions using similar terms can still score well even when those terms were not used to train the classifier?
For the same questions, there seems to be nothing to do on the ground truth for NLC; for R&R, however, do we just increase or decrease the relevance label by 1 in the ground truth? (A sketch of these three cases follows.)
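To make the three cases concrete, here is a sketch of the update rules I have in mind; the row structure and helper name are my own assumptions, not anything defined by Watson:

```python
# Hypothetical ground-truth updater for the three cases above. Each row
# is {"question", "label", "relevance"}; the relevance field only matters
# for R&R (for NLC the label is simply the class name).
def update_ground_truth(ground_truth, question, label, relevant):
    for row in ground_truth:
        if row["question"] == question and row["label"] == label:
            # Same question: for R&R, nudge the relevance label by 1,
            # never letting it drop below zero.
            row["relevance"] = max(0, row["relevance"] + (1 if relevant else -1))
            return
    # New question, or one that merely uses similar terms: append it.
    ground_truth.append(
        {"question": question, "label": label, "relevance": 1 if relevant else 0}
    )
```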

In short, the main question here is: what is the re-training approach for NLC & R&R?


There are 2 answers

davidgeorgeuk

Once your application has gone live, you should periodically review your feedback log for opportunities for improvement. For NLC, if there are texts being incorrectly classified, then you can add those texts to the training set and retrain in order to improve your classifier.
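For example, the retrain step could append the corrected texts to the training CSV and create a new classifier from it. This is a minimal sketch against the NLC v1 REST API; the URL, credentials, classifier name, and file paths are placeholders, and error handling is omitted:

```python
import csv
import json
import requests

# Minimal sketch: fold corrected (text, class) pairs from the feedback
# log into the training CSV, then create a new classifier from it via
# the NLC v1 REST API. URL, credentials, and names are placeholders.
def retrain(training_csv, corrections, username, password):
    with open(training_csv, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for text, correct_class in corrections:
            writer.writerow([text, correct_class])

    # "Retraining" NLC means creating a brand-new classifier instance
    # from the updated training data; the old one keeps running untouched.
    with open(training_csv, "rb") as data:
        resp = requests.post(
            "https://gateway.watsonplatform.net/natural-language-classifier"
            "/api/v1/classifiers",
            auth=(username, password),
            files={
                "training_metadata": (
                    "training_metadata.json",
                    json.dumps({"language": "en", "name": "my-classifier-v2"}),
                ),
                "training_data": ("training_data.csv", data),
            },
        )
    resp.raise_for_status()
    return resp.json()["classifier_id"]
```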

It is not necessary to capture every imaginable variation of a class, as long as your classifier is returning acceptable responses.

You could use the additional examples of classes from your log to assemble a test set of texts that do not feature in your training set. Running this test set when you make changes will enable you to determine whether or not a change has inadvertently caused a regression. You can run this test either by calling the classifier using a REST client, or via the Beta Natural Language Classifier toolkit.
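A regression test along those lines might look like the sketch below, calling the classify endpoint with a REST client and comparing the returned top_class against the expected class. The URL and credentials are placeholders:

```python
import requests

# Hedged sketch: score a held-out test set (texts NOT in the training
# data) against a classifier via the NLC v1 classify endpoint, and
# return accuracy so runs can be compared across retrains.
def run_test_set(classifier_id, test_set, username, password):
    correct = 0
    for text, expected_class in test_set:
        resp = requests.get(
            "https://gateway.watsonplatform.net/natural-language-classifier"
            f"/api/v1/classifiers/{classifier_id}/classify",
            auth=(username, password),
            params={"text": text},
        )
        resp.raise_for_status()
        correct += resp.json()["top_class"] == expected_class
    return correct / len(test_set)
```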

Daniel Toczala

A solid retraining approach should be getting feedback from live users. Your testing and validation of any retrained NLC (or R&R for that matter) should be guided by some of the principles that James Ravenscroft has outlined here (https://brainsteam.co.uk/2016/03/29/cognitive-quality-assurance-an-introduction/).

The answer by @davidgeorgeuk is correct, but it doesn't extend the thought to the conclusion that you are looking for. I would have a monthly set of activities where I would go through the application logs where REAL users are indicating that you're not classifying things correctly, and also incorporate any new classes into your classifier. I would retrain a second instance of NLC with the new data, and go through the test scenarios outlined above.

Once you are satisfied that you have IMPROVED your model, I would then switch my code to point at the new NLC instance; the old NLC instance becomes your "backup" instance, and the one that you would use for this exercise next month. It's just applying a simple DevOps approach to managing your NLC instances. You could extend this to a development, QA, production scenario if you wanted.
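That monthly swap could be sketched as below, reusing the hypothetical retrain and run_test_set helpers from the other answer; the config dict is a stand-in for wherever your application reads its classifier id:

```python
# Hypothetical monthly refresh implementing the swap described above:
# retrain a second NLC instance, test both, and only re-point the app
# if the new one has actually improved. Note the new classifier must
# finish training (status "Available") before it can be tested.
def monthly_refresh(config, test_set, corrections, username, password):
    old_id = config["classifier_id"]
    new_id = retrain("ground_truth.csv", corrections, username, password)

    old_score = run_test_set(old_id, test_set, username, password)
    new_score = run_test_set(new_id, test_set, username, password)

    if new_score >= old_score:
        # The old instance becomes the "backup", to be reused for next
        # month's exercise; the app now points at the new instance.
        config["classifier_id"], config["backup_id"] = new_id, old_id
    return config
```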