Tuning language weight (LW) and word insertion penalty (WIP) in CMU Sphinx


What is the right way to tune the LW and WIP parameters of Sphinx 3, and at what point do we stop tuning?

These are the steps I have been using so far:

  • Since the decoder is much more sensitive to the LW, it is tuned first for the highest word accuracy, using the default WIP of 0.7. According to the Sphinx documentation, LW should be tuned in the range 6 to 13.

  • The WIP is then tuned, keeping the LW obtained above, until the insertions match the deletions. At that point the total number of decoded words equals that of the ground truth, which is what an ideal decoder would produce. According to the Sphinx documentation, WIP is supposed to be tuned in the range 0.2 to 0.7. (A scripted version of both stages is sketched after this list.)
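
For reference, a minimal sketch of how such a two-stage sweep might be scripted. `decode_and_score` is a hypothetical placeholder (not part of Sphinx) that you would wrap around your own decode-and-score pipeline for the development set; the sweep ranges are the documentation values quoted above.

```python
# A sketch of the two-stage sweep described above.  decode_and_score() is a
# hypothetical placeholder: wrap it around your own Sphinx-3 decode run plus
# scoring, returning (word_accuracy, insertions, deletions) on the dev set.

def decode_and_score(lw, wip):
    """Decode the development set with the given -lw/-wip values and score it."""
    raise NotImplementedError("plug in your decode + scoring pipeline here")

def tune_lw_then_wip():
    # Stage 1: sweep LW over the documented 6..13 range at the default WIP,
    # keeping the value with the highest word accuracy.
    default_wip = 0.7
    best_lw = max(range(6, 14),
                  key=lambda lw: decode_and_score(lw, default_wip)[0])

    # Stage 2: sweep WIP over the documented 0.2..0.7 range at the best LW,
    # keeping the value whose insertion count is closest to its deletion count.
    def ins_del_gap(wip):
        _, ins, dels = decode_and_score(best_lw, wip)
        return abs(ins - dels)

    wip_grid = [round(0.2 + 0.05 * i, 2) for i in range(11)]  # 0.2, 0.25, ..., 0.7
    best_wip = min(wip_grid, key=ins_del_gap)
    return best_lw, best_wip
```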

For my development data set, after finding the best LW of 10, I could never get insertions equal to deletions within the 0.2 to 0.7 range of WIP. I reached that point only with a WIP of 2e22, which is way outside the range.
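
For clarity, the insertion and deletion counts above are the usual minimum-edit-distance counts between the reference and hypothesis transcripts. A minimal sketch of how they can be computed (not the Sphinx/sclite scoring tool itself):

```python
def count_ins_del(reference, hypothesis):
    """Count insertions and deletions in a minimum-edit alignment of two
    word sequences (plain Levenshtein DP; a sketch, not sclite)."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = (edit_distance, insertions, deletions) for ref[:i] vs hyp[:j]
    dp = [[(0, 0, 0)] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        dp[i][0] = (i, 0, i)          # only deletions
    for j in range(1, len(hyp) + 1):
        dp[0][j] = (j, j, 0)          # only insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            if ref[i - 1] == hyp[j - 1]:
                dp[i][j] = dp[i - 1][j - 1]        # match, no edit
            else:
                d, ins, dele = dp[i - 1][j - 1]
                sub = (d + 1, ins, dele)           # substitution
                d, ins, dele = dp[i][j - 1]
                insert = (d + 1, ins + 1, dele)    # insertion
                d, ins, dele = dp[i - 1][j]
                delete = (d + 1, ins, dele + 1)    # deletion
                dp[i][j] = min(sub, insert, delete)
    _, insertions, deletions = dp[len(ref)][len(hyp)]
    return insertions, deletions

# e.g. count_ins_del("the cat sat", "the cat cat sat") -> (1, 0)
```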


1 Answer

Nikolay Shmyrev

As for such a high WIP: it certainly should not be that high. There are probably issues with your dataset. Maybe it contains a lot of silence, in which case it is better to tune the silence probability to higher values; otherwise the decoder tries to match silence with words, and you need a larger word insertion penalty to penalize those words. You always have the option of sharing your test set to get help with accuracy. (A hedged example of raising the silence probability is sketched below.)
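
To make that suggestion concrete, here is a minimal sketch, assuming a typical Sphinx-3 command-line setup, of re-decoding with a higher silence probability while keeping WIP inside the documented range. The binary name, flag names, paths, and values are my assumptions, not taken from the question; confirm them against your decoder's `-help` output.

```python
import subprocess

# Sketch: re-decode with a higher silence probability instead of an extreme WIP.
# "sphinx3_decode" and the -lw/-wip/-silprob/-fillprob flag names are assumed
# from a typical Sphinx-3 install; verify them with the decoder's -help output.
cmd = [
    "sphinx3_decode",
    # ... your usual -hmm, -lm, -dict, -ctl, -cepdir, -hyp arguments here ...
    "-lw", "10",          # keep the tuned language weight
    "-wip", "0.5",        # stay inside the documented 0.2-0.7 range
    "-silprob", "0.2",    # raise the silence probability (illustrative value)
    "-fillprob", "0.2",   # filler-word probability can be raised the same way
]
subprocess.run(cmd, check=True)
```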