I am currently using Gensim LDA for topic modeling.
While tuning hyper-parameters, I found that the model always gives a negative log-perplexity.
Is it normal for the model to behave like this? (Is that even possible?)
If it is, is a smaller value better than a bigger one? (Is -100 better than -20?)
Yes, it is normal for gensim to return a negative value here. LdaModel.log_perplexity() does not return the perplexity itself; it returns the per-word likelihood bound, which is on a log scale and therefore typically negative. A value closer to zero is better, so -20 is better than -100.
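For a concrete check, here is a minimal sketch on a toy corpus (the documents and variable names are made up for illustration). Per the gensim documentation, the actual perplexity is 2^(-bound), so you can convert the negative bound into a positive perplexity where lower is better:

```python
from gensim.corpora import Dictionary
from gensim.models import LdaModel

# Toy corpus, purely for illustration
docs = [["human", "machine", "interface"],
        ["survey", "user", "computer", "system"],
        ["graph", "trees", "minors"]]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2)

bound = lda.log_perplexity(corpus)  # per-word likelihood bound; usually negative
perplexity = 2 ** (-bound)          # actual perplexity; lower is better
print(bound, perplexity)
```

So when comparing hyper-parameter settings by log_perplexity(), prefer the run whose bound is closer to zero; after converting with 2^(-bound), that corresponds to the smaller perplexity.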