I am running LDA on a small corpus of 2 docs (sentences) for testing purposes. The following code returns topic-term and document-topic distributions that are not reasonable at all given the input documents. Running exactly the same model in Python returns reasonable results. What is going wrong here?
library(topicmodels)
library(tm)
d1 <- "bank bank bank"
d2 <- "stock stock stock"
corpus <- Corpus(VectorSource(c(d1,d2)))
##fit lda to data
dtm <- DocumentTermMatrix(corpus)
ldafit <- LDA(dtm, k=2, method="Gibbs")
##get posteriors
topicTerm <- t(posterior(ldafit)$terms)
docTopic <- posterior(ldafit)$topics
topicTerm
docTopic
> topicTerm
              1         2
bank  0.3114525 0.6885475
stock 0.6885475 0.3114525
> docTopic
          1         2
1 0.4963245 0.5036755
2 0.5036755 0.4963245
The results from Python are as follows:
>>> docTopic
array([[ 0.87100799,  0.12899201],
       [ 0.12916713,  0.87083287]])
>>> fit.print_topic(1)
u'0.821*"bank" + 0.179*"stock"'
>>> fit.print_topic(0)
u'0.824*"stock" + 0.176*"bank"'
Try plotting the perplexity (or log-likelihood) over the Gibbs iterations and make sure the chain has converged. The initial state also matters, so try several random starts. (Both the corpus and the documents are very small here, though.)
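A minimal sketch of how this could be checked with topicmodels (the control values below are illustrative, not settings from the question): the Gibbs sampler does not report a per-iteration perplexity, but with keep > 0 in the control list it stores the log-likelihood every keep-th iteration in the logLiks slot, which serves the same purpose as a convergence check; seed and nstart control the initial state and the number of restarts.
library(topicmodels)
library(tm)

d1 <- "bank bank bank"
d2 <- "stock stock stock"
dtm <- DocumentTermMatrix(Corpus(VectorSource(c(d1, d2))))

## several random starts, fixed seeds, and a stored log-likelihood trace
ldafit <- LDA(dtm, k = 2, method = "Gibbs",
              control = list(seed   = 1:5,   # one seed per start
                             nstart = 5,     # number of random initialisations
                             best   = TRUE,  # return only the best-scoring chain
                             burnin = 1000,
                             iter   = 5000,
                             keep   = 50))   # record logLik every 50 iterations

## trace of the stored log-likelihoods; a flat tail suggests the chain converged
ll <- ldafit@logLiks
plot(seq_along(ll) * 50, ll, type = "l",
     xlab = "Gibbs iteration (approx.)", ylab = "log-likelihood")

posterior(ldafit)$topics
t(posterior(ldafit)$terms)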