I'm trying to switch over from Stata and R to Python's statsmodels. I can do most of what I need, but I'm missing something when running a negative binomial regression: specifically, the significance level for the chi-squared test. For example, with a model like this:
import statsmodels.api as sm
import statsmodels.formula.api as smf

mod1 = smf.glm(formula='like_count ~ source + followers_count + '
                       'mission_focus_Patient_advocacy + Assets + hashtag_count',
               data=df, missing='drop', hasconst=None,
               family=sm.families.NegativeBinomial()).fit()
From this I can run mod1.summary() and get the main model results.
Typically I would report the number of observations, the log-likelihood value, and the Pearson chi-squared value, all of which are there. Three pieces of information are missing, though:

1) the significance level for the chi-squared test,
2) a pseudo-R-squared such as McFadden's (I know I shouldn't lean on it, but reviewers generally want to see it), and
3) the results of a likelihood-ratio test against a Poisson model, to show that a Poisson model would not work instead (a rough sketch of what I mean is below).
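For item 3, here's the kind of thing I had in mind, in case it clarifies the question: fit a plain Poisson GLM with the same formula and compare log-likelihoods by hand. I'm not sure this is statistically clean, since the NegativeBinomial family above keeps its dispersion parameter fixed at the default rather than estimating it, so please treat this as a sketch of my intent rather than a method I'm confident in:

from scipy import stats

# Sketch only: Poisson fit with the same formula, to compare against mod1 above
mod_pois = smf.glm(formula='like_count ~ source + followers_count + '
                           'mission_focus_Patient_advocacy + Assets + hashtag_count',
                   data=df, missing='drop',
                   family=sm.families.Poisson()).fit()

lr_stat = 2 * (mod1.llf - mod_pois.llf)  # likelihood-ratio statistic
lr_pval = stats.chi2.sf(lr_stat, df=1)   # 1 df for the dispersion parameter is my assumption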
My question is, how can I get these things? I know I can access specific model results individually, including:
mod1.llf            # log-likelihood
mod1.pearson_chi2   # Pearson chi-squared statistic
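For item 1, my current guess is to take that Pearson chi-squared statistic together with the residual degrees of freedom and compute an upper-tail p-value myself. Whether df_resid gives the right reference distribution here is an assumption on my part:

from scipy import stats

# Assumption: compare pearson_chi2 to a chi-squared distribution with df_resid degrees of freedom
chi2_pval = stats.chi2.sf(mod1.pearson_chi2, mod1.df_resid)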
Checking the online documentation, I can't find the attributes I'm looking for. By contrast, I know that with a logit model, for instance, I could get the following:
mod_logit.llf          # log-likelihood
mod_logit.llr          # likelihood-ratio chi-squared statistic
mod_logit.llr_pvalue   # significance level of the chi-squared test
mod_logit.prsquared    # McFadden's pseudo-R-squared
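If there is no built-in equivalent for the GLM results, here's a sketch of how I imagine reproducing those quantities by hand, by fitting an intercept-only model with the same family (mod0, llr, llr_pvalue, and prsquared are just names I made up for illustration):

from scipy import stats

# Intercept-only (null) model with the same family; with missing='drop' its estimation
# sample could differ from mod1's, so in practice I'd restrict both fits to the same rows
mod0 = smf.glm(formula='like_count ~ 1', data=df, missing='drop',
               family=sm.families.NegativeBinomial()).fit()

llr = 2 * (mod1.llf - mod0.llf)                 # model LR chi-squared statistic
llr_pvalue = stats.chi2.sf(llr, mod1.df_model)  # significance level, df = number of slope terms
prsquared = 1 - mod1.llf / mod0.llf             # McFadden's pseudo-R-squared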
But with the negative binomial GLM I can't seem to find a way to get the significance level for the chi-squared test. At a bare minimum I'd like to get that. Ideally, I'd also be able to get the likelihood-ratio test results and a pseudo-R-squared. Thanks in advance for any help.