I am running into a small issue that I can't wrap my head around. I fitted a model that I test using `lmer`. Full code:
```r
# import libraries
library(tidyverse)
library(lmerTest)
library(car)
library(easystats)
library(modelsummary)
library(lme4)

df <- Mod_Final_R1R2_Meta_moderation_sharing
summary(df)

# subset to the category of interest
df_x <- subset(df, Category == "x")

# no-intercept model with a random intercept per study
modelZ <- lmer(`reliability corrected` ~ 0 + intention + a + b + c + (1 | Study),
               data = df_x)
summary(modelZ)
modelsummary(modelZ, statistic = "p.value")
```
I receive (part of) the following output from `summary(modelZ)`:
```
Linear mixed model fit by REML. t-tests use Satterthwaite's method ['lmerModLmerTest']
Formula: `reliability corrected` ~ 0 + intention + a + b + c + (1 | Study)
   Data: df_x

REML criterion at convergence: 14.3

Scaled residuals:
     Min       1Q   Median       3Q      Max
-1.51502 -0.22449 -0.06608  0.36910  1.52729

Random effects:
 Groups   Name        Variance Std.Dev.
 Study    (Intercept) 0.22052  0.4696
 Residual             0.01198  0.1094
Number of obs: 25, groups:  Study, 15

Fixed effects:
          Estimate Std. Error       df t value Pr(>|t|)
intention -0.10750    0.05989  9.27089  -1.795    0.105
a          0.02907    0.22882 11.72994   0.127    0.901
b          0.12935    0.26092 11.69105   0.496    0.629
c         -0.17325    0.22935 12.19998  -0.755    0.464
```
However, if I extract the model summary via either the `modelsummary` or the `sjPlot` package (both give the same output), I receive the following:
```r
modelsummary(modelZ, statistic = "p.value")
```

| Variable  | Estimate | p-value |
|-----------|----------|---------|
| intention | -0.107   | 0.089   |
| a         | 0.029    | 0.900   |
| b         | 0.129    | 0.626   |
| c         | -0.173   | 0.459   |
As you can see, while the estimates are essentially the same, the p-values are quite different, some even sitting on the threshold between non-significant and significant. Does anyone have an idea what underlying mechanism could cause this? Shouldn't `modelsummary` simply extract and use the `lmer` output for `modelZ`?
Thanks in advance!
I searched forums like Stack Overflow and the CRAN package documentation for possible causes, but didn't find anything.
In the absence of a reproducible example, my best guess is that this comes from the method used for finite-size correction. `modelsummary()` calls `parameters::model_parameters()` to extract parameters; the `ci_method` options for mixed models are `"wald"` (no finite-size correction), `"satterthwaite"`, or `"kenward"`. I believe the `ci_method=` argument will get passed through from `modelsummary()` to `model_parameters()`: you could try `ci_method = "satterthwaite"` and see what happens ...
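If the pass-through works as described, a quick check on the model above (using only objects already defined in the question) would be:

```r
# if ci_method is forwarded to parameters::model_parameters(),
# "satterthwaite" should reproduce the p-values from summary(modelZ)
modelsummary(modelZ, statistic = "p.value", ci_method = "wald")
modelsummary(modelZ, statistic = "p.value", ci_method = "satterthwaite")
modelsummary(modelZ, statistic = "p.value", ci_method = "kenward")  # requires pbkrtest
```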
In the example I tried, the p-values returned for `"satterthwaite"` and `"kenward"` happen to be the same up to three significant digits (for the intercept; we can't see what happens for the slope term. One could tweak the example, e.g. make the second element of `beta` smaller ...)
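Since that example isn't reproduced here, here is a minimal simulated stand-in (the data set `dd`, grouping factor `g`, predictor `x`, and all parameter values are made up for illustration) for comparing the three corrections on a small sample:

```r
library(lme4)
library(lmerTest)
library(parameters)

set.seed(101)
# made-up small data set: 6 groups, 4 observations each
dd <- expand.grid(g = factor(1:6), rep = 1:4)
dd$x <- rnorm(nrow(dd))
u <- rnorm(nlevels(dd$g))                           # random intercepts, sd = 1
dd$y <- 1 + 0.3 * dd$x + u[dd$g] + rnorm(nrow(dd))  # beta = c(1, 0.3), sigma = 1

m <- lmer(y ~ x + (1 | g), data = dd)

# compare p-values under the three finite-size corrections
model_parameters(m, ci_method = "wald")
model_parameters(m, ci_method = "satterthwaite")
model_parameters(m, ci_method = "kenward")  # requires the pbkrtest package
```

With so few groups, the uncorrected Wald p-values will typically be smaller than the Satterthwaite or Kenward-Roger ones, which is the same pattern as in the question above.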