I'm running measurement invariance tests with blavaan but I'm getting the following error:
[1] "Error in lamsign[l1, 1] : subscript out of bounds\n"
attr(,"class")
[1] "try-error"
attr(,"condition")
<subscriptOutOfBoundsError in lamsign[l1, 1]: subscript out of bounds>
Error in blavaan(mod_sem_1, ordered = c("y3", "y4", "y6", "y7", "y8", :
blavaan ERROR: problem with translation from lavaan to MCMC syntax.
I'm working in RStudio with the following code:
mod_sem_1 <- '
# Measurement model
# Endogenous part
eta1 =~ y3 + y4 + y5 # Consumption pattern
eta2 =~ y6 + y7 + y8 + y9 + y10 + y11 + y12 + y13 # Problematic consumption
# Exogenous part
xi1 =~ x14 + x15 + x16 # Individual factors
xi2 =~ x17 + x18 + x19 + x20 + x21 # Micro-social factors
xi3 =~ x22 + x23 + x24 # Macro-social factors
# Structural model
eta1 ~ xi1 + xi2 + xi3
eta2 ~ eta1
# Covariances
xi1 ~~ xi2
xi1 ~~ xi3
xi2 ~~ xi3
'
fit_mg_g3 <- bsem(mod_sem_1,
                  ordered = c("y3", "y4", "y6", "y7", "y8", "y9", "y10", "y11", "y12", "y13",
                              "x14", "x16", "x17", "x18", "x19", "x20", "x21", "x22", "x23", "x24"),
                  std.lv = TRUE, dp = dpriors(lambda = "normal(1,1)"),
                  group = "genero",
                  group.equal = "loadings",
                  n.chains = 3, burnin = 9000, sample = 1000,
                  data = datos_imputados)
I already tried reinstalling the blavaan package, and I'm sure the code, variables, and data are fine. In fact, the multigroup model without constraints runs successfully; the error only shows up after I add the line group.equal = "loadings".
After consulting forums and the package's GitHub repository, I suspect the error may be due to a bug in blavaan, but reinstalling the package did not make it go away.
Can you please provide a reproducible example? From what I see in your code, it seems that you want all the indicators to be considered as ordinal.
Use
ordered = TRUE
if you want all the indicators to be treated as ordinal.
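For example, here is a minimal sketch based on the call you posted, with only the ordered argument changed. Note that ordered = TRUE would also treat y5 and x15 as ordinal, so keep the explicit vector if those variables should stay continuous:

fit_mg_g3 <- bsem(mod_sem_1,
                  ordered = TRUE,  # all observed endogenous indicators treated as ordinal
                  std.lv = TRUE, dp = dpriors(lambda = "normal(1,1)"),
                  group = "genero",
                  group.equal = "loadings",
                  n.chains = 3, burnin = 9000, sample = 1000,
                  data = datos_imputados)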
Since you are using ordinal models, I think you should also check the following. Because you are constraining model parameters to be equal across groups in an ordinal model, I would recommend constraining the thresholds to equality first. Thresholds are parameters of a first measurement model that links the latent item responses underlying the observed discrete responses to those responses, and it is these latent responses that serve as indicators of the common factors. It is therefore not accurate to view thresholds as the categorical equivalent of intercepts, as some researchers have previously suggested. For a detailed treatment, I suggest reading Wu & Estabrook (2016), who proposed testing threshold equivalence between the configural and metric steps that are normally applied to continuous indicators, with a couple of exceptions:
Following Wu & Estabrook's (2016) advice, I would suggest testing the equivalence of thresholds first, followed by factor loadings, and then intercepts. Be sure to remove unnecessary identification constraints from the configural model. Fischer et al. (2018) provide an excellent example of the sequence of models to test; a sketch of such a sequence is given after the next paragraph.
Just like intercepts are not comparable (and hence should not be constrained) for indicators whose factor loadings are not equivalent, factor loadings should not be constrained for indicators whose thresholds are not equivalent.
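For illustration, here is a sketch of that sequence in blavaan, assuming bsem accepts the same group.equal options as lavaan (including "thresholds") and reusing your model, data, and grouping variable. The object names are just illustrative, and the priors, chains, and burn-in settings from your original call are omitted for brevity:

# Configural model: no cross-group equality constraints
fit_configural <- bsem(mod_sem_1, ordered = TRUE, std.lv = TRUE,
                       group = "genero", data = datos_imputados)

# Threshold invariance: thresholds equal across groups
fit_thresh <- bsem(mod_sem_1, ordered = TRUE, std.lv = TRUE,
                   group = "genero", group.equal = "thresholds",
                   data = datos_imputados)

# Metric invariance: thresholds and loadings equal across groups
fit_metric <- bsem(mod_sem_1, ordered = TRUE, std.lv = TRUE,
                   group = "genero",
                   group.equal = c("thresholds", "loadings"),
                   data = datos_imputados)

# Compare successive models (e.g. WAIC/LOO via blavCompare)
blavCompare(fit_thresh, fit_configural)
blavCompare(fit_metric, fit_thresh)

If the translation error persists at any of these steps, a minimal reproducible example (the model, a small data excerpt, and the exact call) would make it much easier to track down.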
References
Fischer, F., Gibbons, C., Coste, J., Valderas, J. M., Rose, M., & Leplège, A. (2018). Measurement invariance and general population reference values of the PROMIS Profile 29 in the UK, France, and Germany. Quality of Life Research, 27(4), 999-1014. https://doi.org/10.1007/s11136-018-1785-8
Wu, H., & Estabrook, R. (2016). Identification of confirmatory factor analysis models of different levels of invariance for ordered categorical outcomes. Psychometrika, 81(4), 1014–1045. https://doi.org/10.1007/s11336-016-9506-0