Converting OpenBUGS code to PyMC3


I have previously been working with OpenBUGS/WinBUGS to perform my Bayesian statistics, but have decided to move over to the PyMC3 package in Python. I am fairly new to the package and still learning how to use it fully. I am having some difficulties converting my BUGS code into PyMC3. The original BUGS code is as follows:

model {
for (i in 1 : N) {
    Tobs[i] ~ dpois(mu[i])
    mu[i]<- u[i]*lamda[i]
    u[i]~dbern(p[i])
    log(lamda[i]) <- a0 + a1*insta[i] + a2*shear[i] + b[i]
    p[i]<-exp(-beta/exp(popd[i]))}
b[1:N] ~ car.normal(adj[], weights[], num[], tau)
for(k in 1:sumNumNeigh) {weights[k] <- 1}
a0 ~ dnorm(0, 0.001)
a1 ~ dnorm(0, 0.001)
a2 ~ dnorm(0, 0.001)
beta<-exp(betaaux)
betaaux~dnorm(1, 0.001)
tau ~ dgamma(0.01,0.01)
sigma <-sqrt(1 / tau)
}
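For context (an illustrative NumPy sketch, not part of the question): `car.normal` defines each `b[i]` conditionally as Normal, with mean equal to the weighted average of its neighbours' values and precision `tau` times the sum of its weights. In BUGS' flattened format, `adj[]` lists every area's neighbours consecutively and `num[i]` gives the neighbour count of area `i`:

```python
import numpy as np

# Toy adjacency in BUGS' flattened format: 3 areas in a line (1-2-3)
num = np.array([1, 2, 1])          # number of neighbours of each area
adj = np.array([2, 1, 3, 2]) - 1   # neighbours, concatenated (0-based here)
weights = np.ones(adj.size)        # weights[k] <- 1, as in the model above
b = np.array([0.5, -0.2, 0.1])

offsets = np.concatenate([[0], np.cumsum(num)])
# Conditional mean of b[i] under car.normal: weighted mean of its neighbours
cond_mean = np.array([
    np.average(b[adj[offsets[i]:offsets[i + 1]]],
               weights=weights[offsets[i]:offsets[i + 1]])
    for i in range(len(num))
])
print(cond_mean)  # area 0's mean is b[1]; area 1's is (b[0] + b[2]) / 2
```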

I have written this up in Python as follows:

# Assumed imports (not shown in the original snippet):
#   import pymc3 as pm
#   import theano.tensor as tt
# CAR is a custom distribution class and merge_traces comes from
# pymc3.backends.base; both are defined/imported elsewhere.
model1 = pm.Model()
with model1:
    # Vague prior
    betaaux = pm.Normal('betaaux', mu=1.0, tau=1.0e-3)
    beta = pm.Deterministic('beta', tt.exp(betaaux))
    # Random effects (hierarchical) prior
    tau_c = pm.InverseGamma('tau_c', alpha=1.0e-2, beta=1.0e-2)
    # Spatial clustering prior
    sigma = pm.Deterministic('sigma', tt.sqrt(1 / tau_c))  # tt.sqrt, not np.sqrt, on a tensor
    # Coefficients
    a0 = pm.Normal('a0', mu=0.0, tau=1.0e-3)
    a1 = pm.Normal('a1', mu=0.0, tau=1.0e-3)
    a2 = pm.Normal('a2', mu=0.0, tau=1.0e-3)
    a3 = pm.Normal('a3', mu=0.0, tau=1.0e-3)
    # Population effect
    pop_eff = pm.Deterministic('pop_eff', tt.exp(-beta / tt.exp(pop_den_all)))
    # Regional random effects
    b_new = CAR('b_new', w=wmat, a=amat, tau=tau_c, shape=1425)
    # Mean/expected event occurrence rate
    lamda = pm.Deterministic('lamda', tt.exp(a0 + a1*insta + a2*shear + a3*odd + b_new))
    # Latent occurrence of events
    Tlatent_new = pm.Poisson('Tlatent_new', mu=lamda, shape=1425)
    # Observed event counts
    Tobs_new = pm.Binomial('Tobs_new', n=Tlatent_new, p=pop_eff, shape=1425, observed=Tobs)
    # Initialisation
    start1 = {'betaaux': 2., 'tau_c_log__': 2., 'a0': 1., 'a1': 1., 'a2': 1., 'a3': 1.,
              'Tlatent_new': Tlatent, 'b_new': b}
    step1 = pm.Metropolis([a0, a1, a2, a3, betaaux, beta, tau_c, b_new, Tlatent_new])

with model1:
    trace1 = merge_traces([pm.sample(draws=15000, step=[step1], tune=5000, start=start1, chain=i)
                           for i in range(1)])

My model runs, but the output doesn't look right. Specifically, 'Tlatent_new' never moves away from the initial values I assign to it via 'Tlatent'. If I don't incorporate 'pop_eff' in the model, i.e. remove the line 'Tobs_new = pm.Binomial('Tobs_new', n=Tlatent_new, p=pop_eff, shape=1425, observed=Tobs)', then 'Tlatent_new' does update from the initial values given in 'Tlatent'. I can't understand why this additional line would prevent the model from updating 'Tlatent_new', or how I can work around it.

Any suggestions as to what the issue might be would be appreciated.


1 Answer

Answered by aseyboldt:

It is a bit surprising to me that it gets stuck completely, but Metropolis really doesn't work in higher dimensions; I wouldn't expect this to work well at all.

You could use NUTS if you find a way to get rid of discrete variables (observed ones are fine). In this example maybe just model Tlatent as a continuous variable – Binomial works with continuous n.
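To see why a continuous n is legitimate (a sketch, not part of the original answer): the Binomial log-pmf can be written entirely in terms of log-gamma functions, which are defined for non-integer n, so the density stays finite and smooth there, which is what a gradient-based sampler like NUTS needs. A quick check with SciPy:

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import binom

def binom_logp(k, n, p):
    """Binomial log-pmf via gammaln; valid for non-integer n."""
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
            + k * np.log(p) + (n - k) * np.log1p(-p))

# Matches the standard pmf at integer n...
print(np.isclose(binom_logp(3, 10, 0.4), binom.logpmf(3, 10, 0.4)))  # True
# ...and is still finite at non-integer n
print(binom_logp(3, 10.5, 0.4))
```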

A couple of other small things:

  • The InverseGamma prior on scale parameters was very common some time ago, but it can show quite unfortunate behavior. If you really want an uninformative prior, you could use the Jeffreys prior via log_sigma = pm.Flat('log_sigma'); sigma = pm.Deterministic('sigma', tt.exp(log_sigma)). But in most cases it is much better to use a HalfStudentT or HalfCauchy.
  • No need to parameterize with tau; the sd argument is usually nicer to read.
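For reference (a NumPy sanity check, not from the original answer): a flat prior on log(sigma) is equivalent to p(sigma) ∝ 1/sigma, the Jeffreys prior for a scale parameter. Under 1/sigma, any two intervals [a, b) with the same ratio b/a carry equal mass, which a quick Monte Carlo check confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
# Draws from a flat prior on log(sigma) over a wide (truncated) range
log_sigma = rng.uniform(np.log(0.1), np.log(10.0), size=1_000_000)
sigma = np.exp(log_sigma)

# If p(log sigma) is flat then p(sigma) ∝ 1/sigma, so the mass in [a, b)
# is proportional to log(b/a): [1, 2) and [2, 4) should get equal mass.
mass_1_2 = np.mean((sigma >= 1) & (sigma < 2))
mass_2_4 = np.mean((sigma >= 2) & (sigma < 4))
print(mass_1_2 / mass_2_4)  # should be close to 1
```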