Pyro: change AutoDiagonalNormal settings


I am using pyro-ppl 0.3.0 for probabilistic programming. While working through the Bayesian regression tutorial, I used an AutoGuide together with pyro.random_module to turn an ordinary feed-forward network into a Bayesian neural network.

import torch
import torch.nn as nn

import pyro
from pyro.contrib.autoguide import AutoDiagonalNormal  # pyro.infer.autoguide in newer releases
from pyro.distributions import Normal, Uniform
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam
from torch.distributions import constraints

# linear regression
class RegressionModel(nn.Module):
    def __init__(self, p):
        # p number of feature
        super(RegressionModel, self).__init__()
        self.linear1 = nn.Linear(p, 2)
        self.linear2 = nn.Linear(2, 1)
        self.softplus = nn.Softplus()

    def forward(self, x):
        x = self.softplus(self.linear1(x))
        return self.linear2(x)

# instantiate the deterministic network (2 input features, matching the prior shapes below)
regression_model = RegressionModel(2)

# model
def model(x_data, y_data):
    # weight and bias prior
    w1_prior = Normal(torch.zeros(2,2), torch.ones(2,2)).to_event(2)
    b1_prior = Normal(torch.ones(2)*8, torch.ones(2)*1000).to_event(1)
    w2_prior = Normal(torch.zeros(1,2), torch.ones(1,2)).to_event(2)  # declare both weight dims as event dims, matching w1_prior
    b2_prior = Normal(torch.ones(1)*3, torch.ones(1)*500).to_event(1)

    priors = {'linear1.weight': w1_prior, 'linear1.bias': b1_prior,
             'linear2.weight': w2_prior, 'linear2.bias': b2_prior}

    scale = pyro.sample("sigma", Uniform(0., 10.))

    # lift module parameters to random variables sampled from the priors
    lifted_module = pyro.random_module("module", regression_model, priors)
    # sample a nn (which also samples w and b)
    lifted_reg_model = lifted_module()
    with pyro.plate("map", len(x_data)):
        # run the nn forward on data
        prediction_mean = lifted_reg_model(x_data).squeeze(-1)
        # condition on the observed data
        pyro.sample("obs",
                    Normal(prediction_mean, scale),
                    obs=y_data)
        return prediction_mean

guide = AutoDiagonalNormal(model)

#================

#================

# inference
optim = Adam({"lr": 0.03})
svi = SVI(model, guide, optim, loss=Trace_ELBO(), num_samples=1000)

num_iterations = 1000  # number of SVI steps; 1000 is the value used in the tutorial

def train():
    pyro.clear_param_store()
    for j in range(num_iterations):
        # calculate the loss and take a gradient step
        loss = svi.step(x_data, y_data)
        if j % 100 == 0:
            print("[iteration %04d] loss: %.4f" % (j + 1, loss / len(data)))

train()

for name, value in pyro.get_param_store().items():
    print(name, pyro.param(name))

The results are as follows:

auto_loc tensor([-2.1585, -0.9799, -0.0378, -0.5000, -1.0241, 2.6091, -1.3760, 1.6920, 0.2553, 4.5768], requires_grad=True)
auto_scale tensor([0.1432, 0.1017, 0.0368, 0.7588, 0.4160, 0.0624, 0.6657, 0.0431, 0.2972, 0.0901], grad_fn=)
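For reference, the 10 entries come from flattening every latent sample site in the model: the 2 x 2 weight and 2 biases of linear1, the 1 x 2 weight and 1 bias of linear2, plus sigma, i.e. 4 + 2 + 2 + 1 + 1 = 10. A minimal way to confirm this, assuming the model, guide, x_data and y_data above are already defined:

# calling the auto guide once initializes its parameters from the model's latent sites
guide(x_data, y_data)
print(pyro.param("auto_loc").shape)   # torch.Size([10]) = 4 + 2 + 2 + 1 + 1 scalars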

The number of latent variables is automatically set to 10. I want to change that number. As mentioned in the tutorial, I added

latent_dim = 5
pyro.param("auto_loc", torch.randn(latent_dim))
pyro.param("auto_scale", torch.ones(latent_dim),
           constraint=constraints.positive)

between the #================ markers shown above.

But the result is still the same; the number doesn't change. So, how do I configure AutoDiagonalNormal to change the number of latent variables?

1 Answer

Answered by fritzo

I think you'll just need to call pyro.clear_param_store() between training settings. I believe what was happening is that you were training with latent_dim=5, and then when you set latent_dim=10 the old parameters were still in Pyro's global param store.

Note that the torch.randn(latent_dim) argument to your pyro.param() statement is used only for initialization, and is ignored if the parameter has already been initialized (and is found in the global param store).
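As a rough sketch of that workflow, assuming the model, x_data and y_data from the question are in scope (the step count and learning rate are just the question's values):

import pyro
from pyro.contrib.autoguide import AutoDiagonalNormal  # pyro.infer.autoguide in newer releases
from pyro.infer import SVI, Trace_ELBO
from pyro.optim import Adam

pyro.clear_param_store()              # drop any auto_loc / auto_scale left over from earlier runs
guide = AutoDiagonalNormal(model)     # rebuild the guide so it re-reads the model's latent sites
svi = SVI(model, guide, Adam({"lr": 0.03}), loss=Trace_ELBO())

for j in range(1000):
    svi.step(x_data, y_data)

print(pyro.param("auto_loc").shape)   # shape of the freshly initialized guide parameters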