GPyTorch: regression on targets and classification of gradients as negative or positive

I would like to set up the following model in GPyTorch: I have 4 inputs and I want to predict one output (regression). At the same time, I want to constrain the gradients of the output with respect to 3 of the inputs to be positive, and with respect to the remaining input to be negative.

However, I don't know how to set this problem up with multiple likelihoods. Up to now, I have been generating the gradients with torch.autograd.grad and adding/subtracting probit terms built from them as a penalty:

        # predictive distribution of each partial derivative; Normal expects a
        # standard deviation, hence the sqrt if grads_var holds variances
        dist1 = torch.distributions.normal.Normal(gradspred[:, 0], grads_var[:, 0].sqrt())
        dist2 = torch.distributions.normal.Normal(gradspred[:, 1], grads_var[:, 1].sqrt())
        dist3 = torch.distributions.normal.Normal(gradspred[:, 2], grads_var[:, 2].sqrt())
        dist4 = torch.distributions.normal.Normal(gradspred[:, 3], grads_var[:, 3].sqrt())

        # probability mass on the wrong side of zero; input 0 is the one
        # constrained to a negative gradient, inputs 1-3 to a positive one
        loss_1 = 1 - torch.mean(dist1.cdf(torch.tensor(0.0)))  # P(grad_0 > 0)
        loss_2 = torch.mean(dist2.cdf(torch.tensor(0.0)))      # P(grad_1 < 0)
        loss_3 = torch.mean(dist3.cdf(torch.tensor(0.0)))      # P(grad_2 < 0)
        loss_4 = torch.mean(dist4.cdf(torch.tensor(0.0)))      # P(grad_3 < 0)

        # negative marginal log likelihood plus a weighted constraint penalty
        loss = -torch.mean(self.mll(self.output, self.train_y)) \
            + 100 * (loss_1 + loss_2 + loss_3 + loss_4)
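
For reference, this is roughly how I generate gradspred, by differentiating the posterior mean with torch.autograd.grad (model here stands for my trained ExactGP in eval mode; note that during standard training model(train_x) returns the prior, so some care is needed to get posterior gradients inside the training loop). The variance of the derivative process would need the kernel's second derivatives, which I only found jointly in gpytorch.kernels.RBFKernelGrad, so grads_var is not shown here:

import torch
import gpytorch

model.eval()  # posterior mode, so the predictive mean actually depends on x
X = train_x.clone().requires_grad_(True)   # (N, 4) points where the constraints should hold

with gpytorch.settings.fast_pred_var():
    post = model(X)                        # MultivariateNormal posterior

# gradspred[i, j] = d mean(x_i) / d x_j; create_graph=True keeps the
# constraint penalty differentiable w.r.t. the GP hyperparameters
gradspred, = torch.autograd.grad(post.mean.sum(), X, create_graph=True)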

This penalty approach is not working correctly, though. Ultimately, I would like either to have two marginal log likelihoods, a Gaussian one and a Bernoulli one, or to recreate this paper:

Riihimäki & Vehtari, "Gaussian processes with monotonicity information" (AISTATS 2010): http://proceedings.mlr.press/v9/riihimaki10a/riihimaki10a.pdf
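
What I have in mind, roughly, is replacing the hand-rolled CDF terms with the paper's probit factors on virtual derivative observations, at least at the loss level. This is only a sketch of the objective, not the paper's EP inference; signs and nu are placeholders I would have to tune, and gradspred is the (N, 4) gradient tensor from above:

import torch

signs = torch.tensor([-1.0, 1.0, 1.0, 1.0])  # desired gradient sign per input
nu = 100.0                                   # probit steepness (placeholder)

# log Phi(nu * s_j * grad_ij) is close to 0 when the gradient has the desired
# sign and strongly negative otherwise -- a soft monotonicity log likelihood
log_phi = torch.special.log_ndtr(nu * signs * gradspred)
loss_mono = -log_phi.mean()

# Gaussian MLL on the targets plus the Bernoulli/probit derivative term
loss = -torch.mean(self.mll(self.output, self.train_y)) + loss_mono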

Note: I have found a similar implementation in GPy:

import numpy as np
import GPy

def fd(x):
    # derivative "observations" for the second output: -1 at every location
    return -np.ones((x.shape[0], 1))
def test_multioutput_model_with_ep():
    f = lambda x: np.sin(x)+0.1*(x-2.)**2-0.005*x**3
    N=10
    sigma=0.05
    x = np.array([np.linspace(1,10,N)]).T
    y = f(x) 
    print(y)
    M=15
    xd = x
    yd = fd(x)


    # squared exponential kernel:
    se = GPy.kern.RBF(input_dim = 1, lengthscale=1.5, variance=0.2)

    # We need a separate kernel for the derivative observations, created by wrapping the base kernel:
    se_der = GPy.kern.DiffKern(se, 0)

    # Separate likelihoods: Gaussian for the function values, scaled probit for the derivative observations
    gauss = GPy.likelihoods.Gaussian(variance=sigma**2)
    probit = GPy.likelihoods.Binomial(gp_link = GPy.likelihoods.link_functions.ScaledProbit(nu=100))


    inference = GPy.inference.latent_function_inference.expectation_propagation.EP(ep_mode = 'nested')
    m = GPy.models.MultioutputGP(X_list=[x, xd], Y_list=[y, yd], kernel_list=[se, se_der], likelihood_list = [gauss, probit], inference_method=inference)
    m.optimize(messages=0, ipython_notebook=False)

but this breaks for multi-dimensional inputs, because the EP inference is only implemented for 1D inputs. Any help would be more than welcome, and I don't care which library is used.

Best
