I'm trying to set up a simple denoising autoencoder with Matlab for 1D data. Since there is currently no specialised input layer for 1D data, the imageInputLayer() function has to be used:
function net = DenoisingAutoencoder(data)
[N, n] = size(data);
% set up the input: one [n 1 1] "image" per observation
X = zeros([n 1 1 N]);
for i = 1:n
    for j = 1:N
        X(i, 1, 1, j) = data(j, i);
    end
end
% noisy X: on average 1/10th of the elements are set to 0
Xnoisy = X;
mask1 = (mod(randi(10, size(X)), 7) ~= 0);
Xnoisy = Xnoisy .* mask1;
layers = [imageInputLayer([n 1 1]) fullyConnectedLayer(n) regressionLayer()];
opts = trainingOptions('sgdm');
net = trainNetwork(X, Xnoisy, layers, opts);
However, the code fails with this error message:
The output size [1 1 n] of the last layer doesn't match the response size [n 1 1].
Any thoughts on how the input / layers should be reconfigured? If the fullyConnectedLayer is left out, the code runs fine, but then I'm obviously left without the hidden layer.
It turns out the target output should be passed as a matrix, not a 4D tensor: for regression problems, trainNetwork also accepts the responses as an N-by-n matrix with one row per observation, which matches the [1 1 n] output of the fullyConnectedLayer.
Here's a working version of the previous code.
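A minimal sketch of that fix, assuming the same masking scheme as above. Two things change: the responses go in as a plain N-by-n matrix, and the arguments to trainNetwork are swapped (noisy input, clean target), since a denoising autoencoder should learn to map noisy data back to the clean signal.

function net = DenoisingAutoencoder(data)
[N, n] = size(data);
% input: one [n 1 1] "image" per observation
X = reshape(data', [n 1 1 N]);
% noisy input: on average 1/10th of the elements are zeroed out
mask1 = (mod(randi(10, size(X)), 7) ~= 0);
Xnoisy = X .* mask1;
layers = [imageInputLayer([n 1 1]) fullyConnectedLayer(n) regressionLayer()];
opts = trainingOptions('sgdm');
% responses as an N-by-n matrix: one row per observation, matching
% the [1 1 n] output of the fullyConnectedLayer
net = trainNetwork(Xnoisy, data, layers, opts);

Reconstructions for new noisy samples then come back as an N-by-n matrix from predict(net, XnoisyNew).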