What solver should I use to replace minFunc when porting MATLAB Sparse Filtering to F#?


I have the following Sparse Filtering MATLAB code that I would like to port to F#. I am aware of the F# Type Provider for MATLAB, but I can't use it here because it would create a runtime dependency on MATLAB (I can still use it for testing).

function [optW] = SparseFiltering(N, X);
   % N = # features to learn, X = input data (examples in column)
   % You should pre-process X by removing the DC component per example,
   % before calling this function.
   % e.g., X = bsxfun(@minus, X, mean(X));
   addpath minFunc/ % Add path to minFunc optimization package
   optW = randn(N, size(X, 1));
   optW = minFunc(@SparseFilteringObj, optW(:), struct('MaxIter', 100), X, N);
   optW = reshape(optW, [N, size(X, 1)]);
end

function [Obj, DeltaW] = SparseFilteringObj (W, X, N)
   % Reshape W into matrix form
   W = reshape(W, [N, size(X,1)]);
   % Feed Forward
   F = W*X; % Linear Activation
   Fs = sqrt(F.^2 + 1e-8); % Soft-Absolute Activation
   [NFs, L2Fs] = l2row(Fs); % Normalize by Rows
   [Fhat, L2Fn] = l2row(NFs'); % Normalize by Columns
   % Compute Objective Function
   Obj = sum(sum(Fhat, 2), 1);
   % Backprop through each feedforward step
   DeltaW = l2grad(NFs', Fhat, L2Fn, ones(size(Fhat)));
   DeltaW = l2grad(Fs, NFs, L2Fs, DeltaW');
   DeltaW = (DeltaW .* (F ./ Fs)) * X';
   DeltaW = DeltaW(:);
end

function [Y,N] = l2row(X) % L2 Normalize X by rows
   % We also use this to normalize by column with l2row(X')
   N = sqrt(sum(X.^2,2) + 1e-8);
   Y = bsxfun(@rdivide,X,N);
end

function [G] = l2grad(X,Y,N,D) % Backpropagate through Normalization
   G = bsxfun(@rdivide, D, N) - bsxfun(@times, Y, sum(D.*X, 2) ./ (N.^2));
end

I understand most of the MATLAB code, but I'm not sure what the .NET equivalent of minFunc is. I believe I want one of the Microsoft.SolverFoundation.Solvers. According to minFunc's documentation:

...the default parameters of minFunc call a quasi-Newton strategy, where limited-memory BFGS updates with Shanno-Phua scaling are used in computing the step direction, and a bracketing line-search for a point satisfying the strong Wolfe conditions is used to compute the step length. In the line search, (safeguarded) cubic interpolation is used to generate trial values, and the method switches to an Armijo back-tracking line search on iterations where the objective function enters a region where the parameters do not produce a real-valued output...

Given the above information, can anyone confirm that the Microsoft.SolverFoundation.Solvers.CompactQuasiNewtonModel is the right way to go?

Also, are there any other obvious "gotchas" when porting the above code to F#? (new to this type of port)
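For reference while validating a port, here is the whole pipeline sketched in Python using SciPy's L-BFGS-B as a stand-in for minFunc (both are limited-memory quasi-Newton methods, so they should converge to comparable objective values). This is a sketch I wrote for cross-checking, not part of the original code; the function names are my own:

```python
import numpy as np
from scipy.optimize import minimize

EPS = 1e-8

def l2row(X):
    # L2-normalize X by rows; used for columns via l2row(X.T)
    N = np.sqrt((X ** 2).sum(axis=1, keepdims=True) + EPS)
    return X / N, N

def l2grad(X, Y, N, D):
    # Backpropagate through the row normalization
    return D / N - Y * ((D * X).sum(axis=1, keepdims=True) / N ** 2)

def sparse_filtering_obj(w, X, n_features):
    W = w.reshape(n_features, X.shape[0])
    F = W @ X                        # linear activation
    Fs = np.sqrt(F ** 2 + EPS)       # soft-absolute activation
    NFs, L2Fs = l2row(Fs)            # normalize by rows
    Fhat, L2Fn = l2row(NFs.T)        # normalize by columns
    obj = Fhat.sum()
    # backprop through each feedforward step
    dW = l2grad(NFs.T, Fhat, L2Fn, np.ones_like(Fhat))
    dW = l2grad(Fs, NFs, L2Fs, dW.T)
    dW = (dW * (F / Fs)) @ X.T
    return obj, dW.ravel()

def sparse_filtering(n_features, X, max_iter=100):
    # X: input data, one example per column, DC component removed
    w0 = np.random.randn(n_features * X.shape[0])
    res = minimize(sparse_filtering_obj, w0, args=(X, n_features),
                   jac=True, method="L-BFGS-B",
                   options={"maxiter": max_iter})
    return res.x.reshape(n_features, X.shape[0])
```

Running this and the F# port on the same small input matrix gives a cheap way to catch transcription errors in the gradient, which are otherwise easy to miss.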

1 Answer

pad (accepted answer):

I think CompactQuasiNewtonSolver is your best bet. If you take a look at Solver Foundation's samples, there is a CQN sample which demonstrates how to implement and solve the Rosenbrock function. Its results are in line with what I see in minFunc's Rosenbrock example. The sample is in C#, but it should be easy to translate to F#.
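For comparison, here is what that Rosenbrock problem looks like with any off-the-shelf L-BFGS implementation (sketched in Python with SciPy, since that makes a quick cross-check outside .NET):

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(z):
    # Classic Rosenbrock function and its analytic gradient,
    # the same test problem used in the minFunc and CQN examples
    x, y = z
    f = (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2
    g = np.array([
        -2.0 * (1.0 - x) - 400.0 * x * (y - x ** 2),
        200.0 * (y - x ** 2),
    ])
    return f, g

# jac=True tells SciPy the function returns (objective, gradient),
# mirroring minFunc's [Obj, DeltaW] convention
res = minimize(rosenbrock, x0=np.zeros(2), jac=True, method="L-BFGS-B")
# the minimum is at (1, 1)
```

Whichever .NET solver you pick, reproducing this result on Rosenbrock is a cheap smoke test before wiring it up to the sparse filtering objective.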

Also, are there any other obvious "gotchas" when porting the above code to F#? (new to this type of port)

You probably need a good linear algebra package to get close to the MATLAB code. Math.NET Numerics seems to be an ideal choice since it has good F# support.

Alternatively, you can reimplement the code using the Accord.NET framework. The framework has an implementation of the L-BFGS algorithm for optimization in machine learning, so it is closer to your purpose.