I am trying to define a few well-known kernels, such as RBF, hyperbolic tangent and Fourier, for the `svm.SVR` method in the `sklearn` library. I started with `rbf` (I know there's a built-in RBF kernel in svm, but I need to define my own to be able to customize it later), found some useful links here, and chose this one:

```
import numpy as np
from sklearn.svm import SVR

def my_kernel(X, Y):
    # Gram matrix: K[i, j] = exp(-||x_i - y_j||^2)
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            K[i, j] = np.exp(-1 * np.linalg.norm(x - y) ** 2)
    return K

clf = SVR(kernel=my_kernel)
```
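As a sanity check (this verification snippet is mine, not from the question), the loop above reproduces scikit-learn's built-in `rbf_kernel` with `gamma=1`, since both compute `exp(-gamma * ||x - y||^2)`:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def my_kernel(X, Y):
    # Same pairwise loop as above
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            K[i, j] = np.exp(-1 * np.linalg.norm(x - y) ** 2)
    return K

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
Y = rng.standard_normal((3, 4))

# Loop version agrees with the built-in RBF kernel at gamma=1
print(np.allclose(my_kernel(X, Y), rbf_kernel(X, Y, gamma=1.0)))
```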

I used this one because it works for my train data (shape [3850, 4]) and test data (shape [1200, 4]), which have different shapes. The problem is that it is far too slow, and I have to wait a long time for results. I even used static typing and memoryviews in Cython, but its performance is still not as good as the default `svm` RBF kernel. I also found this link about the same problem, but working with `numpy.einsum` and `numexpr.evaluate` is a little confusing for me. It turned out that this was the best code in terms of speed:

```
import numpy as np
import numexpr as ne
from scipy.linalg.blas import sgemm

def app2(X, gamma, var):
    X_norm = -gamma * np.einsum('ij,ij->i', X, X)
    return ne.evaluate('v * exp(A + B + C)', {
        'A': X_norm[:, None],
        'B': X_norm[None, :],
        'C': sgemm(alpha=2.0 * gamma, a=X, b=X, trans_b=True),
        'g': gamma,
        'v': var
    })
```

This code only works for one input (`X`), and I couldn't find a way to adapt it to my case (two inputs with two different sizes: the kernel function receives matrices of shapes (m, n) and (l, n) and must output (m, l), according to the svm docs). I guess I only need to replace the `K[i,j] = np.exp(-1*np.linalg.norm(x-y)**2)` line from the first snippet with the approach of the second one to speed it up. Any help would be appreciated.
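For what it's worth, the `||x||^2 + ||y||^2 - 2*x.y` trick behind `app2` generalizes directly to two inputs. A sketch (the function name and the plain-NumPy formulation are mine; `gamma=1` matches the `-1` factor in the loop version):

```python
import numpy as np

def rbf_kernel_two(X, Y, gamma=1.0):
    """Vectorized RBF Gram matrix for X of shape (m, n) and Y of shape (l, n)."""
    # Row-wise squared norms via einsum, as in app2
    X_norm = np.einsum('ij,ij->i', X, X)   # shape (m,)
    Y_norm = np.einsum('ij,ij->i', Y, Y)   # shape (l,)
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, for all pairs at once
    sq_dists = X_norm[:, None] + Y_norm[None, :] - 2.0 * (X @ Y.T)
    np.maximum(sq_dists, 0, out=sq_dists)  # clip tiny negatives from round-off
    return np.exp(-gamma * sq_dists)
```

Because it accepts (m, n) and (l, n) and returns (m, l), this can be passed directly as `SVR(kernel=rbf_kernel_two)`; the inner product `X @ Y.T` could equally be computed with `sgemm` as in `app2`.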

## Three possible variants

Variants 1 and 3 make use of

as described here or here. But for special cases, like a small second dimension, Variant 2 is also fine.

## Timings