Spectral clustering with a similarity matrix constructed from the Jaccard coefficient


I have a categorical dataset on which I am performing spectral clustering, but I do not get very good output. I choose the eigenvectors corresponding to the largest eigenvalues as my centroids for k-means.

Please find below the process I follow:

1. Create a symmetric similarity matrix (m×m) using the Jaccard coefficient.
   For example, for a data set,
   a,b,c,d
   a,b,x,y
   The similarity matrix I compute would look like :
   |1       0.33|
   |0.33     1  |
2. Compute the first k eigenvectors corresponding to the largest eigenvalues, where k is the number of clusters.
3. Normalize the symmetric similarity matrix.
4. Perform the clustering on the normalized similarity matrix, using the eigenvectors as initial centroids for k-means.
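The steps above can be sketched in plain NumPy (a toy data set, not the asker's; the symmetric normalization shown is one common choice and an assumption on my part):

```python
import numpy as np

# Toy categorical rows, treated as sets of attribute values
rows = [{"a", "b", "c", "d"}, {"a", "b", "x", "y"}, {"a", "b", "c", "x"}]

def jaccard(s, t):
    """Jaccard similarity: |intersection| / |union|."""
    return len(s & t) / len(s | t)

# Step 1: symmetric m x m similarity matrix
m = len(rows)
S = np.array([[jaccard(rows[i], rows[j]) for j in range(m)] for i in range(m)])

# Step 3: symmetric normalization D^{-1/2} S D^{-1/2}
d = S.sum(axis=1)
L = S / np.sqrt(np.outer(d, d))

# Step 2: eigenvectors for the k largest eigenvalues
# (eigh returns eigenvalues in ascending order, so take the last k columns)
k = 2
w, v = np.linalg.eigh(L)
embedding = v[:, -k:]
```

Note that in the standard spectral clustering algorithm the rows of `embedding` are the points that get fed to k-means, rather than the eigenvectors serving as initial centroids; that difference may be part of the problem described here.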

My questions are :

Is computing a Jaccard similarity matrix the right choice for spectral clustering?

Is it right to select the eigenvectors as cluster centroids for spectral clustering? I don't see other options for a categorical dataset.

Is there anything wrong with the procedure I follow?

There is 1 answer

Answer by Has QUIT--Anony-Mousse (best answer):

As far as I can tell, you have mixed and shuffled a number of approaches. No wonder it doesn't work...

  1. you could simply use Jaccard distance (a simple inversion of Jaccard similarity) + hierarchical clustering
  2. you could do MDS to project your data, then k-means (probably what you are trying to do)
  3. affinity propagation etc. are worth a try
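Option 1 can be tried directly with SciPy, whose `pdist` supports a `jaccard` metric on boolean vectors (a sketch under that assumption, not the answerer's code; the toy rows are one-hot encoded first):

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist

# One-hot encode categorical rows over the full vocabulary of values
rows = [{"a", "b", "c", "d"}, {"a", "b", "x", "y"}, {"a", "b", "c", "x"}]
vocab = sorted(set().union(*rows))
X = np.array([[item in r for item in vocab] for r in rows], dtype=bool)

# Condensed Jaccard distance matrix: 1 - Jaccard similarity per pair
D = pdist(X, metric="jaccard")

# Average-linkage hierarchical clustering, cut into at most 2 clusters
Z = linkage(D, method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

This avoids the eigenvector machinery entirely: the Jaccard distances feed the dendrogram directly, and `fcluster` extracts flat cluster labels.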