Why is the evaluation of Mahout recommender systems with the MovieLens dataset so slow?


I have written a simple user-user recommender and evaluation code in Mahout.

The recommender works fine, but as soon as I add the evaluation part, it takes forever to get a result on the MovieLens 1M dataset in Eclipse.

Is this normal? How long should it take? The evaluation works fine on the MovieLens 100K dataset; there I get the evaluation result (0.923...) after a couple of seconds.

Here is my code:

import java.io.File;

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.impl.eval.AverageAbsoluteDifferenceRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.model.file.FileDataModel;
import org.apache.mahout.cf.taste.impl.neighborhood.NearestNUserNeighborhood;
import org.apache.mahout.cf.taste.impl.recommender.GenericUserBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.PearsonCorrelationSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.neighborhood.UserNeighborhood;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.UserSimilarity;

public class RecommenderEvaluator {

    public static void main(String[] args) throws Exception {

        //RandomUtils.useTestSeed();
        DataModel model = new FileDataModel(new File("data/movies1m.csv"));
        AverageAbsoluteDifferenceRecommenderEvaluator evaluator = new AverageAbsoluteDifferenceRecommenderEvaluator();

        RecommenderBuilder builder = new RecommenderBuilder() {
            @Override
            public Recommender buildRecommender(DataModel model) throws TasteException {
                // User-user CF: Pearson correlation, 2 nearest neighbors
                UserSimilarity similarity = new PearsonCorrelationSimilarity(model);
                UserNeighborhood neighborhood = new NearestNUserNeighborhood(2, similarity, model);
                return new GenericUserBasedRecommender(model, neighborhood, similarity);
            }
        };
        // Train on 90% of each user's ratings, evaluate on 100% of users
        double score = evaluator.evaluate(builder, null, model, 0.9, 1.0);
        System.out.println(score);
    }
}

1 answer

Dan Jarratt:

You're using a user-user collaborative filtering algorithm. User-user CF compares every user to every other user and stores the similarity values, so that later you can choose the N nearest neighbors and use their ratings for prediction or recommendation. That is why the jump from MovieLens 100K to MovieLens 1M hurts so much: the number of pairwise user comparisons grows quadratically with the number of users (roughly 940 users in the 100K set versus about 6,000 in the 1M set), so an evaluation that finishes in seconds on 100K can take a very long time on 1M. When users change ratings, you have to recompute the entire model, because potentially many neighborhoods will change. A big benefit of user-user CF is that there is visibility into whose ratings make up a particular prediction, and you can potentially show that to users as part of a recommendation explanation. However, its computational cost led most practitioners to move to item-item collaborative filtering or matrix factorization (e.g., SVD) a while ago.
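A practical note while you experiment: the last argument to evaluate() controls what fraction of users is actually evaluated, so you can get a quicker (but noisier) estimate on the 1M data without waiting for every user. A minimal sketch, assuming the same evaluator, builder, and model variables as in the question's code:

// Same call as in the question, but only a random 10% of users (fifth
// argument) are evaluated instead of 100%. The MAE estimate is noisier,
// but the run finishes roughly ten times faster.
double sampledScore = evaluator.evaluate(builder, null, model, 0.9, 0.1);
System.out.println("MAE on a 10% sample of users: " + sampledScore);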

Item-item collaborative filtering is best when you have many more users than items. Here you have to compute the similarity of all items to all other items, but since there are many more users than items, the rating distribution of an item tends to change slowly (unless the item is new in the system), so you don't have to recompute the model as often.
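If you want to try that in Mahout, here is a minimal sketch of an item-item RecommenderBuilder you could pass to the same evaluator. It assumes the standard Taste classes GenericItemBasedRecommender and LogLikelihoodSimilarity and is meant to drop into the question's class, with the imports going at the top of the file:

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.impl.recommender.GenericItemBasedRecommender;
import org.apache.mahout.cf.taste.impl.similarity.LogLikelihoodSimilarity;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.Recommender;
import org.apache.mahout.cf.taste.similarity.ItemSimilarity;

// Item-item builder: similarities are computed between pairs of items
// rather than pairs of users, and no neighborhood object is needed.
RecommenderBuilder itemItemBuilder = new RecommenderBuilder() {
    @Override
    public Recommender buildRecommender(DataModel model) throws TasteException {
        ItemSimilarity similarity = new LogLikelihoodSimilarity(model);
        return new GenericItemBasedRecommender(model, similarity);
    }
};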

Try different algorithms and measure the build and test times for each of them; a small timing harness is sketched below.
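For the matrix-factorization route mentioned above, Mahout's Taste API offers SVDRecommender combined with a Factorizer such as ALSWRFactorizer. The sketch below shows such a builder plus an illustrative helper for timing each algorithm; the helper name timeEvaluation is my own, not a Mahout API, and the feature count, lambda, and iteration count are placeholders to tune:

import org.apache.mahout.cf.taste.common.TasteException;
import org.apache.mahout.cf.taste.eval.RecommenderBuilder;
import org.apache.mahout.cf.taste.impl.eval.AverageAbsoluteDifferenceRecommenderEvaluator;
import org.apache.mahout.cf.taste.impl.recommender.svd.ALSWRFactorizer;
import org.apache.mahout.cf.taste.impl.recommender.svd.SVDRecommender;
import org.apache.mahout.cf.taste.model.DataModel;
import org.apache.mahout.cf.taste.recommender.Recommender;

// Matrix factorization via ALS: 10 features, lambda 0.05, and 10 iterations
// are placeholder values to tune, not recommendations.
static RecommenderBuilder svdBuilder = new RecommenderBuilder() {
    @Override
    public Recommender buildRecommender(DataModel model) throws TasteException {
        return new SVDRecommender(model, new ALSWRFactorizer(model, 10, 0.05, 10));
    }
};

// Illustrative helper: runs the same evaluation for any builder and reports
// the MAE together with the wall-clock time it took.
static void timeEvaluation(String name, RecommenderBuilder builder, DataModel model)
        throws TasteException {
    AverageAbsoluteDifferenceRecommenderEvaluator evaluator =
            new AverageAbsoluteDifferenceRecommenderEvaluator();
    long start = System.currentTimeMillis();
    // Train on 90% of each evaluated user's ratings; evaluate 10% of users to
    // keep the comparison quick. Raise the last argument for a fuller test.
    double mae = evaluator.evaluate(builder, null, model, 0.9, 0.1);
    long elapsed = System.currentTimeMillis() - start;
    System.out.println(name + ": MAE = " + mae + " in " + (elapsed / 1000.0) + " s");
}

// Example usage with the builders above:
// timeEvaluation("user-user", builder, model);
// timeEvaluation("item-item", itemItemBuilder, model);
// timeEvaluation("SVD/ALS", svdBuilder, model);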