I am working on the MovieLens 100K dataset for a recommender system. I split the data into training and test sets and compute precision and recall. The test set contains more than 10K users chosen at random, and I am able to compute precision and recall for each individual user.
What I'd like to know is: is there any practical value in the aggregated precision and recall?
You will see precision/recall results reported in academic papers as an aggregate, rather than as 10,000 separate P/R results. In that respect the aggregate gives the reader a general sense of overall RS performance. Typically you will see precision/recall represented as a curve (as seen here: http://www.cs.washington.edu/ai/mln/images/image001.png). You tend to see that where recall = 1, precision is low, and where precision = 1, recall is low. You can easily create one of these curves in Excel or Google Sheets from your 10,000 results.
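A common way to produce that single aggregate number is macro-averaging: take the mean of the per-user precision and recall values. Here is a minimal sketch, assuming you already have your per-user (precision, recall) pairs in a list (the `per_user` values below are made-up placeholders, not real MovieLens results):

```python
# Hypothetical per-user (precision, recall) pairs from your test set.
per_user = [
    (0.8, 0.4),   # user 1
    (0.5, 0.7),   # user 2
    (1.0, 0.2),   # user 3
]

# Macro-average: every user contributes equally, regardless of how
# many test ratings they have.
macro_precision = sum(p for p, _ in per_user) / len(per_user)
macro_recall = sum(r for _, r in per_user) / len(per_user)

print(macro_precision, macro_recall)
```

Over your 10,000 users this gives the two aggregate numbers a paper would report; sweeping the recommendation-list length k and recomputing them gives you the points of the P/R curve.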
As mentioned in the comments, the F-measure is a way to combine P/R into a single mean value, although you need to be aware of its limitations before you go "boasting" about it. It is not uncommon to justify weighting either precision or recall more heavily depending on your application domain, so just be aware that the basic F-measure (F1) is balanced: precision and recall are treated as equally important.
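The weighted variant is usually written as the F-beta score, where beta controls that balance. A small sketch of the standard formula (the function name is my own):

```python
def f_beta(precision, recall, beta=1.0):
    """Weighted harmonic mean of precision and recall.

    beta = 1 gives the balanced F1 score;
    beta > 1 weights recall more heavily;
    beta < 1 weights precision more heavily.
    """
    if precision == 0.0 and recall == 0.0:
        return 0.0  # avoid division by zero when both are zero
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

print(f_beta(0.8, 0.4))          # balanced F1
print(f_beta(0.8, 0.4, beta=2))  # recall-weighted F2
```

Note how the recall-weighted F2 score is pulled down toward the weaker recall value, which is exactly the domain-dependent trade-off mentioned above.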
The Receiver Operating Characteristic (https://en.wikipedia.org/wiki/Receiver_operating_characteristic) is also commonly used alongside P/R curves and the F-measure in recommender-system evaluation, usually summarized by the area under the ROC curve (AUC). If you are looking for extra credit then I would recommend evaluating RS performance with several of these at once: the P/R curve, F-measure, and ROC with its AUC.