The relevance model estimates relevance feedback using only the feedback documents. In that case, the relevance model would seem to have a higher probability of picking common words as its feedback terms. I therefore assumed the relevance model's performance wouldn't be as good as that of the other two models. However, I found that all of those models perform quite well. What would be the reason for that?
"In contrast, the relevance model just estimates the relevance feedback based on feedback documents. In this case, the relevance model would have a higher probability of getting common words as its feedbacks"
That's a common perception, but it isn't necessarily true. To be more specific, recall that the estimation equation of the relevance model looks like:
P(w|R) = \sum_{D \in \text{Top-}K} P(w|D) \prod_{q \in Q} P(q|D)
which in simple English means that, to compute the weight of a term `w` in the set of top-K docs, you iterate over each document in the top-K and multiply `P(w|D)` by the similarity score of Q with D (this is the factor `\prod_{q \in Q} P(q|D)`). Now, the `idf` factor is hidden inside the expression `P(w|D)`.
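To make the estimation concrete, here is a minimal sketch of that computation. The function name and inputs are my own for illustration: `feedback_docs` holds the top-K documents as token lists, and `query_likelihoods` holds each document's query-likelihood score `\prod_{q \in Q} P(q|D)` (assumed precomputed by the first-pass retrieval).

```python
from collections import Counter

def relevance_model(feedback_docs, query_likelihoods):
    """Estimate P(w|R) from the top-K feedback documents (RM1-style sketch).

    feedback_docs: list of token lists, the top-K retrieved documents.
    query_likelihoods: one score per document, the query likelihood
    prod_{q in Q} P(q|D) of that document.
    """
    p_w_r = Counter()
    for doc, q_score in zip(feedback_docs, query_likelihoods):
        counts = Counter(doc)
        length = len(doc)
        for w, c in counts.items():
            # P(w|D) * prod_q P(q|D): terms from documents that match
            # the query well contribute more mass to P(w|R).
            p_w_r[w] += (c / length) * q_score
    # Normalize so the term weights form a distribution.
    total = sum(p_w_r.values())
    return {w: v / total for w, v in p_w_r.items()}
```

Note how a term occurring in a document with a high query-likelihood score ends up with far more weight than an equally frequent term in a poorly matching document.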
Following the standard language model paradigm (Jelinek-Mercer or Dirichlet), this isn't just a simple maximum-likelihood estimate but rather a collection-smoothed version; e.g., for Jelinek-Mercer, this is:
P(w|D) = log(1 + lambda/(1-lambda) * count(w,D)/length(D) * collection_size/cf(w))
which is nothing but a linear-combination-based generalization of tf*idf; the second component, `collection_size/cf(w)`, specifically denotes inverse collection frequency. So this expression of `P(w|D)` ensures that terms with higher idf values tend to get higher weights in the relevance model estimation. In addition to having high idf weights, such terms must also co-occur strongly with the query terms, because `P(w|D)` is multiplied by `\prod_{q \in Q} P(q|D)`.
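The idf effect of that smoothed weight is easy to see numerically. Below is a sketch of the Jelinek-Mercer weight above; the function name and the choice of `lam = 0.5` are my own assumptions for illustration.

```python
import math

def jm_weight(count_wd, doc_len, cf_w, collection_size, lam=0.5):
    """Jelinek-Mercer smoothed term weight (sketch).

    Implements log(1 + lam/(1-lam) * tf/doclen * N/cf(w)).
    The factor collection_size/cf_w is the inverse collection
    frequency, so rare (high-idf) terms get larger weights than
    common terms with the same in-document frequency.
    """
    return math.log(1 + (lam / (1 - lam))
                    * (count_wd / doc_len)
                    * (collection_size / cf_w))
```

For example, with the same term frequency in the same document, a term seen 5 times in a 10,000-term collection scores much higher than one seen 2,000 times, which is why common words do not dominate the relevance model in practice.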