I would like to adapt this TextRank code to extract keywords from my text, with values normalized between 0 and 1. Here is a short snippet:
# Parse text with spaCy
doc = nlp(text)
# Filter sentences
sentences = self.sentence_segment(doc, candidate_pos, lower) # list of list of words
# Build vocabulary
vocab = self.get_vocab(sentences)
# Get token_pairs from windows
token_pairs = self.get_token_pairs(window_size, sentences)
# Get normalized matrix
g = self.get_matrix(vocab, token_pairs)
# Initialization for weight (PageRank value)
pr = np.array([1] * len(vocab))
# Iteration
previous_pr = 0
for epoch in range(self.steps):
    pr = (1-self.d) + self.d * np.dot(g, pr)
    if abs(previous_pr - sum(pr)) < self.min_diff:
        break
    else:
        previous_pr = sum(pr)
# Get weight for each node
node_weight = dict()
for word, index in vocab.items():
    node_weight[word] = pr[index]
self.node_weight = node_weight
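For context, the snippet runs inside a class method; this is roughly the setup around it (a minimal sketch on my side — the class name, the spaCy model name, and the hyperparameter values are placeholders, not necessarily what I use):

import numpy as np
import spacy

nlp = spacy.load("en_core_web_sm")   # assuming the small English model

class TextRank4Keyword:             # hypothetical class name
    def __init__(self):
        self.d = 0.85                # damping factor (placeholder)
        self.min_diff = 1e-5         # convergence threshold (placeholder)
        self.steps = 10              # max iterations (placeholder)
        self.node_weight = None      # filled in by the snippet above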
The output looks something like this:
# Output
# science - 1.717603106506989
# fiction - 1.6952610926181002
# filmmaking - 1.4388798751402918
# China - 1.4259793786986021
# Earth - 1.3088154732297723
# tone - 1.1145002295684114
# Chinese - 1.0996896235078055
# Wandering - 1.0071059904601571
# weekend - 1.002449354657688
# America - 0.9976329264870932
# budget - 0.9857269586649321
# North - 0.9711240881032547
I would like to normalize the TextRank values between 0 and 1, so that there is a well-defined maximum. On Wikipedia I found these two formulas for PageRank:

PR(p_i) = (1 - d) + d * Σ_{p_j ∈ M(p_i)} PR(p_j) / L(p_j)

PR(p_i) = (1 - d) / N + d * Σ_{p_j ∈ M(p_i)} PR(p_j) / L(p_j)

where M(p_i) is the set of nodes linking to p_i, L(p_j) is the number of outbound links of p_j, and N is the total number of nodes.
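To check my understanding of the second formula, here is a toy sketch (the matrix values are made up; it assumes g is column-stochastic, i.e. every column sums to 1):

import numpy as np

# Toy 3-node graph; every column sums to 1, as the (1 - d) / N variant assumes
g = np.array([[0.0, 0.5, 1.0],
              [0.5, 0.0, 0.0],
              [0.5, 0.5, 0.0]])
N = g.shape[0]
d = 0.85
pr = np.full(N, 1.0 / N)  # start from a probability distribution
for _ in range(50):
    pr = (1 - d) / N + d * np.dot(g, pr)
print(pr, pr.sum())       # the sum stays at 1, so no entry can exceed 1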
But if I replace (1-self.d) with (1-self.d)/g.shape[0] in the update, following the second formula:

pr = (1-self.d)/g.shape[0] + self.d * np.dot(g, pr)

I still get some values higher than 1. What's the mistake?
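As a fallback I know I could rescale after the fact, e.g. divide every weight by the maximum (a minimal sketch over the node_weight dict built in the snippet above):

# Fallback sketch: force the final weights into [0, 1] by dividing by the max
max_w = max(node_weight.values())
normalized = {word: w / max_w for word, w in node_weight.items()}

but I would prefer to understand why the damped formula itself does not stay within [0, 1].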