In deep learning, particularly in NLP, words are transformed into a vector representation before being fed into a neural network such as an RNN. In the Word Embeddings section of the linked article, it is said that:
A word embedding W: words → R^n is a parameterized function mapping words in some language to high-dimensional vectors (perhaps 200 to 500 dimensions)
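To make my question concrete, here is a minimal sketch of how I currently picture W, assuming PyTorch (vocab_size and embedding_dim are just placeholder values I chose):

```python
import torch
import torch.nn as nn

# The embedding W is a parameterized lookup table of shape
# (vocab_size, n), where n is the dimension in question.
vocab_size = 10_000   # hypothetical vocabulary size
embedding_dim = 200   # the "n" in W: words -> R^n

embedding = nn.Embedding(num_embeddings=vocab_size, embedding_dim=embedding_dim)

# Three word indices (e.g. from a tokenizer) map to three 200-dim vectors.
word_ids = torch.tensor([5, 42, 999])
vectors = embedding(word_ids)
print(vectors.shape)  # torch.Size([3, 200])
```

So as far as I can tell, the dimension is simply a hyperparameter I am free to set.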
I do not understand the purpose of the dimension of the vectors. What does it mean to have a 200-dimensional vector compared to a 20-dimensional one? Does a higher dimension improve the overall accuracy of the model? Could anyone give a simple example regarding the choice of the embedding dimension?