I have a Python dictionary of two-dimensional points, like points = {1: [0, 1], 2: [0, 0], 3: [0, -1]}.
On each iteration I choose a random point from the dictionary and change its coordinates (one random-walk step) with the following function:
import random

def changecoor(X):
    # Shift each coordinate by -1 or +1, chosen at random
    return [X[0] + random.choice((-1, 1)), X[1] + random.choice((-1, 1))]
Every five iterations I add a new point to the dictionary. The code is as follows:
points = {1: [0, 0]}  # create the first element
for i in range(2000):
    mychoice = random.choice(list(points.keys()))  # choose a random element from the dict
    points[mychoice] = changecoor(points[mychoice])  # apply the function to the chosen element
    if i % 5 == 0:
        points[len(points) + 1] = [0, 0]  # add a new element every 5 loops
Now, if I plot the points on a graph, I expect [0, 0] to be somewhere in the middle, but it isn't. The second coordinate is evenly distributed around 0, yet not a single point has a negative first coordinate.
This is an example of what I get:

(plot omitted; every point lies in the x ≥ 0 half-plane)
Your code is correct, but the way you are plotting the results is not.
Using your code
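As a sanity check, here is a self-contained rerun of the question's loop (with a fixed seed for reproducibility, which is my addition); it confirms that negative first coordinates do occur:

```python
import random

def changecoor(X):
    # One random-walk step: shift each coordinate by -1 or +1
    return [X[0] + random.choice((-1, 1)), X[1] + random.choice((-1, 1))]

random.seed(0)  # fixed seed so the run is reproducible
points = {1: [0, 0]}
for i in range(2000):
    mychoice = random.choice(list(points.keys()))
    points[mychoice] = changecoor(points[mychoice])
    if i % 5 == 0:
        points[len(points) + 1] = [0, 0]

xs = [p[0] for p in points.values()]
print(min(xs), max(xs))  # both signs appear among the 401 points
```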
If we take just the values() of the points dictionary and put them into a pandas DataFrame, the dict keys drop out and we get one row of coordinates per point.
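A minimal sketch of that step, assuming pandas is available; the column names x and y are my labels, and the three sample points are the ones from the question:

```python
import pandas as pd

points = {1: [0, 1], 2: [0, 0], 3: [0, -1]}  # stand-in for the full dict

# One row per point, built from the coordinate lists; the dict keys are ignored
df = pd.DataFrame(points.values(), columns=["x", "y"])
print(df)
```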
Then, if we do a scatter plot of the x, y values for each point, we get the distribution around 0 that you were expecting.
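For instance, a sketch using pandas' DataFrame.plot.scatter helper (the Agg backend line is only there so the script also runs headless; replace plt.savefig with plt.show() to view the figure interactively):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; remove to open a window
import matplotlib.pyplot as plt
import pandas as pd

points = {1: [0, 1], 2: [0, 0], 3: [0, -1]}  # stand-in for the full dict

df = pd.DataFrame(points.values(), columns=["x", "y"])
ax = df.plot.scatter(x="x", y="y")  # one marker per point
plt.savefig("points.png")
```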