I have a file with 44,586 lines of data. It is read in using pylab:
data = pl.loadtxt("20100101.txt")
density = data[:,0]
I need to run something like...
densities = np.random.normal(density, 30, 1)
np.savetxt('1.txt', np.vstack((densities.ravel())).T)
...and create a new file named 1.txt which has all 44,586 lines of my data randomised within the parameters I desire. Will my above commands be sufficient to read through and perform what I want on every line of data?
The more complicated part is that I want to run this 1,000 times and produce 1,000 .txt files (1.txt, 2.txt ... 1000.txt), each generated by the exact same command.
I get stuck when trying to run loops in scripts, as I am still very inexperienced. I am having trouble even beginning to get this running the way I want, and I am also confused about how to save the files under different names. I have used np.savetxt in the past, but I don't know how to make it perform this task.
Thanks for any help!
There are two minor issues. The first is how to construct the file names, which can be solved using Python's support for string formatting (or concatenation). The second relates to np.random.normal: when loc is an array rather than a scalar, passing size=1 raises an error, because size must match the shape of loc. The simplest fix is to omit size entirely, so that one sample is drawn for each element of the array.
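Putting both fixes together, a minimal sketch of the full loop might look like the following (a small synthetic array stands in for the real 44,586-line file here, and the loop count is the 1,000 from the question):

```python
import numpy as np

# With the real data this would be:
#   data = np.loadtxt("20100101.txt")
# A small synthetic column stands in for it here.
data = np.arange(10, dtype=float).reshape(-1, 1)
density = data[:, 0]

for i in range(1, 1001):
    # Omitting the size argument draws one sample per element of
    # density, so densities has the same shape as density.
    densities = np.random.normal(density, 30)
    # Build each file name with string formatting: "1.txt", "2.txt", ...
    np.savetxt("%d.txt" % i, densities)
```

Because np.random.normal broadcasts over the array passed as loc, no explicit per-line loop is needed inside each iteration; the only loop is over the 1,000 output files.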