I am building a data simulation framework with a NumPy-backed ORM, where it is much more convenient to work with classes and objects than with NumPy arrays directly. Nevertheless, the output of the simulation should be a NumPy array. bcolz also looks quite interesting as a backend here.
I would like to map all object attributes to NumPy arrays, so that the arrays act as column-oriented "persistent" storage for my classes. I also need to attach "new" attributes to the objects, which I can compute with the NumPy/pandas framework, and then link them back to the objects through the same back-end.
Is there an existing solution for this approach? Would you recommend a way to build it that scales to HPC workloads? So far I have only found django-pandas; PyTables is quite slow at adding new columns/attributes.
Something like this (working through a reference to np_array):
class Instance:
    def __init__(self, np_array, np_position):
        self.np_array = np_array          # shared backing array
        self.np_position = np_position    # this object's row in the array

    def get_test_property(self):
        return self.np_array[self.np_position]

    def set_test_property(self, value):
        self.np_array[self.np_position] = value
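As a quick usage sketch (the backing array and the values here are just for illustration), writes through such an object land directly in the shared array:

import numpy as np

backing = np.zeros(10)            # shared column storage
inst = Instance(backing, np_position=3)
inst.set_test_property(42.0)      # writes into backing[3]
print(backing[3])                 # 42.0 -- the array was changed in place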
In fact, there is a way to change NumPy or bcolz arrays by reference. A simple example can be found in the code below.
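Here is a minimal, self-contained sketch of that idea (ParticleStore and Particle are hypothetical names, not an existing library): each attribute lives in its own NumPy column, and a lightweight proxy object reads and writes those columns in place, so newly computed NumPy/pandas results can be linked back as additional columns.

import numpy as np

class ParticleStore:
    """Column-oriented storage: one NumPy array per attribute."""
    def __init__(self, n):
        self.columns = {"x": np.zeros(n), "v": np.zeros(n)}

    def add_column(self, name, values):
        # link a newly computed NumPy/pandas result back to the objects
        self.columns[name] = np.asarray(values)

    def __getitem__(self, i):
        return Particle(self, i)


class Particle:
    """Lightweight proxy: attribute access goes straight to the columns."""
    __slots__ = ("_store", "_i")

    def __init__(self, store, i):
        object.__setattr__(self, "_store", store)
        object.__setattr__(self, "_i", i)

    def __getattr__(self, name):
        # attribute reads come straight from the backing column
        return self._store.columns[name][self._i]

    def __setattr__(self, name, value):
        # attribute writes go straight into the backing column
        self._store.columns[name][self._i] = value


store = ParticleStore(5)
p = store[2]
p.x = 3.14                                    # modifies store.columns["x"][2]
store.add_column("energy", np.arange(5) ** 2)
print(p.x, p.energy, store.columns["x"][2])   # 3.14 4 3.14

In principle the same pattern should also work with a bcolz carray as the backing column, since carray supports in-place item assignment, but I have not benchmarked that variant here.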