When I query a large HDFStore file (>10 GB) like this:
import pandas as pd

hdf = pd.HDFStore('raw_sample_storage.h5')
nrows = hdf.get_storer('raw_sample_all').nrows
chunksize = 300000
for i in xrange(nrows // chunksize + 1):
    chunk = hdf.select('raw_sample_all', where=[pd.Term('node_id', '==', 1)],
                       start=i * chunksize, stop=(i + 1) * chunksize)
    print chunk.head(2)
I get results where most entries have node_id equal to 1, but some entries have a node_id other than 1. Is this an HDFStore glitch, or did I do something wrong?
Here is part of the output; you can see there are some entries with node_id other than 1.
                  time   GW_time  node_id      X      Y      Z  status  seq  rssi  lqi
2   2013-10-22 17:20:58  39821888        1  16927  21438  22722       0   34   -46   48
6   2013-10-22 17:20:58  39822144        1  16927  21438  22722       0   35   -51   48

                     time   GW_time  node_id      X      Y      Z  status  seq  rssi  lqi
300002  2013-10-22 17:30:50  59223744        3  19915  20840  22003       0   46   -64   50
300006  2013-10-22 17:30:50  59224000        3  19913  20844  22002       0   47   -64   48

                     time   GW_time  node_id      X      Y      Z  status  seq  rssi  lqi
600000  2013-10-22 17:40:55  79050561        1  17612  22536  21198       0   55   -67   46
600004  2013-10-22 17:40:55  79050817        1  17613  22535  21201       0   56   -67   49

                     time   GW_time  node_id      X      Y      Z  status  seq  rssi  lqi
900003  2013-10-22 17:50:44  98345217        4  18934  20212  19364       0   32   -60   46
900007  2013-10-22 17:50:44  98345473        4  18935  20212  19359       0   33   -60   48

                      time    GW_time  node_id      X      Y      Z  status  seq  rssi  lqi
1200003  2013-10-22 18:00:31  117600065        1  17618  22541  21191       0  111   -66   47
1200007  2013-10-22 18:00:31  117600321        1  17620  22538  21187       0  112   -66   48
Noticing that row 300002 is an unwanted result, I tried to select node 1 around that particular area like this:
chunk = hdf.select('raw_sample_all', start=300002-20, stop=300002+20,
                   where=[pd.Term('node_id', '==', 1)])
Only node 3 is returned in the result:
time GW_time node_id X Y Z status seq rssi lqi
299982 2013-10-22 17:30:50 59222464 3 19912 20838 22003 0 41 -64 48
299986 2013-10-22 17:30:50 59222720 3 19912 20838 22003 0 42 -64 48
299990 2013-10-22 17:30:50 59222976 3 19913 20840 22007 0 43 -64 50
299994 2013-10-22 17:30:50 59223232 3 19913 20840 22007 0 44 -64 50
299998 2013-10-22 17:30:50 59223488 3 19915 20840 22003 0 45 -64 48
300002 2013-10-22 17:30:50 59223744 3 19915 20840 22003 0 46 -64 50
300006 2013-10-22 17:30:50 59224000 3 19913 20844 22002 0 47 -64 48
300010 2013-10-22 17:30:50 59224256 3 19913 20844 22002 0 48 -64 50
300014 2013-10-22 17:30:50 59224512 3 19914 20844 22010 0 49 -64 49
300018 2013-10-22 17:30:50 59224768 3 19914 20844 22010 0 50 -64 50
Then I tried using the index instead of start/stop, like this:
chunk = hdf.select('raw_sample_all',
                   where=[pd.Term('index', '>=', 300002-20),
                          pd.Term('index', '<=', 300002+20),
                          pd.Term('node_id', '==', 1)])
And this time it returned the correct results:
time GW_time node_id X Y Z status seq rssi lqi
299984 2013-10-22 17:30:50 59222593 1 17613 22543 21203 0 42 -80 48
299988 2013-10-22 17:30:50 59222849 1 17613 22543 21203 0 43 -81 48
299992 2013-10-22 17:30:50 59223105 1 17610 22547 21194 0 44 -81 48
299996 2013-10-22 17:30:50 59223361 1 17610 22547 21194 0 45 -81 47
300000 2013-10-22 17:30:50 59223617 1 17609 22545 21190 0 46 -81 45
300004 2013-10-22 17:30:50 59223873 1 17609 22545 21190 0 47 -81 49
300008 2013-10-22 17:30:50 59224129 1 17606 22547 21199 0 48 -81 48
300012 2013-10-22 17:30:50 59224385 1 17606 22547 21199 0 49 -81 48
300016 2013-10-22 17:30:50 59224641 1 17607 22548 21191 0 50 -81 49
300020 2013-10-22 17:30:50 59224897 1 17607 22548 21191 0 51 -80 48
I guess I could work around this problem by selecting on the index, but I am not completely sure: the start/stop method also gets the correct results most of the time, so even though the index method got it right where start/stop failed, it might fail somewhere else.
I would really like the start/stop method to work, because it is much faster; with a data set this large, a slow method is really time-consuming.
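For reference, the index-based workaround I have in mind would look like this; it is only a sketch, and it assumes the integer index is monotonically increasing, as it appears to be in the output above:
nrows = hdf.get_storer('raw_sample_all').nrows
chunksize = 300000
for i in xrange(nrows // chunksize + 1):
    lo, hi = i * chunksize, (i + 1) * chunksize
    # bound the query by index values instead of positional start/stop
    chunk = hdf.select('raw_sample_all',
                       where=[pd.Term('index', '>=', lo),
                              pd.Term('index', '<', hi),
                              pd.Term('node_id', '==', 1)])
    print chunk.head(2)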
By the way, in case you are wondering, I cannot use chunksize like this:
df = hdf.select('raw_sample_all', chunksize=300000, where="node_id==1")
for chunk in df:
    print chunk.head(2)
Every time I try chunksize, I get a MemoryError. Struggling with many problems, I find pandas really tough for a newbie like me. Any help is greatly appreciated.
This was a recently fixed bug in PyTables; see the related issue here. In effect, on some larger stores the indexers were not computed correctly when using a where together with start/stop. You will need to update to PyTables 3.2, then rewrite the store itself. You can either recreate it the way you did the first time, or use ptrepack.
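For the recreate route, here is a minimal sketch of a chunked copy into a fresh file; the output name raw_sample_storage_fixed.h5 is illustrative, and data_columns=['node_id'] is an assumption that keeps node_id queryable in the new table:
import pandas as pd

old = pd.HDFStore('raw_sample_storage.h5', mode='r')
new = pd.HDFStore('raw_sample_storage_fixed.h5', mode='w')  # hypothetical output file

nrows = old.get_storer('raw_sample_all').nrows
chunksize = 300000
for i in xrange(nrows // chunksize + 1):
    # plain positional read with no 'where', so the buggy code path is avoided
    chunk = old.select('raw_sample_all', start=i * chunksize, stop=(i + 1) * chunksize)
    new.append('raw_sample_all', chunk, data_columns=['node_id'])

old.close()
new.close()
Alternatively, ptrepack ships with PyTables and can copy the file from the command line, along the lines of ptrepack --chunkshape=auto --propindexes raw_sample_storage.h5 raw_sample_storage_fixed.h5.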