I am creating an LMDB database with the following settings:
import lmdb

train_lmdb = lmdb.open(train_lmdb_name, map_size=int(1e13), writemap=True)
lmdb_txn = train_lmdb.begin(write=True)
Each input sample has shape 128x128x128, and I have 900,000 samples. Creating the database works fine, but when I try to train on the dataset with Caffe I get the following error:
Check failed: mdb_status == 0 (12 vs. 0) Cannot allocate memory
I suspect the error is caused by my choice of map_size=int(1e13) rather than 1e12. However, my dataset won't fit in 1e12 bytes: 128 x 128 x 128 x 900,000 = 1,887,436,800,000 bytes > 1e12 (assuming one byte per voxel).
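For reference, this is the sizing arithmetic I'm using (the one-byte-per-voxel assumption is mine; multiply by the dtype size if your volumes are stored as something wider than uint8):

```python
# Rough map_size check: one 128^3 volume per sample.
samples = 900_000
bytes_per_sample = 128 * 128 * 128   # assuming uint8, i.e. 1 byte per voxel

total_bytes = samples * bytes_per_sample
print(total_bytes)                   # 1887436800000, i.e. ~1.9e12
print(total_bytes > int(1e12))       # True: a 1e12 map_size would be too small
print(total_bytes < int(1e13))       # True: 1e13 leaves plenty of headroom
```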
What can I do?
Stack trace:
I0827 12:00:06.379778 11729 net.cpp:198] conv1_b_conv1_b_relu_0_split needs backward computation.
I0827 12:00:06.379784 11729 net.cpp:198] conv1_b_relu needs backward computation.
I0827 12:00:06.379789 11729 net.cpp:198] scale_conv1_b needs backward computation.
I0827 12:00:06.379794 11729 net.cpp:198] bn_conv1_b needs backward computation.
I0827 12:00:06.379799 11729 net.cpp:198] conv1_b needs backward computation.
I0827 12:00:06.379806 11729 net.cpp:200] volume does not need backward computation.
I0827 12:00:06.379811 11729 net.cpp:200] data does not need backward computation.
I0827 12:00:06.379814 11729 net.cpp:242] This network produces output linear2
I0827 12:00:06.379820 11729 net.cpp:242] This network produces output loss
I0827 12:00:06.380133 11729 net.cpp:255] Network initialization done.
I0827 12:00:06.380894 11729 solver.cpp:56] Solver scaffolding done.
I0827 12:00:06.400212 11729 caffe.cpp:248] Starting Optimization
F0827 12:00:07.560200 11766 db_lmdb.hpp:15] Check failed: mdb_status == 0 (12 vs. 0) Cannot allocate memory
*** Check failure stack trace: ***
@ 0x2b0d2be31b2d google::LogMessage::Fail()
@ 0x2b0d2be33995 google::LogMessage::SendToLog()
@ 0x2b0d2be316a9 google::LogMessage::Flush()
@ 0x2b0d2be3442e google::LogMessageFatal::~LogMessageFatal()
@ 0x2b0d2ac7aede caffe::db::LMDB::Open()
@ 0x2b0d2ab67667 caffe::DataLayer<>::DataLayer()
@ 0x2b0d2ab67922 caffe::Creator_DataLayer<>()
@ 0x2b0d2abfca1b caffe::LayerRegistry<>::CreateLayer()
@ 0x2b0d2ac3c97a caffe::Net<>::Init()
@ 0x2b0d2ac3ed55 caffe::Net<>::Net()
@ 0x2b0d2ac4fdf6 caffe::Solver<>::InitTrainNet()
@ 0x2b0d2ac51363 caffe::Solver<>::Init()
@ 0x2b0d2ac5167f caffe::Solver<>::Solver()
@ 0x2b0d2ac62301 caffe::Creator_AdamSolver<>()
@ 0x415c6c caffe::SolverRegistry<>::CreateSolver()
@ 0x2b0d2ac4b32f caffe::Worker<>::InternalThreadEntry()
@ 0x2b0d2ab01185 caffe::InternalThread::entry()
@ 0x2b0d2ab01b6e boost::detail::thread_data<>::run()
@ 0x2b0d2b3e2739 thread_proxy
@ 0x2b0d2c6cedc5 start_thread
@ 0x2b0d414c873d __clone