I have a scenario where multiple threads of one process A write to several files concurrently, and then, after process A is finished, multiple threads of a process B read from these files and restore the content. The content can be huge, and it is said that MappedByteBuffer
could speed this up. However, I find that the changes I write do not reach the real file: when I check the content of the file using the xxd
command after the writing process has terminated, it shows all zeros (I'm sure the real content bytes are not all zero, because I print them out).
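For contrast, here is a minimal single-threaded sketch (a hypothetical scratch file and a 4 KB mapping, not my real code) in which writing through the mapping and calling force() does make the bytes show up in the file on disk:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MappedWriteDemo {
    // Writes "hello" through a mapping, forces it, then reads it back from disk.
    public static String writeAndReadBack(Path path) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(path.toFile(), "rw");
             FileChannel ch = raf.getChannel()) {
            // Mapping 4096 bytes READ_WRITE also extends the file to that size.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 4096);
            buf.put("hello".getBytes());
            buf.force(); // flush the dirty pages to the backing file
        }
        // Re-read from the file itself, not through the mapping.
        byte[] onDisk = Files.readAllBytes(path);
        return new String(onDisk, 0, 5);
    }

    public static void main(String[] args) throws IOException {
        Path p = Path.of("demo.bin"); // hypothetical scratch file
        System.out.println(writeAndReadBack(p)); // prints "hello"
        Files.delete(p);
    }
}
```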
To address the concurrency problem, I maintain an AtomicInteger
to make sure that only after all the threads have finished their jobs do I force()
the changes to disk. Here is the core code:
public void flush() {
    // both counters are of type AtomicInteger
    if (flushcount.incrementAndGet() >= threads_num.get())
        finalflush();
}

private void finalflush() {
    flushlock.lock();
    try {
        // key is the file name, value is the corresponding MappedByteBuffer;
        // the limit of each file is set to 10M manually based on my needs
        for (Map.Entry<String, MappedByteBuffer> entry : files.entrySet()) {
            System.out.printf("%s : pos = %d, limit = %d\n",
                    entry.getKey(), entry.getValue().position(), entry.getValue().limit());
            entry.getValue().force();
            entry.getValue().load();
        }
        for (Map.Entry<String, FileChannel> entry : channels.entrySet()) {
            entry.getValue().close();
        }
    } catch (IOException e) {
        e.printStackTrace();
    } finally {
        flushlock.unlock();
    }
}
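As a side note, the same "last writer triggers the flush" coordination could also be expressed with a CountDownLatch instead of the AtomicInteger counter. This is a hypothetical stripped-down sketch (the array write stands in for the real buffer writes):

```java
import java.util.concurrent.CountDownLatch;

// Sketch: N worker threads write, then one thread forces the buffers exactly once.
public class LatchFlushDemo {
    public static int runWorkers(int threadsNum) throws InterruptedException {
        CountDownLatch done = new CountDownLatch(threadsNum);
        int[] written = new int[threadsNum];
        for (int i = 0; i < threadsNum; i++) {
            final int id = i;
            new Thread(() -> {
                written[id] = 1;  // stand-in for the real MappedByteBuffer writes
                done.countDown(); // signal this writer is finished
            }).start();
        }
        done.await(); // equivalent of the AtomicInteger >= threads_num check
        // buffer.force() would go here, guaranteed to run exactly once
        int sum = 0;
        for (int w : written) sum += w;
        return sum;
    }
}
```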
The output looks like this:
QUEUE_3 : pos = 43112, limit = 10485760
QUEUE_4 : pos = 42407, limit = 10485760
QUEUE_5 : pos = 43254, limit = 10485760
TOPIC_3 : pos = 395296, limit = 10485760
TOPIC_2 : pos = 401236, limit = 10485760
TOPIC_1 : pos = 398728, limit = 10485760
TOPIC_0 : pos = 392744, limit = 10485760
This seems pretty normal, but it just doesn't work... How can I sync these changes to disk? Or, even if it doesn't have to be