When writing to Chronicle Queue, the default write doesn't flush to disk, so I believe anything that is still in the Linux kernel's dirty page cache is lost on power failure. What's the best approach to get guaranteed recovery in the event of power failure? Would a battery-backed RAID array along with an enforced flush on write be a good approach? Or is it better to use replication, with an ack from the second machine, before assuming the write is safely recorded? Which of these approaches would have the best performance? Theoretically the power failure could affect both machines if they are on the same power grid...
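For reference, here is a minimal sketch of the write path being asked about, assuming the open-source Chronicle Queue 5.x API (the directory name and message are placeholders); the contrasting FileChannel.force call shows what an enforced flush to disk involves at the JDK level.

```java
import net.openhft.chronicle.queue.ChronicleQueue;
import net.openhft.chronicle.queue.ExcerptAppender;

import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DefaultWriteVsFsync {
    public static void main(String[] args) throws Exception {
        // Default Chronicle Queue append: the bytes land in a memory-mapped file,
        // i.e. the kernel page cache. Nothing here forces them onto the disk,
        // so a power failure at this point can lose the message.
        try (ChronicleQueue queue = ChronicleQueue.singleBuilder("queue-dir").build()) {
            ExcerptAppender appender = queue.acquireAppender();
            appender.writeText("order-42");
        }

        // For contrast, the plain-JDK call that does guarantee the bytes reached
        // stable storage, at the cost of blocking on the fsync:
        try (FileChannel ch = FileChannel.open(Path.of("synced.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE)) {
            ch.write(ByteBuffer.wrap("order-42".getBytes(StandardCharsets.UTF_8)));
            ch.force(true); // blocks until data and metadata are on the device
        }
    }
}
```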
Yes. Anything that is still in the Linux kernel's dirty page cache when the power fails is lost.

The best approach to guaranteed recovery is to replicate the data to a second or third machine. That way, even if the whole machine or data centre can't be recovered, you can continue operation without data loss.

With a battery-backed RAID array and an enforced flush on write, you have to trust the reliability of the hardware, something Chronicle can't guarantee and something many of our clients have been burnt by before.

Whether to wait for an ack from the second machine before assuming the write is safely recorded depends on your requirements. In our opinion it is best practice, though many clients don't feel they need this option.
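Chronicle Queue replication itself is a commercial (Enterprise) feature, so the following is only a hypothetical sketch of the ordering that the ack option buys you: the caller is not told the write is safe until a second machine has confirmed receipt. The class and method names (AckedWriter, writeDurably) and the length-prefixed wire format are invented for illustration.

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hypothetical "replicate then ack" writer: writeDurably() only returns once the
// replica has confirmed it holds the message, so a local power failure after that
// point cannot lose it. This is not Chronicle's replication API.
public final class AckedWriter implements AutoCloseable {
    private final Socket replica;
    private final DataOutputStream out;
    private final DataInputStream in;

    public AckedWriter(String replicaHost, int port) throws Exception {
        replica = new Socket(replicaHost, port);
        out = new DataOutputStream(replica.getOutputStream());
        in = new DataInputStream(replica.getInputStream());
    }

    /** Blocks until the replica acknowledges the message. */
    public void writeDurably(String message) throws Exception {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);   // simple length-prefixed frame
        out.write(payload);
        out.flush();
        in.readLong();                  // wait for the replica's ack (e.g. its index)
        // Only now is it safe to report the write as durable to the caller.
    }

    @Override
    public void close() throws Exception {
        replica.close();
    }
}
```

On the replica side the matching reader would persist (or at least hold off-box) the payload before sending the ack; the extra network round trip is the latency cost being weighed against the battery-backed-RAID option.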
Another approach is to replicate the data to a secondary machine and have the secondary process the data; this can halve the network latency introduced.

The best performance comes from assuming a manual process will be used in the event of a failure and being willing to accept a small loss. In that case, you process everything as soon as possible.
Note: there are some alternatives. Since a power failure could indeed affect both machines if they are on the same power grid, 2+1 replication might be an option: one backup server nearby, to recover normal operation quickly in the event of the failure of a rack or part of one, and a second backup off site, which is slower to replicate to but has far less chance of failing at the same time.
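To make the 2+1 shape concrete, here is a hypothetical sketch reusing the AckedWriter stub above: the nearby backup is waited on synchronously before the write is considered safe, while the off-site backup is fed asynchronously and allowed to lag.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical 2+1 layout: one nearby backup acknowledged synchronously (covers the
// loss of a rack), one off-site backup replicated asynchronously (covers the loss of
// the whole site, at the cost of possibly lagging slightly behind).
public final class TwoPlusOneReplicator {
    private final AckedWriter nearby;   // AckedWriter is the stub sketched earlier
    private final AckedWriter offsite;
    private final ExecutorService offsiteShipper = Executors.newSingleThreadExecutor();

    public TwoPlusOneReplicator(AckedWriter nearby, AckedWriter offsite) {
        this.nearby = nearby;
        this.offsite = offsite;
    }

    public void write(String message) throws Exception {
        // Block on the in-building backup before reporting the write as safe...
        nearby.writeDurably(message);
        // ...and let the off-site copy trail behind without adding to write latency.
        CompletableFuture.runAsync(() -> {
            try {
                offsite.writeDurably(message);
            } catch (Exception e) {
                // A real system would alert and trigger a resync rather than just log.
                e.printStackTrace();
            }
        }, offsiteShipper);
    }
}
```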