When writing to Chronicle Queue, the default write doesn't flush to disk, so I believe anything that is in the linux kernel dirty page cache is lost. What's the best approach to get guaranteed recovery in the event of power failure? Would a battery backed raid array along with enforced flush on write be a good approach? Or is it better to use replication with an ack from the second machine before assuming the write is safely recorded? Which of these approaches would have the best performance? Theoretically the power failure could affect both machines if on the same power grid....
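For reference, this is roughly what I mean by the default (unflushed) write path versus an enforced flush. The queue path and payload are placeholders, and the force() call mentioned in the comments is the general JDK mechanism, not anything Chronicle-specific:

```java
import net.openhft.chronicle.queue.ExcerptAppender;
import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

public class UnflushedWrite {
    public static void main(String[] args) {
        // Default write: the excerpt lands in a memory-mapped file, i.e. the
        // kernel page cache. Nothing below forces it to the device, so a power
        // cut can lose whatever the kernel has not yet written back.
        try (var queue = SingleChronicleQueueBuilder.binary("orders-queue").build()) {
            ExcerptAppender appender = queue.acquireAppender();
            appender.writeText("order-42,BUY,100");
        }
        // "Enforced flush on write" would mean an msync after each excerpt,
        // e.g. MappedByteBuffer.force() / FileChannel.force(true) in JDK terms,
        // which blocks until the dirty pages reach the device (or the RAID
        // controller's battery-backed cache).
    }
}
```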
Yes: anything still sitting in the kernel's dirty page cache when the power fails is lost.
Replicate the data to a second or third machine. That way, even if the whole machine or data centre can't be recovered, you can continue operating without data loss.
With a battery-backed RAID array and enforced flush on write, you have to trust the reliability of the hardware, something Chronicle can't guarantee and something many of our clients have been burnt by before.
Whether replication with an ack from the second machine is better depends on your requirements. In our opinion it is best practice, though many clients don't feel they need this option.
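As a rough illustration of the "ack before you trust the write" idea (this is not Chronicle Queue's replication API, which is an Enterprise feature; the host, port and one-byte ack protocol here are made up for the sketch):

```java
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Hand-rolled sketch: send a record to the replica and block until it acks.
public class AckingReplicator implements AutoCloseable {
    private final Socket socket;
    private final DataOutputStream out;
    private final DataInputStream in;

    public AckingReplicator(String replicaHost, int port) throws Exception {
        socket = new Socket(replicaHost, port);
        socket.setTcpNoDelay(true);              // don't let Nagle batch small writes
        out = new DataOutputStream(socket.getOutputStream());
        in = new DataInputStream(socket.getInputStream());
    }

    // Returns only once the replica confirms it holds the record.
    public void replicate(String record) throws Exception {
        byte[] bytes = record.getBytes(StandardCharsets.UTF_8);
        out.writeInt(bytes.length);
        out.write(bytes);
        out.flush();
        if (in.readByte() != 1)                  // block for the one-byte ack
            throw new IllegalStateException("replica rejected record");
    }

    @Override
    public void close() throws Exception {
        socket.close();
    }
}
```

The caller only treats a record as safely recorded once replicate() returns, so the worst case on power failure is losing records that were never acknowledged.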
Another approach is to replicate the data to a secondary machine and have the secondary process the data. This can halve the network latency introduced.
The best performance comes from assuming a manual process will be used in the event of a failure and being willing to accept a small loss. In that case, you process everything as soon as possible.
Note: There are some alternatives.
This is where 2+1 replication might be an option: one backup server nearby, to recover normal operation if a rack or part of one fails, and a second backup off site, which is slower to replicate to but has far less chance of also failing, for example if both local machines share the same power grid.