We have an application that does a LOT of logging. The medium we log to is SLC SSD drives, but we are starting to see some failures in the field. We could turn logging off (we do) and use log levels (we have them), but sometimes an engineer turns logging on to diagnose a fault and forgets to turn it off, which results in a failed SSD some time later.
Looking at the logging code, we save each log entry to a queue, and every 5 seconds we iterate over the collection and use `File.AppendAllText` to write each line to the file. According to MSDN, this opens the file, appends the string, and then closes the file.
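Simplified, the current pattern looks roughly like this (class and member names are illustrative, not our actual code):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;

static class CurrentLogger
{
    static readonly ConcurrentQueue<string> Queue = new ConcurrentQueue<string>();
    const string LogPath = @"C:\logs\app.log";

    public static void Enqueue(string line) => Queue.Enqueue(line);

    // Invoked by a 5-second timer: one open/append/close cycle per entry.
    public static void Flush()
    {
        while (Queue.TryDequeue(out string line))
            File.AppendAllText(LogPath, line + Environment.NewLine);
    }
}
```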
What would be a better regime to use to achieve the same functionality but prevent (or reduce) damage to the SSD?
Would it be better to open a `FileStream` at software start, write to the stream during use, and close it before the software quits? How would this alleviate the situation at the disk level? What processes are involved, and how is this better than opening the file and closing it immediately? Using a `FileStream` 'feels' better, but I need a more concrete rationale before making changes.
Maybe there is a better way that we haven't considered.
This is not so much about the number of writes but about the number of SSD pages written. The more you buffer and the fewer physical writes you cause, the better.
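For a sense of scale: with a 4 KiB flash page, appending 100-byte lines one at a time can program a page per line, while batching 40 of those lines into a single ~4 KB write programs about one page (actual page sizes and the drive's write amplification vary).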
Calling `AppendAllText` to append a single line is a very inefficient way to do this. It burns a lot of CPU because a lot of objects and handles must be opened and closed for each line, and each change in file size causes an NTFS log flush when that change hardens.

Write all the data out with one `AppendXxx` call every five seconds, or build something similar using a `FileStream`. You can leave it open or not; it doesn't matter. One additional IO every five seconds is meaningless for endurance.

It is not possible to be more efficient than this: this scheme writes the minimal amount of data in a sequential way.
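A minimal sketch of that approach, assuming a 5-second timer drives `Flush()` (class and parameter names are mine; a single `File.AppendAllLines` call per interval would batch equally well):

```csharp
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Text;

sealed class BatchedLogger : IDisposable
{
    readonly ConcurrentQueue<string> _queue = new ConcurrentQueue<string>();
    readonly StreamWriter _writer;

    public BatchedLogger(string path)
    {
        // One long-lived handle; the file only grows when Flush() runs.
        _writer = new StreamWriter(
            new FileStream(path, FileMode.Append, FileAccess.Write, FileShare.Read),
            Encoding.UTF8, bufferSize: 64 * 1024);
    }

    public void Log(string line) => _queue.Enqueue(line);

    // Called every five seconds: drain the queue into one sequential write.
    public void Flush()
    {
        var batch = new StringBuilder();
        while (_queue.TryDequeue(out string line))
            batch.AppendLine(line);
        if (batch.Length > 0)
        {
            _writer.Write(batch.ToString());
            _writer.Flush(); // one IO per interval instead of one per line
        }
    }

    public void Dispose()
    {
        Flush();
        _writer.Dispose();
    }
}
```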
Consider compressing what you write.
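For instance, a sketch that routes the log through .NET's `GZipStream` (path and compression level are placeholders; note that a crash before `Dispose` can leave the file without a valid gzip trailer, so weigh that risk against the endurance gain):

```csharp
using System.IO;
using System.IO.Compression;
using System.Text;

// Keep the compressed stream open for the process lifetime; log text
// usually compresses severalfold, so far fewer bytes reach the flash.
var file = new FileStream(@"C:\logs\app.log.gz", FileMode.Create, FileAccess.Write);
var writer = new StreamWriter(new GZipStream(file, CompressionLevel.Optimal), Encoding.UTF8);

writer.WriteLine("2024-01-01 12:00:00 INFO example entry");
writer.Flush();   // pushes buffered text into the compressor; the deflater
                  // decides how much compressed data reaches the file
// ... on clean shutdown:
writer.Dispose(); // finalizes the gzip member and closes the file
```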