I have a general question about file systems and how they keep a reliable state while managing metadata on disk.
Let's assume we have a block device with a block size of 512 bytes. The file system on that drive stores information about file sizes in specific data structures; one such data structure has a size of 64 bytes. Because I care a lot about reliability and crash resistance, there is a redundant copy of that metadata, and a one-bit flag indicates which version of the metadata is currently in use. Since flipping that flag is an atomic action, it improves reliability.
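To make the layout I have in mind concrete, here is a minimal sketch in C (all structure and field names are mine, purely for illustration): two redundant 64-byte copies and a selector flag packed into one 512-byte block.

```c
#include <stdint.h>

#define FS_BLOCK_SIZE 512

struct file_meta {                /* the 64-byte metadata record */
    uint64_t file_size;
    uint64_t mtime;
    uint8_t  padding[48];         /* pad the record to exactly 64 bytes */
};

struct meta_block {               /* one 512-byte block on the device */
    struct file_meta copy[2];     /* redundant copies A and B */
    uint8_t  active;              /* bit 0 selects copy[0] or copy[1] */
    uint8_t  reserved[FS_BLOCK_SIZE - 2 * 64 - 1];
};

_Static_assert(sizeof(struct meta_block) == FS_BLOCK_SIZE,
               "layout must fit exactly one block");
```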
Now I would like to write some data to the file itself. Besides the actual data region, the metadata (like the file size) also has to be updated. From my understanding, for both the data region and the metadata, the operating system must perform a read-modify-write operation. The read operation for the metadata could be a full 512 bytes.
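At the system-call level I picture that read-modify-write roughly like this; the device file descriptor, the offsets, and the helper name are hypothetical, not taken from any real file system:

```c
#include <stdint.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

#define FS_BLOCK_SIZE 512

/* Update the file size inside one 64-byte record that lives in a 512-byte block. */
int update_file_size(int dev_fd, off_t block_offset, size_t slot, uint64_t new_size)
{
    uint8_t block[FS_BLOCK_SIZE];

    /* 1. Read the whole 512-byte block containing the metadata record. */
    if (pread(dev_fd, block, sizeof block, block_offset) != (ssize_t)sizeof block)
        return -1;

    /* 2. Modify only the 64-byte record inside it (the file size sits at the
     *    start of the record in this sketch). */
    memcpy(block + slot * 64, &new_size, sizeof new_size);

    /* 3. Write the whole block back; the device cannot accept a smaller write. */
    if (pwrite(dev_fd, block, sizeof block, block_offset) != (ssize_t)sizeof block)
        return -1;

    /* Make sure the data reaches stable storage before any dependent update. */
    return fsync(dev_fd);
}
```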
Now that the metadata is written, I would like to flip the bit. Is there a way to avoid reading and modifying the full block again? Otherwise I would have to write a full 512 bytes just to flip one bit, which is no longer an atomic operation. Again, this would contradict my goal of high reliability for the files.
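To make the concern concrete: even though only one bit changes logically, the device still sees a whole-sector write (same hypothetical layout and names as in the sketches above):

```c
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

#define FS_BLOCK_SIZE      512
#define ACTIVE_FLAG_OFFSET 128   /* byte holding the selector bit in this sketch */

int flip_active_copy(int dev_fd, off_t block_offset)
{
    uint8_t block[FS_BLOCK_SIZE];

    if (pread(dev_fd, block, sizeof block, block_offset) != (ssize_t)sizeof block)
        return -1;

    block[ACTIVE_FLAG_OFFSET] ^= 0x01;   /* the actual one-bit change */

    /* The device only accepts whole sectors, so 512 bytes go back out. */
    if (pwrite(dev_fd, block, sizeof block, block_offset) != (ssize_t)sizeof block)
        return -1;

    return fsync(dev_fd);
}
```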
Do you know how modern file systems handle such small, fine-grained write operations while still being crash-resistant?
From my limited knowledge of systems, atomicity is defined at different levels. If this update action is mutually exclusive, then from the perspective of the VFS or the file system it is considered atomic, as mentioned by @stark.
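If I understand that correctly, it can be illustrated with a lock around the in-memory update, something like this (hypothetical names; it only shows atomicity towards concurrent readers, not towards a crash):

```c
#include <pthread.h>
#include <stdint.h>

/* Sketch of "atomic from the VFS perspective": the in-memory update is
 * serialized by a lock, so other threads never observe a half-updated
 * record. This says nothing about what is durable on disk after a crash. */
struct inode_like {
    pthread_mutex_t lock;
    uint64_t        file_size;
    uint64_t        mtime;
};

void set_file_size(struct inode_like *ino, uint64_t new_size, uint64_t now)
{
    pthread_mutex_lock(&ino->lock);   /* mutual exclusion ...               */
    ino->file_size = new_size;        /* ... makes the pair of updates      */
    ino->mtime     = now;             /* appear atomic to concurrent readers */
    pthread_mutex_unlock(&ino->lock);
}
```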