Remember, a file system is a complex entity, and ensuring total data integrity is a major challenge. Not only does the file data itself need to be written, but also the directory entries and the updates to the allocation tables that keep track of which clusters of the media have been written to.
To safeguard this multi-step write operation, the dirty bit would be written first, followed by ALL the required data writes, including the multiple copies of the allocation table; then, once it was known the data had been flushed to the actual disk, the dirty bit would be cleared.
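As a rough illustration only, here is a minimal sketch of that ordering in C, using an ordinary file as a stand-in for the volume. The offset, the volume.img name, and the write_flag() helper are purely hypothetical and not any real file system's on-disk layout or API; the point is just the set-dirty / write / flush / clear-dirty sequence:

/* Hypothetical sketch of the dirty-bit ordering described above. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>

#define DIRTY_BIT_OFFSET 0   /* assumed location of the dirty flag */

static int write_flag(int fd, unsigned char value)
{
    if (pwrite(fd, &value, 1, DIRTY_BIT_OFFSET) != 1)
        return -1;
    return fsync(fd);        /* force the flag to the media before continuing */
}

int main(void)
{
    int fd = open("volume.img", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    /* 1. Mark the volume dirty and make sure that mark reaches the disk. */
    if (write_flag(fd, 1) != 0) { perror("set dirty"); return 1; }

    /* 2. Perform every required update: file data, directory entry,
     *    and both copies of the allocation table (simulated here). */
    const char data[] = "file data, directory entry, FAT copy 1, FAT copy 2";
    if (pwrite(fd, data, sizeof data, 512) != (ssize_t)sizeof data) {
        perror("write updates"); return 1;
    }

    /* 3. Flush everything so the structures are known to be consistent. */
    if (fsync(fd) != 0) { perror("fsync"); return 1; }

    /* 4. Only now clear the dirty bit; a crash before this point leaves
     *    the flag set, telling the next mount to run a consistency check. */
    if (write_flag(fd, 0) != 0) { perror("clear dirty"); return 1; }

    close(fd);
    return 0;
}

A crash at any point between steps 1 and 4 leaves the dirty bit set on the media, which is exactly what tells the operating system to check the volume before trusting it again.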
While it is possible that some hypothetical file system would forgo some of these steps, the use of the dirty bit is an attempt to ensure that full consistency exists between all the structures that comprise a file system.
I don't care WHAT O/S or file system is in use; it is impossible for there to be any guarantee of the consistency of these data structures without some form of dirty bit, especially when some of the structures are stored in more than one location.