I’m not sure I believe that. Safe save is in fact one of the boasts of APFS. You ask for
safe save, which causes the file system to clone the file and now you get copy-on-write
automatically, which is far more efficient because only the changed blocks actually need
to be written. When you’re done saving, the original is atomically swapped with the clone.
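The clone-and-swap flow described here maps to APFS primitives such as clonefile(2); the sketch below is only a portable approximation (assumed file names, and a full copy where APFS would make a copy-on-write clone), but it shows the same save sequence.

```python
import os
import shutil
import tempfile

def safe_save(path, new_data):
    """Sketch of clone-then-swap saving. On APFS the clone step would be
    a copy-on-write clonefile(2); shutil.copy2 stands in for it here."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, clone = tempfile.mkstemp(dir=dirname, prefix=".safe-save-")
    os.close(fd)
    shutil.copy2(path, clone)       # "clone" the original (full copy here)
    with open(clone, "wb") as f:    # write the new contents into the clone
        f.write(new_data)
        f.flush()
        os.fsync(f.fileno())
    os.replace(clone, path)         # atomically swap the clone into place
```

Note that os.replace(), unlike a true two-way exchange such as exchangedata(), replaces the original's directory entry rather than swapping the two files.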
matt neuburg, phd
On Apr 19, 2020, at 3:41 AM, Allan Odgaard wrote:
On 17 Apr 2020, at 4:19, Marc Wilson wrote:
I hope you don't actually make this change... atomic save is desirable, even if you
end up losing some metadata.
Disabling atomic save is not motivated by Timothy’s problem, but with APFS, atomic saving can only be done by writing a new file and then replacing the old one with the new via rename.
The problem with this is:
1. Extra care must be taken to preserve file metadata, i.e. it must be copied from the existing file to the new one we write.
2. If the user does not have write permission to the directory containing the file, we simply can’t write the file, period.
3. Each save bumps the date of the containing directory: tools observing the file system for changes¹ will have to re-scan the directory to see what changed.
4. Since a new file is written, it gets a new inode and thereby breaks “references” to the file made via the inode.
5. Programs observing the file system via the kevent() API will be told that the file got deleted instead of written to (and they will have to check whether a new file got written in its place).
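A minimal sketch of the write-new-then-rename save described above (assumed helper names) makes items 1 and 4 concrete: metadata has to be copied by hand — shutil.copystat only covers permission bits, flags, and timestamps, so ownership, ACLs, and extended attributes would need extra code — and the saved file ends up with a new inode.

```python
import os
import shutil
import tempfile

def rename_save(path, data):
    """Write-new-then-rename save: the atomic option once
    exchangedata() is unavailable."""
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=dirname)
    with os.fdopen(fd, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    shutil.copystat(path, tmp)  # item 1: copies permissions/timestamps;
                                # ownership, ACLs, xattrs need extra work
    os.replace(tmp, path)       # atomic, but the path now has a new
                                # inode (item 4)
```

Comparing os.stat(path).st_ino before and after such a save shows the inode change that breaks inode-based references.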
Some of the above should not be visible to the user, as long as the software handles it properly, but at least items 2 and 3 are user-visible and IMHO a regression compared to pre-APFS, where we had exchangedata() for atomic saves.
¹ This isn’t just real-time observing: a build system may also end up doing more work because of this, if it supports globs to select input files or similar.
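The directory-date bump from item 3 is easy to observe directly; this sketch (assumed demo paths) contrasts an in-place write, which leaves the containing directory’s modification date alone, with a rename-style save, which bumps it.

```python
import os
import time

os.makedirs("mtime-demo", exist_ok=True)
path = os.path.join("mtime-demo", "file.txt")
with open(path, "wb") as f:
    f.write(b"v1")

before = os.stat("mtime-demo").st_mtime_ns

time.sleep(0.05)
with open(path, "wb") as f:   # in-place write: directory date unchanged
    f.write(b"v2")
inplace = os.stat("mtime-demo").st_mtime_ns

time.sleep(0.05)
tmp = os.path.join("mtime-demo", ".tmp")
with open(tmp, "wb") as f:
    f.write(b"v3")
os.replace(tmp, path)         # rename-style save: directory date bumps
after = os.stat("mtime-demo").st_mtime_ns
```

Anything that watches or scans the directory (an observer, or a glob-driven build system) sees the second save as a directory change, while the first is invisible at the directory level.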
TextMate mailing list