Lots of errors after running sanitizer-0.15.jar

As part of my backup routine I run sanitizer and check its output before the actual backup takes place, to make sure I am not backing up a broken vault. For a while sanitizer returned 2 orphaned files in the info category, which I took as just that: information. But today I decided to do a deep check by adding the --deep flag to the sanitizer call.
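For what it's worth, the gating step of such a routine could be sketched in Python: parse sanitizer's severity summary and only proceed with the backup when there are no FATAL/ERROR findings. The jar path, vault path, and exact sanitizer arguments below are placeholders — substitute whatever invocation your routine already uses.

```python
import re
import subprocess

def severity_counts(report: str) -> dict:
    """Parse the '0 FATAL / 0 ERROR / ...' summary lines from sanitizer's output."""
    counts = {}
    for line in report.splitlines():
        m = re.search(r"(\d+)\s+(FATAL|ERROR|WARN|INFO)\b", line)
        if m:
            counts[m.group(2)] = int(m.group(1))
    return counts

def vault_looks_healthy(report: str) -> bool:
    """Treat FATAL/ERROR as blockers; WARN/INFO are logged but don't stop the backup."""
    counts = severity_counts(report)
    return counts.get("FATAL", 0) == 0 and counts.get("ERROR", 0) == 0

def check_vault(jar: str, vault: str) -> bool:
    """Run sanitizer (adjust arguments to match your existing call) and gate on the result."""
    result = subprocess.run(
        ["java", "-jar", jar, "check", "-vault", vault, "--deep"],
        capture_output=True, text=True)
    return vault_looks_healthy(result.stdout)
```

The backup script would then only continue when `check_vault(...)` returns True.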
Well, I got 17602 lines similar to this one: “ERROR Unauthentic file content at chunk 9990: d/SG/N44OZ2KI5AXL57OLTLG…”
And another 2000+ with “INFO OrphanMFile m/ZW/VX/ZWVXOD3…”

Now this leads to a couple of questions:

  1. What does this error actually mean?
  2. How can I best find out which files are affected? My plan would be to:
     • Open the vault
     • Copy its contents to a secure place
     • Wait for errors to occur
  3. What might have caused this issue?

I tried #2 using rsync and received a lengthy error message:
[pool-1-thread-691] WARN org.eclipse.jetty.server.HttpChannel - /Container/Pictures/2017/2017%2008%20WorkshopDave/PrideAndJoy.mp4
java.io.IOException: org.cryptomator.cryptolib.api.AuthenticationFailedException: Authentication of chunk 1597 failed.
at org.cryptomator.cryptofs.ChunkCache.get(ChunkCache.java:47)
at org.cryptomator.cryptofs.OpenCryptoFile.read(OpenCryptoFile.java:75)

at org.cryptomator.cryptofs.ChunkCache.get(ChunkCache.java:40)
… 65 common frames omitted
rsync: send_files failed to open "/media/Container/Pictures/2017/2017 08 Workshop Dave/PrideAndJoy.mp4": Input/output error (5)
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1196) [sender=3.1.2]

It looks like rsync continued after the error, so is that it: one corrupt 600 MB file leading to 17k “unauthentic file content” errors?
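As for question 2, one way to enumerate the affected files without depending on rsync's log would be to walk the mounted cleartext view and read every file to the end; a chunk that fails authentication surfaces as an I/O error on read. A rough sketch (the mount point is whatever you use, e.g. /media/Container):

```python
import os

def find_unreadable(root: str, blocksize: int = 1 << 20) -> list:
    """Read every file under `root` to the end and collect the paths that
    raise I/O errors -- e.g. a Cryptomator chunk failing authentication."""
    bad = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    while f.read(blocksize):
                        pass
            except OSError:
                bad.append(path)
    return bad
```

Every path in the returned list corresponds to a cleartext file with at least one unauthentic chunk, so you can see whether it really is just one file or many.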


OK, I went ahead and created an unencrypted copy, created a new vault, copied ~10,000 files into the new container, and ran another “sanitizer --deep”.
No errors but 2023 times “INFO OrphanMFile m/JO/54/JO546WDN4GSHFOJ4KVNMBETC7YANIGVK.lng”
I know it says “INFO”, but the term “orphan” makes me nervous. Do I have data chunks without a parent that are hence inaccessible?

If a ciphertext file is modified (even just a single bit flip), it’ll cause an AuthenticationFailedException. The content of a file is broken down into multiple chunks and each chunk has an authentication code (see file content encryption). If that authentication code can’t be verified anymore, the affected file has most likely been manipulated outside of Cryptomator. Ciphertext manipulation is something you’d like to avoid.
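To illustrate the principle (this is a toy model, not Cryptomator's actual on-disk format — see the file content encryption docs for the real scheme): each chunk carries an authentication code bound to its position, so a single flipped bit invalidates exactly that chunk's verification and no other:

```python
import hashlib
import hmac

CHUNK = 16  # toy chunk size for the demo; real chunks are much larger

def seal(data: bytes, key: bytes) -> list:
    """Split data into chunks and append a per-chunk MAC over (offset, chunk)."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        tag = hmac.new(key, i.to_bytes(8, "big") + chunk, hashlib.sha256).digest()
        out.append(chunk + tag)
    return out

def verify(sealed: list, key: bytes) -> list:
    """Return the indices of chunks whose MAC no longer verifies."""
    bad = []
    for n, blob in enumerate(sealed):
        chunk, tag = blob[:-32], blob[-32:]
        expect = hmac.new(key, (n * CHUNK).to_bytes(8, "big") + chunk,
                          hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            bad.append(n)
    return bad
```

Flipping one bit in chunk 1 makes `verify` report only chunk 1 as bad — which is why Sanitizer can name the exact chunk number in its error messages.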

E.g., in your case chunks 9990 and 1597 were somehow manipulated, and that’s why you’re seeing this error. But I’m clueless as to how this could’ve happened to your files. Is there something more specific to your backup routine other than rsync? Where is the backup stored: on a local hard drive, in some cloud storage? Maybe (for whatever reason) the files get manipulated after they’re stored?

If you’re getting about 17k errors, then 17k files are affected, not just one. Cryptomator encrypts each file individually and files are not dependent on one another (okay, that’s not completely true: filenames are dependent on their parent directory, but it is true for file contents, and AuthenticationFailedException is a file content error).

OrphanMFile is typically not an issue (that’s why it’s just INFO in Sanitizer). Files inside m are used to map long filenames (see name shortening) and they’re not cleaned up by Cryptomator (see issue 625 on GitHub). Actually, you can use Sanitizer’s check command to solve OrphanMFile issues using --solve OrphanMFile. But you have to be absolutely sure that the vault is completely synced (if you’re storing it in a cloud storage) to avoid false positives (that’s why I said it typically isn’t an issue).

OrphanDirectory warnings are the ones that indicate directories (and the files inside) that don’t have a parent and hence are inaccessible. But that shouldn’t happen. And if it does, you can use Sanitizer’s decryptVault command to restore them. However, the filenames would be lost because of the aforementioned “filename and parent directory” dependency (see filename encryption).

Thanks for getting back, Tobi.

Well, my setup is as follows:

  • The vault is created and stored on a Ubuntu 18.04 server on a btrfs file system
  • I use cryptomator-CLI and “mount -t davfs http://localhost:8080/Container/ /media/Container” to make the vault content accessible.
  • The /media/Container folder is the only way in and out for clear text data
  • I use rsync /media/Container /media/backup to create a clear text copy on a separate drive
  • In parallel the actual vault is synced to OneDrive

The only access to the data on OneDrive is through the Cryptomator Android app, and it’s view-only; I don’t think I ever created a document on Android.

The only process (that I know of) interfacing with the files from “outside” would be the OneDrive sync. But given that no new data comes in via OneDrive, this should always be uploads rather than downloads, i.e., the local vault is the master.
That means, unless my OneDrive sync application is somehow touching the local files (timestamps?), what I see should not happen?
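If you want to test that suspicion, you could snapshot the ciphertext directory (size, mtime, content hash per file) before and after a sync cycle and diff the two; any file the sync client touches, even just a timestamp update, shows up. A sketch (the vault path you pass in is a placeholder for your setup):

```python
import hashlib
import os

def snapshot(root: str) -> dict:
    """Map each file (relative path) to (size, mtime_ns, sha256),
    so both metadata-only and content changes are visible."""
    snap = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for block in iter(lambda: f.read(1 << 20), b""):
                    h.update(block)
            snap[os.path.relpath(path, root)] = (st.st_size, st.st_mtime_ns,
                                                 h.hexdigest())
    return snap

def diff(before: dict, after: dict) -> dict:
    """Paths whose size, mtime, or content changed between two snapshots."""
    return {p: (before.get(p), after.get(p))
            for p in set(before) | set(after)
            if before.get(p) != after.get(p)}
```

An empty diff across a sync cycle would rule the sync client out; an mtime-only change would point straight at it.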

As far as the orphans are concerned, I ran the below earlier today:

  1. Create a new vault on the local disk
  2. Open the vault via cryptomator-cli and “mount davfs”
  3. rsync ~10k files into the open vault
  4. run sanitizer --deep
  5. got 2000 orphans

If I understood you correctly, this should not happen, as no file names changed and syncing will only be upwards.

I can run through the 5 steps again tomorrow with a vault outside the OneDrive scope, just to make sure the syncing process isn’t throwing in a curveball by maybe updating timestamps or making similar non-content-related changes.

Actually, I just did:

Scanning vault structure may take some time. Be patient…
Wrote structure to TestVault.structure.txt.
688 files in vault

Checking the vault may take some time. Be patient…

Found 1 problem(s):

  • 0 FATAL
  • 0 ERROR
  • 1 WARN
  • 1 INFO

See TestVault.check.txt for details.

1 problem(s) found.

All I did was rsync-ing files into the vault and running sanitizer. No OneDrive involved.