1.4.0 beta 2: How to use rsync and fuse to sync files?

os:linux

#1

Hi there,

the issue is that in a FUSE-mounted vault file timestamps cannot be modified. If I use `touch -t` or `-d` to set a timestamp, the file always ends up with the current time instead. This behaviour prevents rsync from copying files into the vault efficiently.
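A quick way to see whether a mount honors explicit timestamps is to set one with `touch -t` and read it back. A minimal sketch using a throwaway file; run it with the working directory inside the mounted vault to reproduce the problem:

```shell
# Create a throwaway file in the current directory and try to backdate it.
# On a normal local filesystem the requested time sticks; on the affected
# FUSE mount the file keeps the current time instead.
f=$(mktemp -p .)
touch -t 201801020304 "$f"                                # request 2018-01-02 03:04
echo "mtime year: $(date -d @"$(stat -c %Y "$f")" +%Y)"   # 2018 if the mount honors it
rm -f "$f"
```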

Details:
I want to place an encrypted copy of my important data folders in the cloud and regularly update them by transferring only changed files. So my Cryptomator vault is located in the local cloud folder and I mount the vault using FUSE (so as to avoid several known WebDAV issues). Now I need to copy (and later sync) several data folders into the vault. rsync should do the trick, and it somewhat worked when I used WebDAV.

Now, with the new FUSE mount (I’m using 1.4.0 beta 2 x64 on Suse Leap 15), unfortunately all copied files are stamped with the time of copying rather than their original timestamps. Since rsync relies on timestamps to detect changed files, the result is that the entire content is copied every time, not just the changed files.

I am aware that this is tracked on GitHub (issue #220), but until it is fixed …

Question:
How are you guys using fuse and rsync right now?

Regards,
arngast


#2

hi,

I had the same issue with rsync and fuse-mount (also 1.4.0 beta 2 on mint 19).

Not sure if this will work for you, but with the --update (-u) flag it seems to work just fine.

Here is what I run. `--stats` tells me that rsync only copies new files.

`rsync -uav --delete /source/folder/ Cryptomator/fuse/folder/`

Before, I had just `-av --delete`, and it copied everything every time… which defeats the purpose of rsync.

does that help?


#3

Hi,

right you are. This is a workaround - thanks for pointing this out.

However, original timestamps are still not preserved. But yesterday issue #220 was closed, which should fix the timestamp problem. So hopefully we will soon see a 1.4.0 that doesn’t require the workaround.

Thanks for your reply.
arngast


#4

I have to admit that with 1.4.0 final under Linux (AppImage) it still does not work properly with rsync. But rsync is one of the best synchronization tools, and one would very much expect it to work.
My setup: a WebDAV drive mounted via davfs2 through an /etc/fstab entry, so it appears as a network drive that a user can mount.

Did two tests:

  1. Local data -> remote vault directly: source folder local, Cryptomator vault on that network drive. I mounted the network drive first, then mounted the vault, and used rsync to copy data from the local drive to the remote storage. Sometimes data ended up duplicated, and I discovered one broken file.
  2. Local data --rsync--> local vault --rsync--> remote vault: source folder and Cryptomator vault both local. Even during the rsync from one local folder to another, one file was already damaged. The same happened when I used rsync to copy to the locally mounted network drive.

The remote vault was not mounted in Cryptomator. I initially tried syncing two vaults while both source and destination vaults were mounted in Cryptomator, but that did not work properly. Then I realized that at least the destination should not be mounted during sync.
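Given the broken files described above, it may be worth verifying every sync with a recursive diff between the plaintext source and the mounted (decrypted) vault. A minimal sketch, with throwaway directories standing in for the real paths:

```shell
src=$(mktemp -d); vault=$(mktemp -d)
printf 'important data\n' > "$src/doc.txt"
cp -a "$src/doc.txt" "$vault/"           # stands in for a successful sync
if diff -r "$src" "$vault" > /dev/null; then
    echo "trees match"
else
    echo "mismatch: corrupted, missing, or extra files"
fi
rm -rf "$src" "$vault"
```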

To deal with rsync’s temporary files (names beginning with a dot), I tested the following rsync parameters:

--delete
--delete-after
--inplace
-u

… but never got a satisfactory result. I ran it multiple times and got a different result every time.

Strangely, --delete is supposed to remove everything in the destination that is not in the source, but it did not, even when syncing locally from one folder to another. At some point --delete-after helped, but not always; the same applies to -u. --inplace might sound appealing but can be dangerous when syncing remotely; at some point it helped, but again not always.

My usual flags were -ah, and I used --info=progress2 to see only a concise summary. I noticed that -z (compression for data transfer) slowed things down and was useless here.

Yes, the #220 bug is fixed and timestamps are kept, but that does not help if temporary files are randomly deleted or left behind. I got the same folders twice; sometimes masterkey.cryptomator.bkup was deleted, or only a randomly named temporary file was present while the original .bkup file was missing. Then Cryptomator refused to open that vault.

I also noticed a small arrow at the edge of the left pane under settings; clicking it reveals a small pull-down menu where I could choose dav/webdav - before clicking that tiny arrow the menu was hidden.
I also noticed that the file ~/.Cryptomator/settings.json contains the setting
`"preferredGvfsScheme": "dav"` - I don’t know whether davs could also be used, but it seems to depend on the next setting whether dav or FUSE is used. There was also
`"preferredVolumeImpl": "FUSE"`
… which is the most important one.
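A quick way to check those two keys (file path and key names as quoted above; the sample file below is created only so the command is self-contained):

```shell
# In practice you would grep ~/.Cryptomator/settings.json directly;
# a sample file is generated here for illustration.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "preferredGvfsScheme": "dav",
  "preferredVolumeImpl": "FUSE"
}
EOF
grep -E '"preferred(GvfsScheme|VolumeImpl)"' "$cfg"
rm -f "$cfg"
```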

I would expect stable operation with rsync (also over SSH), both locally and remotely. Currently I could not sync a local folder with a vault without losses. I also tried simply copying manually with the PCManFM file manager. Since rsync provides delta transfer, it is the most suitable tool, and many hosting services offer SSH, WebDAV(S), and rsync.
WebDAV is the more useful of these, because you usually cannot give users SSH key access to a server (there is typically no jail for SSH), while hosting providers often also offer WebDAV, which can be locked to a certain directory.

Some excerpts from ~/.Cryptomator/cryptomator0.log.
This one appeared many (probably thousands of) times with different data chunks, leaving me with a ~32 MiB log file:

[Thread-13709] WARN  o.c.c.CryptoBasicFileAttributes - Wrong cipher text file size of file /home/user/webdisk/Pictures/d/Y3/BSB2TJ3DDM6HHV7SYD25KZ2UWAZE2Y/37WTTSF3ZGN27RHRSNXYE2GWYDA5TVRXUWZIW2FODCDLKOHHNQPZETLCXA======. Returning a file size of 0.
04:39:28.217 [Thread-13709] WARN  o.c.c.CryptoBasicFileAttributes - Thrown exception was:
java.lang.IllegalArgumentException: expected ciphertextSize to be positive, but was -88
        at com.google.common.base.Preconditions.checkArgument(Preconditions.java:202)
        at org.cryptomator.cryptolib.Cryptors.cleartextSize(Cryptors.java:42)
        at org.cryptomator.cryptofs.CryptoBasicFileAttributes.size(CryptoBasicFileAttributes.java:71)
        at org.cryptomator.frontend.fuse.ReadOnlyAdapter.getattr(ReadOnlyAdapter.java:125)
        at ru.serce.jnrfuse.AbstractFuseFS.lambda$init$1(AbstractFuseFS.java:96)
        at jnr.ffi.provider.jffi.NativeClosureProxy$$impl$$0.invoke(Unknown Source)

In all cases the ciphertextSize value was -88 in log file.

So - currently it seems that I still cannot use Cryptomator, no matter how much I would like to. I had hoped that at least locally I could encrypt my data and not keep a copy outside the vault to save disk space, but it seems that even locally data may become corrupted and unreadable. For now, Cryptomator 1.4.0 is not reliable enough to be the only place data is stored. On Linux I used LXLE 16.04 LTS (basically Lubuntu 16.04 LTS) with all current updates applied.

Found one possibly related issue:


#5

Yes, I also see those problems with 1.4.0 and FUSE. It all seemed to work ok on a small set of files, but when I tried rsyncing a larger number of files, I found:

  • performance is very slow (looks like way too many permission checks in the code)
  • some files are synced even when not changed

Reverting to WebDAV is not an option, as the timestamps rsync needs do not work there. So I agree - right now I do not have a working solution either.
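One possible mitigation when the mount’s timestamps cannot be trusted is to have rsync compare file contents instead of times with `-c`/`--checksum` (slower, since every file is read on both sides). A sketch with throwaway directories:

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo same > "$src/f"
echo same > "$dst/f"
touch -t 201801010000 "$dst/f"           # mtimes differ, content identical
sent=$(rsync -avc --itemize-changes "$src/" "$dst/" | grep -c '^>f' || true)
echo "files re-sent: $sent"              # 0: checksums matched, nothing transferred
rm -rf "$src" "$dst"
```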


#6

The timestamp issue should be solved with 1.4.
I’m using WebDAV, and my backup tool uses timestamp comparison; it works fine for me.


#7

OK, wrong argument on my part. I tried WebDAV before, but it is so full of issues that I needed to switch to FUSE. And now FUSE does not seem to be ready either …


#8

Cryptomator 1.4.2 is now released with several fixes and improvements. Are the timestamps still an issue?