Still performance issues in 1.4.6

Hi,
I’m using 1.4.6 on Linux with FUSE. I’m rsync’ing local files into the vault. The sync and encryption process for my backup has improved since 1.4.0, but it is still too slow (right now, three photo JPEGs per minute). I’ve done some digging in the source code of Cryptomator and cryptolib. It appears that extensive locking may be the culprit.

If I rsync a tree starting at directory ./a and the JPEGs are at nesting level 4 (i.e. ./a/b/c/d is a dir that contains the files), it is much slower than rsyncing directory ./d directly. It looks as if locks are acquired for a, then a/b, then a/b/c, then a/b/c/d, then the file is compared, then all the locks are released in reverse order, and then all the locks are acquired again for the next file in directory d. Thus, rsyncing whole trees is slow.
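To illustrate what I mean, here is a minimal, purely hypothetical sketch (made-up class and method names, not actual Cryptomator/cryptolib code) of the pattern I think I’m seeing: every single file access walks the path from the root down, taking and releasing one lock per nesting level, so deep trees pay the full locking cost once per file:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class PathLockDemo {
    private final Map<String, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String path) {
        return locks.computeIfAbsent(path, p -> new ReentrantReadWriteLock());
    }

    /** Simulates one stat/compare of a/b/c/d/photo.jpg: lock every path component, then unlock. */
    void accessFile(String... pathComponents) {
        Deque<ReentrantReadWriteLock> held = new ArrayDeque<>();
        StringBuilder path = new StringBuilder();
        for (String component : pathComponents) {
            path.append('/').append(component);
            ReentrantReadWriteLock lock = lockFor(path.toString());
            lock.readLock().lock();          // one acquisition per nesting level ...
            held.push(lock);
        }
        // ... file is compared/encrypted here ...
        while (!held.isEmpty()) {
            held.pop().readLock().unlock();  // ... and one release per level, for every single file
        }
    }

    public static void main(String[] args) {
        PathLockDemo demo = new PathLockDemo();
        // rsyncing the whole tree repeats the full lock walk for each JPEG in d:
        for (int i = 0; i < 3; i++) {
            demo.accessFile("a", "b", "c", "d", "photo" + i + ".jpg");
        }
    }
}
```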

The locking code is spread a bit around between cryptolib and Cryptomator. Without docs it is not easy to analyse. Any hints on how to tackle this issue? I’m happy to help if someone points me in the right direction …

Regards,
Arngast

Your observation is correct. We’ve implemented the locking scheme suggested in this paper (pdf).

Our tests with a profiler have shown very low lock contention so far, but we haven’t tested a scenario with write-only access on deep trees. In theory only the deepest directory level as well as the leaves need an exclusive write lock, but we might have made an implementation error here.

While adding a file f1 to any directory d1 on thread t1 doesn’t affect other directories dn, a second thread t2 is unable to add a second file f2 to the same directory d1 concurrently due to the write lock being held by t1. There is certainly some room for improvement.
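To make the idea concrete, here is a minimal sketch of that scheme (hypothetical class and method names, not our actual cryptolib code), assuming one ReentrantReadWriteLock per path: ancestors are only read-locked, while the deepest directory and the new file take an exclusive write lock, which is why a second add in the same directory has to wait:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TreeLockSketch {
    private final Map<String, ReentrantReadWriteLock> locks = new ConcurrentHashMap<>();

    private ReentrantReadWriteLock lockFor(String path) {
        return locks.computeIfAbsent(path, p -> new ReentrantReadWriteLock());
    }

    /** Adds a file: read locks on all ancestors, write locks on the parent dir and the new file. */
    void addFile(List<String> ancestors, String parentDir, String fileName) {
        ancestors.forEach(a -> lockFor(a).readLock().lock());
        lockFor(parentDir).writeLock().lock();                  // exclusive: blocks a concurrent add in the same dir
        lockFor(parentDir + "/" + fileName).writeLock().lock();
        try {
            // ... create and encrypt the file ...
        } finally {
            lockFor(parentDir + "/" + fileName).writeLock().unlock();
            lockFor(parentDir).writeLock().unlock();
            for (int i = ancestors.size() - 1; i >= 0; i--) { // release ancestors in reverse order
                lockFor(ancestors.get(i)).readLock().unlock();
            }
        }
    }
}
```

In this sketch, two threads adding files to different directories proceed in parallel, but two adds in the same directory serialize on the parent’s write lock, matching the behaviour described above.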

If you want to contribute any test results, this might be a good starting point:

I have the same issue and would really appreciate a solution.
Thanks in advance.

We will discuss technical details here:

We found out that locking is not the problem. However, we found a different bottleneck: