First for the sake of clarity, let me detail the setup:
I have a Windows 10 machine where Cryptomator is installed, using Dokany.
On this Windows machine, a network share called “Vault” is mounted as “P:”
The share is set up on a Debian server and is accessible at 10.10.20.11.
Only a specific user can access the share/directory (\\10.10.20.11\Vault\) and has read/write/execute permissions on it.
The issue:
When I try to create a Cryptomator vault in the share, it is not possible because Cryptomator reports that there is no write permission to the location.
The path would be as such: \\10.10.20.11\Vault\CryptoVaultChosenName
Here, “CryptoVaultChosenName” is just a placeholder name for the Cryptomator vault (can be anything).
However, when I create a Cryptomator vault on a local disk and then move the vault to the share afterward, it works perfectly fine. The files transfer, integrity seems good, and after that I can lock and unlock the vault from Cryptomator on the Windows client. The vault gets mounted in Windows File Explorer without any problem and I can access and operate on the files (read/copy…).
So I am not sure if this is a limitation or something very specific to my setup, but I would appreciate being able to create the Cryptomator vaults directly on the shares, instead of having to create several locally and then move them one by one.
For now my concern is being prevented from accessing the files in the future if this workaround stops working.
Then I would have to copy all the vaults, several TB, back to a Windows machine to be able to read the files. Network speed is not a concern, but the client's available storage is.
For filesystem permissions, Cryptomator relies on the JDK, Java's standard toolkit. It checks whether the parent directory of the vault is writable, because it needs to create a directory in there.
You can test the reported filesystem access by downloading a recent version of the JDK (for example AdoptOpenJDK 16.0.1), unzipping it, and running the jshell command in a terminal (it is located in the .\ [...]\bin directory).
Enter
var p = Path.of("P:\\"); Files.isWritable(p);
and if the last output is true, then the location is considered writable, otherwise not.
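Since vault creation actually needs to create a folder in that location, you can also attempt a real write from jshell. This is just a sketch; cryptomator-write-test is an arbitrary throwaway folder name, and P:\ is the example drive from above:
var parent = Path.of("P:\\");
var test = parent.resolve("cryptomator-write-test");
Files.createDirectory(test);   // throws AccessDeniedException if the share really refuses writes
Files.delete(test);            // remove the test folder again
If createDirectory succeeds while Files.isWritable reports false, the problem lies in the writability check rather than in the actual permissions.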
Thanks for checking. What I realized is that even if I don’t mount the directory as a “network drive” and just map it as a “network location” (which is basically the same, but does not assign a drive letter and does not retrieve storage info), it does not work either.
Note that the user in question has no permission to write directly on \\10.10.20.11; he can only do so in \\10.10.20.11\Vault\.
Here is what I get with the JDK:
jshell> var p = Path.of("\\10.10.20.11"); Files.isWritable(p);
p ==> \10.10.20.11
$13 ==> false
jshell> var p = Path.of("\\10.10.20.11\\Vault"); Files.isWritable(p);
p ==> \10.10.20.11\Vault
$6 ==> false
I think it’s because I don’t specify the user info and the code is too simple?
That is probably exactly where the problem lies in the Cryptomator app. Maybe the Windows application uses the currently logged-in Windows user instead of the specific user/credentials that grant access to the folder when it performs the read/write test that allows you to create the vault?
As you can see in the screenshot below, there is nothing wrong with the permissions. The encryption/decryption/CryptomatorFS part works fine; it’s just that Cryptomator prevents me from creating a new vault directly in the directory during the vault creation process.
So I have to create the vault elsewhere and move it by hand into the directory; then it works fine.
I moved about 1.5TB of data in one of the vaults.
I’m not knowledgeable in Java, but willing to try to help pinpoint the issue with some guidance.
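One simple thing that can be checked from jshell, to help rule out the wrong-account theory, is which Windows account the JVM process itself reports (this only shows the local login, not the credentials stored for the share):
System.getProperty("user.name");
If that name is not the account that was granted access to \\10.10.20.11\Vault\, the writability test could indeed be evaluated against the wrong user.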
When you are using jshell, you are actually writing Java code. To use a backslash in a string, it must be escaped with another backslash. Hence, to write two backslashes you must type four:
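For the share discussed above, that would be, for example:
var p = Path.of("\\\\10.10.20.11\\Vault"); Files.isWritable(p);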
Thanks for the tip, my bad. I understood that a single backslash is used to escape, but for some reason my brain skipped that there are two actual backslashes in a network path.
jshell> var p = Path.of("\\\\10.10.20.11\\Vault"); Files.isWritable(p);
p ==> \\10.10.20.11\Vault\
$3 ==> false
jshell> var p = Path.of("\\\\10.10.20.11"); Files.isWritable(p);
| Exception java.nio.file.InvalidPathException: UNC path is missing sharename: \\10.10.20.11
| at WindowsPathParser.parse (WindowsPathParser.java:118)
| at WindowsPathParser.parse (WindowsPathParser.java:77)
| at WindowsPath.parse (WindowsPath.java:92)
| at WindowsFileSystem.getPath (WindowsFileSystem.java:230)
| at Path.of (Path.java:147)
| at do_it$Aux (#4:1)
| at (#4:1)
Even when the vault/CryptomatorFS is mounted from the app and working fine, I do get a “false” from this test:
jshell> var p = Path.of("N:\\"); Files.isWritable(p);
p ==> N:\
$12 ==> false
As you can see below, the user has full control:
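For what it’s worth, the ACL that Windows reports to Java, which is roughly what the writability check consults, can also be inspected from jshell. This is just a sketch, using the share path as an example; getFileAttributeView returns null if the filesystem does not expose ACLs at all:
import java.nio.file.attribute.*;
var view = Files.getFileAttributeView(Path.of("\\\\10.10.20.11\\Vault\\"), AclFileAttributeView.class);
if (view != null) view.getAcl().forEach(System.out::println);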
I’m experiencing a very similar problem but with a Synology NAS drive. I have a vault there already which I can unlock and re-lock fine. But when I try to create a new vault on the NAS I get that same ‘no write permission’ message. I haven’t tried creating it on my hard drive and then moving it to the NAS. (But I will.) Wondering if I should try an earlier version of Cryptomator.
It sounds like the local login user differs from the “access allowed” user. Is this the case?
If yes: if you run the terminal with jshell/Cryptomator as this specific user, does it then work?
On my setup, where I only need credentials for the NAS storage (and it does not have to be a special user), creating a vault works.
I’m not sure how Windows handles stored credentials and such.
The only thing I know is that it does not matter if I map a drive letter, a domain name with host redirection, or a path with the IP. In all cases there is that same “permission” error.
As the user in question only exists on the remote server, I’m pretty sure I cannot start a Windows command prompt/jshell as that non-local user?
EDIT: Note that I do not use Samba, maybe that’s the reason?
Also, I would like to restate why I am trying to figure this out.
I really believe Cryptomator is neat, and I am convinced the “normal” behaviour would be for Cryptomator to let me create the vaults directly in the folder, instead of 1) creating the vault on a local drive, 2) moving that vault to the remote directory, and 3) changing the path in Cryptomator.
So it would be nice if we could discover why this happens in this case; then you could eventually fix it, if it is something caused by the application itself. That would be a nice improvement to Cryptomator. I can’t be the only one with this issue.
Same issue here, with Koofr drive.
I was able to create Cryptomator vaults through Cyberduck.
But now I am not able to delete or write files there (me too, after updating to 1.5.16), so… I came here.
Same false results from jshell. I’ve even tried deleting the Windows credentials associated with Cryptomator, but still no luck.
I just tested and it’s the same here: Cryptomator cannot create a vault on any of my SMB network shares, even though I have proper access through Windows and my Android devices. The Cryptomator process is running properly under the right user name.
I did not check whether this could be the issue: I am logged in with a Windows PIN, while the underlying user password is more complex and corresponds to the SMB user credentials.
Maybe other users having the same issue can confirm whether they also use a PIN, or try disabling the PIN and logging in with the password!
It would be weird though, since the TrueNAS credentials are properly populated in Windows Credential Manager. It is only the vault creation in Cryptomator that fails with a write-access error. Maybe the Dokany driver or some underlying system components are not running with the proper user credentials?
Thanks for the message, MarcoB. The fact that there is a 1.6 in the works suggests that maybe there is some awareness of this. It is taking a while though.
Yup. Meanwhile I had to “move back” to using Cyberduck with Cryptomator, which works (wondering which Cryptomator version they are using: from the source pom.xml it seems to be 1.3.0!).
But hey, they are both useful pieces of software and I’ve actually contributed to both, so I’m fine for now.