When FUSE-mounting the cloud drive in multi-threaded mode, there is a race condition when reading a file immediately after writing it.
Steps to reproduce:

1. Mount Amazon Cloud Drive with `acd_cli mount` (default is multi-threaded mode) at `/mnt/acd`
2. `cd /mnt/acd`
3. `echo "this is a test" > file ; cat file`
This will return nothing, because when cat reads the newly created file, the read runs in a new thread while the upload thread has not yet completed, so the file still appears empty.
If this procedure is repeated, but `-st` is passed to the mount command, then it works as expected and the command `echo "this is a test" > file ; cat file` returns "this is a test".
I ran into this problem using s3ql on top of an acd_cli FUSE mount. It often writes a file and then opens it to read almost immediately. But it gets an empty file and panics. If I mount acd_cli in single-threaded mode, everything works as expected, but I only ever get one upload (or download) thread, which is much slower than when multiple threads are used.
So, is there a way to fix the race condition while still generally using multiple threads in the acd_cli FUSE mount, but never using more than one thread for the same file? Another solution might be to take a lock on an open file once data is being written to it; while the lock is held, any read requests have to wait. Once the write is finished (signaled by closing the file handle that was used to write), the lock is released. This would also prevent two different threads from writing to the same file at the same time (I'm not sure whether that is a potential issue or whether a mechanism to guard against it already exists).
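The per-file locking idea above could look something like this minimal sketch. Everything here (the `PathLocks` class and its method names) is hypothetical illustration, not acd_cli's actual API: a FUSE handler would call `begin_write` from its write path, `end_write` from `release`, and route reads through `read`, so a reader on the same path blocks until the writing handle is closed.

```python
import threading
from collections import defaultdict

class PathLocks:
    """Per-path locks: readers on a path wait while a writer holds its lock."""

    def __init__(self):
        self._guard = threading.Lock()              # protects the dict itself
        self._locks = defaultdict(threading.Lock)   # one lock per path

    def _lock_for(self, path):
        with self._guard:
            return self._locks[path]

    def begin_write(self, path):
        # Called when a write starts; held until the writing handle is released.
        self._lock_for(path).acquire()

    def end_write(self, path):
        # Called from release() on the handle that wrote (upload finished).
        self._lock_for(path).release()

    def read(self, path, read_fn):
        # Readers block until any in-flight write on the same path finishes.
        with self._lock_for(path):
            return read_fn(path)
```

Different files use different locks, so unrelated transfers still run in parallel; only read-after-write on the same path serializes.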
@jlippuner have a look at my branch for pr #374. I ran into similar issues with ecryptfs and implemented a local file cache that sticks around as long as there are open file handles, effectively reference counting that file.
The caveat here is in your example:
echo "this is a test" > file ; cat file
The reference count will go to zero at the semicolon since the first file operation finished and the second hasn't started yet.
This example may work with my branch, but you can imagine the race condition:
echo "this is a test" > file & cat file
That said, if s3ql keeps a file open and reads/writes to it occasionally and closes it some time later, my PR is probably what you're looking for.
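The reference-counted cache described above can be sketched roughly as follows. The names (`RefCountedCache`, `open`, `release`) are assumptions for illustration and not the PR's actual code: each `open` on a path bumps a counter and returns the same local copy, and the cached copy is only dropped (which is where the upload would be flushed) when the last handle is released.

```python
import os
import tempfile
import threading

class RefCountedCache:
    """Local file cache that sticks around as long as any handle is open."""

    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}   # remote path -> [refcount, local temp path]

    def open(self, path):
        """Return a local cached copy for `path`, creating it on first open."""
        with self._lock:
            entry = self._entries.get(path)
            if entry is None:
                fd, tmp = tempfile.mkstemp()
                os.close(fd)
                entry = self._entries[path] = [0, tmp]
            entry[0] += 1
            return entry[1]

    def release(self, path):
        """Drop one reference; discard the local copy when none remain."""
        with self._lock:
            entry = self._entries[path]
            entry[0] -= 1
            if entry[0] == 0:
                # Last handle closed: this is the point where the upload
                # would be flushed before removing the local copy.
                os.unlink(entry[1])
                del self._entries[path]
            return entry[0]
```

This matches the caveat in the comment: with `;` between the two commands, the count drops to zero before `cat` opens the file, so the cache is gone; with overlapping handles (as s3ql produces), both operations see the same local copy.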