I'm trying to lock a file, and obviously there is something I'm missing, because even though the file appears to be locked, I can still access and edit it using the vim editor.
Locking the file:
flock -x lock2.txt sleep 30
Checking with lslocks:
COMMAND PID TYPE SIZE MODE M START END PATH
flock 5417 FLOCK 10B WRITE 0 0 0 /home/lock2.txt
But I am still able to access and edit it from a different terminal (a different process, I believe).
I tried the file descriptor solution from
Linux flock, how to "just" lock a file?
but the result is still the same.
The flock command uses flock(2) to lock the file. As the documentation says:
flock() places advisory locks only; given suitable permissions on a file, a process is free to ignore the use of flock() and perform I/O on the file.
In general, applications don't check advisory locks. They're intended for use within a specific application to coordinate between multiple processes.
The flock command is most often used by a single application just to prevent itself from running multiple times concurrently.
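To make the "cooperate" part concrete, here is a minimal C sketch (reusing lock2.txt from the question) of a process that voluntarily checks the lock with a non-blocking flock(2) before writing; an editor like vim simply never makes this call, which is why it can edit the file anyway:
#include <stdio.h>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    int fd = open("lock2.txt", O_RDWR);  /* file from the question */
    if (fd == -1) { perror("open"); return 1; }

    /* Voluntarily check the advisory lock before writing. While
       `flock -x lock2.txt sleep 30` runs, this fails with EWOULDBLOCK. */
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        perror("flock");  /* a cooperating process holds the lock */
        close(fd);
        return 1;
    }

    /* ... lock held: safe to write, as long as everyone else checks too ... */

    flock(fd, LOCK_UN);  /* also released automatically on close/exit */
    close(fd);
    return 0;
}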
Related
Is there any way to open a file with non-sharing exclusive read-write access?
A file change event from fs.watch does not necessarily mean that the file has been completely written. In the case of most Node-based processes, more chunks are still coming down the stream, or the data might just not have been flushed yet.
fs.open lets a file that is already open and being streamed to be opened in write mode without an error. One could introduce a timeout delay, but that's too brittle and arbitrary.
On Windows, one would be able to call CreateFile with FILE_SHARE_NONE from C; I can't quite recall what the equivalent is on Linux (locks are advisory there, if I remember correctly), and I don't know if OS X has an equivalent, POSIX or otherwise.
You can use #ronomon/opened to check whether a file is open in another process, i.e. whether any applications have open handles or file descriptors to the file.
It won't tell you which applications have the file open, only that the file is open in other applications.
It works on Windows, macOS and Linux and requires privileges only on Linux.
It uses a native binding on Windows to open the file with exclusive sharing mode to detect any sharing violations due to other processes with open handles.
On macOS and Linux it wraps lsof.
Compared to an alternative such as flock: as far as I understand, flock is only advisory, so it works only if all processes cooperate in checking the lock, which is usually not the case when the processes are independent.
See the flock function in the fs-ext package.
I'm trying to implement a file-based exclusive lock for a daemon, applied on a per-file basis (no inter-thread or intra-process locking). I know it's a common problem with some established conventions, but I am having trouble getting it right, or understanding the problem completely.
I've looked at other answers, and currently I'm using something very close to this: https://stackoverflow.com/a/1643134, i.e. using flock to create an advisory lock on program start. However, this doesn't do what I want; the call to flock always succeeds.
I'm not sure if my code is incorrect, or if I've misunderstood, and flock isn't meant to work across separate processes (?).
This is C++11 code, tested on Linux 2.6.32 (CentOS VM) and 3.12.9 (Arch), both on ext4 filesystems.
It turns out I was closing the file descriptor at the end of the acquire routine, which releases the lock. Whoops.
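For anyone who hits the same thing, a minimal sketch of the fix (acquire_lock and lock_fd are names invented here): a flock(2) lock lives on the open file description, so the descriptor must stay open for as long as the lock is held:
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

static int lock_fd = -1;  /* kept open for the daemon's lifetime */

int acquire_lock(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT, 0644);
    if (fd == -1)
        return -1;
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        close(fd);   /* another instance already holds the lock */
        return -1;
    }
    /* The bug: a close(fd) here would silently release the lock,
       making every later acquire succeed. Keep the descriptor open. */
    lock_fd = fd;
    return 0;
}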
I'm working on a multithreaded application where multiple threads may want exclusive access to the same file. I'm looking for a way of serializing these operations. I was planning to use flock, lockf, or fcntl locking. However, it appears that with these methods an attempt by a second thread to lock a file will be granted even when a first thread already owns the lock, because the two threads are in the same process. This is according to the manpages for flock and fcntl (and I guess on Linux lockf is implemented with fcntl), and it is also supported by this other question. So, are there other ways of locking a file in Linux that work at a thread level instead of a process level?
Some alternatives that I came up with, but do not like, are:
1) Use a lockfile (xxx.lock) opened with the O_CREAT | O_EXCL flags; a sketch follows after this list. The open call will succeed in only one thread if there is contention. The problem is that the other threads then have to spin on the call until they acquire the lock, meaning I have to _yield() or sleep(), which makes me think this is not a great option.
2) Keep a mutex'ed list of all open files. When a thread wants to open or close a file, it has to lock the list first. When opening a file, it searches the list to see if the file is already open. This sounds particularly inefficient, because it requires a significant amount of work even when the file is not owned by anyone yet.
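For reference, a minimal sketch of alternative 1 (try_lockfile is a name invented here), which also shows where the spinning objection comes from:
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns 0 if this thread created the lockfile (i.e. won the lock),
   -1 otherwise; on EEXIST the caller must spin or sleep and retry. */
int try_lockfile(const char *lockpath)
{
    int fd = open(lockpath, O_CREAT | O_EXCL | O_WRONLY, 0644);
    if (fd == -1)
        return -1;      /* EEXIST means another thread holds the lock */
    close(fd);          /* the lockfile's existence is the lock */
    return 0;           /* release later with unlink(lockpath) */
}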
Are there other ways of doing this?
Edit:
I just discovered this text in my system's manpage, which isn't in the online man pages:
If a process uses open(2) (or similar) to obtain more than one descriptor for the same file, these descriptors are treated independently by flock(). An attempt to lock the file using one of these file descriptors may be denied by a lock that the calling process has already placed via another descriptor.
I'm not happy about the words "may be denied"; I'd prefer "will be denied", but I guess it's time to test that.
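Here is a small test sketch for exactly that case: the same process opens the file twice and tries to flock both descriptors. On Linux, the second non-blocking attempt should fail with EWOULDBLOCK, since each open(2) creates a separate open file description:
#include <stdio.h>
#include <fcntl.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    /* Two separate open(2) calls: independent descriptors per the manpage */
    int fd1 = open("testfile", O_RDWR | O_CREAT, 0644);
    int fd2 = open("testfile", O_RDWR);
    if (fd1 == -1 || fd2 == -1) { perror("open"); return 1; }

    if (flock(fd1, LOCK_EX | LOCK_NB) == -1)
        perror("first flock");

    if (flock(fd2, LOCK_EX | LOCK_NB) == -1)
        perror("second flock");   /* expected: EWOULDBLOCK on Linux */
    else
        printf("second lock granted\n");

    close(fd1);
    close(fd2);
    return 0;
}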
While stracing some Linux daemons (e.g. sendmail) I noticed that some of them call close() on a number of descriptors (usually ranging from 3 to 255) right at the beginning. Is this done on purpose, or is it some sort of side effect of doing something else?
It is usually done as part of making a process a daemon.
All file descriptors are closed so that the long-running daemon does not unnecessarily hold any resources. For example, if a daemon were to inherit an open file and did not close it, then the file could not be deleted (the storage for it would remain allocated until the last close), and the filesystem that the file is on could not be unmounted.
Daemonizing a process also involves a number of other actions, but those are beyond the scope of this question.
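A minimal sketch of that close-everything step (close_inherited_fds is a name invented here; full daemonization code typically also redirects descriptors 0-2 to /dev/null, forks, and calls setsid):
#include <unistd.h>

void close_inherited_fds(void)
{
    long max = sysconf(_SC_OPEN_MAX);  /* upper bound on descriptor numbers */
    if (max < 0)
        max = 256;                     /* fallback matching the observed 3..255 */
    for (long fd = 3; fd < max; fd++)  /* keep stdin/stdout/stderr for now */
        close((int)fd);
}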
If one of my processes opens a file, let's say for reading only, does the OS guarantee that no other process will write to it as I'm reading, perhaps leaving the reading process with the first part of the old file version and the second part of the newer version, making data integrity questionable?
I am not talking about pipes, which have no seek, but about regular files with a seek option (at least when opened by only one process).
No, other processes can change the file contents while you are reading it. Try running "man fcntl" and ignore the section on "advisory" locks; those are "optional" locks that processes only have to pay attention to if they want to. Instead, look for the (alas, non-POSIX) "mandatory" locks; those are the ones that will protect you from other programs. Try a read lock, as sketched below.
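A minimal sketch of such a read lock using fcntl(2) (read_lock_whole_file is a name invented here). Note that on Linux the lock is mandatory only if the filesystem is mounted with the mand option and the file has its setgid bit set with group-execute cleared; otherwise it remains advisory:
#include <fcntl.h>
#include <unistd.h>

int read_lock_whole_file(int fd)
{
    struct flock fl = {0};
    fl.l_type   = F_RDLCK;   /* shared read lock */
    fl.l_whence = SEEK_SET;
    fl.l_start  = 0;
    fl.l_len    = 0;         /* 0 means "through end of file" */
    return fcntl(fd, F_SETLKW, &fl);  /* block until the lock is granted */
}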
No, if you open a file, other processes can write to it, unless you use a lock.
On Linux, you can add an advisory lock on a file with:
#include <sys/file.h>
...
flock(file_descriptor, LOCK_EX); // apply an advisory exclusive lock; blocks until granted
Any process which can open the file for writing may write to it. Writes can happen concurrently with your own writes, resulting in (potentially) indeterminate states.
It is your responsibility as an application writer to ensure that Bad Things don't happen. In my opinion mandatory locking is not a good idea.
A better idea is not to grant write access to processes which you don't want to write to the file.
If several processes open a file, they will have independent file pointers, so they can seek() and not affect one another.
If a file is opened by a threaded program (or, more generally, by a task which shares its file descriptors with another), the file pointer is also shared, so you need to use another method of accessing the file to avoid race conditions causing chaos: normally pread, pwrite, or the scatter/gather functions readv and writev.
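A minimal sketch of that positional-I/O approach (read_record and write_record are names invented here):
#include <sys/types.h>
#include <unistd.h>

/* pread/pwrite take an explicit offset and do not move the shared
   file pointer, so threads using one descriptor don't race on lseek(). */
ssize_t read_record(int fd, void *buf, size_t len, off_t offset)
{
    return pread(fd, buf, len, offset);
}

ssize_t write_record(int fd, const void *buf, size_t len, off_t offset)
{
    return pwrite(fd, buf, len, offset);
}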