Cross-process locking in Linux

I am looking to make an application in Linux where only one instance of the application can run at a time. I want to make it robust, so that if an instance crashes it won't block all the other instances indefinitely. I would really appreciate some example code on how to do this (there is plenty of discussion on this topic on the web, but I couldn't find anything that actually worked when I tried it).

You can use the file-locking facilities that Linux provides. You haven't specified a language, but you will find this capability in pretty much every language in some form or another.
Here is a simple idea of how to do that in a C program. When the program starts, you can take an exclusive non-blocking lock on a whole file using the fcntl system call. When another instance of the application tries to start, it will get an error trying to lock the file, which tells it that an instance is already running.
Here is a small example of how to take the full-file lock using fcntl (the call provides facilities for byte-range locks, but when the length is 0, the whole file is locked).
struct flock lock_struct;
memset(&lock_struct, 0, sizeof(lock_struct));

lock_struct.l_type   = F_WRLCK;   /* exclusive (write) lock */
lock_struct.l_whence = SEEK_SET;  /* offsets are relative to the start of the file */
/* l_start and l_len stay 0 from the memset, so the whole file is locked */
lock_struct.l_pid    = getpid();  /* only used for F_GETLK; harmless here */

/* fd must be a descriptor obtained from open() with write access */
int ret = fcntl(fd, F_SETLK, &lock_struct);
if (ret == -1) { /* the lock is held by another instance (or fcntl failed) */ }
Please note that you need to open a file before you can place a lock on it, which means you need to have a file around to use for locking. It might be useful to put it somewhere where it won't cause any distraction/confusion for other applications.
When the process terminates, all locks that it has taken will be released, so nothing will be blocked.
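To show the whole flow, here is a minimal, self-contained sketch of a single-instance guard built on the fcntl call above. The lock file path (/tmp/myapp.lock) and the error handling are illustrative choices, not part of the original answer.
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Open (or create) the lock file; it exists only to carry the lock. */
    int fd = open("/tmp/myapp.lock", O_RDWR | O_CREAT, 0600);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    struct flock lock;
    memset(&lock, 0, sizeof(lock));
    lock.l_type = F_WRLCK;          /* exclusive lock on the whole file */
    lock.l_whence = SEEK_SET;

    if (fcntl(fd, F_SETLK, &lock) == -1) {
        fprintf(stderr, "another instance appears to be running\n");
        return EXIT_FAILURE;
    }

    /* ... the application's real work goes here; keep fd open the whole time ... */
    sleep(30);  /* placeholder so the lock can be observed from a second shell */

    return EXIT_SUCCESS;            /* the lock is released automatically on exit */
}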
This is just one of the ideas. I'm pretty sure there are other ways around.

The conventional UNIX way of doing this is with PID files.
When a process starts, it checks whether a pre-determined file - usually /var/run/<process_name>.pid - exists. If it is found, that's an indication that a process is already running, and this process quits.
If the file does not exist, this is the first process to run. It creates the file /var/run/<process_name>.pid and writes its PID into it. The process unlinks the file on exit.
Update:
To handle cases where a daemon has crashed and left the PID file behind, additional checks can be made during startup if a PID file is found:
Do a ps and ensure that a process with that PID doesn't exist
If a process with that PID does exist, ensure that it is a different process, by checking
the name in the said ps output, or
the command name in /proc/$PID/stat
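A minimal sketch of that startup check, assuming the PID file contains just the numeric PID. The helper name, the use of kill(pid, 0) to probe for process existence, and the comparison against /proc/<pid>/comm are illustrative choices.
#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Returns 1 if the PID file points at a live instance of `name`, 0 if it is stale or absent. */
static int pidfile_is_live(const char *pidfile, const char *name)
{
    FILE *f = fopen(pidfile, "r");
    long pid = 0;
    if (!f) return 0;                        /* no PID file: nothing is running */
    if (fscanf(f, "%ld", &pid) != 1) pid = 0;
    fclose(f);
    if (pid <= 0) return 0;                  /* garbage in the file: treat as stale */

    if (kill((pid_t)pid, 0) == -1 && errno == ESRCH)
        return 0;                            /* no such process: stale PID file */

    /* The PID exists, but it may have been reused; compare the process name. */
    char path[64], comm[256] = "";
    snprintf(path, sizeof(path), "/proc/%ld/comm", pid);
    FILE *c = fopen(path, "r");
    if (c) { if (fgets(comm, sizeof(comm), c)) comm[strcspn(comm, "\n")] = '\0'; fclose(c); }
    return strcmp(comm, name) == 0;
}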

Related

Replace a process bin file when it is running

I have a server program (compiled with g++) that is running. I changed some code and compiled a new binary. Without killing the running process, I used mv to overwrite the old binary with the newly built one.
After a while, the server process crashed. Is that related to my replacing the file?
My server is a multi-threaded, highly concurrent server. One crash was a segfault, the other a deadlock.
I printed all the parameters in the core dump file and passed exactly the same values to the function that crashed, but it ran fine.
I also carefully examined all the thread info in the deadlock core dump, and I cannot see how it could have deadlocked.
So I suspect the replacement caused the strange behaviour.
According to this question, if a swap does happen, it can indeed produce strange behaviour.
For a simple standard program, even if the binary is currently open (or mapped) by the running process, moving a new file over it only replaces the directory entry; the original inode remains untouched and the running process keeps using it.
But for long-running servers, many things can happen: some fork new processes, and occasionally some even exec a fresh copy of themselves. In that case you could have different versions running side by side, which may or may not be supported depending on the change.
Said differently, without more info on what the server program is, how it is designed to run, and what the change was, the only answer I can give is maybe.
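To make the mv point concrete: on the same filesystem, mv boils down to rename(2), which atomically swaps the directory entry while any process that already has the old binary open or mapped keeps using the old inode. A rough sketch of an in-place upgrade, with purely illustrative paths:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Put the new binary next to the old one, on the same filesystem, then    */
    /* atomically swap it in: already-running processes keep the old inode,    */
    /* while new invocations pick up the new file.                             */
    if (rename("/opt/app/server.new", "/opt/app/server") == -1) {
        perror("rename");
        return EXIT_FAILURE;
    }
    return EXIT_SUCCESS;
}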
If you can make sure that you replaced ONLY the bin file, and the bin file isn't used by any other process (such as some daemon), then the crash shouldn't be related to your replace action.

How to get the last process that modified a particular file?

Hi,
Say I have a file called something.txt. I would like to find the most recent program to modify it, specifically the full path to said program (e.g. /usr/bin/nano). I only need to worry about files modified while my program is running, so I can add an event listener at program startup and find out which program modified the file from then on.
Thanks!
auditd on Linux can log file modifications, including which program made them.
See the following URI: xmodulo.com/how-to-monitor-file-access-on-linux.html
Something like this generally isn't going to be possible for arbitrary processes. If these aren't arbitrary processes, then you could use some sort of network bus (e.g. redis) to publish "write" messages. Otherwise your only other bet would be to implement your own filesystem using FUSE. Even with FUSE though, you may not always have access to the pid depending on who/what is writing to the file and the security setup of your OS.

Creating a new "internal" process?

I'm writing a DLL (in Visual C++) and I recently decided that I need to move stuff that currently happens in threads into its own process. This is because I want to support multiple instances of the DLL being loaded and running. However, they all need to access the same group of resources (I/O buffers to a COM port) that needs to be autonomously monitored as long as there is at least one instance of the DLL running.
It seems I need to use CreateProcess(), but I'm unclear on how I should use the lpApplicationName argument. In the examples I've seen, the name of an existing program gets passed, but that isn't what I imagine I need to do. I expected to be able to start a process by specifying a function, much like with CreateThread(). The process doesn't need to be compiled and output as its own executable, does it? It definitely shouldn't be used by anything other than my DLL. Thanks.
EDIT: Okay, so if all CreateProcess() can do is start a pre-existing program, how can I get this to work? If the following happens:
1. Process loads the DLL
2. DLL starts port monitoring threads
3. Second process loads the DLL
4. Second DLL establishes some IPC to access the same data as the first DLL
5. First DLL is about to exit, and terminates the monitoring threads
6. Second DLL starts its own monitoring threads and continues
Doing 5 and 6 seems (especially with my implementation) like a clunky way of doing things, compared to having a monitor that I never have to terminate and restart.
EDIT: The more I think about this, the more I like the idea of making a separate executable, but if anyone can think of a more "elegant" method, I'd still like to know.
You can't do that. On *nix you could fork and then call whatever function you want, but CreateProcess doesn't work that way. The only thing CreateProcess can do is launch a new process with execution starting at the entry point of an on-disk executable.
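For reference, here is a minimal sketch of launching a separate helper executable from the DLL with CreateProcess. The helper name monitor.exe is an illustrative assumption; in practice it would be the small monitoring program built alongside the DLL.
#include <windows.h>

/* Launch the monitoring helper as its own process. */
static BOOL start_monitor(void)
{
    STARTUPINFOA si;
    PROCESS_INFORMATION pi;
    char cmdline[] = "monitor.exe";   /* modifiable buffer, as CreateProcess expects */

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    /* lpApplicationName may be NULL when the command line names the program. */
    if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return FALSE;

    /* The handles aren't needed here; close them so the kernel objects can be freed. */
    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return TRUE;
}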

How to create temporary files on linux that will automatically clean up after themselves no matter what?

I want to create a temporary file on Linux while making sure that the file will disappear after my program has terminated, even if it got killed or someone performs a hard reboot at the wrong moment. Does tmpfile() handle all this for me?
You seem preoccupied with the idea that files might somehow get left behind because of some race condition, but I don't see an explanation of why this is a concern.
"A race condition occurs when a program doesn't work as it's supposed to because of an unexpected ordering of events that produces contention over the same resource."
I was assuming from your comments on other answers that your concern was specifically about a deadlock, which is a result of trying to remediate a race condition (contention over the shared resource). It is still not clear what your concern is; calling tmpfile() and having the program exit abnormally before that function gets to call unlink() is the least of your worries if your application is really that fragile.
Given that there isn't any mention of concurrency, threading, or other processes sharing the file descriptor to this temp file, I still don't see the possibility of a race condition; maybe an incomplete logical transaction, but that can be detected and cleaned up.
The correct way to make absolutely sure that any allocated file-system resources are cleaned up is to do it not only on exit of the application but also on start-up. All my server code makes sure that everything from a previous run is cleaned up before it starts and makes itself available.
Put your temp files in a sub-directory of /tmp and make sure your application cleans this sub-directory on startup and on normal shutdown. You can wrap your app's start-up with a shell script that detects an abnormal (kill -9) shutdown based on PID existence and also does clean-up activities.
If you don't want to use tmpfile(), you can unlink() your file immediately after creating it. It will no longer have a name in the filesystem, but it stays open, and its storage remains allocated, until it is closed.
But on a hard reboot, an fsck might be needed in order to recover the space. As this is always the case, it is no special drawback of this approach.
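A minimal sketch of that create-then-unlink pattern; mkstemp() is used here (an illustrative choice) to avoid races over the file name, and the /tmp template is likewise just an example.
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    char path[] = "/tmp/myapp-XXXXXX";   /* template filled in by mkstemp() */
    int fd = mkstemp(path);              /* creates and opens a unique file */
    if (fd == -1) { perror("mkstemp"); return EXIT_FAILURE; }

    /* Remove the name immediately: the open descriptor keeps the data alive, */
    /* and the kernel frees it when the descriptor is closed or the process   */
    /* dies for any reason.                                                   */
    if (unlink(path) == -1) perror("unlink");

    /* ... use fd as scratch space via read()/write()/lseek() ... */

    close(fd);
    return EXIT_SUCCESS;
}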
According to the tmpfile() man page:
The file will be automatically deleted when it is closed or the
program terminates.
I have not tested it, but it seems it should do what you want.
Moreover:
The default location, if TMPDIR is not set, is /tmp.
So when a reboot happens, /tmp will normally be emptied.
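For completeness, a minimal tmpfile() usage sketch:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* tmpfile() creates an anonymous temporary file that is deleted          */
    /* automatically when it is closed or the program terminates.             */
    FILE *tmp = tmpfile();
    if (!tmp) { perror("tmpfile"); return EXIT_FAILURE; }

    fputs("scratch data\n", tmp);
    rewind(tmp);
    /* ... read the data back, use the stream as scratch space, etc. ... */

    fclose(tmp);   /* the storage is released here at the latest */
    return EXIT_SUCCESS;
}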
EDIT: Yes
I checked the tmpfile() source, and it does indeed use glglgl's trick: it unlinks the file immediately.
Original:
I would say no. Getting killed should work, but I would assume it can happen that after a hard reboot (e.g. due to a power outage) the file is still there. That depends on your Linux distribution and the settings used.
If the temp file is created on a ramdisk, it is gone (there are distributions out there that, for example, use a RAM-based tmpfs for temporary files).
Or if you use an environment that has a certain policy regarding /tmp, it could also be gone (maybe not instantly, but there are often policies such as removing all files in /tmp that have not been accessed within one month). But the file could also be on a standard file system where no such rules are enforced, in which case it would stay.
The customary approach is to set up a signal handler to clean up if the program is interrupted. This will not handle kill -9 or a physical reboot, which can't be trapped. Create temporary files in /tmp, which is normally cleaned out when the system boots. All that remains then is to teach people not to use kill -9 when they don't need to, but that appears to be an uphill battle.
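A minimal sketch of that signal-handler approach; the file name and the set of signals caught are illustrative choices (SIGKILL, of course, cannot be caught).
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static const char *g_tmp_path = "/tmp/myapp.scratch";  /* illustrative name */

static void cleanup(void)
{
    unlink(g_tmp_path);          /* ignore errors: the file may already be gone */
}

static void on_signal(int sig)
{
    cleanup();                   /* unlink() is async-signal-safe */
    _exit(128 + sig);
}

int main(void)
{
    signal(SIGINT,  on_signal);  /* Ctrl-C */
    signal(SIGTERM, on_signal);  /* plain kill */
    signal(SIGHUP,  on_signal);  /* terminal hangup */
    atexit(cleanup);             /* normal return / exit() */

    FILE *f = fopen(g_tmp_path, "w");
    if (!f) { perror("fopen"); return EXIT_FAILURE; }
    /* ... do the real work with the temp file ... */
    fclose(f);

    return EXIT_SUCCESS;         /* the atexit handler removes the file */
}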
On Linux, the mktemp command also works.

Is it good practice to use mkdir as file-based locking on linux?

I wanted to quickly implement some sort of locking in a Perl program on Linux which would be shareable between different processes.
So I used mkdir as an atomic operation, which returns 1 if the directory did not exist (and has just been created) and 0 if it already exists. I remove the directory right after the critical section.
Now, it was pointed out to me that this is not good practice in general (independently of the language). I think it's quite OK, but I would like to ask your opinion.
edit:
To show an example, my code looked something like this:
while (!mkdir "lock_dir") { sleep 1; }   # wait some time, then try again
# ... critical section ...
rmdir "lock_dir";
IMHO this is a very bad practice. What if the perl script which created the lock directory somehow got killed during the critical section? Another perl script waiting for the lock dir to be removed will wait forever, because it won't get removed by the script which originally created it.
To use safe locking, use flock() on a lock file (see perldoc -f flock).
This is fine until an unexpected failure (e.g. a program crash or power failure) happens while the directory exists.
After that, the program will never run again, because the lock appears to be held forever (assuming the directory is on a persistent filesystem).
Normally I'd use flock with LOCK_EX instead.
Open a file for reading and writing, creating it if it doesn't exist. Then take the exclusive lock; if that fails (with LOCK_NB) then some other process has it locked.
After you've got the lock, you need to keep the file open.
The advantage of this approach is that if the process dies unexpectedly (for example, it crashes, is killed, or the machine fails), the lock is automatically released.
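Here is a minimal sketch of that flock() approach in C; the lock file path is an illustrative choice, and the same LOCK_EX / LOCK_NB constants are available in Perl via the Fcntl module.
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/file.h>
#include <unistd.h>

int main(void)
{
    /* Open (or create) the lock file for reading and writing. */
    int fd = open("/tmp/myjob.lock", O_RDWR | O_CREAT, 0600);
    if (fd == -1) { perror("open"); return EXIT_FAILURE; }

    /* Exclusive and non-blocking: fail immediately if another process holds the lock. */
    if (flock(fd, LOCK_EX | LOCK_NB) == -1) {
        fprintf(stderr, "another process holds the lock\n");
        return EXIT_FAILURE;
    }

    /* ... critical section: keep fd open for its whole duration ... */

    /* The lock goes away when fd is closed or the process dies, however it dies. */
    close(fd);
    return EXIT_SUCCESS;
}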
