I have an existing lock file that is sometimes used by other processes. I want to temporarily acquire this lock so that other programs that use it have to wait until I unlock it, run a few commands, and then release the lock. How do I do this? I thought it would be easy, but for some reason I cannot figure it out. I understand that I most likely need to use flock for this, but what arguments should I use in this scenario? flock seems to always need a command or a second file to work, but in this situation there doesn't seem to be one.
Context: A bash script I am using is running into race conditions around a particular lock file (/var/lib/apt/lists/lock to be specific), and to test my solution I want to reliably be able to lock this file so I can check if my changes to the script work.
You have an example in the flock(1) man page. You should execute the commands in a subshell:
(
  flock -n 9 || exit 1    # try to take an exclusive lock on FD 9; fail immediately if it is already held
  # ... commands that must hold the lock ...
) 9>/var/lib/apt/lists/lock
With this form, the lock is released when the subshell exits.
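If the goal is just to hold /var/lib/apt/lists/lock for a while so that the other script can be observed waiting on it, a minimal sketch (run as root; the 60-second hold and the echo messages are placeholders) could look like this:
#!/bin/bash
# Hold the lock for a while so anything else that flocks this file has to wait.
(
  flock 9 || exit 1     # block until the exclusive lock on FD 9 is ours
  echo "lock acquired; holding for 60 seconds"
  sleep 60              # replace with the commands that need to run under the lock
) 9>/var/lib/apt/lists/lock
echo "lock released"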
I'm trying to determine whether it's possible to distinguish between two separate handles on the same file, and a single handle with two file descriptors pointing to it, using metadata from procfs.
Case 1: Two File Handles
# setup
exec 3>test.lck
exec 4>test.lck
# usage
flock -x 3 # this grabs an exclusive lock
flock -s 4 # this blocks
echo "This code is never reached"
Case 2: One Handle, Two FDs
# setup
exec 3>test.lck
exec 4>&3
# usage
flock -x 3 # this grabs an exclusive lock
flock -s 4 # this converts that lock to a shared lock
echo "This code gets run"
If I'm inspecting a system's state from userland after the "setup" stage has finished and before the "usage", and I want to distinguish between those two cases, is the necessary metadata available? If not, what's the best way to expose it? (Is adding kernelspace pointers to /proc/*/fdinfo a reasonable action, which upstream is likely to accept as a patch?)
I'm unaware of anything exposing this in proc as it is. Figuring this out may be useful when debugging some crap, but then you can just inspect the state with the kernel debugger or a systemtap script.
From your question it seems you want to achieve this in a way that can be easily scripted, and here I have to ask what the real problem is.
I have no idea if the Linux folks would be interested in exposing this. One problem is that exposing a pointer to the file object adds another infoleak and would thus likely be plugged in the future. Other means would require numbering all file objects, and that's not going to happen. Regardless, you would be asked for a justification, in the same way I asked you above.
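For reference, this is roughly what userland can already see after the setup stage; the sketch below is not from the original answer and assumes the descriptors were opened in the current shell ($$). The output is identical in both cases, because /proc/<pid>/fdinfo only exposes pos, flags, mnt_id (and, on newer kernels, ino), none of which tells a shared struct file apart from two independent opens:
# In either case, both descriptors resolve to the same path...
ls -l /proc/$$/fd/3 /proc/$$/fd/4
# ...and show the same pos/flags/mnt_id, so the two setups are indistinguishable here.
cat /proc/$$/fdinfo/3 /proc/$$/fdinfo/4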
I have looked everywhere for a solution to this and never got an answer.
How do I make multiple crontabs? I am currently running scripts that interfere with each other, and I am 110% sure that if I can run multiple crontabs I will solve this issue. (Yeah, I tried everything.)
Can I perhaps make multiple users that each have their own crontab? And will those crontabs run at the same time?
Thanks!
There's no exclusivity inherent in multiple crontabs. If two separate crontabs each say to run a script every Monday at 4am, then cron will run both scripts more or less at the same time, on Mondays at 4am.
At a guess, you need locking, so that only one or the other of your interfering scripts runs at a time. flock(1) is a very convenient tool for use in shell scripts.
#!/bin/bash
exec 3> /path/to/lock   # open (and create if needed) the lock file on FD 3
flock 3                 # block until we hold an exclusive lock on FD 3
# something useful here
exit 0                  # the lock is released when the script exits and FD 3 is closed
The above acquires an advisory lock, or waits until the lock is freed and then acquires it. To try only once without waiting, you can do the following:
#!/bin/bash
exec 3> /path/to/lock
flock -n 3 || exit 1    # non-blocking: exit immediately if another process holds the lock
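Since the jobs in question are started from cron, it may be simpler to put the locking into the crontab entries themselves rather than into the scripts; the schedule, lock path and script names below are placeholders:
# m h dom mon dow   command
0 4 * * 1   flock /path/to/lock /path/to/script_a.sh
0 4 * * 1   flock /path/to/lock /path/to/script_b.sh
Without -n, whichever job starts second simply waits until the first has finished; with flock -n it would skip that run instead.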
I am trying to write a bash shell script that locks a region of a file.
I have used flock, but it locks the whole file and does not provide parameters to lock a region of the file, the way fcntl does in C.
It would be helpful if someone could provide some suggestions in this area.
flock(1) (which is a C program, see http://util-linux.sourcearchive.com/documentation/2.17/flock_8c-source.html) is just a wrapper around flock(2), so you would need a similar command that wraps fcntl(2). If such a command doesn't exist yet, one would have to write it.
I'm trying to find or implement a simple solution that can sequentially queue up Linux shell commands, so that they are executed one at a time. Here are the criteria:
The queue must execute the commands one at a time, i.e. no two commands can run at the same time.
I don't have the list of commands ahead of time. They will be coming in from web requests that my web server receives. That means the queue could be empty for a long time, and 10 requests can come in at the same time.
My web server can only do system calls to the shell, so this program/solution needs to be callable from the command line.
I only have one machine, so it can't and doesn't need to farm out the work to multiple machines.
Originally I thought the at command could do what I want, but the problem is that it doesn't execute the commands sequentially.
I'm thinking of implementing my own solution in python with these parts:
Have a dedicated directory with a lock file
Queued commands are stored as individual files with the filename containing an incrementing sequence ID or timestamp or something similar, which I'll call "command files"
Write a python script using fcntl module on the lock file to ensure only 1 instance of the script is running
The script will watch the directory for any files and execute the shell commands in the files in the order of the filename
When the directory has no more "command files", the script will unlock the lock file and exit
When my web server wants to enqueue jobs, it will add a new "command file" and call my python script
The python script will check if another instance of itself is running. If yes, then exit, which will let the other instance handle the newly queued "command file". If no, then lock the lock file and start executing the "command files" in order
Does this sound like it'll work? The only race condition I don't know how to handle is this: the first instance of the script checks the directory, sees that it's empty, and before it unlocks the lock file, a new command is queued and a new instance of the script is invoked. That new instance will exit when it sees the file is locked, and then the original script will unlock the file and exit, leaving the newly queued command unprocessed.
Is there something out there that already does this, so I don't have to implement it myself?
Use a named pipe, aka FIFO:
mkfifo /tmp/shellpipe
Start a shell process whose input comes from the pipe:
/bin/sh < /tmp/shellpipe
When the web server wants to execute a command, it writes it to the pipe.
sprintf(cmdbuf, "echo '%s' > /tmp/shellpipe", command);
system(cmdbuf);
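One caveat, not mentioned in the original answer: the shell reading the FIFO sees end-of-file as soon as the last writer closes the pipe, so with the setup above it may exit after the first command. A common workaround is to keep a dummy writer attached; a minimal sketch using the same path:
mkfifo /tmp/shellpipe
/bin/sh < /tmp/shellpipe &    # the reader; its open blocks until a writer appears
exec 8> /tmp/shellpipe        # keep one write end open so the reader never sees EOF
echo 'date >> /tmp/shellpipe.log' > /tmp/shellpipe   # enqueue a command; they run one at a time, in order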
A POSIX message queue seems tailor-made for this and a whole lot simpler (and faster) than messing around with timestamped files and such. A script can enqueue the requests as they come in; another script dequeues them and executes them. There are some size limits on the queues, but it doesn't sound like you would come close to hitting them.
I wanted to quickly implement some sort of locking in a Perl program on Linux, which would be shareable between different processes.
So I used mkdir as an atomic operation: it returns 1 if it creates the directory (i.e. it didn't exist) and 0 if the directory already exists. I remove the directory right after the critical section.
Now, it has been pointed out to me that this is not good practice in general (independently of the language). I think it's quite OK, but I would like to ask your opinion.
edit:
to show an example, my code looked something like this:
while (!mkdir "lock_dir") { sleep 1 }   # wait a bit, then try again
# ... critical section ...
rmdir "lock_dir";
IMHO this is very bad practice. What if the Perl script that created the lock directory is somehow killed during the critical section? Another Perl script waiting for the lock directory to be removed will wait forever, because it will never be removed by the script that originally created it.
To use safe locking, use flock() on a lock file (see perldoc -f flock).
This is fine until an unexpected failure (e.g. a program crash or power failure) happens while the directory exists.
After that, the program will never run again, because the lock is held forever (assuming the directory is on a persistent filesystem).
Normally I'd use flock with LOCK_EX instead.
Open a file for reading and writing, creating it if it doesn't exist. Then take the exclusive lock; if that fails (when using LOCK_NB), some other process has it locked.
After you've got the lock, you need to keep the file open.
The advantage of this approach is that if the process dies unexpectedly (for example, it crashes, is killed, or the machine fails), the lock is automatically released.
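For completeness, a minimal shell sketch of the same pattern using flock(1) instead of Perl's flock() (the lock path is a placeholder); as described above, the lock goes away on its own when the descriptor is closed, even if the process is killed:
#!/bin/bash
exec 9> /var/lock/myjob.lock   # open (creating if needed) the lock file on FD 9
flock -n 9 || exit 1           # LOCK_EX | LOCK_NB: give up if another process holds the lock
# ... critical section ...
# No explicit unlock is needed: the lock is released when FD 9 is closed,
# even if the process crashes or is killed.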