Can I prevent a subsequent chained bash command from running? - linux

I want to prevent a bash command that has been chained using ; from running while the previous command is still running.
e.g. I write and submit command a; command b, but while command a is running I change my mind and want to prevent command b from running.
I cannot use kill because the subsequent command is not actually executing. Does bash have a queue of commands that can be manipulated?
To clarify, I am sure it is possible to make a new script or something that would allow me to create a queue, but that is not what this question is about. I specifically want to know if bash can prevent commands after a semicolon from running after I've 'submitted' them.

Consider these two scripts:
runner.sh
#!/bin/bash
while true
do
    # take the first line of the list as the next command to run
    next_command=$(head -1 next_commands.list)
    $next_command
    sleep 60 # added to simulate processing time
done
next_commands.list
id
ls
echo hello
You can modify the content of the next_commands.list file to create a type of queue of which commands should be executed next.
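If you want the runner to consume the file like a first-in, first-out queue, a minimal sketch (assuming GNU sed and the same next_commands.list file) would also drop the line it just read:
#!/bin/bash
while true
do
    # take the first line as the next command, then remove it from the list
    next_command=$(head -1 next_commands.list)
    sed -i '1d' next_commands.list
    [ -n "$next_command" ] && $next_command
    sleep 60 # simulate processing time
done
Appending a line to next_commands.list then queues a new command, and deleting a line cancels a command that has not run yet.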

Related

How do I execute commands serially inside a shell script?

I want to write a shell script to execute some commands in sequence (the next one starts only after the previous has finished and so on). I've never written a bash script before and I couldn't find the appropriate sequence.
I know that in the terminal you do things like yarn this && yarn that && yarn other and it works but I don't know the equivalent inside a shell script.
#!/bin/sh
direnv allow
# now wait for direnv allow to finish
direnv reload
# now wait for direnv reload to finish
yarn start:server
The shell will execute each command one after another, serially, exactly as written. Servers often daemonize, i.e. fork() and have the parent return; they usually have a -f flag to suppress that behavior, precisely because you may want this serial behavior.
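If you also want the chain to stop at the first failure, the same && chaining you use in the terminal works unchanged inside a script. A minimal sketch, reusing the commands from the question:
#!/bin/sh
# each command starts only after the previous one exits successfully;
# the chain stops at the first command that fails
direnv allow && direnv reload && yarn start:server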

Running multiple npm scripts on boot

I have a server that runs an express app and a react app. I needed to start both the apps on boot.
So I added two lines to rc.local, but it seems like only the first line runs and the second one doesn't. Why is that, and how can I solve it?
Just as in any other script, the second command will only be executed after the first one has finished. That's probably not what you want when the first command is supposed to keep running pretty much forever.
If you want the second command to execute before the first has finished, and if you want the script to exit before the second command has finished, then you must arrange for the commands to run in the background.
So, at a minimum, instead of
my-first-command
my-second-command
you want:
my-first-command &
my-second-command &
However, it's better to do something a little more complex that in addition to putting the command into the background also places the command's working directory at the root of the filesystem, disconnects the command's input from the console, delivers its standard output and standard error streams to the syslog service (which will typically append that data into /var/log/syslog) and protects it from unintended signals. Like:
( cd / && nohup sh -c 'my-first-command 2>&1 | logger -t my-first-command &' </dev/null >/dev/null 2>&1 )
and similarly for the second command.
The extra redirections at the end of the line are to keep nohup from emitting unwanted informational messages and creating an unused nohup.out file. You might want to leave the final 2>&1 out until you are sure that the rest of the command is correct and is behaving the way you want it to. When you get to the point where the only message shown is nohup: redirecting stderr to stdout then you can restore the 2>&1 to get rid of that message.
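Putting it together, a hypothetical rc.local could look something like the sketch below; my-first-command and my-second-command stand in for whatever actually starts the express and react apps:
#!/bin/sh
# background both commands, detach them from the console, and send their output to syslog
( cd / && nohup sh -c 'my-first-command 2>&1 | logger -t my-first-command &' </dev/null >/dev/null 2>&1 )
( cd / && nohup sh -c 'my-second-command 2>&1 | logger -t my-second-command &' </dev/null >/dev/null 2>&1 )
exit 0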

I am looking for a way to call multiple shell scripts in a serialized manner from a base shell script

I am looking for a way to call multiple shell scripts in a serialized manner from a base shell script. Scenario:
A shell script- Base_script.sh , which will internally call:
script_1.sh ,
script_2.sh ,
script_3.sh ,
Once "script_1.sh" will finish, THEN ONLY it should call "script_2.sh" and so on.
What all methods i have tried, are somehow executing all the scripts at once.
OS: RHEL using Shell/Bash
Edit: In response to some comments, I agree, I can use (which I already did):
script1.sh && script2.sh
calling each script one by one (sh script1.sh; sh script2.sh; and so on)
even tried using an array to declare the scripts and then executing each in a loop
Problem & the Solution i got:
Each script eg-"script_1.sh" was a complex one. Its all doing some kind of database benchmarking.
The scripts were having some function that was going in background (someFunction &) while execution. So even though the scripts were actually getting called one by one, yet the processing of the previous scripts kept on going in the background.
Had to redesign the entire thing to get every module & functions in the "Base_script.sh" itself.
Thanks everyone for the answers though. Appreciate it !!
In the general case, there's nothing needed here at all.
./script1
./script2
./script3
...will automatically wait for script1 to exit before running script2.
On the other hand, what you may have here is a case where your script1 intentionally backgrounds itself (an action also known as "self-daemonization"). One way to wait for a daemonized process to exit is to use filesystem-level advisory locking; the below uses the flock command for that purpose:
flock -x my.lck ./script1
flock -x my.lck ./script2
flock -x my.lck ./script3
flock -x my.lck true
Even if script1 itself exits, if it has child processes still running that hold the file descriptor on my.lck, then script2 will be blocked from starting.
As stated, you can use &&, or if you want to make it more complete you can use
wait $!
which waits for the most recently backgrounded command (the PID in $!) to exit. This way you can add some if statements to check the exit status.
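A minimal sketch of what that could look like in a Base_script.sh (script names taken from the question; this only waits for the scripts themselves, not for anything they background internally):
#!/bin/bash
# run each script in the background, then wait for that specific PID
# and check its exit status before moving on
./script_1.sh &
wait $! || { echo "script_1.sh failed" >&2; exit 1; }
./script_2.sh &
wait $! || { echo "script_2.sh failed" >&2; exit 1; }
./script_3.sh &
wait $! || { echo "script_3.sh failed" >&2; exit 1; }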

Concurrency with shell scripts in failure-prone environments

Good morning all,
I am trying to implement concurrency in a very specific environment, and keep getting stuck. Maybe you can help me.
this is the situation:
- I have N nodes that can read/write in a shared folder.
- I want to execute an application on one of them. This can be anything: a shell script, an installed program, or whatever.
- To do so, I have to send the same command to all of them. The first one should start the execution, and the rest should see that somebody else is running the desired application and exit.
- The execution of the application can be killed at any time. This is important because it rules out relying on any cleanup step after the execution.
- If the application gets killed, the user may want to execute it again. He would then send the very same command.
My current approach is to create a shell script that wraps the command to be executed. This could also be implemented in C. Not python or other languages, to avoid library dependencies.
#!/bin/sh
# (folder structure simplified for legibility)
mutex(){
    lockdir=".lock"
    firstTask=1 #false
    if mkdir "$lockdir" > /dev/null 2>&1
    then
        controlFile="controlFile"
        #if this is the first node, start coordinator
        if [ ! -f "$controlFile" ]; then
            firstTask=0 #true
            #tell the rest of the nodes that I am in control
            echo "some info" > "$controlFile"
        fi
        # remove control file when the script finishes
        trap 'rm $controlFile' EXIT
    fi
    return $firstTask
}
#The basic idea is that one task executes the desired command, stated as arguments to this script. The rest do nothing.
if ! mutex ;
then
    exit 0
fi
#I am the first node and the only one reaching this, so I execute whatever was passed in
"$@"
If there are no failures, this wrapper works great. The problem is that, if the script is killed before the execution, the trap is not executed and the control file is not removed. Then, when we execute the wrapper again to restart the task, it won't work as every node will think that somebody else is running the application.
A possible solution would be to remove the control file just before the "$@" call, but that would lead to a race condition.
Any suggestion or idea?
Thanks for your help.
edit: edited with correct solution as future reference
Your trap syntax looks wrong: According to POSIX, it should be:
trap [action condition ...]
e.g.:
trap 'rm $controlFile' HUP INT TERM
trap 'rm $controlFile' 1 2 15
Note that $controlFile will not be expanded until the trap is executed if you use single quotes.
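A minimal sketch of how the wrapper could combine both, assuming the same controlFile variable as in the question, so the file is removed on a normal exit as well as on the common termination signals (nothing can run on SIGKILL, though):
# inside mutex(), right after creating the control file
trap 'rm -f "$controlFile"' EXIT HUP INT TERM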

what does "bash:no job control in this shell” mean?

I think it's related to the parent process creating a new subprocess that does not have a tty. Can anyone explain the details under the hood, i.e. the related working model of bash, process creation, etc.?
It may be a very broad topic, so pointers to posts are also appreciated. I've Googled for a while; all the results are about very specific cases and none is about the story behind the scenes. To provide more context, below is the shell script that results in 'bash: no job control in this shell'.
#! /bin/bash
while [ 1 ]; do
    st=$(netstat -an | grep 7070 | grep LISTEN -o | uniq)
    if [ -z $st ]; then
        echo "need to start proxy @$(date)"
        bash -i -c "ssh -D 7070 -N user@my-ssh.example.com > /dev/null"
    else
        echo "proxy OK @$(date)"
    fi
    sleep 3
done
This line:
bash -i -c "ssh -D 7070 -N user@my-ssh.example.com > /dev/null"
is where "bash: no job control in this shell" comes from.
You may need to enable job control:
#! /bin/bash
set -m
Job control is a collection of features in the shell and the tty driver which allow the user to manage multiple jobs from a single interactive shell.
A job is a single command or a pipeline. If you run ls, that's a job. If you run ls|more, that's still just one job. If the command you run starts subprocesses of its own, then they will also belong to the same job unless they are intentionally detached.
Without job control, you have the ability to put a job in the background by adding & to the command line. And that's about all the control you have.
With job control, you can additionally:
Suspend a running foreground job with Ctrl-Z
Resume a suspended job in the foreground with fg
Resume a suspended job in the background with bg
Bring a running background job into the foreground with fg
The shell maintains a list of jobs which you can see by running the jobs command. Each one is assigned a job number (distinct from the PIDs of the process(es) that make up the job). You can use the job number, prefixed with %, as an argument to fg or bg to select a job to foreground or background. The %jobnumber notation is also acceptable to the shell's builtin kill command. This can be convenient because the job numbers are assigned starting from 1, so they're shorter than PIDs.
There are also shortcuts %+ for the most recently foregrounded job and %- for the previously foregrounded job, so you can switch back and forth rapidly between two jobs with Ctrl-Z followed by fg %- (suspend the current one, resume the other one) without having to remember the numbers. Or you can use the beginning of the command itself. If you have suspended an ffmpeg command, resuming it is as easy as fg %ff (assuming no other active jobs start with "ff"). And as one last shortcut, you don't have to type the fg. Just entering %- as a command foregrounds the previous job.
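For instance, a short interactive session might look like this (job numbers, PIDs and output are illustrative):
$ sleep 300 &                # start a background job
[1] 12345
$ vi notes.txt               # press Ctrl-Z inside vi to suspend it
[2]+  Stopped                vi notes.txt
$ jobs                       # list jobs and their job numbers
[1]-  Running                sleep 300 &
[2]+  Stopped                vi notes.txt
$ kill %1                    # kill a job by job number instead of PID
$ fg %2                      # bring vi back to the foreground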
"But why do we need this?" I can hear you asking. "I can just start another shell if I want to run another command." True, there are many ways of multitasking. On a normal day I have login shells running on tty1 through tty10 (yes there are more than 6, you just have to activate them), one of which will be running a screen session with 4 screens in it, another might have an ssh running on it in which there is another screen session running on the remote machine, plus my X session with 3 or 4 xterms. And I still use job control.
If I'm in the middle of vi or less or aptitude or any other interactive thing, and I need to run a couple of other quick commands to decide how to proceed, Ctrl-Z, run the commands, and fg is natural and quick. (In lots of cases an interactive program has a ! keybinding to run an external command for you; I don't think that's as good because you don't get the benefit of your shell's history, command line editor, and completion system.) I find it sad whenever I see someone launch a secondary xterm/screen/whatever to run one command, look at it for two seconds, and then exit.
Now about this script of yours. In general it does not appear to be competently written. The line in question:
bash -i -c "ssh -D 7070 -N user#my-ssh.example.com > /dev/null"
is confusing. I can't figure out why the ssh command is being passed down to a separate shell instead of just being executed straight from the main script, let alone why someone added -i to it. The -i option tells the shell to run interactively, which activates job control (among other things). But it isn't actually being used interactively. Whatever the purpose was behind the separate shell and the -i, the warning about job control was a side effect. I'm guessing it was a hack to get around some undesirable feature of ssh. That's the kind of thing that when you do it, you should comment it.
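If there is no particular reason for the wrapper shell, a simpler sketch (keeping the host from the question) would be to run ssh straight from the script:
# run ssh directly; no interactive subshell, so no job control warning
ssh -D 7070 -N user@my-ssh.example.com > /dev/null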
One of the possible causes is not having access to the tty.
Under the hood:
bash checks whether the session is interactive; if not, there is no job control.
if forced_interactive is set, the check that stderr is attached to a tty is skipped, and bash instead checks whether it can open /dev/tty for read-write access.
then it checks whether the new line discipline is used; if not, job control is disabled too.
"If (and only if) we just set our process group to our pid, thereby becoming a process group leader, and the terminal is not in the same process group as our (new) process group, then set the terminal's process group to our (new) process group. If that fails, set our process group back to what it was originally (so we can still read from the terminal) and turn off job control."
if all of the above has failed, you see the message.
I partially quoted the comments from the bash source code.
As per an additional request of the question author:
http://tiswww.case.edu/php/chet/bash/bashtop.html - here you can find bash itself.
If you can read C code, get the source tarball; inside it you will find job.c, which will explain more of the "under the hood" stuff.
I ran into a problem on my own embedded system and I got rid of the "no job control" error by running the getty process with "setsid", which according to its manpage starts a process with a new session id.
I faced this problem only because I had miscopied a previously executed command together with its % prompt prefix into zsh, like % echo this instead of echo this. The error was very unclear for such a simple typo.
