What does "bash: no job control in this shell" mean? - linux

I think it's related to the parent process creating a new subprocess without a tty. Can anyone explain the details under the hood? i.e. the relevant working model of bash, process creation, etc.?
It may be a very broad topic, so pointers to posts are also very appreciated. I've Googled for a while; all the results are about very specific cases and none tells the story behind the scenes. To provide more context, below is the shell script resulting in the 'bash: no job control in this shell'.
#!/bin/bash
while [ 1 ]; do
    st=$(netstat -an | grep 7070 | grep LISTEN -o | uniq)
    if [ -z "$st" ]; then
        echo "need to start proxy @$(date)"
        bash -i -c "ssh -D 7070 -N user@my-ssh.example.com > /dev/null"
    else
        echo "proxy OK @$(date)"
    fi
    sleep 3
done
This line:
bash -i -c "ssh -D 7070 -N user@my-ssh.example.com > /dev/null"
is where "bash: no job control in this shell" comes from.

You may need to enable job control:
#!/bin/bash
set -m
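For a script, that might look like this (a minimal sketch; job control still needs a tty, so this assumes the script is itself run from a terminal):
#!/bin/bash
set -m          # enable job control in this non-interactive shell
sleep 100 &     # start a background job
jobs            # prints something like: [1]+  Running  sleep 100 &
fg %1           # the job can now be brought back to the foreground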

Job control is a collection of features in the shell and the tty driver which allow the user to manage multiple jobs from a single interactive shell.
A job is a single command or a pipeline. If you run ls, that's a job. If you run ls|more, that's still just one job. If the command you run starts subprocesses of its own, then they will also belong to the same job unless they are intentionally detached.
Without job control, you have the ability to put a job in the background by adding & to the command line. And that's about all the control you have.
With job control, you can additionally:
Suspend a running foreground job with Ctrl+Z
Resume a suspended job in the foreground with fg
Resume a suspended job in the background with bg
Bring a running background job into the foreground with fg
The shell maintains a list of jobs which you can see by running the jobs command. Each one is assigned a job number (distinct from the PIDs of the process(es) that make up the job). You can use the job number, prefixed with %, as an argument to fg or bg to select a job to foreground or background. The %jobnumber notation is also acceptable to the shell's builtin kill command. This can be convenient because the job numbers are assigned starting from 1, so they're shorter than PIDs.
There are also shortcuts %+ for the most recently foregrounded job and %- for the previously foregrounded job, so you can switch back and forth rapidly between two jobs with Ctrl+Z followed by fg %- (suspend the current one, resume the other one) without having to remember the numbers. Or you can use the beginning of the command itself. If you have suspended an ffmpeg command, resuming it is as easy as fg %ff (assuming no other active jobs start with "ff"). And as one last shortcut, you don't have to type the fg. Just entering %- as a command foregrounds the previous job.
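An illustrative session (the PID and job numbers are made up):
$ sleep 300 &                 # job 1, started in the background
[1] 12345
$ vi notes.txt                # job 2; press Ctrl+Z inside vi to suspend it
[2]+  Stopped                 vi notes.txt
$ jobs
[1]-  Running                 sleep 300 &
[2]+  Stopped                 vi notes.txt
$ kill %1                     # job numbers work with the kill builtin too
$ fg %2                       # resume vi in the foreground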
"But why do we need this?" I can hear you asking. "I can just start another shell if I want to run another command." True, there are many ways of multitasking. On a normal day I have login shells running on tty1 through tty10 (yes there are more than 6, you just have to activate them), one of which will be running a screen session with 4 screens in it, another might have an ssh running on it in which there is another screen session running on the remote machine, plus my X session with 3 or 4 xterms. And I still use job control.
If I'm in the middle of vi or less or aptitude or any other interactive thing, and I need to run a couple of other quick commands to decide how to proceed, Ctrl+Z, run the commands, and fg is natural and quick. (In lots of cases an interactive program has a ! keybinding to run an external command for you; I don't think that's as good because you don't get the benefit of your shell's history, command line editor, and completion system.) I find it sad whenever I see someone launch a secondary xterm/screen/whatever to run one command, look at it for two seconds, and then exit.
Now about this script of yours. In general it does not appear to be competently written. The line in question:
bash -i -c "ssh -D 7070 -N user@my-ssh.example.com > /dev/null"
is confusing. I can't figure out why the ssh command is being passed down to a separate shell instead of just being executed straight from the main script, let alone why someone added -i to it. The -i option tells the shell to run interactively, which activates job control (among other things). But it isn't actually being used interactively. Whatever the purpose was behind the separate shell and the -i, the warning about job control was a side effect. I'm guessing it was a hack to get around some undesirable feature of ssh. That's the kind of thing that when you do it, you should comment it.
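For what it's worth, a cleaner version of the loop might look like this (a sketch, assuming the intent is simply to keep a SOCKS proxy listening on port 7070 alive):
#!/bin/bash
while true; do
    if ! netstat -an | grep 7070 | grep -q LISTEN; then
        echo "need to start proxy @$(date)"
        # Run ssh directly: it blocks until the tunnel dies,
        # and the loop then restarts it. No inner shell, no -i.
        ssh -D 7070 -N user@my-ssh.example.com > /dev/null
    else
        echo "proxy OK @$(date)"
    fi
    sleep 3
done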

One possible cause is not having access to the tty.
Under the hood:
bash checks whether the session is interactive; if not, there is no job control.
If forced_interactive is set, the check that stderr is attached to a tty is skipped, and bash instead checks whether it can open /dev/tty for read-write access.
Then it checks whether the new line discipline is in use; if not, job control is disabled too.
If (and only if) we just set our process group to our pid, thereby becoming a process group leader, and the terminal is not in the same process group as our (new) process group, then set the terminal's process group to our (new) process group. If that fails, set our process group back to what it was originally (so we can still read from the terminal) and turn off job control.
If all of the above has failed, you see the message.
I partially quoted the comments from bash source code.
As per an additional request from the question author:
Here you can find bash itself: http://tiswww.case.edu/php/chet/bash/bashtop.html
If you can read C code, get the source tarball; inside it you will find jobs.c - that one will explain more of the "under the hood" stuff.
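One quick way to see the checks fail in practice (an illustrative reproduction; details may vary by system) is to start an interactive shell in a brand-new session, so that it has no controlling terminal:
setsid bash -i -c 'jobs'
# bash: no job control in this shell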

I ran into this problem on my own embedded system, and I got rid of the "no job control" error by running the getty process with setsid, which according to its man page runs a program in a new session.
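For example (an illustrative invocation; the getty binary and its arguments depend on the system):
setsid /sbin/getty -L ttyS0 115200 vt100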

I faced this problem only because I had miscopied a previously executed command together with its % prompt prefix into zsh, like % echo this instead of echo this. The error message is very unclear for such a simple typo.

Related

How to write a shell command to execute a command in an ALREADY open terminal window

I have a process that depends on the internet, which dies randomly due to a spotty connection.
I am writing a cron script, so that it checks every minute if the process is running, and restarts it...
the process doesn't kill the terminal window it's in.
I don't want to kill the terminal and then spawn a new one.
I want the shell script I'm writing to execute in the window that's already open...
I'm using i3-sensible-terminal right now, but any terminal would do.
if ! ps -a | grep x123abc > /dev/null ; then
    $CMD
fi
I have not yet located the information I need to have that run in a specific terminal.
changing the CMD to include a terminal seems to only open up a new window...
Suggesting a different design that separates running your script from observing your script's output.
Write the script named worker ("that depends on the internet, which dies randomly due to a spotty connection") so that it appends ALL its output to the log file /home/$USER/worker.log.
Or just redirect ALL output from the worker script to the log file /home/$USER/worker.log:
worker > /home/$USER/worker.log 2>&1
Run the worker script as a restartable service with a systemd unit; see the sketch below.
Here is a good article explaining this practice: https://dev.to/setevoy/linux-systemd-unit-files-edit-restart-on-failure-and-email-notifications-5h3k
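A minimal unit sketch (all paths and names here are illustrative, and StandardOutput=append: needs systemd 240 or newer):
# /etc/systemd/system/worker.service
[Unit]
Description=worker script that depends on a spotty internet connection

[Service]
ExecStart=/home/youruser/bin/worker
StandardOutput=append:/home/youruser/worker.log
StandardError=append:/home/youruser/worker.log
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
Enable and start it with systemctl enable --now worker.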
Continue to observe the log file /home/$USER/worker.log using tail:
tail -f /home/$USER/worker.log

How to log every single command executed from shell script

I am trying to find a way to record every single command that is executed by any user on the system.
Things that I have come across so far:
It is possible to view shell commands executed from the terminal using the ~/.bash_history file.
There is a catch here: it logs only those commands which were executed interactively from a bash shell/terminal.
This solves one of my problems. But in addition to it, I would like to also log the commands that were executed as part of a shell script.
Note: I don't have control over the shell scripts. Therefore, adding verbose mode like #!/bin/bash -xe is not possible.
However, it can be assumed that I have root access as a system administrator.
Eg: Another user has access to the system, and he runs the following shell script from his account.
#!/bin/sh
nmap google.com
and runs it as "$ sh script.sh"
Now, what I want is for "nmap google.com" to be logged somewhere once this file is executed.
Thanks in advance. Even a small help is appreciated.
Edit: I would like to clarify that users are unaware that they are being monitored, so I need a solution at the system level (maybe an agent running as root). I cannot depend on users to log suspicious activity; of course everyone would avoid such tricks if they wanted to do something fishy or put the blame on someone else.
I am aware that you were asking about Bash and shell scripting and tagged your question accordingly, but in respect to your requirements
Record every single command that is executed by any user on the system
Users are unaware that they are being monitored
A solution something at system level
I am under the assumption that you are looking for Audit Logging.
So you may take advantage of articles like
Log all commands run by Admins on production servers
Log every command executed by a User
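For example, with the Linux audit framework (assuming auditd is installed; the key name "commands" is arbitrary):
# Log every execve() syscall on a 64-bit system, tagged with the key "commands"
auditctl -a always,exit -F arch=b64 -S execve -k commands
# Later, search the audit log by that key
ausearch -k commands --interpret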
You can run the script in this way:
execute it with bash (this overrides the shebang)
use ts (from moreutils) to prefix every line with a timestamp
log both to the terminal and to a file
bash -x script.sh |& ts | tee -a /tmp/$(date +%F).log
You may ask the other user to create an alias.
Edit:
You may also add this to /etc/profile (sourced when users log in):
exec > >(tee -a /tmp/$(date +%F).log)
Do the same for error output if needed, but keep the two logs separate.
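For example (a sketch; the log file name is illustrative):
exec 2> >(tee -a /tmp/$(date +%F).err.log >&2)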

Can I prevent a subsequent chained bash command from running?

I want to prevent a bash command that has been chained using ; from running while the previous command is still running.
e.g. I write and submit command a; command b, but while command a is running I change my mind and want to prevent command b from running.
I cannot use kill because the subsequent command is not actually executing. Does bash have a queue of commands that can be manipulated?
To clarify, I am sure it is possible to make a new script or something that would allow me to create a queue, but that is not what this question is about. I specifically want to know if bash can prevent commands after a semicolon from running after I've 'submitted' them.
Consider these two scripts:
runner.sh
#!/bin/bash
while true
do
    next_command=$(head -1 next_commands.list)
    sed -i '1d' next_commands.list   # pop the line so the queue advances
    $next_command
    sleep 60 # added to simulate processing time
done
next_commands.list
id
ls
echo hello
You can modify the content of the next_commands.list file to create a type of queue of which commands should be executed next.
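With this design, preventing a queued command from running is just a matter of deleting its line from the file before the runner reaches it, e.g. (matching the echo hello entry from the list above):
sed -i '/^echo hello$/d' next_commands.list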

Start 2 terminals simultaneously. Execute command in first only after the second outputs text X

We have the following use case:
Run a command (third party, no control over the source code) that starts a development server with an integrated "file watcher" that recompiles the source files and restarts the server when it detects file changes (and publishes the output to a /dist folder only in the very last seconds of the ~30 second process; the /dist folder is created as a last step of the publishing process).
Run a secondary "post-processing" command that should only execute when the previous command has successfully started the server and published the files to the /dist folder (this command publishes some additional static files to the /dist folder, so it's dependant on the existence of the folder).
There are several issues with this approach:
The command from step 1 "steals" the terminal, meaning that it never terminates (due to the file watcher). So executing the two commands in sequence is not an option.
The second command should wait for Terminal 1 to output a certain string (e.g. "Server Started"), which means the /dist folder is now ready. Only then should it kick in.
As of this moment we have put a reasonable "sleep" in the beginning of the second terminal, just so it waits for the first command to do the publishing and the /dist folder exists. The issue is that on slower computers this sleep period is not enough (the second command starts executing before the first one had enough time to create the /dist folder). On fast computers, the first command finishes rather quickly and there is a rather awkward waiting time before the second command starts. For development purposes, this is very annoying.
Is there a way to automate this two step process?
To rephrase:
Is there a way to make a Terminal X start a secondary Terminal Y, and wait for Terminal Y to output a certain string before doing some action? And preferably doing the same action every time Terminal Y outputs the said string.
We prefer a Windows solution if possible, but Linux is also an option if no solution is found.
This answer is for a Linux solution.
So, Part One: Starting two terminals automatically.
Do the terminals, once automated, need to be actually seen by anyone? If not, I'd run them in screens. The screen command allows you to make virtual terminals, which can be attached and detached as needed. To autostart a screen with a command running, use
screen -S meaningful_service_name -d -m ./ProcessA
Here, the "-S" sets the screen name to something human-readable that can be referenced later, and the "-d -m" means start the screen in detached mode (ideal for startup scripts) and run the following command inside it. These screens will be running in the background (until the programs exit or something kills them), and can be attached from the terminal by a user with screen -r meaningful_service_name. The user can then disconnect by pressing Ctrl-A and then D. This also allows you to monitor the processes remotely via SSH.
--
Which leads to Part Two: Starting ProcessB only after A prints to screen.
Screen allows us to take so-called "screenshots" of its current output. This is saved as a text file, since the (virtual) terminal is text-based.
screen -S meaningful_service_name -p 0 -X hardcopy /some/file/location.txt
Here, the "-p" indicates what is essentially the current window (which should usually be zero), and "-X" means execute the following command. Now that you have a text file with ProcessA's output, you can search it like any other text file.
#!/bin/bash
while true; do
    # Take a fresh "screenshot" of the virtual terminal
    screen -S meaningful_service_name -p 0 -X hardcopy /some/file/location.txt
    # Proceed once the expected output appears
    if grep -q "ProcessA's Output" /some/file/location.txt
    then
        break
    fi
    sleep 2
done
./ProcessB

What if we close the terminal before finishing the command?

Let me explain better. What is going to happen if I run a command in Linux and close the terminal before it's done and before I could enter another command? Would it still complete the command or not?
Generally, you must expect that closing your terminal will hang up your command. But fear not! Linux has a solution for that too!
To ensure that your command completes, use the nohup command. Simply place it before whatever you are trying to do:
nohup ./some_program
nohup ./do_a_thing -frx -file input_file.txt
nohup grep "something" giant_list_of_files/* > temp_file.txt
The nohup command stands for "no hangup" and it will ensure that the command you execute continues to run, even if you close your terminal.
It depends on the process and your environment (job control shell options, VNC, etc). But typically, no. The process will get a "hangup" signal (message) from the operating system, and upon receiving that, will quit.
The nohup command, for example, arranges for processes to ignore the hangup signal from the OS. There are many ways to achieve the same result.
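For example, bash's disown builtin achieves a similar effect after the fact (a sketch; the command name and log file are placeholders):
long_task > out.log 2>&1 &   # start the job in the background
disown -h %1                 # tell bash not to send it SIGHUP when the shell exits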
I would say it will abort at whatever point it has reached just before the session closes.
If you want to be sure to complete the job, you will need to use the nohup command.
http://en.wikipedia.org/wiki/Nohup
Read about nohup and daemons...
A good link is What's the difference between nohup and a daemon?
It's worth looking at the screen command. Screen offers the ability to detach a long-running process (or program, or shell script) from a session and then attach it back at a later time.
