Execute bash after terminating webpack-dev-server - linux

Here's a small annoyance. As part of a project's "quickstart" script I'm starting a webpack server in a terminal tab, along with other things in other tabs. In short:
#!/usr/bin/env bash
gnome-terminal --tab --tab --command \
'bash -c "node_modules/.bin/webpack-dev-server; exec bash"' &
This almost works as intended, with the exception of Ctrl+C in the server's tab. If it were, say, a Flask server, it would get stopped and a bash prompt would appear in the same tab (that's the reason for the "exec bash" part). But with node / webpack, the tab just closes.
Manually sending an interrupt signal to the node process leaves the tab open (e.g. kill -INT <pid>). So the question is what is happening from the operating system perspective. What process tree is created? Which process gets the SIGINT? What gets replaced by exec (if anything)?
Possibly related: https://github.com/nodejs/node/issues/4432.

Try using trap, as follows:
bash -c "trap 'exec bash' SIGINT; node_modules/.bin/webpack-dev-server;"
It should kill webpack-dev-server and exec bash on Ctrl+C.
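Applied to the quickstart script from the question, that would look roughly like this (a sketch; the quoting gets fiddly because the trap's quotes sit inside two more layers of quoting):
#!/usr/bin/env bash
gnome-terminal --tab --tab --command \
'bash -c "trap \"exec bash\" SIGINT; node_modules/.bin/webpack-dev-server"' &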


In a script how to get the pid of spawned terminal's shell to execute commands in it using ttyecho?

I am using ttyecho (can be installed with yay -S ttyecho-git) to execute a command in a separate terminal like so:
urxvt &
sudo ttyecho -n /proc/<pid-of-new-urxvt>/fd/0 <command>
It does not work because the /proc/pid-of-new-urxvt/fd/0 is a symlink that points to the /dev/pts/x of the parent terminal.
In the spawned urxvt I happen to run zsh. So if I use the pid of that zsh process it works:
sudo ttyecho -n /proc/<pid-of-new-zsh-within-new-urxvt>/fd/0 <command>
How can I get the pid of the new zsh process spawned within the new urxvt process when I run urxvt & ? Or is there a different solution to achieve the same result?
pgrep -P <pid-of-new-urxvt> gives the pid of the child zsh process.
Thanks to @user1934428 for the brainstorming.
Here is the resulting bash script:
urxvt &
term_pid=$!
# sleep here makes us wait until the child shell in the terminal is started
sleep 0.1
# we retrieve the pid of the shell launched in the new terminal
shell_pid=$(pgrep -P $term_pid)
# ttyecho executes the command in the shell of the new terminal and gives back control of the terminal so you can run further commands manually
sudo ttyecho -n /proc/${shell_pid}/fd/0 "$@"
So when I launch "script ls" it opens a new terminal, runs ls, and gives back the prompt with the terminal still open.
I just had to add ttyecho in the sudoers file.
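If the fixed sleep 0.1 ever turns out to be too short, one possible tweak (an assumption on my part, not from the original answer) is to poll until pgrep actually finds the child shell:
urxvt &
term_pid=$!
# pgrep exits non-zero while the child shell hasn't started yet, so keep polling
until shell_pid=$(pgrep -P "$term_pid"); do
    sleep 0.05
done
sudo ttyecho -n /proc/${shell_pid}/fd/0 "$@"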

Cannot return to shell session after script

I cannot get a script to return to bash.
The script is kicked off via the following Docker directives:
ENTRYPOINT ["/bin/bash", "-c"]
CMD ["set -e && /config/startup/init.sh"]
The init script looks like this:
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
    for f in /etc/postfix/init.d/*.sh; do
        [ -f "$f" ] && . "$f"
    done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash
And this is the command I use to kick off the image into a container:
docker run -it --env-file ENV_LOCAL mailrelay
The init script runs as expected (I see output from the scripts within the /etc/postfix/init.d/ directory, and supervisord starts Postfix).
The problem is getting the script to return to the parent process (bash) instead of needing to start a new one. After it hits the supervisord line, the session just sits there, requiring a Ctrl+C to get back to a bash prompt.
If I leave off the call to bash at the end of the init.sh script, Ctrl+D exits the script AND the container, returning me to the host OS (osx). If I replace the bash call with exit, it returns to the host OS as well.
Is supervisord behaving the way it's supposed to, by running in the foreground this way? I'd like to be able to easily get back into the container shell session to check to see if things are running. Am I left with needing to Ctrl+D (into the secondary bash session) in order to do this?
UPDATE
Marc B commented:
take out the bash line, so you don't start a new shell. and if
supervisord doesn't go into the background automatically, you could
try running it with & to force it into the background, or maybe
there's an extra cli option to force it to go into daemon mode
I've tried removing the last call to bash, but as I've mentioned it just sits there still, and Ctrl+D takes me to the host OS (exits the container).
I just tried /usr/bin/supervisord -c /etc/supervisord.conf & (and left off the call to bash at the end) and it immediately returns to the host OS, exiting the container. I assume that's because the container had nothing left to do, and so it stopped.
#!/bin/bash
if [ -d /etc/postfix/init.d ]; then
    for f in /etc/postfix/init.d/*.sh; do
        [ -f "$f" ] && . "$f"
    done
fi
echo "[x] Starting supervisord ..."
/usr/bin/supervisord -c /etc/supervisord.conf
bash # You are spawning a new bash shell here. Remove this statement
At the end you're stuck in a child bash shell :(
Now if you're not returning to the parent shell, the last command that you have run is the culprit.
/usr/bin/supervisord -c /etc/supervisord.conf
You can either force the command to run in the background by appending &:
/usr/bin/supervisord -c /etc/supervisord.conf &   # the & tells the shell to run it in the background
Alternatively, a workaround for keeping the container open is mentioned here.
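A common alternative pattern (not from the answers above, just a general Docker practice) is to keep supervisord in the foreground as the container's main process and attach a shell on demand instead:
# last line of init.sh: -n / --nodaemon keeps supervisord in the foreground
exec /usr/bin/supervisord -n -c /etc/supervisord.conf
# then, from the host, open a shell in the running container whenever needed
docker exec -it <container-id> bash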

create screen session that doesn't terminate with the program

I'm working on a startup script that is initiated from rc.local. I start up several programs with
screen -d -m my-prog
and that works great. However, if one of the programs has problems and exits, so does the session. I'd like to be able to have the session stick around so I can attach to it and see the output from the program before it crashed.
Is there a way to do this? I thought about
screen -d -m bash -c my-prog
But again, if my-prog terminates then so does bash and then so does screen.
You can follow the answer at https://unix.stackexchange.com/questions/47271/prevent-gnu-screen-from-terminating-session-once-executed-script-ends
They suggest something like what you were trying in your second attempt, but instead of using bash to invoke the command (which terminates along with the command, as you noted), invoke bash after the command finishes:
screen -dmS session_name sh -c 'my-prog; exec bash'
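Once my-prog exits you are left with a live bash inside the detached session, so you can reattach later and read the program's output in the scrollback:
screen -r session_name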

How can I place a Linux terminal job in the background after entering the password?

I use this command in a Linux terminal to connect to a server and use it as a proxy:
ssh -N -D 7070 root@ip_address
It asks for the password, connects, and everything is OK, but how can I put this process in the background?
I used Ctrl+Z, but that stops the process rather than putting it in the background...
Ctrl+Z is doing exactly what it should, which is to stop the process. If you then want to put it in the background, the shell command for doing that is bg:
$ ssh -N -D 7070 -l user 192.168.1.51
user@192.168.1.51's password:
^Z
[1]+ Stopped ssh -N -D 7070 -l user 192.168.1.51
$ bg
[1]+ ssh -N -D 7070 -l user 192.168.1.51 &
That way you can enter the password interactively, and only once that is complete, stop it and put it into the background.
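If you also need the backgrounded job to survive closing the terminal, bash's disown builtin removes it from the shell's job table so it is not sent SIGHUP when the shell exits (a general note, not part of the original answer):
$ bg
$ disown %1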
Try adding an ampersand to the end of your command:
ssh -N -D 7070 root@ip_address &
Explanation:
This trailing ampersand directs the shell to run the command in the background, that is, it is forked and run in a separate sub-shell, as a job, asynchronously. The shell will immediately return the return status of 0 for true and continue as normal, either processing further commands in a script or returning the cursor focus back to the user in a Linux terminal.
The shell will print out the forked process’s job number and process ID (PID) like so:
$ ./myscript.py &
[1] 1337
The stdout of the forked process will still be attached to the parent, so any output will still appear in your terminal.
After a process is forked using a single trailing ampersand &, its process ID (PID) is stored in a special variable $!. This can be used later to refer to the process:
$ echo $!
1337
Once a process is forked, it can be seen in the jobs list:
$ jobs
[1]+ Running ./myscript.py &
And it can be brought back to the command line before it finishes with the foreground command:
fg
The foreground command takes an optional argument of the job number, if you have forked multiple processes.
A single ampersand & can also delimit a list of commands to be run asynchronously.
./script.py & ./script2.py & ./script3.py &
In this example, all 3 python scripts are run at the same time, in separate sub-shells. Their stdout will still be attached to the parent shell, so if running this from a Linux terminal, you will still see the outputs.
This can also be used as a quick hack to take advantage of multiple cores with shell scripts, but be warned, it is a hack!
To detach a process completely from the shell, you may want to redirect both stdout and stderr to a file or to /dev/null. A nice way of doing this is with the nohup command.
source for above explanation: http://bashitout.com/2013/05/18/Ampersands-on-the-command-line.html
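To illustrate the nohup suggestion (a generic sketch reusing the script name from above, not part of the quoted explanation):
$ nohup ./myscript.py > myscript.log 2>&1 &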
You can add the -f option to make the ssh command run in the background.
So the answer is ssh -f -N -D port username@hostname.

How to reset tty after exec-ed program crashes?

I am writing a Ruby wrapper around Docker and nsenter. One of the command my tool provides is to start a Bash shell within a container. Currently, I am doing it like this:
payload = "sudo nsenter --target #{pid(container_name)} --mount --uts --ipc --net --pid -- env #{env} /bin/bash -i -l;"
Kernel.exec(payload)
In Ruby, Kernel#exec relies on the exec(2) syscall, hence there is no fork.
One issue is that the container sometimes dies prematurely, which effectively kills my newly created Bash prompt. I then get back the prompt originally used to run my Ruby tool, but I cannot see what I am typing anymore; the tty seems broken, and running reset fixes the issue.
I'd like to conditionally run reset if the program I exec-ed crashes. I found that the following works well:
$ ./myrubytool || reset
Except I'd like to avoid forcing people using my tool to append || reset every time.
I have tried the following:
payload = "(sudo nsenter --target #{pid(container_name)} --mount --uts --ipc --net --pid -- env #{env} /bin/bash -i -l) || reset;"
But this surprisingly puts reset in the background (i.e. I can run reset by entering fg). One benefit is that the tty is working properly, but it's not really ideal.
Would you have any idea to solve this issue?
If terminal echo has been disabled in a terminal, then you can run the command stty echo to re-enable the terminal echo. (Conversely, stty -echo disables terminal echo, and stty -a displays all terminal settings.)
This is safe to run even if terminal echo is already enabled, so if you want to play it safe, you can do something like ./myrubytool ; stty echo which will re-enable terminal echo if it is disabled regardless of the exit status of your Ruby program. You can put this in a shell script if you want to.
It might be that there is a way to execute a command when the Ruby program exits (often referred to as a "trap"), but I'm not familiar enough with Ruby to know whether such capabilities exist.
However, if you are creating a script for general use, you probably should look into more robust techniques and not rely on workarounds.
How about this? It should do exactly what you want.
It runs the command in a separate process, waits on it, and if, when it finishes, the return value is not 0, it runs the command reset.
payload = "sudo nsenter --target #{pid(container_name)} --mount --uts --ipc --net --pid -- env #{env} /bin/bash -i -l;"
fork { Kernel.exec(payload) }
pid, status = Process.wait2
unless status.exitstatus == 0
  system("reset")
end
EDIT
If all you want to do is turn echo back on, change the system("reset") line to system("stty echo").
