Shell Script - Run shell commands in parallel in different bash sessions [duplicate] - linux

I need to run two database restore commands in parallel from a shell script. Each command should run in its own bash session.
These are the two commands I need to run:
sudo su - $user -c "db2 RESTORE DATABASE ${SDBP} FROM '/dbnfs/db2main/backups/${DB2DBP}' TAKEN AT $TIMESTAMPP ON '/data1/DB2/tablespaces/${DB2DBP}' , '/data2/DB2/tablespaces/${DB2DBP}' DBPATH ON '/home/db2inst1' INTO ${DB2DBP} NEWLOGPATH '/data1/activelogs/${DB2DBP}' without rolling forward without prompting 2>&1"
sudo su - $user -c "db2 RESTORE DATABASE ${SDBS} FROM '/dbnfs/db2main/backups/${DB2DBS}' TAKEN AT $TIMESTAMPS ON '/data1/DB2/tablespaces/${DB2DBS}' , '/data2/DB2/tablespaces/${DB2DBS}' DBPATH ON '/home/db2inst1' INTO ${DB2DBS} NEWLOGPATH '/data2/activelogs/${DB2DBS}' without rolling forward without prompting 2>&1"
How can I achieve this?

Since you want separate bash sessions (perhaps because the commands are long-running), the screen command might be of interest to you.
You can create a new named screen (session); let's call it restore1 for the first command:
screen -S restore1
This will create a new screen. In this screen you can run your first command. Once the command starts running, you can detach from it with Ctrl+a followed by d.
Create another named screen called restore2 for SDBS:
screen -S restore2
Run the second command there, then detach from it. You can list the screens (sessions) with:
screen -list
You can reattach to any of the screens with screen -r <screen_name>, to check on the status of that command. Example:
screen -r restore1
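
If you would rather script this end to end instead of attaching by hand, here is a minimal sketch (untested; the log file names are assumptions) that launches both restores as background jobs and waits for them:
sudo su - "$user" -c "<first restore command from the question>" > restore_p.log 2>&1 &
sudo su - "$user" -c "<second restore command from the question>" > restore_s.log 2>&1 &
wait   # blocks until both background restores have exited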

Run Linux command in background and keep running after closing SSH [duplicate]

I need to run a Perl script that will spend several days processing something. On a Linux CentOS server, from the SSH terminal, I run this command:
nohup perl script.cgi 2>&1 &
This runs the script in the background and writes the output to nohup.out.
The problem is that when I close the SSH terminal, or even when my internet connection drops, the script terminates.
I need to keep this command running in the background on the server after I close the SSH terminal.
You can use terminal multiplexer tools like screen, byobu, or tmux.
I personally use screen, so install it on the remote server via sudo apt-get install screen.
ssh into the server
open a screen session with screen -S sessionname
Now run your command (background and foreground both work)
Now detach from your session with Ctrl+a, then press d.
Now shut down your PC and enjoy.
When you come back, ssh into the server again, then use screen -x sessionname to reconnect to the detached session.
Hurray! The script is still running. The full sequence is summarized below.
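Putting those steps together, the whole flow looks roughly like this (user@server and sessionname are placeholders):
ssh user@server               # log in to the remote server
screen -S sessionname         # open a named screen session
perl script.cgi               # run the long job inside the session
# press Ctrl+a, then d, to detach
screen -x sessionname         # later: reconnect to the detached session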
You can either use screen or run the command under supervisor on Linux systems.
You can install screen using sudo apt install screen,
then use the following commands to run it:
screen -S test_command
nohup perl script.cgi 2>&1 &
Then press Ctrl+a followed by d to detach and leave that session running for as long as required (until your server reboots).
If you want to stop the command, reattach with screen -x test_command and press Ctrl+c; then either exit the shell (Ctrl+d) to close the screen, or press Ctrl+a followed by d to leave the screen session as it is.
The only way I was able to run the command, exit the shell, and keep the command running was to use the at tool to schedule the job, giving the full script path:
echo "perl /home/username/www/script.cgi" | at now + 1 minute

Linux: how to change maximum number of files a process can open?

I have to execute a process on a cluster of machines. The cluster size is on the order of 100, so I cannot start the processes manually; I have to start them by script (over ssh; currently I am using python-paramiko for this). The number of TCP sockets these processes open is more than 1024 (the default limit on Linux), so I need to raise it with ulimit -n 10000. That changes the limit for that shell session only, and raising it above the hard limit works only as root, so my script is not able to do it.
I tried to execute this command
sudo su && ulimit -n 10000 && <commandToExecuteMyProcess>
But this didn't work: the commands after sudo su didn't execute at all. They executed only after I logged out of the su session.
This article shows a way to make the change permanent, but when I opened limits.conf I didn't find anything there; it only has some commented-out notes.
Please suggest a way to increase the limit permanently, or to change it from a script for each session.
That's not how it works: sudo su just opens a new shell so you can enter commands as root, and after you exit that shell it executes the rest of the line as the normal user.
Second: this is a special case, because ulimit is not actually a program but a bash shell built-in command, so it must be used within bash. That is why something like sudo ulimit -n 10000 won't work: sudo can't find that program, because it doesn't exist.
So, the only alternative is a bit ugly but works:
sudo bash -c 'ulimit -n 10000 && <command>'
Everything inside '...' will be executed in a bash session running as root.
Note that you can replace && with ; in this case: since it is being executed as root, ulimit -n 10000 will always complete successfully.
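For the permanent change the question also asks about, the usual place is /etc/security/limits.conf, the mostly commented-out file mentioned above. A hedged example, assuming the processes run as a user named clusteruser:
# /etc/security/limits.conf
clusteruser  soft  nofile  10000
clusteruser  hard  nofile  10000
The new limits apply to fresh login sessions, provided the PAM stack includes pam_limits (it does by default on most distributions).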

Is it possible to detect when an application closes in bash

I need a way to figure out if an application I launch from a bash script has finished and closed so I may clean up after it.
Is there a way to detect this when launching the application from a script? Is there a function that will run the application and then block the script until the called application has finished and returned?
The purpose of this is to unlock LUKS partitions, launch an application that will use the data stored on them, and then, once that application returns, clean up and lock the LUKS partitions again.
Thanks.
Does something like this work for your first question?
list=$(ls -l1)
echo "ls output: $list"
The assignment blocks until the command inside $( and ) finishes, so echo only runs afterwards.
For your second question, a better way would be (if you have sudo permissions):
sudo su -c 'echo "hi from $(whoami)"'
echo "hi from $(whoami)"

Create a screen session that doesn't terminate with the program

I'm working on a startup script that is initiated from rc.local. I start up several programs with
screen -d -m my-prog
and that works great. However, if one of the programs has problems and exits, so does its session. I'd like the session to stick around so I can attach to it and see the program's output from before it crashed.
Is there a way to do this? I thought about
screen -d -m bash -c my-prog
But again, if my-prog terminates then so does bash and then so does screen.
You can follow the answer at https://unix.stackexchange.com/questions/47271/prevent-gnu-screen-from-terminating-session-once-executed-script-ends
They suggest something like what you were trying in your second attempt, but instead of using bash to invoke the command (which terminates with the command, as you noted), bash is invoked after the command finishes:
screen -dmS session_name sh -c 'my-prog; exec bash'
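Because bash replaces my-prog when it exits, the session survives the crash and you can reattach to inspect the output:
screen -r session_name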

Run commands in screen after creating one per bash

I have the following bash file which should create a screen, go to a directory and then start a node script:
screen -S shared      # 1
cd /home/nodejsapp    # 2
node start.js app.js  # 3
The problem is that after executing 1, I indeed see the screen 'shared', but 2 and 3 execute in the previous terminal, not in the screen 'shared'.
How can I make commands 2 and 3 run inside the screen session?
You may create a detached screen and then send commands to it. For example:
screen -d -m -S shared
screen -S shared -X -p 0 stuff $'cd /home/nodejsapp\n'
screen -S shared -X -p 0 stuff $'node start.js app.js\n'
If you need to attach to the screen session afterwards, then you can add one more line:
screen -S shared -r
See screen's manual for more details:
screen options
screen commands
You could run a "server" as the program within screen, which reads commands to execute from the pseudoterminal that the tty program identifies. For instance, as I'm writing this, tty reports (inside screen):
/dev/pts/2
and I can write to it by
date >/dev/pts/2
On the server side, the script would read line by line in a loop from the same device. (On some other systems there are differently-named devices for each side of the pseudoterminal.)
That only needs a server script which starts by getting the output of tty and writing it to a file that a corresponding client knows about; the client then reads commands (whether from the keyboard or a file) and writes them to the server via the pty device.
That's doable with just a couple of shell scripts (though a little lengthier than the usual answer here).
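A more robust variant of the same idea replaces the pty device with a named pipe (FIFO); this is a sketch under that substitution, and the FIFO path is an assumption. Run the server loop inside the screen session:
mkfifo /tmp/screen-cmds                    # create the command channel once
while true; do
    IFS= read -r cmd < /tmp/screen-cmds    # block until a client writes a line
    [ -n "$cmd" ] && eval "$cmd"           # run it inside this session
done
A client can then execute something in the session with, for example, echo 'date' > /tmp/screen-cmds.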
