PID of a command submitted to LSF with bsub (Linux)

When a command is submitted with bsub, LSF starts a res process, and res in turn starts the actual command as a child process. I want to know the PID of that actual command.
Say I have submitted this command:
bsub -I -q interactive virtuoso &
With bhist -l jobid we can find the PID of res, but I have been unable to find a way to get the PID of virtuoso.

If you run a script that calls virtuoso, you can capture the PID of virtuoso from within the script and then output it. Something like this should work:
#!/bin/bash
jobs &>/dev/null
virtuoso &
new_job_started="$(jobs -n)"
if [ -n "$new_job_started" ]; then
    VAR=$!
else
    VAR=
fi
echo "$VAR"
I don't know how useful this will be, since you probably won't be on the same machine that your interactive shell is running on, so you won't be able to access the process with that PID.
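If you can run commands on the execution host, another option (a sketch, not LSF-specific) is to look up the children of the res PID that bhist -l reports, e.g. with pgrep -P. A local stand-in demo, with sleep playing the role of the actual command and the current shell playing the role of res:

```shell
#!/bin/bash
# Demo: find a child's PID given its parent's PID with pgrep -P.
# In the LSF case, the parent PID would be the res PID from bhist -l.
sleep 30 &                   # stand-in for the command started by res
child=$!
found=$(pgrep -P $$ sleep)   # children of this shell named "sleep"
echo "child of $$: $found"
kill "$child"
```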

Related

Get PID of the jobs submitted by nohup in Linux

I'm using nohup to submit background jobs on the machines I get through bsub and ssh.
My primary machine runs RHEL. From there I pick up an AIX machine with bsub (which submits a job to LSF) and also log in to another server over SSH.
On each of those two machines I execute a script (inner.sh) through nohup.
I capture the respective PIDs with echo $$ inside the script I'm executing (inner.sh).
After submitting the nohup execution in the background, I exit both machines and land back on the primary RHEL machine.
Now, from the RHEL machine, I try to get the status of the nohup executions with ps -p PID using the two previously captured PIDs, but no process is listed.
Top level wrapper script wrapper.sh:
#!/bin/bash
#login to a remote server
ssh -k xyz@abc < env_setup.sh
#picking up an AIX machine from LSF
bsub -q night -Is ksh -i env_setup.sh
ps -p "$(cat process-<AIX_machine>.pid)"
#got no output
ps -p "$(cat process-<server_machine>.pid)"
#got no output
Script passed to the machines picked up by bsub/ssh, which starts inner.sh under nohup (env_setup.sh):
#!/bin/bash
nohup sh /path/to/dir/inner.sh > /path/to/dir/log-<hostname>.out &
exit
The actual script that I am trying to execute on the machines picked up by bsub/ssh (inner.sh):
#!/bin/bash
echo $$ > /path/to/dir/process-<hostname>.pid
#hopefully this gives us the correct PID on the remote machine
#execute some other commands
Now both process-<hostname>.pid files are updated with a PID, one for each machine.
But ps -p in the wrapper script gives no output.
I'm picking up the process IDs from the remote machines and running ps -p on my local RHEL machine.
Is that the reason I'm not getting any status for those two processes?
Can I do anything else to get the status?
ps only reports the status of local processes. To check the remote processes you have to run ps on each remote machine (for example over ssh), or use LSF's own status commands such as bjobs for the job submitted with bsub.

ssh does not return even after execution

The following ssh command does not return to the terminal. It hangs even though the execution has completed; it stops after the echo hi command.
ssh user@testserver "echo hello;source .profile;source .bash_profile;/apps/myapp/deploytools/ciInstallAndRun.sh; echo hi"
Output
hello
<output from remote script>
hi
ciInstallAndRun.sh
echo 'starting'
cd /apps/myapp/current
./tctl kill
cd /apps/myapp
mv myapp_v1.0 "myapp_v1.0_`date '+%Y%m%d%H%M'`"
unzip -o /apps/myapp/myappdist-bin.zip
java -classpath .:/apps/myapp/deploytools/cleanup.jar se.telenor.project.cleanup.Cleanup /apps/myapp myapp_v1.0_ 3
cd /apps/myapp/myapp_v1.0
echo 'Done with deploy'
chmod -R 775 *
echo 'Done'
./tctl start test
Source OS: Redhat
Dest Os: Solaris 10 8/07
Any idea how to fix this?
Your installation script has spawned a child process.
Add a ps -f or ptree $$ command before echo hi. You'll see a child process or multiple child processes spawned by your install script.
To stop the SSH command from hanging, you need to detach such child process(es) from your terminal's input/output. You can redirect your script's output to a file - both stdout and stderr with > /some/output/file 2>&1 - and also redirect its input with < /dev/null.
Or you can use the nohup command.
You haven't provided an MCVE, as others have noted, but this is likely the problem command in your install script, since your question implies that you see the expected output from the script:
./tctl start test
You probably would do better to replace it with something like:
./tctl start test </dev/null >/some/log/file/path.log 2>&1
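The effect is easy to reproduce without ssh; a sketch with sleep standing in for the server process that the install script leaves running:

```shell
#!/bin/bash
# Detaching the child's stdio (as in the fixed tctl line above) means
# nothing keeps the session's output stream open after the command
# line finishes, so ssh can return immediately instead of waiting for
# the lingering child to exit.
sleep 5 </dev/null >/dev/null 2>&1 &
echo hi   # with the redirections in place, this is the last output
```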

Execute a command as another user and get the PID for that process

I'm trying to capture the PID of a program that I am running from my init script so I can come back and kill it later. When I run the command without switching to a different user, it works just fine and I get the PID in a variable. When I execute the same command as a different user, however, I cannot get that command's PID to store in a variable. This is what I get:
[root@fenix-centos ~]# PID=`su - $USER -c "$DAEMONPATH $DAEMONPATHARGS $DAEMON $DAEMONARGS > /dev/null 2>&1 & echo \$! "`
[root@fenix-centos ~]# echo $PID
...and nothing. Is there some weird thing that would prevent me from getting the PID of a process being started by a different user and storing that PID in a variable? The process still starts, but I'm not getting the PID.
After going through the link to your script, I suggest this approach:
Perform the variable assignments (the values you're passing as arguments to su) in a file:
[tom@jenkins ~]# cat source_file
DAEMONPATH=/usr/bin/java
DAEMONPATHARGS='-jar -Xmx768'
DAEMON=/opt/megamek-0.38.0/MegaMek.jar
DAEMONARGS='-dedicated -port 2346'
Source the above file in your command:
PID=`su - $USER -c '. source_file; $DAEMONPATH $DAEMONPATHARGS $DAEMON $DAEMONARGS > /dev/null 2>&1 & echo $! '`
It seems your syntax is not working because $! is evaluated by the original shell that runs su, not by the shell that su runs.
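A minimal demo of that quoting difference, with sh standing in for su (no second user needed): single quotes pass $! through to the inner shell untouched, so it expands to the background job's PID there.

```shell
#!/bin/bash
# With single quotes, the inner shell expands $! after starting its
# background job; with double quotes, the outer shell would expand it
# first, before the inner job even exists, yielding an empty string.
pid=$(sh -c 'sleep 30 >/dev/null 2>&1 & echo $!')
echo "captured PID: $pid"
kill "$pid"
```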

Kill ssh or\and remote process from bash script

I am trying to run the following command as part of a bash script. It is supposed to open an ssh channel, run the program on the remote machine, save its output to a file for 10 seconds, kill the process that was writing to the file, and then give control back to the bash script.
#!/bin/bash
ssh hostname '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null; sshpid=!$; sleep 10; kill -9 $sshpid 2>/dev/null &'
Unfortunately, what it seems to do is start nodes-listener remotely, but it never gets any further and never returns control to the bash script; the only way to stop the execution is Ctrl+C.
Killing ssh doesn't help (or rather can't be executed), since control is not with the bash script: it waits for the command within the ssh session to complete, which never happens, as the command has to be killed to stop.
Here's the command line that you're running on the remote system:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null
sshpid=!$
sleep 10
kill -9 $sshpid 2>/dev/null &
You should change it to this:
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null & <-- Ampersand goes here
sshpid=$!
sleep 10
kill -9 $sshpid 2>/dev/null
You want to start nodes-listener and then kill it after ten seconds. To do this, you need to start nodes-listener as a background process, so that the shell executing this command line can move on to the next command after starting nodes-listener. The & in your command line is in the wrong place and would apply only to the kill command; you need to apply it to the nodes-listener command.
I'll also note that your sshpid=!$ line was incorrect. You want sshpid=$!. $! is the process ID of the last command started in the background.
You need to place the ampersand after the first command, then put the remaining commands onto the next line:
ssh hostname -- '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!; sleep 10; kill $sshpid 2>/dev/null'
Btw, ssh returns only after all commands have been executed, which means it closes the allocated pty as well. Any background jobs still running in that shell session are then killed by SIGHUP. This means you can probably omit the explicit kill command (depending on whether nodes-listener handles SIGHUP and SIGTERM differently). With that, you can simplify the code to the following:
ssh hostname -- sh -c '/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sleep 10'
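On systems with GNU coreutils, timeout(1) expresses the same "run for N seconds, then kill" idea in a single word (a sketch; sleep stands in for nodes-listener):

```shell
# timeout starts the command, waits the given number of seconds, then
# kills it; exit status 124 signals that the time limit was hit.
timeout 2 sleep 60
echo "exit status: $?"   # prints "exit status: 124"
```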
I resolved this by pushing the shell script to the remote machine and executing it there. It is less tidy and relies on space being available on the remote computer.
Since my remote machine is a small physical device, space usage matters (even the tiny amount required in this case).
/root/bin/nodes-listener > /tmp/nodesListener.out </dev/null &
sshpid=$!
sleep 20
sync
# killing nodes-listener process and giving control back to the base bash
killall -9 nodes-listener 2>/dev/null && echo "nodes-listener is killed"

passing control+C in linux shell script

In a shell script I have a command like top -p PID, followed by some more commands. As soon as top -p PID runs, I have to press Ctrl+C to exit it before the further commands execute. I want to run this periodically: everything I need is in a shell script that I want to put into crontab. The only thing that bothers me is: once the script is scheduled in crontab, after top -p PID starts, how will I supply the Ctrl+C needed to let the further commands execute? Please help.
My script is very simple:
top -p $1
free -m
netstat -antp|grep 3306|grep $1
jmap -dump:file=my_stack$RANDOM.bin $1
You can send signals with kill. In your case, however, you can simply restrict top to a single iteration:
top -p $1 -n 1
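One caveat worth noting for the cron use case (an assumption about your environment, not part of the original answer): under cron there is no terminal, so top also needs batch mode (-b) or it will refuse to start:

```shell
#!/bin/bash
# -b: batch mode (works without a terminal, e.g. under cron)
# -n 1: exactly one iteration, so the script moves on afterwards
# $$ is used here only so the demo monitors a PID that exists;
# in your script the argument would be $1.
top -b -n 1 -p $$
```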
Update:
You can redirect the output of a command to a file. Either overwrite the file each time
command.sh >file.txt 2>&1
or append to a file
command.sh >>file.txt 2>&1
If you don't want the error output, leave out the 2>&1 part.
pid -p PID &
some_pid=$!
kill -s INT $some_pid
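A variation on the snippet above, as a runnable sketch (sleep stands in for the blocking command). One subtlety: in a non-interactive script without job control, background jobs are started with SIGINT ignored, so the INT that Ctrl+C would deliver may have no effect; plain kill (SIGTERM) is the dependable way to stop them from a script.

```shell
#!/bin/bash
# sleep stands in for the blocking command (e.g. top -p PID).
sleep 30 &
some_pid=$!
sleep 1                       # let it do its work for a moment
kill "$some_pid"              # SIGTERM; background jobs in scripts
                              # ignore SIGINT, so INT may not work here
wait "$some_pid" 2>/dev/null  # reap it; further commands now run
echo "stopped, moving on"
```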
