remote ssh command not working properly - linux

The following local command on host xyz provides the following correct output
taskset -p `ps -ef | grep ripit | grep -v grep| awk '{print \$2}'`
pid 21352's current affinity mask: 1
When I run the following command over ssh to host xyz, I also get the correct output:
ssh xyz "ps -ef | grep ripit | grep -v grep |awk '{print \$2}'"
21352
However, when I add the taskset command and run it remotely on host xyz, I get this incorrect output:
ssh xyz "taskset -p `ps -ef | grep ripit | grep -v grep | awk '{print \$2}'`"
sched_getaffinity: No such process
failed to get pid 27599's affinity
bash: line 1: 32127: command not found
I tried many different single- and double-quote combinations and used escape characters all over the place, to no avail. Can anyone help?
Thanks

I haven't tested with your exact commands, but
ssh host 'lsof -p $(pgrep program)'
worked for me
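Applied to the question's own command, the same pattern would be (my adaptation, untested, assuming ripit matches exactly one process):
ssh xyz 'taskset -p $(pgrep ripit)'
The single quotes keep the $(...) substitution on the remote side.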

For running commands remotely:
#!/bin/bash
SCRIPT='
#Your commands
'
sshpass -p<pass> ssh -o 'StrictHostKeyChecking no' -p <port> user@host "$SCRIPT"
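For example, with hypothetical values filled in (the password, port, and user are placeholders, and the script is inlined as a single remote command, single-quoted so it expands on the remote host):
sshpass -p 's3cret' ssh -o 'StrictHostKeyChecking no' -p 22 user@xyz 'taskset -p $(pgrep ripit)'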

When I add the taskset command and run it remotely on host xyz:
ssh xyz "taskset -p `ps -ef | grep ripit | grep -v grep | awk '{print \$2}'`"
Here, the command substitution between the backquotes is executed on the local host and yields a local process ID, so it is no wonder there is no such process on the remote host. If you escape the backquotes, as in
ssh xyz "taskset -p \`ps -ef | grep ripit | grep -v grep | awk '{print \$2}'\`"
the command substitution is executed on the remote host and yields the correct process ID.
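A quick way to see the difference, using hostname as a stand-in (my illustration, not part of the original answer):
ssh xyz "echo `hostname`"      # expands locally; prints the local machine's name
ssh xyz "echo \`hostname\`"    # escaped; expands on xyz and prints xyz's name
ssh xyz 'echo $(hostname)'     # single quotes also keep the substitution remote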

Related

How to make a command with grep captured fields?

I'm trying to capture a running process and strace it in one line.
I managed to match only the one I want with this command:
ps -ef | grep "[0-9].*[0-9] /usr/bin/python3 /home/pi/readcard.py"
outputs this:
root 676 668 99 11:00 ? 00:34:21 /usr/bin/python3 /home/pi/readcard.py
Now I'm trying to capture the process PID with this regex and use it in another command:
ps -ef | grep "([0-9]+).*[0-9] /usr/bin/python3 /home/pi/readcard.py"
How could I make it run something like this?
sudo strace -f -p{captured_field} -s9999 -e write
Use awk to show only the second column:
ps -ef | grep "[0-9].*[0-9] /usr/bin/python3 /home/pi/readcard.py" | awk '{print $2}'
You can then use this as input to another command by embedding it in $(...), as follows:
sudo strace -f -p $(ps -ef | grep "[0-9].*[0-9] /usr/bin/python3 /home/pi/readcard.py" | awk '{print $2}') -s9999 -e write
Good luck
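If pgrep is available, the whole grep chain can be skipped (my sketch, assuming readcard.py matches exactly one running process):
sudo strace -f -p "$(pgrep -f readcard.py)" -s9999 -e write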

.bashrc saves previous process id and does not update in alias commands

I have made an alias in .bashrc to kill my python service.py & process
alias servicestop="kill $(ps -ef | grep -w service.py | grep -v grep | awk '{print $2}')"
The first time I run the servicestop command, it kills the process.
But when I start python service.py & again and then execute servicestop, it gives an error.
After some research, I found the following.
When I first ran python service.py &, its process ID was 512,
and the servicestop command killed that process (512).
When I ran python service.py & a second time, its process ID was 546 (it will certainly be different).
When I then ran servicestop, it gave the following error:
-bash: kill: (512) - No such process
That means $(ps -ef | grep -w service.py | grep -v grep | awk '{print $2}') returns the previous PID, which has already been killed.
As it stands, whenever I want to run servicestop I first have to run source .bashrc to make it work again.
Please suggest a solution if one is possible.
The problem is that, inside double quotes, the $(...) is expanded once, when .bashrc is sourced, so the old PID gets frozen into the alias. Remove the servicestop alias from your .bashrc and add:
servicestop(){
kill $(ps -ef | grep -w service.py | grep -v grep | awk '{print $2}');
}
In a way, functions in .bashrc are "aliases 2.0": simply better.
Better still: the same function, but with the name of the script to kill as a parameter:
servicestop(){
kill $(ps -ef | grep -w "$1" | grep -v grep | awk '{print $2}');
}
Use it like this:
servicestop service.py
servicestop otherSuperService.py
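If you would rather keep an alias, single quotes also work, because they defer the command substitution until the alias is actually used; a sketch using pgrep to avoid nesting the awk quotes (assuming service.py only matches the running service):
alias servicestop='kill $(pgrep -f service.py)'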

How can I get an output of one command as an argument to other linux command?

I am getting process id for a process using:
ps -ef | awk '$8=="process name" {print $2}'
How can I use the output of the above command as input to the command below?
ps -p <pid> -o %cpu,%mem,cmd
Basically, I need the above two commands executed as a single command.
Pipe it to xargs:
... | xargs -I {} ps -p {} -o %cpu,%mem,cmd
With -I {}, the {} acts as a placeholder that xargs replaces with the piped-in PID before running the final command.
Alternatively you can also use command substitution
ps -p $(ps -ef | awk ...) -o %cpu,%mem
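For example, putting the two steps together (myprocess is just a placeholder for the exact command name as it appears in the ps output):
ps -ef | awk '$8=="myprocess" {print $2}' | xargs -I {} ps -p {} -o %cpu,%mem,cmd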

How to pass parts of a command as variable bash?

a="grep ssh | grep -v grep"
ps -ef | $a | awk '{print $2}'
How can I make the above work? I have a section where I need to pass not just one grep term but possibly more than one, meaning I need to pass "term1 | grep term2" as a variable.
Edit:
another.anon.coward's answer below worked perfectly for me. Thank you, sir!
Create a function instead:
a() {
grep ssh | grep -v grep
}
ps -ef | a | awk '{print $2}'
The other solution is to use eval, but it is unsafe unless the input is properly sanitized, and it is not recommended.
a="grep ssh | grep -v grep"
ps -ef | eval "$a" | awk '{print $2}'
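A middle ground (my sketch, not from the answer above) is to pass the search term to the function as an argument, which avoids eval entirely:
a() {
grep "$1" | grep -v grep
}
ps -ef | a ssh | awk '{print $2}'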
If you want just the pid of a process, then use pgrep.
pgrep ssh
You can put this in a bash script like the following (a.bash):
#!/bin/bash
pname=$1
pgrep "$pname"
or, if you want ps -ef for other purposes as you've written, the following inside a script might work:
pname=$1
ps -ef | grep "$pname" | grep -v grep | awk '{print $2}' # I would personally prefer this
OR
ps -ef | eval "$pname" | awk '{print $2}' # here $pname can be "grep ssh | grep -v grep"
Change the permissions to make it executable, then run it:
chmod a+x a.bash
./a.bash ssh

How do I get "awk" to work correctly within a "su -c" command?

I'm running a script at the end of a Jenkins build to restart Tomcat. Tomcat's shutdown.sh script is widely known not to work in many instances, so my script is supposed to capture the PID of the Tomcat process and then attempt to shut it down manually. Here is the command I'm using to capture the PID:
ps -ef | grep Bootstrap | grep -v grep | awk '{print $2}' > tomcat.pid
When run manually, this retrieves the PID perfectly. During the Jenkins build, however, I have to switch users to run the command. I'm using "su user -c 'commands'" like this:
su user -c "ps -ef | grep Bootstrap | grep -v grep | awk '{print $2}' > tomcat.pid"
Whenever I do this, however, the "awk" portion doesn't seem to work: instead of just the PID, it captures the entire process information. Why is this? How can I fix the command?
The issue is that $2 is processed by the original shell before the string is handed to the new user's shell. Since the value of $2 in the original shell is blank, the awk command in the target shell essentially becomes awk '{print }', which prints the whole line. To fix it, just escape the $2:
su user -c "pushd $TOMCAT_HOME;ps -ef | grep Bootstrap | grep -v grep | awk '{print \$2}' > $TOMCAT_HOME/bin/tomcat.pid"
Note that you do want $TOMCAT_HOME to be processed by the original shell so that its value is set properly.
You don't need the pushd command, and you can also replace the awk command with:
cut -d\  -f2
Note: there are two spaces between -d\ and -f2 (the first, escaped by the backslash, is the space delimiter).
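Another way to sidestep the escaping entirely (my sketch, which forgoes the outer-shell expansion of $TOMCAT_HOME discussed above) is to single-quote the -c string and double-quote the awk program inside it:
su user -c 'ps -ef | grep Bootstrap | grep -v grep | awk "{print \$2}" > tomcat.pid'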
