I am trying to close a putty session that is running on some other computer.
You kill the process ID of the user's login session:
kill -9 12345
Try running the w command and looking at the output. Something like:
w | grep ssh
will show all users connected via ssh. More scripting and automation is possible to help you narrow down the process ID of the login session:
pgrep -u "$(w | grep ssh | awk '{print $1}')" ssh
will give you a list of numbers that are the PIDs of the login session. You can then use ps to verify that this is the session you want to kill. See the kill(1), ps, and pgrep manual pages.
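For example, before sending the kill you might double-check with ps that the PID really is the login session you mean (12345 is just the example PID from above):
ps -fp 12345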
You can get fancy and make a script or shell alias to print the users and their ssh sessions (NB: quick hack for illustration, not portable):
for u in $(w | grep ssh | awk '{print $1}')
do
    echo -e "\n$u"
    pgrep -x -l -u "$u" ssh
done
... and other variations on this theme. If you are killing sessions this way often, it's a good idea to have a script or tool that helps you identify the correct session before you kill -9 it, especially on a busy shell login host. Even more useful are tools that are cross-platform and/or POSIX-ish (w, who, ps, etc. vary slightly in their output formats). That kind of tool can be written in Perl, Ruby, or very careful sh and awk.
Accessing the IP address of a connecting SSH client is possible via environment variables (such as SSH_CONNECTION), as described in
Find the IP address of the client in an SSH session
In a GNU screen session though, those environment variables are defined by whoever started the screen to begin with. Is there any way to also get hold of the SSH connection information, for someone who enters an already-existing screen session later, like from another host?
I can't think of a way to determine this, but this can be useful in cases where screen sessions are shared between different people, for example.
If the screen session is launched as root, you can, but it won't be perfectly reliable.
If two users type in the same screen window, they will both interact within the same shell. One can write a command. The other can press the <enter> key.
You have to get access to the environment variable SSH_CONNECTION (or better, SSH_CLIENT), which is only possible if you are root, or if you use the same user inside the screen session.
Supposing you are root inside the screen session, you can identify the last active user in a screen session by using the ps command and finding the most recently active session.
ps h -C screen katime -o pid,user
By using the pid, and accessing the /proc/<pid>/environ file, you can get the SSH_CLIENT variable.
sed -z '/SSH_CLIENT/p;d' /proc/`ps h -C screen katime -o pid |head -1`/environ
--> SSH_CLIENT=257.31.120.12
All of this supposes that your screen is executed as root.
You can also choose to log all the active connections.
For that, I would suggest storing both the full list of connections and their last activity.
ps eh -C screen kstime -o pid,atime | while read pid stime; do echo -n "$stime: ";\
gawk -v 'RS=\0' -F= '$1=="SSH_CLIENT" {print $2}' /proc/$pid/environ; done
Result:
00:00:00: 257.31.120.12 61608 22
00:07:11: 258.1.2.3.4 49947 22
Note that you can also parse the result of the ps eh -C screen kstime -o args command if you find it easier.
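As a rough sketch of that variant, assuming ps's e flag appends the environment to the args column on your system:
ps eh -C screen kstime -o args | grep -o 'SSH_CLIENT=[^ ]*'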
EDIT:
This is a working Debian command to get all users currently connected to the same screen session:
find /var/run/screen/ \
  -name "$(pstree -sp $$ | sed 's/.*screen(\([0-9]*\)).*/\1/;q').*" \
  -printf "%h\n" \
  | cut -f2 -d-
You can check the output of the last command, which will list the IP addresses or hostnames of all connections made, if sshd is the only way to connect to the server.
# last
ec2-user pts/0 115.250.185.183 Sun May 29 13:49 still logged in
ec2-user pts/0 115.250.140.241 Sat May 28 07:26 - 10:15 (02:48)
root pts/4 113.21.68.105 Tue May 3 10:15 - 10:15 (00:00)
Alternatively (on Linux), you can check /var/log/secure where sshd will usually log all details of all the connections made even if they don't result in successful logins.
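For example (the exact log path and message format vary between distributions, so treat this as a sketch):
grep -E 'sshd.*(Accepted|Failed)' /var/log/secure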
If you're trying to support the multi-display mode ('screen -x'), then as someone said above you are likely out of luck.
On the other hand, if you could assume single-user mode, then you could create a wrapper/alias for the screen command that carries an environment variable along into screen (see 'screen -X stuff ...'); in this case you are just passing along SSH_CLIENT, which will have the appropriate value.
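A rough sketch of that wrapper idea, assuming single-user mode and a shell running in the session's current window ('mysession' is a placeholder session name):
# type an export of the current SSH_CLIENT into the session, then attach to it
screen -S mysession -X stuff "export SSH_CLIENT='$SSH_CLIENT'"$'\n'
screen -r mysession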
If you can assume a given username comes from a single location (or, if more than one location, then simply choose most recent), then you can do some grep/sed on output of 'last' command.
client_ip=`last -ai | grep "still logged in" | grep "$USER " | grep -v '0.0.0.0' | tail -n 1 | sed 's/.* //g'`
echo "Hello $client_ip"
If your screen usually starts in detached mode, then add the following to your .screenrc:
shell -$SHELL
Then your screen will have all the variables.
For currently running screens that you are stuck with, simply run:
source ~/.bash_profile
Replace the path and the file name to match your environment.
I'm pretty inexperienced with Linux bash. That being said, I have a CentOS7 machine that runs a COTS application server. This application server runs other processes that sometimes hang. Since I have no control over the start of these processes, I'm looking for a script that runs every 2 minutes that kills processes of the name "spicer" that have been running for longer than 10 minutes. I've looked around and have only been able to find answers for processes that are run and owned by me.
I use the command ps -eo pid,command,etime | grep spicer to get all the spicer processes. The output of this command looks like:
18216 spicer -l/opt/otmm-10.5/Spi 14:20
18415 spicer -l/opt/otmm-10.5/Spi 11:49
etc...
18588 grep --color=auto spicer
I don't know if there's a way to parse this directly in bash. I'm also not well-versed at all in other Linux tools. I know that awk (or gawk) could possibly help.
EDIT
I have no control over the data that the process is working on.
What about wrapping the spicer executable and starting it via the timeout command? Let's say it is installed in /usr/bin/spicer. Then issue:
cp /usr/bin/spicer{,.orig}
echo '#!/bin/bash' > /usr/bin/spicer
echo 'timeout 10m spicer.orig "$@"' >> /usr/bin/spicer
Another approach would be to create a cron job definition in /etc/cron.d/kill_spicer, like this:
* * * * * root kill $(ps --no-headers -C spicer -o pid,etimes | awk '$2>=600{print $1}')
The cron job runs every minute and uses ps to obtain a list of spicer processes that have been running for longer than 10 minutes, then passes their PIDs to kill.
You probably even want kill -9 if the process is hanging.
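Put together as a standalone script along the same lines (a sketch; etimes is the elapsed time in seconds, and you would schedule it from cron, e.g. */2 * * * * for every 2 minutes):
#!/bin/bash
# kill any "spicer" process that has been running for 10 minutes (600 s) or more
ps --no-headers -C spicer -o pid,etimes | awk '$2 >= 600 {print $1}' | xargs -r kill -9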
You can use the -C option of ps to select processes by name.
ps --no-headers -C spicer -o pid,etime
Then you can use cut to filter the results, if the spacing is consistent. On my system the pid field takes up 8 characters, so I'd use
kill $(ps --no-headers -C spicer -o pid,etime | cut -c-8)
If the spacing is inconsistent (but if so, what kind of messed up ps are you using? :-P), you can use awk '{ print $1 }' instead of cut.
I am very new to shell scripting; can anyone help solve a simple problem? I have written a simple shell script that does the following:
1. Stops a few servers.
2. Kills all the processes owned by user1.
3. Starts a few servers.
This script runs on the remote host, so I need to ssh to the machine, copy my script over, and then run it. The command I have used for killing all the processes is:
ps -efww | grep "user1"| grep -v "sshd"| awk '{print $2}' | xargs kill
Problem 1: Since user1 is used for the ssh login and for running the script, the command kills the process that is running the script, so it never gets to starting the servers. Can anyone help me modify the above command?
Problem 2: How can I automate the process of sshing into the machine and running the script?
I have tried an expect script, but do I need a separate script for sshing and performing these tasks, or can I do it all in one script?
Any help is welcome.
Basically the answer is already in your script.
Just exclude your script from found processes like this
grep -v <your script name>
Regarding running the script automatically after you ssh, have a look here; it can be done with a special ssh configuration.
Just create a simple script like:
#!/bin/bash
ssh user1@remotehost '
someservers stop
# kill processes here
someservers start
'
In order to avoid the script killing itself while stopping all of the user's processes, try adding | grep -v bash after grep -v "sshd".
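For example, the whole pipeline with that extra exclusion might look like this (a sketch; adjust the patterns if your users run a shell other than bash):
ps -efww | grep "user1" | grep -v "sshd" | grep -v "bash" | awk '{print $2}' | xargs kill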
This is a problem with some nuance, and not straightforward to solve in shell.
The best approach
My suggestion, for easier system administration, would be to redesign. Run the killing logic as root, for example, so you may safely TERMinate any luser process without worrying about sawing off the branch you are sitting on. If your concern is runaway processes, run them under a timeout. Etc.
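For instance, as root you could simply send a TERM signal to everything owned by that account (a sketch, with user1 standing in for the user in question):
pkill -TERM -u user1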
A good enough approach
Your ssh login shell session will have its own pseudo-tty, and all of its descendants will likely share that. So, figure out that tty name and skip anything with that tty:
TTY=$(tty | sed 's!^/dev/!!') # TTY := pts/3 e.g.
ps -eo tty=,user=,pid=,cmd= | grep luser | grep -v -e ^$TTY -e sshd | awk ...
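A fuller sketch of the same idea, with luser as the placeholder username from above and the PID taken from the third column:
TTY=$(tty | sed 's!^/dev/!!')
ps -eo tty=,user=,pid=,cmd= | grep luser | grep -v -e "^$TTY" -e sshd | awk '{print $3}' | xargs -r kill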
Almost good enough approaches
The problem with "almost good enough" solutions (like simply excluding the current script and sshd via ps -eo user=,pid=,cmd= | grep -v -e sshd -e fancy_script | awk ...) is that they rely heavily on the accident of invocation. ps auxf probably reveals that you have a login shell in between your script and your sshd (probably -bash); you could put in special logic to skip that, too, but that's hardly robust if your script's invocation changes in the future.
What about question no. 2? (How can I automate sshing...?)
Good question. Off-topic. Try superuser.com.
How can I automatically end/destroy an existing connected PuTTY session (telnet or ssh) for a user, if root has changed that user's permissions so they can no longer run any commands?
I'm not clear on the situation. Are you asking how a sysadmin could kill existing connections?
ps -fu joebob | awk '{print $2}' | xargs kill -9
or you could restrict it to only processes with a controlling terminal, with a little more programming (and not kill joebob's processes that are running in the background).
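For instance, a sketch that skips anything without a controlling terminal, so background daemons and the like survive:
ps -u joebob -o pid=,tty= | awk '$2 != "?" {print $1}' | xargs -r kill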
Or is this from the perspective of the user himself? exit is an internal shell command that will terminate the shell. Most shells also have an auto-logout feature that logs the user out after so many seconds of inactivity.
So far, it seems too contrived of a situation. What are you really trying to do?
I am accessing a server running CentOS (linux distribution) with an SSH connection.
Since I can't always stay logged in, I use "nohup [command] &" to run my programs.
I couldn't find how to get a list of all the programs I started using nohup.
"jobs" only works before I log out. After that, if I log back in again, the jobs command shows me nothing, but I can see in my log files that my programs are still running.
Is there a way to get a list of all the programs that I started using "nohup" ?
When I started it with $ nohup storm dev-zookeeper,
METHOD 1: using jobs
prayagupd@prayagupd:/home/vmfest# jobs -l
[1]+ 11129 Running nohup ~/bin/storm/bin/storm dev-zookeeper &
NOTE: jobs shows nohup processes only in the same terminal session where nohup was started. If you close the terminal session or try from a new session, it won't show the nohup processes. Prefer METHOD 2.
METHOD 2: using the ps command.
$ ps xw
PID TTY STAT TIME COMMAND
1031 tty1 Ss+ 0:00 /sbin/getty -8 38400 tty1
10582 ? S 0:01 [kworker/0:0]
10826 ? Sl 0:18 java -server -Dstorm.options= -Dstorm.home=/root/bin/storm -Djava.library.path=/usr/local/lib:/opt/local/lib:/usr/lib -Dsto
10853 ? Ss 0:00 sshd: vmfest [priv]
A ? in the TTY column => programs running under nohup (see the one-liner after the reference below).
Description
TTY column = the terminal associated with the process
STAT column = state of a process
S = interruptible sleep (waiting for an event to complete)
l = is multi-threaded (using CLONE_THREAD, like NPTL pthreads do)
Reference
$ man ps # then search /PROCESS STATE CODES
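Building on the note about the ? TTY column, a quick one-liner sketch to list only the processes that have no controlling terminal:
ps xw | awk '$2 == "?"'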
Instead of nohup, you should use screen. It achieves the same result - your commands are running "detached". However, you can resume screen sessions and get back into their "hidden" terminal and see recent progress inside that terminal.
screen has a lot of options. Most often I use these:
To start your first screen session or to take over the most recent detached one:
screen -Rd
To detach from the current session: Ctrl+A Ctrl+D
You can also start multiple screens - read the docs.
If you have standard output redirected to "nohup.out", just see who is using this file:
lsof | grep nohup.out
You cannot exactly get a list of commands started with nohup but you can see them along with your other processes by using the command ps x. Commands started with nohup will have a question mark in the TTY column.
You can also just use the top command; your user ID will indicate the jobs running and their times.
$ top
(this will show all running jobs)
$ top -U [user ID]
(This will show only the jobs belonging to the given user ID)
sudo lsof | grep nohup.out | awk '{print $2}' | sort -u | while read i; do ps -o args= $i; done
returns all processes that use the nohup.out file