Kill processes whose command contains a value lying within a specific range - linux

I run a lot of curl processes through a script. These curl processes specify the local ports to be used, and now I need to kill some of these processes based on their local ports. For example, I want to kill the processes with local ports lying between 30000 and 30100.
How do I kill only the processes with local ports between 30000 and 30100?
I believe I could write a Perl script to parse the output, extract the values of the local port, and then kill the processes satisfying my conditions, but is there a way to do it with a single nested Linux command, perhaps using awk?

You can do:
ps aux | awk '$14>=30000 && $14<=30100 && $0~/curl/ { print $2 }' | xargs kill -9
Based on your screenshot, the port values appear in the 14th column ($14 holds this value). Adding the check $0~/curl/ grabs only the lines containing curl, effectively removing the need for grep, and print $2 prints the process id. We then pipe the output to xargs, which calls kill.
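Relying on a fixed column number is fragile if the curl invocations vary. As a rough alternative sketch (assuming the script passes the port as a separate `--local-port` argument, which the command line would then contain), you can scan the argument list for the port value instead:

```shell
# Print PIDs whose curl command line has --local-port in 30000-30100.
# The sample input below is hypothetical; in practice feed it from:
#   ps -eo pid=,args= | filter_ports | xargs -r kill
filter_ports() {
  awk '/curl/ {
    for (i = 2; i < NF; i++)
      if ($i == "--local-port" && $(i+1)+0 >= 30000 && $(i+1)+0 <= 30100)
        print $1
  }'
}

printf '%s\n' \
  '123 curl --local-port 30050 http://example.com/a' \
  '456 curl --local-port 29999 http://example.com/b' | filter_ports
```

Only the first sample line's PID (123) is printed here, since 29999 falls outside the range; the `+0` forces awk to compare the field numerically.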

You can use
kill `lsof -i TCP@<your-ip-address>:30000-30100 -t`
to kill the processes attached to those ports, where <your-ip-address> must be the IP address that those connections use on the local side (this could be localhost or the external IP address of your host, depending on your setup).
If you leave the IP address out, you risk killing unrelated processes that happen to be connected to a destination port in the given range.
See this post for the background on lsof.

You can use the pkill command like so:
pkill -f -- 'curl.*local-port 30(0[0-9][0-9]|100)'
A less strict regular expression of course works, too, if you are sure you won't kill unrelated processes. You can do pgrep -fa -- <regexp> first to check if your regexp is correct, if you think that is necessary.
Note that matching number ranges is not one of the strengths of regular expressions.
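Because range-matching regexes are easy to get wrong, it can help to sanity-check the pattern against sample command lines before pointing pkill at it. A small check (the port values are illustrative):

```shell
# Verify that 30(0[0-9][0-9]|100) covers exactly 30000-30100:
# 30000-30099 via 0[0-9][0-9], and 30100 via the alternative.
for port in 29999 30000 30050 30100 30101; do
  if echo "curl --local-port $port" | grep -Eq 'curl.*local-port 30(0[0-9][0-9]|100)'; then
    echo "$port matches"
  else
    echo "$port no match"
  fi
done
```

Only 30000, 30050, and 30100 should report a match; the two boundary neighbors should not.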

Related

listen EADDRINUSE: address already in use :::5000

In my application, I use concurrently to run both the backend and the front end simultaneously. After Ctrl+C, port 5000 is still in use, and so is port 3000. I have to kill the processes manually. How can I solve this?
Run cmd.exe as 'Administrator':
C:\Windows\System32>taskkill /F /IM node.exe
run ps -ax | grep node
you will get a result with the process id
4476 pts/0 Sl+ 0:01 node index.js
then kill the process with kill -9 4476
as simple as that
lsof finds open files (sockets are files on *nix systems); -t strips the headers so that we can pipe straight into kill (we just want the process IDs), and -i lets lsof match files by internet address. We do not have to provide the full address; we can search by port alone using the pattern :port.
Some commands accept input from stdin and we can pipe directly to them; kill is not one of those commands, so we must use xargs (it reads from stdin and calls the specified command with that input).
Finally, the ; lets us execute both commands independently of one another. Regardless of whether lsof -ti:3000 | xargs kill succeeds or fails,
lsof -ti:5000 | xargs kill will run, and vice versa.
lsof -ti:3000 | xargs kill; lsof -ti:5000 | xargs kill
Restart your laptop/server, it will release all the busy ports, then try again... you can also use
ps aux | grep node
and then kill the process using:
kill -9 PID..
You can kill all the processes that are using node, but this can also kill a system process:
Not preferred: killall -9 node
Most of the time it won't work for nodemon, and it didn't work for me.
You can fix this issue by killing the process using the address, or by simply restarting your device.
1- On Linux, to kill the process using the address, use the following command:
sudo kill $(sudo lsof -t -i:8080)
replacing 8080 with the port your app uses.
You can simply restart your laptop, or change the port number; it should work.

Shell script to avoid killing the process that started the script

I am very new to shell scripting; can anyone help solve a simple problem? I have written a simple shell script that:
1. Stops a few servers.
2. Kills all the processes owned by user1.
3. Starts a few servers.
This script runs on a remote host, so I need to ssh to the machine, copy my script over, and then run it. The command I have used for killing all the processes is:
ps -efww | grep "user1"| grep -v "sshd"| awk '{print $2}' | xargs kill
Problem 1: since user1 is used for ssh and for running the script, this kills the process that is running the script and never gets to starting the servers. Can anyone help me modify the above command?
Problem 2: how can I automate the process of sshing into the machine and running the script?
I have tried an expect script, but do I need a separate script for sshing and performing these tasks, or can I do it all in one script?
Any help is welcome.
Basically the answer is already in your script.
Just exclude your script from the found processes like this:
grep -v <your script name>
Regarding running the script automatically after you ssh, have a look here; it can be done with a special ssh configuration.
Just create a simple script like:
#!/bin/bash
ssh user1@remotehost '
someservers stop
# kill processes here
someservers start
'
In order to avoid killing itself while stopping all of the user's processes, try adding | grep -v bash after grep -v "sshd".
This is a problem with some nuance, and not straightforward to solve in shell.
The best approach
My suggestion, for easier system administration, would be to redesign. Run the killing logic as root, for example, so you may safely TERMinate any luser process without worrying about sawing off the branch you are sitting on. If your concern is runaway processes, run them under a timeout. Etc.
A good enough approach
Your ssh login shell session will have its own pseudo-tty, and all of its descendants will likely share that. So, figure out that tty name and skip anything with that tty:
TTY=$(tty | sed 's!^/dev/!!') # TTY := pts/3 e.g.
ps -eo tty=,user=,pid=,cmd= | grep luser | grep -v -e ^$TTY -e sshd | awk ...
Almost good enough approaches
The problem with "almost good enough" solutions like simply excluding the current script and sshd (via ps -eo user=,pid=,cmd= | grep -v -e sshd -e fancy_script | awk ...) is that they rely heavily on the accident of invocation. ps auxf probably reveals that you have a login shell in between your script and your sshd (probably -bash); you could put in special logic to skip that, too, but that's hardly robust if your script's invocation changes in the future.
What about question no. 2? (How can I automate sshing...?)
Good question. Off-topic. Try superuser.com.

Linux: How to find the list of daemon processes and zombie processes

I tried checking on Google, but I couldn't find much information related to the actual question.
How do I get a consolidated list of zombie processes and daemon processes?
How do I do it on different operating systems. Linux? AIX? Windows?
I am sure that, based on PID, we cannot identify the type of process. Running through a terminal might not help either.
Try this:
ps axo pid,ppid,pgrp,tty,tpgid,sess,comm | awk '$2==1' | awk '$1==$3'
In the above command I use the defining properties of a daemon to filter them out from all of the existing processes on Linux:
The parent of a daemon is always init, so check for a PPID of 1.
A daemon is normally not associated with any terminal, hence we have '?' under tty.
The process-id and process-group-id of a daemon are normally the same.
The session-id of a daemon is the same as its process id.
With GNU ps on Linux (ps --version reports procps-ng version 3.3.3):
Zombies:
ps -lA | grep '^. Z'
will get you all zombies (note that the param is lowercase 'L', i.e., 'l' followed by 'A').
Daemons:
As @Barmar said, there's no way to identify daemons for certain, but a clue that a process is a daemon is that it's not associated with any TTY device. The 12th column of ps -Al output is TTY; the 4th is PID, and the 14th is the process name. Hence:
ps -lA | awk '$12 == "?" {print $4, $14}'
will get you processes that are possibly daemons; not guaranteed! :)
Daemons are started by the init process, which means they have a PPID of 1.
Therefore:
ps -ef | awk '$3 == 1'
To get the list of zombie and daemon processes, you could also write a pseudo character device driver that walks through the task_struct list and inspects each task's state.
I wrote this for daemons and the "old" SysV init; you have to check whether it works on your distro.
Well-behaved daemons have startup scripts in /etc/init.d.
When changing runlevel, how does init know which daemons are running?
It looks for their names in the directory
/var/lock/subsys
So you can
get the list of names from there
scan all the running processes and check whether each name is in the list: bingo!
To scan all the processes: list every subdirectory in
/proc
If its name is all digits, it is the pid of a running process.
For example, the status of the process with pid 1234 is in the file
/proc/1234/status
Open it and read the first line, which starts with "Name:".
See
http://man7.org/linux/man-pages/man5/proc.5.html
https://linuxexplore.com/2014/03/19/use-of-subsystem-lock-files-in-init-script/
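The lookup described above can be sketched in shell. This is a rough sketch under stated assumptions: a SysV-style system where /var/lock/subsys holds one lock file per subsystem, with names matching the process names in /proc; it only reads, never kills.

```shell
# For every numeric /proc entry, read the process name from its status
# file (first line, "Name:\t<name>") and report it if a lock file of the
# same name exists in /var/lock/subsys.
for dir in /proc/[0-9]*; do
  pid=${dir#/proc/}
  name=$(awk -F'\t' '/^Name:/ { print $2; exit }' "$dir/status" 2>/dev/null)
  [ -n "$name" ] || continue
  if [ -e "/var/lock/subsys/$name" ]; then
    echo "possible daemon: pid=$pid name=$name"
  fi
done
```

On systemd-based distributions /var/lock/subsys is typically absent, so the loop simply prints nothing there.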

is it possible to create a non-child process inside a shell script?

I'm using a shell process pool API from GitHub for a script, as below:
function foobar()
{
    mytask "$1"
}
job_pool_init 100 0
tcpdump -i eth0 -w tempcap &
for i in `seq 1 4`;do
mesg="hello"$i
job_pool_run foobar $mesg
sleep 5
done
job_pool_wait
pkill tcpdump
echo 'all finish'
job_pool_shutdown
If I comment out the tcpdump line, it works fine, as expected. But when the tcpdump line is there, there is a problem.
There is a wait command in job_pool_wait, which waits for all child processes to finish; without the tcpdump line this behaves as expected.
But I want to capture traffic until all the child processes finish, so I have to run tcpdump. In this script the tcpdump process is itself a child process, so job_pool_wait also waits for the tcpdump process to finish, which is not what I want.
So one solution is to make tcpdump not a child process. How can I do that, or is there another solution?
Thanks!
You should be able to run tcpdump in a sub-shell in the background:
(tcpdump -i eth0 -w tempcap &)
This should prevent it from appearing as a direct descendant of your script.
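A related bash-specific trick is to keep tcpdump as a child but remove it from the shell's job table with disown, so a bare wait skips it while you still hold its PID. A minimal sketch, assuming bash; sleep stands in for tcpdump, which needs root:

```shell
#!/bin/bash
sleep 30 &             # stand-in for: tcpdump -i eth0 -w tempcap &
capture_pid=$!
disown "$capture_pid"  # drop it from the job table; bare `wait` skips it
sleep 1 &              # stand-in for the pool workers
wait                   # returns once the workers finish, not after 30 s
kill "$capture_pid"    # targeted kill of just our capture process
```

Compared with the subshell form, this keeps the exact PID, so you kill only your own capture process rather than any tcpdump on the machine.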
Answering your literal question: yes, run the command with exec. But I doubt that's what you really want.
I think what you really want is to be able to wait on a specific pid. The wait command takes an optional pid argument. Either use that, or check when wait returns whether the process that just terminated is one you're interested in, and wait again if it's not.
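Under stated assumptions (bash, with sleeps standing in for the real workers and for tcpdump), collecting the worker PIDs and waiting only on those looks like:

```shell
#!/bin/bash
sleep 30 &                # stand-in for: tcpdump -i eth0 -w tempcap &
capture_pid=$!

worker_pids=()
for i in 1 2 3; do
  sleep 1 &               # stand-in for job_pool_run foobar ...
  worker_pids+=("$!")
done

for pid in "${worker_pids[@]}"; do
  wait "$pid"             # wait only for the workers we care about
done

kill "$capture_pid"       # stop the capture once all workers are done
```

The capture process is still a child, but since we never wait on its PID, the script proceeds to kill it as soon as the workers are done.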

Linux Bash script to ping multiple hosts simultaneously

I have a text file with a list of 500 server names. I need to ping all of them simultaneously instead of one by one in a loop, put the pingable ones in one file, and the unpingable ones in another.
Can I run each ping in the background, or spawn a new process for each ping? What is the quickest and most efficient way to achieve this?
You can control the parallelism by using xargs:
cat file-of-ips | xargs -n 1 -I ^ -P 50 ping ^
Here we're keeping at most 50 pings going at a time. The IP itself is inserted at the ^; you can put other arguments before and after it.
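To also split the results into two files as the question asks, each ping's exit status can route the host name. A sketch: alive.txt and dead.txt are made-up names, -c/-W are GNU/iputils ping flags (BSD ping differs), and it assumes the host list contains only trusted, shell-safe names, since each one is substituted into an sh -c command line:

```shell
# Hypothetical two-host list; replace with your real file of 500 names.
printf '%s\n' 127.0.0.1 host.invalid > hosts.txt

: > alive.txt
: > dead.txt
# One ping per host, 50 in parallel; route the name by exit status.
xargs -n 1 -P 50 -I ^ sh -c \
  'ping -c 1 -W 2 ^ >/dev/null 2>&1 && echo ^ >> alive.txt || echo ^ >> dead.txt' \
  < hosts.txt
```

Appends of single short lines are effectively atomic with O_APPEND, so the parallel writers don't interleave within a line; every host ends up in exactly one of the two files.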
