How to kill all specific Java processes - Linux

I have the following search result coming from netstat:
netstat -nap | grep java | grep :::300*
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6 0 0 :::30008 :::* LISTEN 81159/java
tcp6 0 0 :::30010 :::* LISTEN 81164/java
tcp6 0 0 :::30001 :::* LISTEN 81155/java
tcp6 0 0 :::30002 :::* LISTEN 81162/java
tcp6 0 0 :::30003 :::* LISTEN 81156/java
tcp6 0 0 :::30004 :::* LISTEN 81161/java
tcp6 0 0 :::30005 :::* LISTEN 81158/java
tcp6 0 0 :::30006 :::* LISTEN 81157/java
tcp6 0 0 :::30007 :::* LISTEN 81160/java
Now I need to iterate over that result, get the process ID and kill it. How can I do that?

PIDS=( $(netstat -nap | grep java | grep :::300* | awk '{print $6}' | cut -d/ -f2 | xargs) ); for p in "${PIDS[@]}"; do kill -9 ${p}; done
I think you can do it like this, but there is probably an easier way.

It's not good code, but the following does what you asked for (newlines added for clarity, would be equally correct without them):
kill $(netstat -nap | awk '
    /java/ && /:::300[[:digit:]]{2}/ {
        gsub(/\/.*$/, "", $6);
        gsub(/[^[:digit:]]/, "", $6);
        print $6
    }
')
The second gsub is a safety measure, to ensure that only digits are printed; this reduces the chances of shell globbing or string splitting having unintended effects should the column ordering be off.
Consider using systemd or another process supervision framework to manage services, instead of doing this kind of thing with hand-written code.
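For comparison, a shorter route to the same goal, assuming the psmisc fuser tool is installed and that the listening ports (rather than the process names) are what identify the services, would be something like:
# kill whatever process owns each listening TCP port in the 30001-30010 range
# (adjust the range to the ports you actually use)
for port in $(seq 30001 30010); do
    fuser -k -n tcp "$port"
done
Note that fuser -k sends SIGKILL by default, so treat it with the same care as kill -9.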

Related

How to get process/PID responsible for creating outbound connection request (linux) (command line)

I've got an alert from my firewall that a Debian virtual machine of mine tried to download a miner virus.
tcpdump shows it reaching out every minute to:
07:55:01.379558 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 7300
07:55:01.379566 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379576 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 2920
07:55:01.379584 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379593 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 5840
07:55:01.379601 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379609 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 8760
07:55:01.379617 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379657 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 7300
07:55:01.379669 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379680 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 4380
07:55:01.380974 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.381264 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:56:01.900223 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.900517 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 0
07:56:01.900553 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.900826 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 146
07:56:01.900967 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 0
07:56:01.901642 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901667 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901684 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.901696 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901705 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.901714 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901725 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901738 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901814 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 5840
07:56:01.901835 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901848 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 8760
07:56:01.901858 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901868 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901880 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901891 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.901905 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901915 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901922 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901932 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 5840
07:56:01.901939 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901949 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901955 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.902010 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.902039 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.902065 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.902076 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.902084 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:57:01.909829 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.910130 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 0
07:57:01.910157 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.910245 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 146
07:57:01.910375 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 0
07:57:01.911050 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 2920
07:57:01.911076 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.911096 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 4380
07:57:01.911108 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.911120 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 4380
07:57:01.911130 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.911141 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 2920
07:57:01.911414 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.911507 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
When I look at the firewall logs I can see it reaching out to: http://185.191.32.198/lr.sh
I can block it via the firewall, but what I'm interested in is understanding which PROCESS on my server is doing such queries, as these are outbound queries. So there is some kind of exploit or virus reaching out from the server to try and download this script.
I've tried various netstat and lsof commands I've found on here, but they don't catch the traffic when it's actually happening; they just dump their output and show no active connections. Also, bear in mind, I have no local ports actively listening; these are new outbound requests once a minute.
So how would one set something up to see which process / PID is making these outbound requests each minute?
netstat can be run in continuous mode (-c) together with the -p option to log the process initiating the connections, as described here: https://unix.stackexchange.com/questions/56453/how-can-i-monitor-all-outgoing-requests-connections-from-my-machine
Use the following command to log the connection attempts and pinpoint the initiating process:
sudo netstat -nputwc | grep 185.191.32.198 | tee /tmp/nstat.txt
Interrupt with Ctrl-C when you think the connection was logged.
less /tmp/nstat.txt
Then you can analyze the process (replace <PID> with the PID you found), its environment and threads with ps:
sudo ps -ef | grep <PID>
sudo ps eww <PID>
sudo ps -T <PID>
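As a complement to ps (not part of the original answer), the /proc filesystem can also be inspected directly once you have the PID; again, replace <PID> with the actual PID:
sudo ls -l /proc/<PID>/exe    # path of the binary being executed
sudo ls -l /proc/<PID>/cwd    # current working directory of the process
sudo cat /proc/<PID>/cmdline | tr '\0' ' '; echo    # full command line (arguments are NUL-separated)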
Using mbax's & Dude Boy's input you could do this:
#!/bin/bash
while true
do
    PID=$(netstat -nputw | grep 185.191.32.198)
    if [ $? -ne 0 ]; then
        :
    else
        ps -ajxf
        echo "PID: ${PID}"
        exit
    fi
done
As a oneliner:
while true; do PID=$(netstat -nputw | grep 185.191.32.198); if [ $? -ne 0 ]; then :; else ps -ajxf; echo "PID: ${PID}"; break; fi; done
Edit: The original version with a 0.1-second sleep did not detect every attempt in my tests; 0.01 did.
Edit 2: Running the loop with plain while true (no sleep) uses up to 2% CPU, worth it when hunting ;)
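For reference, a hedged variant of the loop above with an explicit sleep, matching the 0.1/0.01-second intervals mentioned in the edits (variable names are mine):
#!/bin/bash
# poll netstat at a short interval; per the edits above, 0.01 s caught every attempt
while true; do
    LINE=$(netstat -nputw 2>/dev/null | grep 185.191.32.198)
    if [ -n "${LINE}" ]; then
        ps -ajxf
        echo "Match: ${LINE}"
        break
    fi
    sleep 0.01
done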
I suggest investigating your problem with the nethogs traffic-monitoring tool: https://www.geeksforgeeks.org/linux-monitoring-network-traffic-with-nethogs/
It might take a while to catch the offending process.
And even if you catch it, it is possible the offending process is a transient, vanishing script/program that is recreated with random names.
If your system is infected, you will probably find that the infection is attached to a legitimate process or service.
I also suggest scanning your system with an anti-virus.
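A minimal nethogs invocation for reference (assuming the interface is eth0 and a one-second refresh; adjust to your setup):
sudo nethogs -d 1 eth0    # per-process bandwidth on eth0, refreshed every second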

Which PID is using a PORT inside a k8s pod without net tools

Sorry about the long question post, but I think it can be useful to others to learn how this works.
What I know:
On any Linux host (not using a Docker container), I can look at /proc/net/tcp to extract TCP-socket-related information.
So, I can detect the ports in LISTEN state with:
cat /proc/net/tcp |
grep " 0A " |
sed 's/^[^:]*: \(..\)\(..\)\(..\)\(..\):\(....\).*/echo $((0x\4)).$((0x\3)).$((0x\2)).$((0x\1)):$((0x\5))/g' |
bash
Results:
0.0.0.0:111
10.174.109.1:53
127.0.0.53:53
0.0.0.0:22
127.0.0.1:631
0.0.0.0:8000
/proc/net/tcp gives the UID and GID but unfortunately does not provide the PID. It does return the inode, though, which I can use to discover the PID that holds it as a file descriptor.
So one way is to search /proc looking for the socket inode. It's slow, but it works on the host:
cat /proc/net/tcp |
grep " 0A " |
sed 's/^[^:]*: \(..\)\(..\)\(..\)\(..\):\(....\).\{72\}\([^ ]*\).*/echo $((0x\4)).$((0x\3)).$((0x\2)).$((0x\1)):$((0x\5))\\\t$(find \/proc\/ -type d -name fd 2>\/dev\/null \| while read f\; do ls -l $f 2>\/dev\/null \| grep -q \6 \&\& echo $f; done)/g' |
bash
output:
0.0.0.0:111 /proc/1/task/1/fd /proc/1/fd /proc/924/task/924/fd /proc/924/fd
10.174.109.1:53 /proc/23189/task/23189/fd /proc/23189/fd
127.0.0.53:53 /proc/923/task/923/fd /proc/923/fd
0.0.0.0:22 /proc/1194/task/1194/fd /proc/1194/fd
127.0.0.1:631 /proc/13921/task/13921/fd /proc/13921/fd
0.0.0.0:8000 /proc/23122/task/23122/fd /proc/23122/fd
Permission tip 1: You will only see what you have permission to look at.
Permission tip 2: fake root used in containers does not have access to all file descriptors in /proc/*/fd. You need to query it for each user.
If you run as a normal user, the results are:
0.0.0.0:111
10.174.109.1:53
127.0.0.53:53
0.0.0.0:22
127.0.0.1:631
0.0.0.0:8000 /proc/23122/task/23122/fd /proc/23122/fd
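As an aside (not part of the original question), the same inode-to-PID lookup can be written more readably; a sketch, where the inode value is only an example and should be replaced with the one extracted from /proc/net/tcp:
# find which /proc/<pid>/fd entry points at a given socket inode
inode=52409024    # example value, substitute your own
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink "$fd" 2>/dev/null)" = "socket:[$inode]" ]; then
        pid=${fd#/proc/}
        echo "inode $inode is held by PID ${pid%%/*}"
    fi
done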
Using unshare to isolate the environment, it works as expected:
$ unshare -r --fork --pid unshare -r --fork --pid --mount-proc -n bash
# ps -fe
UID PID PPID C STIME TTY TIME CMD
root 1 0 2 07:19 pts/6 00:00:00 bash
root 100 1 0 07:19 pts/6 00:00:00 ps -fe
# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
# python -m SimpleHTTPServer &
[1] 152
# Serving HTTP on 0.0.0.0 port 8000 ...
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 152/python
# cat /proc/net/tcp |
> grep " 0A " |
> sed 's/^[^:]*: \(..\)\(..\)\(..\)\(..\):\(....\).\{72\}\([^ ]*\).*/echo $((0x\4)).$((0x\3)).$((0x\2)).$((0x\1)):$((0x\5))\\\t$(find \/proc\/ -type d -name fd 2>\/dev\/null \| while read f\; do ls -l $f 2>\/dev\/null \| grep -q \6 \&\& echo $f; done)/g' |
> bash
0.0.0.0:8000 /proc/152/task/152/fd /proc/152/fd
# ls -l /proc/152/fd
total 0
lrwx------ 1 root root 64 mai 25 07:20 0 -> /dev/pts/6
lrwx------ 1 root root 64 mai 25 07:20 1 -> /dev/pts/6
lrwx------ 1 root root 64 mai 25 07:20 2 -> /dev/pts/6
lrwx------ 1 root root 64 mai 25 07:20 3 -> 'socket:[52409024]'
lr-x------ 1 root root 64 mai 25 07:20 7 -> /dev/urandom
# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:1F40 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 52409024 1 0000000000000000 100 0 0 10 0
Inside a Docker container on my host, it seems to work the same way.
The problem:
I have a container inside a Kubernetes pod running Jitsi. Inside this container, I am unable to get the PID of the service listening on the ports.
Not even after installing netstat:
root@jitsi-586cb55594-kfz6m:/# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5222 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5269 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5347 0.0.0.0:* LISTEN -
tcp6 0 0 :::5222 :::* LISTEN -
tcp6 0 0 :::5269 :::* LISTEN -
tcp6 0 0 :::5280 :::* LISTEN -
# ps -fe
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 May22 ? 00:00:00 s6-svscan -t0 /var/run/s6/services
root 32 1 0 May22 ? 00:00:00 s6-supervise s6-fdholderd
root 199 1 0 May22 ? 00:00:00 s6-supervise jicofo
jicofo 203 199 0 May22 ? 00:04:17 java -Xmx3072m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/ -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=config -Djava
root 5990 0 0 09:48 pts/2 00:00:00 bash
root 10926 5990 0 09:57 pts/2 00:00:00 ps -fe
Finally the Questions:
a) Why can't I read the file descriptors of the process listening on port 5222?
root@jitsi-586cb55594-kfz6m:/# cat /proc/net/tcp | grep " 0A "
0: 00000000:1466 00000000:0000 0A 00000000:00000000 00:00000000 00000000 101 0 244887827 1 ffff9bd749145800 100 0 0 10 0
...
root@jitsi-586cb55594-kfz6m:/# echo $(( 0x1466 ))
5222
root@jitsi-586cb55594-kfz6m:/# ls -l /proc/*/fd/* 2>/dev/null | grep 244887827
root@jitsi-586cb55594-kfz6m:/# echo $?
1
root@jitsi-586cb55594-kfz6m:/# su - svc
svc@jitsi-586cb55594-kfz6m:~$ id -u
101
svc@jitsi-586cb55594-kfz6m:~$ ls -l /proc/*/fd/* 2>/dev/null | grep 244887827
svc@jitsi-586cb55594-kfz6m:~$ echo $?
1
b) Is there another way to list the inode and link it to a PID without searching /proc/*/fd?
Update 1:
Based on Anton Kostenko's tip, I looked at AppArmor. It's not the case here because the server doesn't use AppArmor, but searching led me to SELinux.
On an Ubuntu machine where AppArmor is running, I got:
$ sudo apparmor_status | grep dock
docker-default
On the OKE (Oracle Kubernetes Engine, my case) node there is no AppArmor. I found SELinux instead:
$ man selinuxenabled | grep EXIT -A1
EXIT STATUS
It exits with status 0 if SELinux is enabled and 1 if it is not enabled.
$ selinuxenabled && echo $?
0
Now, I do believe that SELinux is blocking the /proc/*/fd listing from root inside the container. But I don't know yet how to unlock it.
References:
https://jvns.ca/blog/2016/10/10/what-even-is-a-container/
The issue is solved by adding the POSIX capability CAP_SYS_PTRACE.
In my case the containers are under Kubernetes orchestration.
This reference explains kubectl and POSIX capabilities (see the references below).
So I have:
root@jitsi-55584f98bf-6cwpn:/# cat /proc/1/status | grep Cap
CapInh: 00000000a80425fb
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
So I carefully read the POSIX capabilities manual. But even after adding CAP_SYS_ADMIN, the PID did not appear in netstat, so I tested all capabilities. CAP_SYS_PTRACE is The Chosen One:
root@jitsi-65c6b5d4f7-r546h:/# cat /proc/1/status | grep Cap
CapInh: 00000000a80c25fb
CapPrm: 00000000a80c25fb
CapEff: 00000000a80c25fb
CapBnd: 00000000a80c25fb
CapAmb: 0000000000000000
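As a side note (not from the original post): the difference between this mask and the earlier one is 0x00080000, i.e. bit 19, which is exactly CAP_SYS_PTRACE. If the capsh utility from libcap happens to be installed, the masks can be decoded by name:
capsh --decode=00000000a80c25fb    # lists the capability names encoded in the hex mask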
So here is my deployment spec change:
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        ...
        securityContext:
          capabilities:
            add:
              - SYS_PTRACE
...
I still don't know what security reasons SELinux has for restricting this, but for now it's good enough for me.
References:
https://man7.org/linux/man-pages/man7/capabilities.7.html
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/

UDP receive queue full?

I have an application that receives heavy UDP traffic on port 12201, and I have noticed that some of the UDP packets never make it into the application (they are received by the kernel only).
When I run
netstat -c --udp -an | grep 12201
I can see that Recv-Q is almost always 126408, rarely going below, never going above:
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
Does this mean that the receive queue is full? Where does the number 126408 come from? How can I increase it?
Sysctl config:
# sysctl -a | grep mem
vm.overcommit_memory = 0
vm.nr_hugepages_mempolicy = 0
vm.lowmem_reserve_ratio = 256 256 32
vm.meminfo_legacy_layout = 1
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
net.core.wmem_max = 124928
net.core.rmem_max = 33554432
net.core.wmem_default = 124928
net.core.rmem_default = 124928
net.core.optmem_max = 20480
net.ipv4.igmp_max_memberships = 20
net.ipv4.tcp_mem = 365760 487680 731520
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.udp_mem = 262144 327680 393216
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096
It looks like your application is using the system default receive buffer, which is defined by the sysctl setting
net.core.rmem_default = 124928
Hence the upper limit you see in Recv-Q stays close to that value. Try setting the SO_RCVBUF socket option in your application to a higher value, possibly up to the maximum allowed by the sysctl setting net.core.rmem_max = 33554432.
The number of packets dropped because the queue was full can be seen via netstat -us (look for "packet receive errors").
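If changing the application is not an option, a hedged alternative is to raise the system-wide default so that sockets which never call setsockopt(SO_RCVBUF) start with a larger receive queue (the value below is only an example, not a recommendation):
sysctl -w net.core.rmem_default=8388608    # example: 8 MiB default receive buffer
echo 'net.core.rmem_default = 8388608' >> /etc/sysctl.conf    # make it persistent across reboots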

How to parse netstat command to get the send-q number from the line

I have this output from netstat -naputeo:
tcp 0 0 :::44500 :::* LISTEN 2000 773788772 18117/java off (0.00/0/0)
tcp 0 0 :::22 :::* LISTEN 0 9419 4186/sshd off (0.00/0/0)
tcp 0 0 ::ffff:127.0.0.1:61666 ::ffff:127.0.0.1:43940 ESTABLISHED 2000 788032760 18122/java off (0.00/0/0)
tcp 0 0 ::ffff:192.168.1.202:56510 ::ffff:192.168.1.202:3000 ESTABLISHED 0 791652028 6804/java_ndsagent keepalive (7185.05/0/0)
tcp 0 0 ::ffff:192.168.1.202:56509 ::ffff:192.168.1.202:3000 TIME_WAIT 0 0 - timewait (41.13/0/0)
tcp 0 0 ::ffff:192.168.1.202:56508 ::ffff:192.168.1.202:3000 TIME_WAIT 0 0 - timewait (21.13/0/0)
tcp 0 4656 ::ffff:192.168.1.202:22 ::ffff:84.208.36.125:48507 ESTABLISHED 0 791474860 24141/1 on (0.19/0/0)
tcp 0 0 ::ffff:127.0.0.1:61616 ::ffff:127.0.0.1:45121 ESTABLISHED 2000 788032761 18117/java off (0.00/0/0)
tcp 0 0 ::ffff:192.168.1.202:3000 ::ffff:192.168.1.202:56510 ESTABLISHED 0 791651217 8044/rmiregistry off (0.00/0/0)
The Send-Q is the 3rd field; here the offender is port 22 with 4656 bytes queued.
The problem is that I need to output that specific line and that number/port/process to an output file (only if it is above 4000), which will then be sent to my inbox and alert me.
I have seen similar answers but I can't extract the line using those suggestions. I don't know which process will be filling the queue, but I know the ports. It's not just 22; it could be more ports at any given time.
I tried:
netstat -naputeo | awk '$3 == 0 && $4 ~ /[^0-9]22$/'
But that gives me the wrong line (the :::22 one).
netstat -naputeo | awk '{if(($3)>0) print $3;}'
That is all wrong because it ends up printing that field for nearly every line.
All I need is that number and line sent to a csv and that's all. I can deal with error checking later and maybe refine it.
Any suggestions??
I used this and it worked for now, but there is room for improvement:
filterQs() {
    while read recv send address pid_program; do
        ip=${address%%:*}
        port=${address##*:}
        pid=${pid_program%%/*}
        program=${pid_program#*/}
        echo "recv=${recv} send=${send} ip=${ip} port=${port} pid=${pid} program=${program}"
        if [[ ${port} -eq 35487 || ${port} -eq 65485 || ${port} -eq CalorisPort || ${port} -eq 22 ]]
        then
            echo "recv=${recv} send=${send} ip=${ip} port=${port} pid=${pid} program=${program}" >> Qmonitor.txt
        fi
    done < <(netstat -napute 2>/dev/null | awk '$1 ~ /^(tcp|udp)/ && ($2 > 500 || $3 > 500) { print $2, $3, $4, $9 }')
}
Thanks all
Something like
$ netstat -naputeo 2>/dev/null | awk -v OFS=';' '$1 ~ /^tcp/ && $3 > 4000 { sub(/^.+:/, "", $4); print $3, $4, $9 }'
?
That would output the 3rd column (Send-Q), the port part of the 4th column (Local Address) and the 9th column (PID/Program name) if Send-Q > 4000, separated by semicolons so you can pipe it into your CSV.
E.g. (for Send-Q > 0 on my box)
$ netstat -naputeo 2>/dev/null | awk -v OFS=';' '$1 ~ /^tcp/ && $3 > 0 { sub(/^.+:/, "", $4); print $3, $4, $9 }'
52;22;4363/sshd:
EDIT:
If you really need to further process the values in bash, then you can just print the respective columns via awk and iterate over the lines like this:
#!/bin/bash
while read recv send address pid_program; do
    ip=${address%%:*}
    port=${address##*:}
    pid=${pid_program%%/*}
    program=${pid_program#*/}
    echo "recv=${recv} send=${send} ip=${ip} port=${port} pid=${pid} program=${program}"
    # do stuff here
done < <(netstat -naputeo 2>/dev/null | awk '$1 ~ /^(tcp|udp)/ && ($2 > 4000 || $3 > 4000) { print $2, $3, $4, $9 }')
E.g.:
$ ./t.sh
recv=0 send=52 ip=x.x.x.x port=22 pid=12345 program=sshd:
Note: I don't understand why you need the -o switch to netstat since you don't seem to be interested in the timers output, so you could probably drop that.
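To get from the one-liner above to the asker's CSV-plus-email workflow, a hedged sketch (it assumes a configured mail command such as bsd-mailx; the output path and address are placeholders):
#!/bin/bash
# write offending lines to a semicolon-separated file and mail it only when something was found
OUT=/tmp/sendq_alert.csv
netstat -naputeo 2>/dev/null |
    awk -v OFS=';' '$1 ~ /^tcp/ && $3 > 4000 { sub(/^.+:/, "", $4); print $3, $4, $9 }' > "$OUT"
if [ -s "$OUT" ]; then
    mail -s "Send-Q alert" you@example.com < "$OUT"
fi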
Try this:
netstat -naputeo | awk '{ if (($3 + 0) >= 4000) { sub(/.*:/, "", $4); print $3, $4, $9;} }'
This filters out the header line, and extracts the port number from the field $4.
Pure bash solution:
#!/bin/bash
filterHuge() {
    while read -r -a line; do
        if (( line[2] > 4000 )) && [[ ${line[3]##*:} == '22' ]]; then # if Send-Q is higher than 4000 and the port number is 22
            echo "Size: ${line[2]} Whole line: ${line[@]}"
        fi
    done
}
netstat -naputeo | filterHuge
I have a Lineage 2 server and have some problems with Send-Q.
I used your script and got:
Size: 84509 Whole line: tcp 0 84509 144.217.255.80:6254 179.7.212.0:35176 ESTABLISHED 0 480806 2286/java on (46.42/11/0)
Size: 12130 Whole line: tcp 0 12130 144.217.255.80:6254 200.120.203.238:52295 ESTABLISHED 0 410043 2286/java on (0.69/0/0)
Size: 13774 Whole line: tcp 0 13774 144.217.255.80:6254 190.30.75.253:63749 ESTABLISHED 0 469361 2286/java on (0.76/0/0)
Size: 12319 Whole line: tcp 0 12319 144.217.255.80:6254 200.120.203.238:52389 ESTABLISHED 0 487569 2286/java on (0.37/0/0)
Size: 9800 Whole line: tcp 0 9800 144.217.255.80:6254 186.141.200.7:63572 ESTABLISHED 0 478974 2286/java on (0.38/0/0)
Size: 12150 Whole line: tcp 0 12150 144.217.255.80:6254 200.120.203.238:52298 ESTABLISHED 0 410128 2286/java on (0.26/0/0)
Size: 9626 Whole line: tcp 0 9626 144.217.255.80:6254 186.141.200.7:63569 ESTABLISHED 0 482721 2286/java on (0.44/0/0)
Size: 11443 Whole line: tcp 0 11443 144.217.255.80:6254 200.120.203.238:52291 ESTABLISHED 0 411061 2286/java on (0.89/0/0)
Size: 79254 Whole line: tcp 0 79254 144.217.255.80:6254 179.7.212.0:6014 ESTABLISHED 0 501998 2286/java on (89.42/10/0)
Size: 10722 Whole line: tcp 0 10722 144.217.255.80:6254 179.7.111.208:12925 ESTABLISHED 0 488352 2286/java on (0.23/0/0)
Size: 126708 Whole line: tcp 0 126708 144.217.255.80:6254 190.11.106.181:3481 ESTABLISHED 0 487867 2286/java on (85.32/7/0)
The problem is all on one port: 6254.
What could I put in place so that connections with a Send-Q greater than 4000 are reset to 0 or dropped?
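No answer was recorded for this follow-up, but one hedged option, assuming an iproute2 ss built with --kill (-K) support and a kernel compiled with CONFIG_INET_DIAG_DESTROY, is to forcibly close the offending sockets once the remote peers are known, for example:
# forcibly close established connections on local port 6254 to one remote peer
# (the address is one of the offenders from the output above)
ss -K state established '( sport = :6254 and dst 190.11.106.181 )'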

Get jsvc to listen on IPv4

I am setting up email archiving and it requires jsvc. I have set this up, but running the following command shows it is not listening on IPv4. How can I change this?
netstat -pan --tcp | grep jsvc
tcp 0 0 :::8888 :::* LISTEN 3180/jsvc.exec
tcp 0 0 :::2526 :::* LISTEN 3180/jsvc.exec
tcp 0 0 :::2527 :::* LISTEN 3180/jsvc.exec
tcp 0 0 ::ffff:172.16.0.3:43930 ::ffff:172.16.0.3:27017 ESTABLISHED 3180/jsvc.exec
tcp 0 0 ::ffff:172.16.0.3:43929 ::ffff:172.16.0.3:27017 ESTABLISHED 3180/jsvc.exec
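No answer was recorded here, but two notes that may help: on Linux a tcp6 socket bound to ::: normally accepts IPv4 connections as well (as v4-mapped addresses) unless net.ipv6.bindv6only is set to 1, which you can check with:
sysctl net.ipv6.bindv6only    # 0 means the tcp6 listeners above also accept IPv4
If you really need a plain IPv4 listener, a hedged option, assuming the jsvc-based service lets you pass extra JVM arguments, is to force the IPv4 stack with the standard JVM flag -Djava.net.preferIPv4Stack=true.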
