UDP receive queue full? - linux

I have an application that receives heavy UDP traffic on port 12201, and I have noticed that some of the UDP packets never make it into the application (they are received by the kernel only).
When I run
netstat -c --udp -an | grep 12201
I can see that Recv-Q is almost always 126408, rarely going below, never going above:
Proto Recv-Q Send-Q Local Address Foreign Address State
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
udp 126408 0 :::12201 :::*
Does this mean that the receive queue is full? Where does the number 126408 come from? How can I increase it?
Sysctl config:
# sysctl -a | grep mem
vm.overcommit_memory = 0
vm.nr_hugepages_mempolicy = 0
vm.lowmem_reserve_ratio = 256 256 32
vm.meminfo_legacy_layout = 1
vm.memory_failure_early_kill = 0
vm.memory_failure_recovery = 1
net.core.wmem_max = 124928
net.core.rmem_max = 33554432
net.core.wmem_default = 124928
net.core.rmem_default = 124928
net.core.optmem_max = 20480
net.ipv4.igmp_max_memberships = 20
net.ipv4.tcp_mem = 365760 487680 731520
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.udp_mem = 262144 327680 393216
net.ipv4.udp_rmem_min = 4096
net.ipv4.udp_wmem_min = 4096

It looks like your application is using the system default receive buffer, which is defined by the sysctl setting
net.core.rmem_default = 124928
Hence the upper limit you see on Recv-Q stays close to that value. Try setting the SO_RCVBUF socket option in your application to a higher value, up to the maximum allowed by the sysctl setting net.core.rmem_max = 33554432.
The number of packets dropped because the queue was full can be seen via netstat -us (look for "packet receive errors").
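If changing the application is not an option, raising net.core.rmem_default has the same effect for sockets that do not call setsockopt(SO_RCVBUF) themselves. A minimal sketch, run as root; the buffer sizes below are illustrative values, not recommendations:
# Current count of datagrams dropped because the socket buffer was full
netstat -us | grep -i errors
# Raise the default and maximum receive buffer for the running kernel
sysctl -w net.core.rmem_default=8388608
sysctl -w net.core.rmem_max=33554432
# Persist across reboots
echo 'net.core.rmem_default = 8388608' >> /etc/sysctl.conf
echo 'net.core.rmem_max = 33554432' >> /etc/sysctl.conf
sysctl -p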

Related

How to kill all specific java processes

I have the following search result coming from netstat:
netstat -nap | grep java | grep :::300*
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp6 0 0 :::30008 :::* LISTEN 81159/java
tcp6 0 0 :::30010 :::* LISTEN 81164/java
tcp6 0 0 :::30001 :::* LISTEN 81155/java
tcp6 0 0 :::30002 :::* LISTEN 81162/java
tcp6 0 0 :::30003 :::* LISTEN 81156/java
tcp6 0 0 :::30004 :::* LISTEN 81161/java
tcp6 0 0 :::30005 :::* LISTEN 81158/java
tcp6 0 0 :::30006 :::* LISTEN 81157/java
tcp6 0 0 :::30007 :::* LISTEN 81160/java
Now I need to iterate over that result, get the process id and kill it. How can I do it?
PIDS=( $(netstat -nap | grep java | grep :::300* | awk '{print $7}' | cut -d/ -f2 | xargs) ); for p in "${PIDS[@]}"; do kill -9 ${p}; done
I think you can do it like this, but I think there may be an easier way.
It's not good code, but the following does what you asked for (newlines added for clarity, would be equally correct without them):
kill $(netstat -nap | awk '
/java/ && /:::300[[:digit:]]{2}/ {
gsub(/\/.*$/, "", $6);
gsub(/[^[:digit:]], "", $6);
print $6
}
')
The second gsub is a safety measure, to ensure that only digits are printed; this reduces the chances of shell globbing or string splitting having unintended effects should the column ordering be off.
Consider using systemd or another process supervision framework to manage services, instead of doing this kind of thing with hand-written code.
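If you only need to kill whatever owns those ports and would rather not parse netstat output at all, fuser (from the psmisc package) can resolve and signal the owner per port. A minimal sketch, assuming the ports of interest really are 30001-30010:
#!/bin/bash
# For each port, show the owning process, then send it SIGTERM
# (fuser -k sends SIGKILL by default; -TERM selects a gentler signal)
for port in $(seq 30001 30010); do
    fuser -v -n tcp "$port"
    fuser -k -TERM -n tcp "$port"
done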

How to get process/PID responsible for creating outbound connection request (linux) (command line)

I've got an alert from my firewall that a Debian virtual machine of mine tried to download a miner virus.
tcpdump shows it reaching out every minute to:
07:55:01.379558 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 7300
07:55:01.379566 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379576 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 2920
07:55:01.379584 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379593 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 5840
07:55:01.379601 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379609 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 8760
07:55:01.379617 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379657 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 7300
07:55:01.379669 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.379680 IP 185.191.32.198.80 > 192.168.1.205.49126: tcp 4380
07:55:01.380974 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:55:01.381264 IP 192.168.1.205.49126 > 185.191.32.198.80: tcp 0
07:56:01.900223 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.900517 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 0
07:56:01.900553 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.900826 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 146
07:56:01.900967 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 0
07:56:01.901642 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901667 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901684 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.901696 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901705 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.901714 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901725 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901738 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901814 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 5840
07:56:01.901835 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901848 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 8760
07:56:01.901858 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901868 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901880 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901891 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.901905 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901915 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901922 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901932 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 5840
07:56:01.901939 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.901949 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 2920
07:56:01.901955 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.902010 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.902039 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.902065 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:56:01.902076 IP 185.191.32.198.80 > 192.168.1.205.49128: tcp 4380
07:56:01.902084 IP 192.168.1.205.49128 > 185.191.32.198.80: tcp 0
07:57:01.909829 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.910130 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 0
07:57:01.910157 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.910245 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 146
07:57:01.910375 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 0
07:57:01.911050 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 2920
07:57:01.911076 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.911096 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 4380
07:57:01.911108 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.911120 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 4380
07:57:01.911130 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.911141 IP 185.191.32.198.80 > 192.168.1.205.49130: tcp 2920
07:57:01.911414 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
07:57:01.911507 IP 192.168.1.205.49130 > 185.191.32.198.80: tcp 0
When I look at the firewall logs I can see it reaching out to: http://185.191.32.198/lr.sh
I can block it via the firewall, but what I'm interested in is understanding which PROCESS on my server is doing such queries, as these are outbound queries. So there is some kind of exploit or virus reaching out from the server to try and download this script.
I've tried various netstat and lsof commands I've found on here, but they don't catch the traffic when it's actually happening; they just dump their output and show no active connections. Also, bear in mind, I have no local ports actively listening; these are new outbound requests once a minute.
So how would one set something up to see which process / PID is making these outbound requests each minute?
netstat can be used in continuous mode with the "-p" option to log the process initiating the connections, as described here: https://unix.stackexchange.com/questions/56453/how-can-i-monitor-all-outgoing-requests-connections-from-my-machine
Use the following command to log the connection attempts and pinpoint the initiating process:
sudo netstat -nputwc | grep 185.191.32.198 | tee /tmp/nstat.txt
Interrupt with Ctrl-C when you think the connection was logged.
less /tmp/nstat.txt
Then you can analyze the <PID> (replace with the pid of the process), its environment and threads with ps:
sudo ps -ef | grep <PID>
sudo ps eww <PID>
sudo ps -T <PID>
Using mbax's and Dude Boy's input, you could do this:
#!/bin/bash
while true
do
    PID=$(netstat -nputw | grep 185.191.32.198)
    if [ $? -ne 0 ]; then
        :
    else
        ps -ajxf
        echo "PID: ${PID}"
        exit
    fi
done
As a oneliner:
while true; do PID=$(netstat -nputw | grep 185.191.32.198); if [ $? -ne 0 ]; then :; else ps -ajxf; echo "PID: ${PID}"; break; fi; done
Edit: The original version, which slept 0.1 s between iterations, did not detect every attempt I tested; 0.01 s did.
Edit 2: The busy while true loop uses up to 2% CPU, worth it when hunting ;)
I suggest investigating your problem with the nethogs traffic monitoring tool. https://www.geeksforgeeks.org/linux-monitoring-network-traffic-with-nethogs/
It might take a while to catch the offending process.
And even if you catch it, it is possible the offending process is a transient, vanishing script/program that is recreated under random names.
If your system is infected, you will probably find that the infection piggybacks on a legitimate process or service.
I suggest scanning your system with an anti-virus as well.
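Since the offending process may only exist for a moment, another option is to have the audit subsystem record every connect() syscall together with the PID and executable. A hedged sketch, assuming auditd is installed and running:
# Log all outbound connect() calls (64-bit ABI) under the key "miner-hunt"
auditctl -a always,exit -F arch=b64 -S connect -k miner-hunt
# Wait for the next once-a-minute attempt, then search the audit log
ausearch -k miner-hunt -i | less
# Remove the rule when finished
auditctl -d always,exit -F arch=b64 -S connect -k miner-hunt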

Which PID is using a PORT inside a k8s pod without net tools

Sorry about the long question post, but I think it can be useful to others to learn how this works.
What I know:
On any Linux host (not using a docker container), I can look at /proc/net/tcp to extract TCP-socket-related information.
So, I can detect the ports in LISTEN state with:
cat /proc/net/tcp |
grep " 0A " |
sed 's/^[^:]*: \(..\)\(..\)\(..\)\(..\):\(....\).*/echo $((0x\4)).$((0x\3)).$((0x\2)).$((0x\1)):$((0x\5))/g' |
bash
Results:
0.0.0.0:111
10.174.109.1:53
127.0.0.53:53
0.0.0.0:22
127.0.0.1:631
0.0.0.0:8000
/proc/net/tcp gives the UID and GID, but unfortunately it does not provide the PID. It does return the inode, though, which I can use to discover the PID by finding the process that holds it as a file descriptor.
So one way is to search /proc looking for the socket inode. It's slow, but it works on the host:
cat /proc/net/tcp |
grep " 0A " |
sed 's/^[^:]*: \(..\)\(..\)\(..\)\(..\):\(....\).\{72\}\([^ ]*\).*/echo $((0x\4)).$((0x\3)).$((0x\2)).$((0x\1)):$((0x\5))\\\t$(find \/proc\/ -type d -name fd 2>\/dev\/null \| while read f\; do ls -l $f 2>\/dev\/null \| grep -q \6 \&\& echo $f; done)/g' |
bash
output:
0.0.0.0:111 /proc/1/task/1/fd /proc/1/fd /proc/924/task/924/fd /proc/924/fd
10.174.109.1:53 /proc/23189/task/23189/fd /proc/23189/fd
127.0.0.53:53 /proc/923/task/923/fd /proc/923/fd
0.0.0.0:22 /proc/1194/task/1194/fd /proc/1194/fd
127.0.0.1:631 /proc/13921/task/13921/fd /proc/13921/fd
0.0.0.0:8000 /proc/23122/task/23122/fd /proc/23122/fd
Permission tip 1: You will only see what you have permission to look at.
Permission tip 2: the fake root used in containers does not have access to all file descriptors in /proc/*/fd. You need to query them for each user.
If you run as a normal user the results are:
0.0.0.0:111
10.174.109.1:53
127.0.0.53:53
0.0.0.0:22
127.0.0.1:631
0.0.0.0:8000 /proc/23122/task/23122/fd /proc/23122/fd
Using unshare to isolate the environment, it works as expected:
$ unshare -r --fork --pid unshare -r --fork --pid --mount-proc -n bash
# ps -fe
UID PID PPID C STIME TTY TIME CMD
root 1 0 2 07:19 pts/6 00:00:00 bash
root 100 1 0 07:19 pts/6 00:00:00 ps -fe
# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
# python -m SimpleHTTPServer &
[1] 152
# Serving HTTP on 0.0.0.0 port 8000 ...
netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 152/python
# cat /proc/net/tcp |
> grep " 0A " |
> sed 's/^[^:]*: \(..\)\(..\)\(..\)\(..\):\(....\).\{72\}\([^ ]*\).*/echo $((0x\4)).$((0x\3)).$((0x\2)).$((0x\1)):$((0x\5))\\\t$(find \/proc\/ -type d -name fd 2>\/dev\/null \| while read f\; do ls -l $f 2>\/dev\/null \| grep -q \6 \&\& echo $f; done)/g' |
> bash
0.0.0.0:8000 /proc/152/task/152/fd /proc/152/fd
# ls -l /proc/152/fd
total 0
lrwx------ 1 root root 64 mai 25 07:20 0 -> /dev/pts/6
lrwx------ 1 root root 64 mai 25 07:20 1 -> /dev/pts/6
lrwx------ 1 root root 64 mai 25 07:20 2 -> /dev/pts/6
lrwx------ 1 root root 64 mai 25 07:20 3 -> 'socket:[52409024]'
lr-x------ 1 root root 64 mai 25 07:20 7 -> /dev/urandom
# cat /proc/net/tcp
sl local_address rem_address st tx_queue rx_queue tr tm->when retrnsmt uid timeout inode
0: 00000000:1F40 00000000:0000 0A 00000000:00000000 00:00000000 00000000 0 0 52409024 1 0000000000000000 100 0 0 10 0
Inside a docker container on my host, it seems to work the same way.
The problem:
I have a container inside a kubernetes pod running jitsi. Inside this container, I am unable to get the PID of the service listening on the ports.
Not even after installing netstat:
root@jitsi-586cb55594-kfz6m:/# netstat -ntpl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:5222 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5269 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:8888 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5280 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:5347 0.0.0.0:* LISTEN -
tcp6 0 0 :::5222 :::* LISTEN -
tcp6 0 0 :::5269 :::* LISTEN -
tcp6 0 0 :::5280 :::* LISTEN -
# ps -fe
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 May22 ? 00:00:00 s6-svscan -t0 /var/run/s6/services
root 32 1 0 May22 ? 00:00:00 s6-supervise s6-fdholderd
root 199 1 0 May22 ? 00:00:00 s6-supervise jicofo
jicofo 203 199 0 May22 ? 00:04:17 java -Xmx3072m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -Dnet.java.sip.communicator.SC_HOME_DIR_LOCATION=/ -Dnet.java.sip.communicator.SC_HOME_DIR_NAME=config -Djava
root 5990 0 0 09:48 pts/2 00:00:00 bash
root 10926 5990 0 09:57 pts/2 00:00:00 ps -fe
Finally the Questions:
a) Why can't I read the file descriptors of the process listening on port 5222?
root@jitsi-586cb55594-kfz6m:/# cat /proc/net/tcp | grep " 0A "
0: 00000000:1466 00000000:0000 0A 00000000:00000000 00:00000000 00000000 101 0 244887827 1 ffff9bd749145800 100 0 0 10 0
...
root@jitsi-586cb55594-kfz6m:/# echo $(( 0x1466 ))
5222
root@jitsi-586cb55594-kfz6m:/# ls -l /proc/*/fd/* 2>/dev/null | grep 244887827
root@jitsi-586cb55594-kfz6m:/# echo $?
1
root@jitsi-586cb55594-kfz6m:/# su - svc
svc@jitsi-586cb55594-kfz6m:~$ id -u
101
svc@jitsi-586cb55594-kfz6m:~$ ls -l /proc/*/fd/* 2>/dev/null | grep 244887827
svc@jitsi-586cb55594-kfz6m:~$ echo $?
1
b) Is there another way to list the inode and link it to a PID without searching /proc/*/fd?
Update 1:
Based on Anton Kostenko's tip, I looked at AppArmor. It is not the cause, because the server doesn't use AppArmor, but the search took me to SELinux.
On an Ubuntu machine where AppArmor is running, I got:
$ sudo apparmor_status | grep dock
docker-default
On the OKE (Oracle Kubernetes Engine, my case) node there is no AppArmor. I found SELinux instead:
$ man selinuxenabled | grep EXIT -A1
EXIT STATUS
It exits with status 0 if SELinux is enabled and 1 if it is not enabled.
$ selinuxenabled && echo $?
0
Now, I do believe that SELinux is blocking the /proc/*/fd listing from root inside the container. But I don't know yet how to unlock it.
References:
https://jvns.ca/blog/2016/10/10/what-even-is-a-container/
The issue is solved by adding the POSIX capability CAP_SYS_PTRACE.
In my case the containers are under kubernetes orchestration.
This reference explains kubectl and POSIX capabilities.
So I have:
root@jitsi-55584f98bf-6cwpn:/# cat /proc/1/status | grep Cap
CapInh: 00000000a80425fb
CapPrm: 00000000a80425fb
CapEff: 00000000a80425fb
CapBnd: 00000000a80425fb
CapAmb: 0000000000000000
So I carefully read the POSIX capabilities manual. But even after adding CAP_SYS_ADMIN, the PID did not appear in netstat. So I tested all the capabilities, and CAP_SYS_PTRACE is The Chosen One:
root@jitsi-65c6b5d4f7-r546h:/# cat /proc/1/status | grep Cap
CapInh: 00000000a80c25fb
CapPrm: 00000000a80c25fb
CapEff: 00000000a80c25fb
CapBnd: 00000000a80c25fb
CapAmb: 0000000000000000
So here is my deployment spec change:
...
spec:
  ...
  template:
    ...
    spec:
      ...
      containers:
        ...
        securityContext:
          capabilities:
            add:
              - SYS_PTRACE
...
I still don't know what security reasons SELinux has for doing this, but for now it is good enough for me.
References:
https://man7.org/linux/man-pages/man7/capabilities.7.html
https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
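For completeness, the same capability can also be added from the command line instead of editing the manifest by hand. A sketch, where the deployment name jitsi and the pod name are placeholders for your own:
# Add SYS_PTRACE to the first container of the deployment (placeholder names)
kubectl patch deployment jitsi --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value":{"capabilities":{"add":["SYS_PTRACE"]}}}]'
# After the pod restarts, the PID/Program name column should be populated again
kubectl exec -it <pod-name> -- netstat -ntpl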

How can you obtain a queue field from netstat -i in Linux?

In Solaris, the output of 'netstat -i' gives something like the following:
root# netstat -i
Name Mtu Net/Dest Address Ipkts Ierrs Opkts Oerrs Collis Queue
lo0 8232 loopback localhost 136799 0 136799 0 0 0
igb0 1500 vulture vulture 1272272 0 347277 0 0 0
Note that there is a Queue field on the end.
In Linux, 'netstat -i' gives output with no Queue field:
[root@roseate ~]# netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 2806170 0 0 0 791768 0 0 0 BMRU
eth1 1500 0 0 0 0 0 0 0 0 0 BMU
eth2 1500 0 0 0 0 0 0 0 0 0 BMU
eth3 1500 0 0 0 0 0 0 0 0 0 BMU
lo 16436 0 1405318 0 0 0 1405318 0 0 0 LRU
I've figured out how to get collisions in Linux by adding the -e option, but is there a way to get the Queue in Linux?
The only reference to queue I ever saw in netstat on Linux was when using -s, but that's probably too garrulous for your use-case?
$ netstat -na | awk 'BEGIN { RecvQ=0; SendQ=0; } { RecvQ+=$2; SendQ+=$3; } END { print "RecvQ " RecvQ/1024; print "SendQ " SendQ/1024; }'
RecvQ 255.882
SendQ 0.0507812
For per-interface numbers, I have a dirty way:
[spatel@us04 ~]$ for qw in `/sbin/ifconfig | grep 'inet addr:' | cut -d: -f2 | awk '{print $1}'`; do echo `/sbin/ip addr | grep $qw | awk '{print $7}'` : ; echo `netstat -na | grep $qw | awk 'BEGIN { RecvQ=0; SendQ=0; } { RecvQ+=$2; SendQ+=$3; } END { print "RecvQ " RecvQ/1024; print "SendQ " SendQ/1024; }'`; done
eth0 :
RecvQ 0 SendQ 0
eth2 :
RecvQ 0.0703125 SendQ 1.56738
:
RecvQ 0 SendQ 0
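The same idea is easier to read spread over a few lines; this is just a restatement of the one-liner above (using $NF instead of $7 to pick the interface name from the ip addr output), not a different method:
#!/bin/bash
# For every local IPv4 address, sum the Recv-Q/Send-Q of the sockets bound to it
for addr in $(/sbin/ifconfig | grep 'inet addr:' | cut -d: -f2 | awk '{print $1}'); do
    iface=$(/sbin/ip addr | grep "inet $addr" | awk '{print $NF}')
    echo "$iface ($addr):"
    netstat -na | grep "$addr" | \
        awk '{recvq += $2; sendq += $3} END {printf "  RecvQ %.1f KiB, SendQ %.1f KiB\n", recvq/1024, sendq/1024}'
done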
I ended up using
tc -s -d qdisc
[root@roseate ~]# tc -s -d qdisc
qdisc mq 0: dev eth2 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
qdisc mq 0: dev eth3 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
Sent 218041403 bytes 1358829 pkt (dropped 0, overlimits 0 requeues 1)
rate 0bit 0pps backlog 0b 0p requeues 1
qdisc mq 0: dev eth1 root
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
rate 0bit 0pps backlog 0b 0p requeues 0
which gives backlog bytes and packets.
Source
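If all you want is the per-interface backlog (the closest thing Linux has to the Solaris Queue column), you can filter that same tc output; a small sketch:
# Print the queued bytes/packets for each interface's root qdisc
for dev in /sys/class/net/*; do
    iface=$(basename "$dev")
    backlog=$(tc -s qdisc show dev "$iface" | awk '/backlog/ {print $2, $3; exit}')
    echo "$iface: backlog ${backlog:-n/a}"
done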

get jsvc to listen to ipv4

I am setting up email archiving and it requires jsvc. I have set this up, but running the following command shows it is not listening on IPv4. How can I change this?
netstat -pan --tcp | grep jsvc
tcp 0 0 :::8888 :::* LISTEN 3180/jsvc.exec
tcp 0 0 :::2526 :::* LISTEN 3180/jsvc.exec
tcp 0 0 :::2527 :::* LISTEN 3180/jsvc.exec
tcp 0 0 ::ffff:172.16.0.3:43930 ::ffff:172.16.0.3:27017 ESTABLISHED 3180/jsvc.exec
tcp 0 0 ::ffff:172.16.0.3:43929 ::ffff:172.16.0.3:27017 ESTABLISHED 3180/jsvc.exec
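Two things worth checking: a listener shown as :::8888 is normally a dual-stack socket that also accepts IPv4 connections (unless net.ipv6.bindv6only is 1), so the service may already be reachable over IPv4. If you want the JVM behind jsvc to bind IPv4 sockets explicitly, the standard JVM flag is java.net.preferIPv4Stack=true; exactly where it goes depends on how your jsvc daemon is launched, so the lines below are only an illustration:
# Confirm that IPv6 listeners also accept IPv4 (0 means dual-stack)
sysctl net.ipv6.bindv6only
# Illustrative only: pass the flag through to the JVM that jsvc starts
jsvc -Djava.net.preferIPv4Stack=true ... <your-daemon-class>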
