I use the simple netstat command "netstat -nltp", which shows me the listening TCP sockets along with the PID and process name.
However, even after playing around with its parameters, I am unable to get one important piece of information out of it.
That is:
I want to see the number of packets received and sent from/to this PID
I have learnt that Recv-Q and Send-Q are not indicative of this, and the --statistics option seems to aggregate counts across all processes. How can I see the packets received and sent by a specific PID?
Thanks
Use the counters the kernel exposes under /proc for that:
cat /proc/<PID>/net/netstat
You want to do network traffic accounting per process.
There are a number of applications that let you do that in real time (e.g. nethogs), but the problem is keeping traffic counters over time.
I would suggest doing this with iptables, assuming you can clearly distinguish your processes by network port.
This article is still OK for your use case: https://www.cyberciti.biz/faq/linux-configuring-ip-traffic-accounting/
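For illustration, here is a minimal sketch of such port-based accounting, assuming (hypothetically) that your process listens on TCP port 8080; iptables rules without a -j target only count matching traffic and let it pass:
# count traffic to and from the (assumed) service port
iptables -I INPUT  -p tcp --dport 8080
iptables -I OUTPUT -p tcp --sport 8080
# read the accumulated packet/byte counters (iptables -Z resets them)
iptables -L -v -n -x | grep -E 'dpt:8080|spt:8080'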
PS: This sort of question is a better fit for Server Fault.
netstat lists all the connections (incoming and outgoing).
How do I filter out just the incoming connections?
I have tried the netstat command, but it lists all the connections and I need only the incoming ones.
Once sockets are created, there isn't really such a thing as inbound and outbound, as connections go both ways. What you really care about are connections you started versus connections others started. To do that, you have to monitor for new connections and log them as they are created.
tcpdump is a great tool for this. There are tons of guides on the internet, but a command to get you started would be tcpdump -Qin ....
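For example, a hedged sketch that captures only connections initiated by the remote side (eth0 is a placeholder for your interface; -Q needs a reasonably recent tcpdump/libpcap):
# inbound direction only, and only the initial SYN of each new connection
# (SYN set, ACK clear), i.e. connections that someone else started towards us
tcpdump -n -i eth0 -Q in 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'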
With netstat you can identify the state of a socket, but raw sockets have no state, and UDP and UDPLite usually do not use states either. You can display listening sockets (those waiting for incoming connections) by running netstat with the following argument:
netstat --listening
However, as far as I understand your question, the tcpdump tool mentioned in the other answers is the better fit.
I'm working on Linux 4.13.x, and I'm considering sending a packet response from within the kernel.
Consider an echo TCP or UDP server running in userland, and another node running a TCP or UDP client. The clients send requests to the server. I want to send the response packet back to the client without any involvement of the server application running in userspace.
Here are my thoughts about this problem:
I started thinking about how this might be possible and came across netfilter as a solution. If I can capture the packets in the NF_INET_PRE_ROUTING hook, swap the source and destination IP addresses in the IP header, and also swap the ports in the TCP header, then according to this answer and this one the modified packet should be forwarded back to the originator through the routing system.
I actually tried this scenario, and it seems it is not possible to do so from the netfilter hooks, although I'm not sure about that. At first I thought it was failing because of checksums, since I'm manipulating the packets, so I ran another experiment to rule that out: I changed only the packet data, and everything worked fine. So I don't think the checksums are the problem, since they are checked by the NIC on receive and recomputed on send, so manipulation in between shouldn't break anything. I also enabled IPv4 forwarding on the server host (sysctl.conf), but nothing changed.
I don't want to create a new packet; I only want to alter this packet and send it back. There is another similar question whose answer creates a new packet. So I'm simply wondering why this scenario is not working, because based on netfilter's architecture it should.
Thank you
I am also working on this. The kernel actually validates the source IP address after the ip_rcv function in NF_HOOK, where reverse path filtering checks the source address. So try the command below:
sudo sysctl -w "net.ipv4.conf.all.rp_filter=0"
After doing this, also disable reverse path filtering on the interface you use to send and receive packets, like below:
sudo sysctl -w "net.ipv4.conf.enp2s0.rp_filter=0"
What is the use of the Recv-Q and Send-Q columns in netstat's output? How do I use them in a realistic scenario?
On my system, both columns always show zero. What does that mean?
From my man page:
Recv-Q
Established: The count of bytes not copied by the user program
connected to this socket.
Listening: Since Kernel 2.6.18 this column contains the current syn
backlog.
Send-Q
Established: The count of bytes not acknowledged by the remote
host.
Listening: Since Kernel 2.6.18 this column contains the maximum size
of the syn backlog.
If these are stuck at 0, it just means that your applications, on both sides of the connection, and the network between them, are doing OK. The actual instantaneous values may differ from 0, but in such a transient, fleeting manner that you never get a chance to actually observe it.
Example of a real-life scenario where this might be different from 0 (on established connections, but I think you'll get the idea):
I recently worked on a Linux embedded device talking to a (poorly designed) third-party device. On this third-party device, the application clearly got stuck sometimes, not reading the data it received on the TCP connection, resulting in its TCP window going down to 0 and staying stuck there for tens of seconds (a phenomenon observed via Wireshark on a mirrored port between the two hosts). In such a case:
Recv-Q: running netstat on the third-party device (which I had no means to do) may have shown an increasing Recv-Q, up to some ceiling value where the other side (me) stops sending data because the window has gone down to 0. Since the application does not read the data available on its socket, that data stays buffered in the OS's TCP implementation and never reaches the stuck application -> on the receiver side, an application issue.
Send-Q: running netstat on my side (which I did not try, because 1/ the problem was clear from Wireshark and was the first case above, and 2/ it was not 100% reproducible) may have shown a non-zero Send-Q, if the TCP implementation at the OS level on the other side had got stuck and stopped ACKnowledging my data -> on the sender side, an issue with the receiving TCP implementation (typically at the OS level).
Note that the Send-Q scenario depicted above may also be a sending-side issue (my side) if my Linux TCP implementation were misbehaving and continued to send data after the TCP window went down to 0: the receiving side then has no more room for this data -> it does not ACKnowledge.
Note also that the Send-Q issue may be caused not by the receiver, but by some routing problem somewhere between the sender and the receiver. Some packets are "in flight" between the two hosts, but not ACKnowledged yet. On the other hand, the Recv-Q issue is definitely on one host: packets received and ACKnowledged, but not yet read by the application.
EDIT:
In real life, with non-crappy hosts and applications, as you can reasonably expect, I'd bet the Send-Q issue is most of the time caused by some routing issue or poor network performance between the sending and receiving side. The "in flight" state of packets should never be forgotten:
The packet may be on the network between the sender and the receiver,
(or received, but with the ACK not sent yet, see above),
or the ACK may be on the network between the receiver and the sender.
It takes one RTT (round trip time) for a packet to be sent and then ACKed.
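If you want to provoke non-zero values on a test machine just to watch them, here is a rough sketch; port 5000 and the netcat flags are assumptions and depend on your netcat flavor:
# terminal 1: a "stuck" receiver - listen, then freeze the process so it stops reading
nc -l -p 5000 &
kill -STOP $!
# terminal 2: a sender that keeps pushing data at the frozen receiver
yes | nc 127.0.0.1 5000 &
# after a few seconds, Recv-Q piles up on the receiver socket and, once the
# receive window has closed, Send-Q piles up on the sender socket
netstat -tn | grep ':5000'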
Regarding the accepted answer by @jbm:
Listening: Since Kernel 2.6.18 this column contains the current syn backlog.
Listening: Since Kernel 2.6.18 this column contains the maximum size of the syn backlog.
They are not the SYN backlog; they are the listen (accept) backlog.
arangod runs without any problems for some time, but at some point no more connections can be made.
arangosh then shows the following error message:
Error message 'Could not connect to 'tcp://127.0.0.1:8529' 'connect() failed with #99 - Cannot assign requested address''
arangod still writes trace information to its log file.
After restarting arangod, it runs without problems again, until the problem suddenly reoccurs.
Why is this happening?
Since this question was sort of answered by time, I'll use this answer to elaborate on how to dig into such a situation and which operating system parameters to look at. I'll base this on Linux targets.
First we need to find out what's currently going on, using the netstat tool as the root user (we only care about TCP ports here):
netstat -alnpt
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
...
tcp 0 0 0.0.0.0:8529 0.0.0.0:* LISTEN 3478/arangod
tcp 0 0 127.0.0.1:45218 127.0.0.1:8529 ESTABLISHED 6902/arangosh
tcp 1 0 127.0.0.1:46985 127.0.0.1:8529 CLOSE_WAIT 485/arangosh
We see an overview of the three connection states present here:
LISTEN: these are daemon processes offering TCP services to remote ends, in this case the arangod process with its server socket. It binds port 8529 on all available IPv4 addresses of the system (0.0.0.0) and accepts connections from any remote location (0.0.0.0:*).
ESTABLISHED: this is an active TCP connection, in this case between arangosh and arangod; arangosh has its client port (45218) in the higher range and connects to arangod on port 8529.
CLOSE_WAIT: this is a connection in a termination state; it's normal to have some of them. It means the remote end has already closed the connection, while the local application has not yet closed its own socket. (The state the OS keeps around for a while so that stray, late-arriving TCP packets can still be sorted to the right connection is TIME_WAIT, which also shows up frequently on busy clients.)
As you can see, TCP ports are 16-bit unsigned integers, ranging from 0 to 65535. Server sockets start from the lower end, and most operating systems require processes to run as root to bind ports below 1024. Client sockets start from the upper end and range down to a configured limit on the client. Since multiple clients can connect to one server, it is usually the client-side ports that wear out, even though the server port range may look narrow. If the client frequently closes and reopens connections, you may see many sockets lingering in TIME_WAIT (or CLOSE_WAIT) state; as many discussions across the net hint, these are the symptoms of your system eventually running out of resources. In general, the solution to this problem is to re-use existing connections through the keep-alive feature.
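If you suspect this is happening, a quick, hedged way to count sockets per state on a Linux box:
# count sockets per TCP state (state is the 6th column of netstat's output)
netstat -ant | awk 'NR > 2 {print $6}' | sort | uniq -c | sort -rn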
Since the Solaris ndd command explains thoroughly which parameters it may modify in the Solaris kernel and with which consequences, the terms explained there are rather generic to TCP sockets and can be found on many other operating systems in other ways. On Linux, which we focus on here, they are exposed through the /proc/sys/net filesystem.
Some valuable switches there are:
ipv4/ip_local_port_range: this is the range for local (client) ports. You can try to narrow it and use arangob --keep-alive false to explore what happens when your system runs out of these ports; see the sketch after the next item.
time wait (often shortened to tw) is the section that controls what the TCP stack should do with already closed sockets sitting in TIME_WAIT state. The Linux kernel can do a trick here: it can instantly re-use connections in that state for new connections. Vincent Bernat explains very nicely which screws to turn and what the different parameters in the kernel mean.
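A runtime sketch of both knobs; the narrowed range is deliberately tiny and meant only for experiments such as the arangob run mentioned above:
# inspect the current ephemeral (client) port range
cat /proc/sys/net/ipv4/ip_local_port_range
# temporarily narrow it to make port exhaustion easy to reproduce
sudo sysctl -w net.ipv4.ip_local_port_range="49152 49252"
# let the kernel re-use sockets in TIME_WAIT for new outgoing connections
sudo sysctl -w net.ipv4.tcp_tw_reuse=1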
So once you have decided to change some of these values in /proc so that your host scales better to the given situation, you need to make them persist across reboots, since /proc is volatile and won't remember values after a reboot.
Most Linux systems therefore offer the /etc/sysctl.[d|conf] mechanism; it maps slashes in the proc filesystem to dots, so /proc/sys/net/ipv4/tcp_tw_reuse translates into net.ipv4.tcp_tw_reuse.
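For example, a small drop-in file (the file name and values are just examples) could look like this:
# /etc/sysctl.d/90-tcp-tuning.conf -- example values, adjust to your situation
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 32768 60999
It can be applied immediately with sudo sysctl -p /etc/sysctl.d/90-tcp-tuning.conf, or it will be picked up automatically at the next boot.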
I'm an Automation Developer and lately I've taken it upon myself to control an IP Phone on my desk (Cisco 7940).
I have a third-party application that can control the IP phone with SCCP (Skinny) packets. Through Wireshark, I see that the application sends 4 unique SCCP packets and then receives a TCP ACK message.
SCCP is not very well known, but it looks like this:
Ethernet( IP( TCP( SCCP( ))))
Using a Python packet builder, Scapy, I've been able to send the same 4 packets to the IP phone; however, I never get the ACK. In my packets, I have correctly set the sequence, port and acknowledgement values in the TCP header. The ID field in the IP header is also correct.
The only thing I can imagine being wrong is that it takes Python a little more than a full second to send the four packets, whereas the application takes significantly less time. I've tried raising the priority of the Python shell, with no luck.
Does anyone have an idea why I may not be receiving the ACK back?
This website may be helpful in debugging why you aren't seeing the traffic you expect on your machine, and in taking steps to modify your environment to produce the desired output.
Normally, the Linux kernel takes care of setting up and sending and
receiving network traffic. It automatically sets appropriate header
values and even knows how to complete a TCP 3-way handshake. Using
the kernel services in this way is using a "cooked" socket.
Scapy does not use these kernel services. It creates a "raw" socket. The
entire TCP/IP stack of the OS is circumvented. Because of this, Scapy
gives us complete control over the traffic. Traffic to and from Scapy
will not be filtered by iptables. Also, we will have to take care of
the TCP 3-way handshake ourselves.
http://www.packetlevel.ch/html/scapy/scapy3way.html
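One practical consequence of that last point, and a likely reason for the missing ACK: since the kernel's own TCP stack knows nothing about the connection Scapy is driving, it may answer the phone's replies to that unknown connection with a RST and tear the session down before your ACK matters. A commonly used, hedged workaround is to drop those kernel-generated resets with iptables (10.0.0.50 is a placeholder for the phone's IP address):
# keep the local kernel from RST-ing the connection that Scapy is managing
iptables -A OUTPUT -p tcp -d 10.0.0.50 --tcp-flags RST RST -j DROP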