We have a home-brewed XMPP server, and I was asked what our server's MSL (Maximum Segment Lifetime) is.
What does it mean and how can I obtain it? Is it something in the Linux /proc TCP settings?
The MSL (Maximum Segment Lifetime) is the longest time (in seconds) that a TCP segment is expected to survive in the network. It most notably comes into play during the closing of a TCP connection: in the TIME_WAIT state, before moving to CLOSED, the machine waits 2 MSLs (conceptually a round trip to the far end of the internet and back) so that any late packets belonging to the connection can drain. During this time, the machine is still holding resources for the mostly-closed connection. On a busy server, the resources held this way can become an issue. One "fix" is to lower the MSL so that they are released sooner. Generally this works fine, but occasionally it can cause confusing failure scenarios.
On Linux (RHEL anyway, which is what I am familiar with), the "variable" /proc/sys/net/ipv4/tcp_fin_timeout is commonly treated as the 2*MSL value, although strictly speaking it controls how long orphaned connections stay in FIN_WAIT_2. It is normally 60 (seconds).
To see it, do:
cat /proc/sys/net/ipv4/tcp_fin_timeout
To change it, do something like:
echo 5 > /proc/sys/net/ipv4/tcp_fin_timeout
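If the change should survive a reboot, the usual approach is to go through sysctl and /etc/sysctl.conf rather than echoing into /proc. A minimal sketch (the value 30 is only an example):
# Apply immediately (same effect as the echo above)
sysctl -w net.ipv4.tcp_fin_timeout=30
# Record it so it is re-applied at boot, then reload
echo "net.ipv4.tcp_fin_timeout = 30" >> /etc/sysctl.conf
sysctl -p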
Here is a TCP STATE DIAGRAM. You can find the wait in question at the bottom.
You can also see a countdown timer for sockets using -o in netstat or ss, which gives concrete numbers for how long things will wait. Note that TIME_WAIT does NOT use tcp_fin_timeout; it is based on TCP_TIMEWAIT_LEN, which is hardcoded to 60 seconds in the kernel.
cat /proc/sys/net/ipv4/tcp_fin_timeout
3
# See countdown timer for all TIME_WAIT sockets in 192.168.0.0-255
ss --numeric -o state time-wait dst 192.168.0.0/24
Netid Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp 0 0 192.168.100.1:57516 192.168.0.10:80 timer:(timewait,55sec,0)
tcp 0 0 192.168.100.1:57356 192.168.0.10:80 timer:(timewait,25sec,0)
tcp 0 0 192.168.100.1:57334 192.168.0.10:80 timer:(timewait,22sec,0)
tcp 0 0 192.168.100.1:57282 192.168.0.10:80 timer:(timewait,12sec,0)
tcp 0 0 192.168.100.1:57418 192.168.0.10:80 timer:(timewait,38sec,0)
tcp 0 0 192.168.100.1:57458 192.168.0.10:80 timer:(timewait,46sec,0)
tcp 0 0 192.168.100.1:57252 192.168.0.10:80 timer:(timewait,7.436ms,0)
tcp 0 0 192.168.100.1:57244 192.168.0.10:80 timer:(timewait,6.536ms,0)
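If you only want a quick count of how many sockets are currently sitting in TIME_WAIT, a couple of one-liners along these lines work (ss prints a header line, hence the tail):
# Count TIME_WAIT sockets (skip ss's header line)
ss --numeric state time-wait | tail -n +2 | wc -l
# ss -s also prints a per-state summary that includes a timewait count
ss -s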
This looks like it can answer your question:
http://seer.support.veritas.com/docs/264886.htm
I suggest that you ask why someone asked you this and find out how that applies to XMPP.
TCP/IP Illustrated, Volume 1 is available online and describes the 2MSL wait in more detail.
MSL is also described in RFC 793 (the TCP specification), as mentioned on Wikipedia.
I found this information in /proc, which displays socket statistics:
$ cat /proc/net/sockstat
sockets: used 8278
TCP: inuse 1090 orphan 2 tw 18 alloc 1380 mem 851
UDP: inuse 6574
RAW: inuse 1
FRAG: inuse 0 memory 0
Can you help me find out what these values mean? Also, are these values reliable enough, or do I need to look for this information somewhere else?
Is there another way to find information about TCP/UDP connections in Linux?
Can you help me find out what these values mean?
As per the code here, the values are: the number of sockets in use (TCP/UDP); the number of orphan TCP sockets (sockets that applications no longer hold a handle to, i.e. close() has already been called); tw, which I am not sure about, but based on the structure name (tcp_death_row) these appear to be sockets waiting to be definitively destroyed in the near future; sockets, the number of allocated sockets (as I understand it, this covers TCP sockets in their various states); and mem, the number of pages allocated by TCP sockets (memory usage).
This article has some discussion of the topic.
In my understanding, /proc/net/sockstat is the most reliable place to look for that information. I often use it myself; when I had a single server managing 1MM simultaneous connections, it was the only place where I could reliably count that information.
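If you want to pull individual counters out of /proc/net/sockstat from a script, here is a quick sketch; the field positions match the sample output above, and only awk and watch are assumed to be installed:
# TCP sockets in use and in TIME_WAIT (the "tw" counter)
awk '/^TCP:/ { print "inuse:", $3, "timewait:", $7 }' /proc/net/sockstat
# Watch all the counters update once per second
watch -n 1 cat /proc/net/sockstat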
You can use the netstat command, which itself reads from the /proc filesystem but prints the information in a form more readable for humans.
If you want to display the current tcp connections for example, you can issue the following command:
netstat -t
Check man netstat for the numerous options.
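On newer systems, ss (from iproute2) covers the same ground and tends to be faster on machines with many sockets; for example, for roughly the netstat -t view:
# Current TCP connections, similar to netstat -t
ss -t
# Same, but with numeric addresses and including listening sockets
ss -tan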
Hopefully someone can help me.
I have a DHCPD/PXE server that seems to be assigning 2 IP addresses to the same MAC address.
I need the computers to be assigned sequential IP addresses.
I have tried "allow duplicates;" and "deny duplicates;".
The ONLY difference I can see between the two leases is the "uid" line.
Other than this annoyance, my dhcpd/pxe server works fine.
Here's a snippet from my leases file:
lease 10.11.46.227 {
starts 4 2014/10/02 15:01:06;
ends 0 2150/11/08 21:29:20;
cltt 4 2014/10/02 15:01:06;
binding state active;
next binding state free;
hardware ethernet 00:1e:67:b9:32:f6;
uid "\000\215\013\011b\345\227\021\343\270\270\000\036g\2712\366";
}
lease 10.11.46.228 {
starts 4 2014/10/02 15:09:13;
ends 0 2150/11/08 21:37:27;
cltt 4 2014/10/02 15:09:13;
binding state active;
next binding state free;
hardware ethernet 00:1e:67:b9:32:f6;
}
Here is my dhcpd.conf
allow booting;
allow bootp;
authoritative;
deny duplicates;
class "pxeclients" {
match if substring(option vendor-class-identifier,0,9) = "PXEClient";
next-server 10.11.0.1;
filename "pxelinux.0";
}
subnet 10.11.0.0 netmask 255.255.0.0 {
range 10.11.1.1 10.11.25.200;
default-lease-time 4294967294;
max-lease-time 4294967294;
min-lease-time 4294967294;
}
# PXE server, so it doesn't get changed.
host masterPXE {
hardware ethernet 00:1E:67:98:D5:EB;
fixed-address 10.11.0.1;
}
I've seen this with Linux systems that have ip=dhcp enabled in the kernel options and then a secondary DHCP client running in userland that re-requests an IP address.
The easiest solution is to set your max-lease-time to something small, like 5 minutes, or to remove the DHCP client identifier (the uid) from the userland client, so that the server does not treat it as a different client asking for another DHCP address when it already has one.
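If you want to confirm that the duplicate leases really are uid-vs-no-uid pairs for the same MAC, a quick way is to count hardware addresses in the leases file; the path below is the usual Red Hat-style default and may differ on your system:
# Count lease blocks per MAC address (note: the file also keeps historical entries)
grep "hardware ethernet" /var/lib/dhcpd/dhcpd.leases | sort | uniq -c | sort -rn | head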
I wrote a trivial node.js client/server pair to test local limits on concurrent connections. No data is sent between them: 10,000 clients connect and wait.
Each time I run the test, I spawn a server and 10 clients that create 1000 connections each.
It takes a little over 2 seconds to reach ~8000 concurrent connections. Then it stops. No errors are raised (the 'error' callbacks don't fire, and 'close' doesn't fire either). The connections just "block", with no result and no timeout.
I've already raised the max file descriptor limit (ulimit -n), and allowed more read/write memory to be consumed by the TCP stack via sysctl (net.ipv4.tcp_rmem and wmem).
What's the cap I'm hitting? How can I lift it?
-- EDIT --
Server program, with logging code stripped:
net = require 'net'

# Accept connections and hold on to every socket so nothing gets closed
clients = []
server = net.createServer()
server.on 'connection', (socket) ->
  clients.push socket
server.listen 5050
Client (this runs n times):
net = require 'net'

# Open num_sockets connections to the server and keep them idle
num_sockets = 1000
sockets = []
for [1..num_sockets]
  socket = new net.Socket
  sockets.push socket
  socket.connect 5050
These are the system limits:
sysctl -w net.ipv4.ip_local_port_range="500 65535"
sysctl -w net.ipv4.tcp_tw_recycle="1"
sysctl -w net.ipv4.tcp_tw_reuse="1"
sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
sysctl -w net.ipv4.tcp_wmem="4096 16384 16777216"
sysctl -w fs.file-max="655300"
sysctl -w fs.nr_open="3000000"
ulimit -n 2000000
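One thing worth checking when connections stall at a suspiciously round number is whether the raised limits actually apply to the running processes (ulimit -n only affects the shell it was run in and its children). A quick sanity check, with <pid> standing in for one of the node process IDs:
# Effective per-process file descriptor limit of a running process
grep "Max open files" /proc/<pid>/limits
# How many descriptors that process currently has open
ls /proc/<pid>/fd | wc -l
# Kernel-wide allocated file handles vs. the fs.file-max limit
cat /proc/sys/fs/file-nr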
I want to know what the average transfer rate on a particular (VPN) interface of my Linux system is.
I have the following info from netstat:
# netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 264453 0 0 0 145331 0 0 0 BMRU
lo 16436 0 382692 0 0 0 382692 0 0 0 LRU
tun0 1500 0 13158 0 0 0 21264 0 12 0 MOPRU
The VPN interface is tun0. So this interface received 13158 packets and sent 21264 packets. My question based on this:
what is the time-frame during which these stats are collected? Since the computer was started?
# uptime
15:05:49 up 7 days, 20:40, 1 user, load average: 0.19, 0.08, 0.06
how to convert the 13158 "packets" to kB of data so as to get kbps?
Or should I use a completely other method?
Question 1:
The time frame is from the time the device was brought up until now (maybe days or weeks ago, try and figure from the logs!).
This means that to get a practical average kbps number, comparable to what you'd see in a system monitor or what e.g. top or uptime display for the CPU, you will want to read the current byte counter twice (with, say, 1 second in between) and subtract the first value from the second. Then divide by the elapsed time (not necessary if you used a 1-second delay), multiply by 8, and divide by 1,000 to get kbps.
Question 2:
You don't. There is no way to convert "packets" to "bytes" as packets are variable sized. There is a "bytes" field that you can read.
Test case on my NAS box with some traffic going on:
nas:# grep eth0 /proc/net/dev ; sleep 1 ; grep eth0 /proc/net/dev
eth0:137675373 166558 0 0 0 0 0 0 134406802 41228 0 0 0 0 0 0
eth0:156479566 182767 0 0 0 0 0 0 155912310 44479 0 0 0 0 0 0
The result is: (155912310 - 134406802)*8/1000 = 172044 kbps (172 Mbps usage on a 1Gbps network).
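Here is a small sketch that automates the same two-sample calculation for an arbitrary interface (tun0 below, to match the question). It only assumes the standard /proc/net/dev layout, where the first field after the interface name is RX bytes and the ninth is TX bytes:
IF=tun0
# Grab "RX-bytes TX-bytes" for $IF (strip the leading "iface:"; the match is deliberately loose)
S1=$(awk -v i="$IF" '$0 ~ i":" { sub(/^.*:/, ""); print $1, $9 }' /proc/net/dev)
sleep 1
S2=$(awk -v i="$IF" '$0 ~ i":" { sub(/^.*:/, ""); print $1, $9 }' /proc/net/dev)
set -- $S1; rx1=$1; tx1=$2
set -- $S2; rx2=$1; tx2=$2
echo "RX: $(( (rx2 - rx1) * 8 / 1000 )) kbps   TX: $(( (tx2 - tx1) * 8 / 1000 )) kbps"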
If you look in /proc/net/dev instead of netstat -i, you can get bytes transmitted/received (also available via ifconfig or netstat -ie, but more easily parsed from /proc/net/dev). The counts are typically since the interface was created, which is usually boot time for "real" interfaces. For a tun interface, it's likely when the tunnel was started, which might be different than system boot, depending on when/how you're creating it...
I'd like to lookup a counter of the TCP payload activity (total bytes received) either for a given file descriptor or a given interface. Preferably the given file descriptor, but for the interface would be sufficient. Ideally I'd really like to know about any bytes that have been ack-ed, even ones which I have not read into userspace (yet?).
I've seen the TCP_INFO feature of getsockopt() but none of the fields appear to store "Total bytes received" or "total bytes transmitted (acked, e.g.)" so far as I can tell.
I've also seen the netlink IFLA_STATS+RTNL_TC_BYTES and the SIOCETHTOOL+ETHTOOL_GSTATS ioctl() (rx_bytes field) for the interfaces, and those are great, but I don't think they'll be able to discriminate between the overhead/headers of the other layers and the actual payload bytes.
procfs has /proc/net/tcp but this doesn't seem to contain what I'm looking for either.
Is there any way to get this particular data?
EDIT: promiscuous mode has an unbearable impact on throughput, so I can't leverage anything that uses it. Not to mention that implementing large parts of the IP stack to determine which packets are appropriate is beyond my intended scope for this solution.
The goal is to have an overarching, no-trust, second-guess check on the values I record from recvmsg().
The Right Thing™ to do is to keep track of those values correctly, but it would be valuable to have a simple "Hey OS? How many bytes have I really received on this socket?"
One could also use an ioctl() call with SIOCINQ to get the amount of queued unread data in the receive buffer. Here is the usage from the man page: http://man7.org/linux/man-pages/man7/tcp.7.html
int value;
/* SIOCINQ (also known as FIONREAD): bytes currently queued, unread, in the receive buffer */
error = ioctl(tcp_socket_fd, SIOCINQ, &value);
For interface TCP stats, we can use "netstat -i -p tcp" to find stats on a per-interface basis.
Do you want this for diagnosis, or for development?
If diagnosis, tcpdump can tell you exactly what's happening on the network, filtered by the port and host details.
If for development, perhaps a bit more information about what you're trying to achieve would help...
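For the diagnosis case, here is an example of the kind of filter meant above; the interface, host, and port are placeholders to adapt:
# Dump traffic to/from one peer on one port, without resolving names
tcpdump -ni eth0 host 192.0.2.10 and tcp port 80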
ifconfig gives RX and TX totals.
ifconfig gets these details from /proc/net/dev (as you can see via strace ifconfig).
There are also the Send/Receive-Q values given by netstat -t, if that's closer to what you want.
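If iproute2 is available, the same per-interface totals (bytes, packets, errors, drops) can also be read with ip; eth0 below is just an example interface:
# Per-interface RX/TX counters from iproute2
ip -s link show dev eth0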
Perhaps the statistics in /proc/net/dev can help. I am not familiar with counting payload versus full packets including headers, so that makes the question harder to answer.
As for statistics on individual file descriptors, I am not aware of any standard means to get that information.
If it's possible to control how the programs that need the statistics are started, you can use an "interceptor" library that implements its own read(), write(), sendto(), and recvfrom() calls, passes them through to the standard C library (or directly to the system calls), keeps counters of the activity, and publishes those values in some way.
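On Linux the usual way to inject such an interceptor is LD_PRELOAD. A minimal usage sketch, assuming a hypothetical count_shim.c that wraps the calls above (via dlsym(RTLD_NEXT, ...)) and dumps its byte counters at exit:
# Build the hypothetical interceptor as a shared library
gcc -shared -fPIC -o libcount_shim.so count_shim.c -ldl
# Run the target program with the interceptor preloaded
LD_PRELOAD=$PWD/libcount_shim.so ./your_program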
In case you don't want to just count total RX/TX per interface (which is already available in ifconfig/iproute2 tools)...
If you look into /proc a bit more, you can get somewhat more information. More specifically /proc/<pid>/net/dev.
Sample output:
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
eth0: 12106810846 8527175 0 15842 0 0 0 682866 198923814 1503063 0 0 0 0 0 0
lo: 270255057 3992930 0 0 0 0 0 0 270255057 3992930 0 0 0 0 0 0
sit0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
If you start digging, this information comes from net/core/net-procfs.c in the Linux kernel (procfs just exposes it). All of this, of course, means you need a specific process to track.
You can either parse the information available in /proc, or, if you need something more stable, duplicating the net-procfs functionality specifically for your application might make sense.