I want to know what the average transfer rate on a particular (VPN) interface of my Linux system is.
I have the following info from netstat:
# netstat -i
Kernel Interface table
Iface MTU Met RX-OK RX-ERR RX-DRP RX-OVR TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0 1500 0 264453 0 0 0 145331 0 0 0 BMRU
lo 16436 0 382692 0 0 0 382692 0 0 0 LRU
tun0 1500 0 13158 0 0 0 21264 0 12 0 MOPRU
The VPN interface is tun0. So this interface received 13158 packets and sent 21264 packets. My questions, based on this:
what is the time-frame during which these stats are collected? Since the computer was started?
# uptime
15:05:49 up 7 days, 20:40, 1 user, load average: 0.19, 0.08, 0.06
how do I convert the 13158 "packets" to kB of data so as to get kbps?
Or should I use a completely different method?
Question 1:
The time frame is from the time the device was brought up until now (maybe days or weeks ago; try to figure it out from the logs!).
Which means that to get a practical average kbps number, comparable to what you'd see in a system monitor or what e.g. top or uptime display for the CPU, you will want to read the current byte counter twice (with, say, 1 second in between) and subtract the first value from the second. Then divide by the elapsed time (not necessary with a 1-second delay), multiply by 8, and divide by 1,000 to get kbps.
Question 2:
You don't. There is no way to convert "packets" to "bytes" as packets are variable sized. There is a "bytes" field that you can read.
Test case on my NAS box with some traffic going on:
nas:# grep eth0 /proc/net/dev ; sleep 1 ; grep eth0 /proc/net/dev
eth0:137675373 166558 0 0 0 0 0 0 134406802 41228 0 0 0 0 0 0
eth0:156479566 182767 0 0 0 0 0 0 155912310 44479 0 0 0 0 0 0
The result is: (155912310 - 134406802)*8/1000 = 172044 kbps (172 Mbps usage on a 1Gbps network).
If you look in /proc/net/dev instead of netstat -i, you can get bytes transmitted/received (also available via ifconfig or netstat -ie, but more easily parsed from /proc/net/dev). The counts are typically since the interface was created, which is usually boot time for "real" interfaces. For a tun interface, it's likely when the tunnel was started, which might be different from system boot, depending on when/how you're creating it...
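As a rough sketch of the two-sample approach (assuming the tun0 interface from the question; the same per-interface byte counters are also exposed under /sys/class/net/&lt;iface&gt;/statistics/):
# sample the byte counters twice, one second apart, and print kbit/s
IFACE=tun0
RX1=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
TX1=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
sleep 1
RX2=$(cat /sys/class/net/$IFACE/statistics/rx_bytes)
TX2=$(cat /sys/class/net/$IFACE/statistics/tx_bytes)
echo "rx: $(( (RX2 - RX1) * 8 / 1000 )) kbps   tx: $(( (TX2 - TX1) * 8 / 1000 )) kbps"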
Related
I have a ZFS pool with the following layout and errors:
config:
NAME STATE READ WRITE CKSUM
tank ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
wwn-0x5000039ff3d3b114-part2 ONLINE 0 0 0
wwn-0x5000039ff4d3b513-part1 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
wwn-0x5000c500a42783bc-part1 ONLINE 0 0 2
wwn-0x5000c500a426d50b-part1 ONLINE 0 0 2
errors: Permanent errors have been detected in the following files:
tank/foo/bar#veryOldSnapshot;corruptFile.qcow2
So it looks as if the same record has been corrupted on two different devices at the same time. The data in question has been on these disks since 2019 and the pool is scrubbed every week. What are the chances? IMHO this cannot be a real "bits flipped due to cosmic radiation or HDD failure" case, because the probability that exactly the same blocks are corrupted on both disks, and no other block, is really low.
What else could have caused this? I ran memtest86 without problems, and scrubbing again does not find any other errors. However, since the block is used in a long chain of snapshots, even removing the snapshot in question just makes the problem move to the next snapshot.
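For reference, the usual checks in this situation look roughly like this (only a sketch, using the pool/dataset names from the output above; zpool clear merely resets the error counters and repairs nothing):
zpool status -v tank                    # list permanent errors and the affected files
zpool scrub tank                        # re-verify every checksum in the pool
zfs list -t snapshot -r tank/foo/bar    # inspect the snapshot chain still referencing the block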
I want to know the size of the L1 and L2 caches, as well as other metrics, of the Raspberry Pi 3B I'm using.
However, for whatever reason I cannot get any information regarding the cache. Commands such as getconf -a | grep CACHE give me:
LEVEL1_ICACHE_SIZE 0
LEVEL1_ICACHE_ASSOC 0
LEVEL1_ICACHE_LINESIZE 0
LEVEL1_DCACHE_SIZE 0
LEVEL1_DCACHE_ASSOC 0
LEVEL1_DCACHE_LINESIZE 0
LEVEL2_CACHE_SIZE 0
LEVEL2_CACHE_ASSOC 0
LEVEL2_CACHE_LINESIZE 0
LEVEL3_CACHE_SIZE 0
LEVEL3_CACHE_ASSOC 0
LEVEL3_CACHE_LINESIZE 0
LEVEL4_CACHE_SIZE 0
LEVEL4_CACHE_ASSOC 0
LEVEL4_CACHE_LINESIZE 0
Other tools like lshw give no cache information either.
What is the cause of this and how can I get this cache info?
It says on the Raspberry Pi Wikipedia page:
The Raspberry Pi 4 uses a Broadcom BCM2711 SoC with a 1.5 GHz 64-bit
quad-core ARM Cortex-A72 processor, with 1 MB shared L2 cache.
And yes, I would also like to have these commands work like they do on most other chips/OSes (I don't know exactly which component is responsible for that).
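For what it's worth, one place to check is the sysfs cache hierarchy, which tools like lscpu (and recent getconf implementations) typically read from; on many ARM boards, including older Raspberry Pi kernels, these entries are simply absent, which would explain the zeros. A small sketch:
ls /sys/devices/system/cpu/cpu0/cache/                          # empty or missing if the kernel does not expose cache info
cat /sys/devices/system/cpu/cpu0/cache/index*/size 2>/dev/null  # per-level cache sizes, when available
lscpu | grep -i cache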
I am currently facing a problem with my code.
I am emulating the connection between two computers, connected via an Ethernet bridge (Raspberry Pi, Raspbian), so I am able to influence parameters of this connection (like bandwidth, latency and much more) via tc qdisc.
This works out fine, as you can see in the code below.
But now to my problem:
I am also trying to exclude specific port ranges, meaning ports that aren't influenced by my given parameters (latency etc.).
For that I created two prio bands. Prio band 0 (higher priority) handles my port exclusion (already in the parent root).
Afterwards, in prio band 1 (lower priority), I apply a latency via netem.
The bulk of the traffic passes through the influenced prio band 1, while the remaining (excluded) traffic passes uninfluenced through prio band 0.
I don't get kernel errors while executing my code! But I only receive "filter parent 1: protocol ip pref 1 basic" after typing sudo tc filter show dev eth1.
My match is not even mentioned. What did I do wrong?
Can you explain why I don't get my expected output?
THIS IS MY CODE (in the correct order of execution):
PARENT ROOT
sudo tc qdisc add dev eth1 root handle 1: prio bands 2 priomap 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
This creates two prio bands (1:1 and 1:2)
BAND 0 [PORT EXCLUSION | port 100 - 800]
sudo tc qdisc add dev eth1 parent 1:1 handle 10: tbf rate 512kbit buffer 1600 limit 3000
Creates a tbf (Token Bucket Filter) to set bandwidth
sudo tc filter add dev eth1 parent 1: protocol ip prio 1 handle 0x10 basic match "cmp(u16 at 0 layer transport lt 100) and cmp(u16 at 0 layer transport gt 800)" flowid 1:1
Creates a filter with a specific handle that excludes ports 100 to 800 from prio band 1 (the influenced data packets)
BAND 1 [NET EMULATION]
sudo tc qdisc add dev eth1 parent 1:2 handle 20: tbf rate 1024kbit buffer 1600 limit 3000
Compare with tbf above
sudo tc qdisc add dev eth1 parent 20:1 handle 21: netem delay 200ms
Creates via netem a delay of 200ms
Here you can see my hierarchy as an image
The question again:
My filter match is not even mentioned. What did I do wrong?
Can you explain why I don't get my expected output?
I appreciate any kind of help! Thanks for your efforts!
~rotsechs
It seems I can ignore the missing output! Nevertheless, it works perfectly.
I established an SSH connection to my Ethernet bridge (via MobaXterm). Afterwards I applied a delay of 400 ms to it. The console input slowed down as expected.
Finally, I created the filter and excluded the port range from 20 to 24 (SSH uses port 22). The delay on my SSH connection disappeared immediately!
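For illustration, the exclusion filter in that test would look something like this (a reconstruction, not the exact command used; note that in the transport header, u16 at 0 is the source port and u16 at 2 is the destination port):
sudo tc filter add dev eth1 parent 1: protocol ip prio 1 basic match "cmp(u16 at 2 layer transport gt 19) and cmp(u16 at 2 layer transport lt 25)" flowid 1:1
This steers destination ports 20 to 24 (including SSH) into the uninfluenced band 1:1, so they bypass the netem delay.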
I'd like to look up a counter of the TCP payload activity (total bytes received), either for a given file descriptor or for a given interface. Preferably the given file descriptor, but the interface would be sufficient. Ideally I'd really like to know about any bytes that have been ACKed, even ones which I have not read into userspace (yet?).
I've seen the TCP_INFO feature of getsockopt() but none of the fields appear to store "Total bytes received" or "total bytes transmitted (acked, e.g.)" so far as I can tell.
I've also seen the netlink IFLA_STATS+RTNL_TC_BYTES and the SIOCETHTOOL+ETHTOOL_GSTATS ioctl() (rx_bytes field) for the interfaces, and those are great, but I don't think they'll be able to discriminate between the overhead/headers of the other layers and the actual payload bytes.
procfs has /proc/net/tcp but this doesn't seem to contain what I'm looking for either.
Is there any way to get this particular data?
EDIT: promiscuous mode has an unbearable impact on throughput, so I can't leverage anything that uses it. Not to mention that implementing large parts of the IP stack to determine which packets are appropriate is beyond my intended scope for this solution.
The goal is to have an overarching/no-trust/second-guess of what values I store from recvmsg().
The Right Thing™ to do is to keep track of those values correctly, but it would be valuable to have a simple "Hey OS? How many bytes have I really received on this socket?"
One could also use an ioctl() call with SIOCINQ to get the amount of queued unread data in the receive buffer. Here is the usage from the man page: http://man7.org/linux/man-pages/man7/tcp.7.html
int value;
int error = ioctl(tcp_socket_fd, SIOCINQ, &value);  /* value now holds the number of unread bytes queued */
For interface TCP stats, we can use netstat -i -p tcp to find stats on a per-interface basis.
Do you want this for diagnosis, or for development?
If diagnosis, tcpdump can tell you exactly what's happening on the network, filtered by the port and host details.
If for development, perhaps a bit more information about what you're trying to achieve would help...
ifconfig gives RX and TX totals.
ifconfig gets these details from /proc/net/dev (as you can see via strace ifconfig).
There are also the Send/Receive-Q values given by netstat -t, if that's closer to what you want.
Perhaps the statistics in /proc/net/dev can help. I am not familiar with counting payload versus full packets including headers, so that makes the question harder to answer.
As for statistics on individual file descriptors, I am not aware of any standard means to get that information.
If it's possible to control the startup of the programs for which the statistics are needed, you can use an "interceptor" library which implements its own read(), write(), sendto(), and recvfrom() calls, passes the calls through to the standard C library (or directly to the system calls), keeps counters of the activity, and finds a way to publish those values.
In case you don't want to just count total RX/TX per interface (which is already available in ifconfig/iproute2 tools)...
If you look into /proc a bit more, you can get somewhat more information. More specifically /proc/<pid>/net/dev.
Sample output:
Inter-| Receive | Transmit
face |bytes packets errs drop fifo frame compressed multicast|bytes packets errs drop fifo colls carrier compressed
eth0: 12106810846 8527175 0 15842 0 0 0 682866 198923814 1503063 0 0 0 0 0 0
lo: 270255057 3992930 0 0 0 0 0 0 270255057 3992930 0 0 0 0 0 0
sit0: 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
If you start digging, the information comes from net/core/net-procfs.c in the Linux kernel (procfs just exposes this info). All of this, of course, means you need a specific process to track.
You can either peruse the information available in /proc, or, if you need something more stable, duplicating the net-procfs functionality specifically for your application might make sense.
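As a small usage sketch (the process name openvpn is only a placeholder), this prints the counters as seen from that process's network namespace:
PID=$(pidof openvpn)               # placeholder process name
grep 'eth0:' /proc/$PID/net/dev    # first number after the colon is RX bytes, the ninth is TX bytes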
We have a home-brewed XMPP server and I was asked what our server's MSL (Maximum Segment Lifetime) is.
What does it mean and how can I obtain it? Is it something in the Linux /proc TCP settings?
The MSL (Maximum Segment Lifetime) is the longest time (in seconds) that a TCP segment is expected to exist in the network. It most notably comes into play during the closing of a TCP connection -- in the TIME_WAIT state, before the connection reaches CLOSED, the machine waits 2 MSLs (conceptually a round trip to the end of the internet and back) for any late packets. During this time, the machine is holding resources for the mostly-closed connection. If a server is busy, the resources held this way can become an issue. One "fix" is to lower the MSL so that they are released sooner. Generally this works OK, but occasionally it can cause confusing failure scenarios.
On Linux (RHEL anyway, which is what I am familiar with), the "variable" /proc/sys/net/ipv4/tcp_fin_timeout is the 2*MSL value. It is normally 60 (seconds).
To see it, do:
cat /proc/sys/net/ipv4/tcp_fin_timeout
To change it, do something like:
echo 5 > /proc/sys/net/ipv4/tcp_fin_timeout
Here is a TCP STATE DIAGRAM. You can find the wait in question at the bottom.
You can also see a countdown timer for sockets using -o in netstat or ss, which helps show concrete numbers about how long things will wait. For instance, TIME_WAIT does NOT use tcp_fin_timeout (it is based on TCP_TIMEWAIT_LEN which is usually hardcoded to 60s).
cat /proc/sys/net/ipv4/tcp_fin_timeout
3
# See countdown timer for all TIME_WAIT sockets in 192.168.0.0-255
ss --numeric -o state time-wait dst 192.168.0.0/24
Netid Recv-Q Send-Q Local Address:Port Peer Address:Port
tcp 0 0 192.168.100.1:57516 192.168.0.10:80 timer:(timewait,55sec,0)
tcp 0 0 192.168.100.1:57356 192.168.0.10:80 timer:(timewait,25sec,0)
tcp 0 0 192.168.100.1:57334 192.168.0.10:80 timer:(timewait,22sec,0)
tcp 0 0 192.168.100.1:57282 192.168.0.10:80 timer:(timewait,12sec,0)
tcp 0 0 192.168.100.1:57418 192.168.0.10:80 timer:(timewait,38sec,0)
tcp 0 0 192.168.100.1:57458 192.168.0.10:80 timer:(timewait,46sec,0)
tcp 0 0 192.168.100.1:57252 192.168.0.10:80 timer:(timewait,7.436ms,0)
tcp 0 0 192.168.100.1:57244 192.168.0.10:80 timer:(timewait,6.536ms,0)
This looks like it can answer your question:
http://seer.support.veritas.com/docs/264886.htm
I suggest that you ask why someone asked you this and find out how that applies to XMPP.
TCP/IP Illustrated, Volume 1 is available online and describes the 2MSL wait in more detail.
MSL is also described in the TCP specification, RFC 793, as mentioned on Wikipedia.