The average jitter (delay) calculated in iperf

In RFC 1889 (the RTP specification) the delay jitter is given by
J = J + (|(R_j - S_j) - (R_i - S_i)| - J) / 16
where S_i, S_j are the sender timestamps and R_i, R_j the receiver arrival times of two consecutive packets.
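Note that J is a running, exponentially weighted moving average updated once per packet, not an arithmetic mean. A minimal Python sketch of that estimator (the timestamps below are made-up values, not taken from iperf):

def rfc1889_jitter(send_times, recv_times):
    # J = J + (|D| - J) / 16, where D is the change in one-way transit
    # time between consecutive packets (RFC 1889 interarrival jitter).
    j = 0.0
    for i in range(1, len(send_times)):
        d = (recv_times[i] - send_times[i]) - (recv_times[i - 1] - send_times[i - 1])
        j += (abs(d) - j) / 16.0
    return j

# Three packets sent 20 ms apart, arriving with varying delay:
print(rfc1889_jitter([0.0, 20.0, 40.0], [5.0, 27.0, 46.0]))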
I use iperf to measure delay jitter between two network nodes, and I get:
[ 3] 0.0- 1.0 sec 18.7 KBytes 153 Kbits/sec 8.848 ms 0/ 13 (0%)
[ 3] 1.0- 2.0 sec 15.8 KBytes 129 Kbits/sec 8.736 ms 0/ 11 (0%)
[ 3] 2.0- 3.0 sec 15.8 KBytes 129 Kbits/sec 17.437 ms 0/ 11 (0%)
[ 3] 3.0- 4.0 sec 15.8 KBytes 129 Kbits/sec 15.929 ms 0/ 11 (0%)
[ 3] 4.0- 5.0 sec 12.9 KBytes 106 Kbits/sec 11.942 ms 0/ 9 (0%)
[ 3] 0.0- 5.0 sec 80.4 KBytes 131 Kbits/sec 18.451 ms 0/ 56 (0%)
I want to record the jitter every second and compute statistics on it, but I found I can't reproduce the average jitter reported at the end of the iperf run from the per-second values.
What is the relationship between the final jitter value and the per-second records?

Related

netem rate limit stops limiting after a short time and reverts to the default rate

I have experienced several times that netem stops working after I apply a qdisc. This has happened with both rate limiting and loss.
For example, consider a scenario:
Internet <------>(eth1) A (eth2)<------> (eth3)B
PC A is connected to an internet access point via Ethernet port eth1. PC B is connected to PC A via port eth2 of PC A. So, basically, PC A is a bridge that I configure using OvS. I apply the netem rule on eth2 and expect it to be reflected on PC B.
Now, on PC A, I applied a 30 Mbps rate limit on eth2 with a queue limit of 1000 packets using the command:
tc qdisc add dev eth2 root handle 1:0 netem rate 30000kbit limit 1000
Then I ran the iperf3 server on PC B and tested the bandwidth by running the iperf3 client on a different PC (say C, also connected to the network). The iperf3 result is:
[ 4] local xx.xx.xx.xx port 54838 connected to yy.yy.yy.yy port 5009
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 3.92 MBytes 32.8 Mbits/sec 0 185 KBytes
[ 4] 1.00-2.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 2.00-3.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 3.00-4.00 sec 3.38 MBytes 28.3 Mbits/sec 0 210 KBytes
[ 4] 4.00-5.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 5.00-6.00 sec 3.38 MBytes 28.3 Mbits/sec 0 210 KBytes
[ 4] 6.00-7.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 7.00-8.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 8.00-9.00 sec 65.6 MBytes 550 Mbits/sec 142 210 KBytes
[ 4] 9.00-10.00 sec 109 MBytes 918 Mbits/sec 0 210 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 203 MBytes 170 Mbits/sec 142 sender
[ 4] 0.00-10.00 sec 203 MBytes 170 Mbits/sec receiver
Initially I get around 30 Mbps, but in the last two intervals the throughput is much higher than 30 Mbps. I tried iperf3 again multiple times, and then it was fine. Why does netem show this inconsistent behavior?
Another example: when I cap the rate at 50 Mbps, the first iperf3 run was rate-limited correctly, but on the second iperf3 attempt I got inconsistent rate limiting (as shown below):
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 6.05 MBytes 50.8 Mbits/sec 0 210 KBytes
[ 4] 1.00-2.00 sec 5.75 MBytes 48.2 Mbits/sec 0 210 KBytes
[ 4] 2.00-3.00 sec 5.75 MBytes 48.2 Mbits/sec 0 210 KBytes
[ 4] 3.00-4.00 sec 5.75 MBytes 48.2 Mbits/sec 0 210 KBytes
[ 4] 4.00-5.00 sec 29.8 MBytes 250 Mbits/sec 143 210 KBytes
[ 4] 5.00-6.00 sec 110 MBytes 920 Mbits/sec 0 210 KBytes
[ 4] 6.00-7.00 sec 109 MBytes 914 Mbits/sec 0 210 KBytes
[ 4] 7.00-8.00 sec 109 MBytes 915 Mbits/sec 0 210 KBytes
[ 4] 8.00-9.00 sec 109 MBytes 914 Mbits/sec 0 210 KBytes
[ 4] 9.00-10.00 sec 109 MBytes 914 Mbits/sec 0 210 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 599 MBytes 502 Mbits/sec 143 sender
[ 4] 0.00-10.00 sec 598 MBytes 502 Mbits/sec receiver
After the fourth run, it appears that the netem rate limit is simply gone and the network is back to its default rate.
I have also seen this behaviour when I introduce loss using netem. Any help fixing or explaining this inconsistent netem behaviour would be appreciated. Thanks.
I found a solution to this problem.
PC A is the one acting as a bridge for the two Ethernet connections. Once the OvS bridge is set up, the internet is no longer available on PC A. By default, the OS (Ubuntu 20 in this case) is configured to attempt to connect to the internet automatically at regular intervals on both eth1 and eth2. Because of the OvS configuration, the connection attempt is bound to fail. After the failure notification, the qdisc is removed from the network devices. This results in the iperf3 behavior described in the question.
The fix is simple: go to the network settings, disable auto-connect on both eth1 and eth2, and save the settings. Once that is done, netem works fine.
Hope this helps those who may face this issue in the future.
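If you want to catch the moment the shaping disappears, a small watcher can log when the netem qdisc vanishes from the interface. This is a minimal sketch, assuming Python 3, tc on the PATH, and eth2 as the shaped interface:

import subprocess
import time

def watch_qdisc(dev="eth2", interval=1.0):
    # Poll `tc qdisc show dev <dev>` and report whenever the netem
    # qdisc appears or disappears (e.g. when something resets the device).
    was_present = None
    while True:
        out = subprocess.run(["tc", "qdisc", "show", "dev", dev],
                             capture_output=True, text=True).stdout
        present = "netem" in out
        if present != was_present:
            print(time.strftime("%F %T"), "netem on", dev,
                  "present" if present else "REMOVED")
            was_present = present
        time.sleep(interval)

watch_qdisc()

Running this alongside iperf3 makes it easy to correlate the throughput jump with the exact time the qdisc was removed.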

OpenWrt Wi-Fi UDP multicast packet loss and limited speed

I use iperf between two routers for multicast testing.
In the first test, the LAN ports of the two routers are connected directly with a cable. The multicast test is normal.
****192.168.3.1****
root@OpenWrt:~# iperf -c 224.0.0.1 -p 8080 -u -T -t 10 -i 1 -b 20M
iperf: ignoring extra argument -- 10
------------------------------------------------------------
Client connecting to 224.0.0.1, UDP port 8080
Sending 1470 byte datagrams, IPG target: 560.76 us (kalman adjust)
Setting multicast TTL to 0
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.3.1 port 57229 connected with 224.0.0.1 port 8080
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 1.0- 2.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 2.0- 3.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 3.0- 4.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 4.0- 5.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 5.0- 6.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 6.0- 7.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 7.0- 8.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 8.0- 9.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 0.0-10.0 sec 25.0 MBytes 21.0 Mbits/sec
[ 3] Sent 0 datagrams
****192.168.3.2****
root@OpenWrt:~# iperf -s -u -B 224.0.0.1 -p 8080 -i 1
multicast join failed: No such device
------------------------------------------------------------
Server listening on UDP port 8080
Binding to local address 224.0.0.1
Joining multicast group 224.0.0.1
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
multicast join failed: No such device
[ 3] local 224.0.0.1 port 8080 connected with 192.168.3.1 port 60332
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 2.50 MBytes 21.0 Mbits/sec 0.076 ms 0/ 1783 (0%)
[ 3] 1.0- 2.0 sec 2.51 MBytes 21.1 Mbits/sec 0.007 ms 0/ 1790 (0%)
[ 3] 2.0- 3.0 sec 2.50 MBytes 21.0 Mbits/sec 0.007 ms 0/ 1783 (0%)
[ 3] 3.0- 4.0 sec 2.50 MBytes 21.0 Mbits/sec 0.005 ms 0/ 1784 (0%)
[ 3] 4.0- 5.0 sec 2.50 MBytes 21.0 Mbits/sec 0.016 ms 0/ 1783 (0%)
[ 3] 5.0- 6.0 sec 2.50 MBytes 20.9 Mbits/sec 0.032 ms 0/ 1780 (0%)
[ 3] 6.0- 7.0 sec 2.50 MBytes 21.0 Mbits/sec 0.076 ms 0/ 1785 (0%)
[ 3] 7.0- 8.0 sec 2.50 MBytes 21.0 Mbits/sec 0.017 ms 0/ 1784 (0%)
[ 3] 8.0- 9.0 sec 2.50 MBytes 21.0 Mbits/sec 0.007 ms 0/ 1784 (0%)
[ 3] 0.0-10.0 sec 25.0 MBytes 21.0 Mbits/sec 0.028 ms 0/17833 (0%)
In the second test, the two routers are connected over a 2.4 GHz mesh-point link and tested point-to-point (unicast); everything is normal (the small packet loss is acceptable given signal interference).
****192.168.3.1****
root@OpenWrt:~# iperf -c 192.168.3.2 -u -T -t 100 -i 1 -b 20M
iperf: ignoring extra argument -- 100
------------------------------------------------------------
Client connecting to 192.168.3.2, UDP port 5001
Sending 1470 byte datagrams, IPG target: 560.76 us (kalman adjust)
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.3.1 port 39586 connected with 192.168.3.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 1.27 MBytes 10.7 Mbits/sec
[ 3] 1.0- 2.0 sec 2.37 MBytes 19.9 Mbits/sec
[ 3] 2.0- 3.0 sec 1.38 MBytes 11.6 Mbits/sec
[ 3] 3.0- 4.0 sec 2.34 MBytes 19.6 Mbits/sec
[ 3] 4.0- 5.0 sec 1.60 MBytes 13.4 Mbits/sec
[ 3] 5.0- 6.0 sec 2.07 MBytes 17.4 Mbits/sec
[ 3] 6.0- 7.0 sec 1.70 MBytes 14.2 Mbits/sec
[ 3] 7.0- 8.0 sec 1.94 MBytes 16.3 Mbits/sec
[ 3] 8.0- 9.0 sec 1.95 MBytes 16.3 Mbits/sec
[ 3] 0.0-10.0 sec 18.5 MBytes 15.5 Mbits/sec
[ 3] Sent 0 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 18.3 MBytes 15.3 Mbits/sec 1.554 ms 154/13181 (1.2%)
****192.168.3.2****
root@OpenWrt:~# iperf -s -u -B 192.168.3.2 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 192.168.3.2
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.3.2 port 5001 connected with 192.168.3.1 port 39586
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 1.17 MBytes 9.85 Mbits/sec 4.352 ms 25/ 863 (2.9%)
[ 3] 1.0- 2.0 sec 2.34 MBytes 19.6 Mbits/sec 0.889 ms 5/ 1671 (0.3%)
[ 3] 2.0- 3.0 sec 1.36 MBytes 11.4 Mbits/sec 1.646 ms 14/ 986 (1.4%)
[ 3] 3.0- 4.0 sec 2.33 MBytes 19.6 Mbits/sec 0.442 ms 5/ 1668 (0.3%)
[ 3] 4.0- 5.0 sec 1.59 MBytes 13.3 Mbits/sec 1.294 ms 8/ 1141 (0.7%)
[ 3] 5.0- 6.0 sec 2.09 MBytes 17.5 Mbits/sec 0.941 ms 10/ 1500 (0.67%)
[ 3] 6.0- 7.0 sec 1.62 MBytes 13.6 Mbits/sec 2.204 ms 31/ 1186 (2.6%)
[ 3] 7.0- 8.0 sec 1.92 MBytes 16.1 Mbits/sec 0.532 ms 32/ 1401 (2.3%)
[ 3] 8.0- 9.0 sec 1.95 MBytes 16.4 Mbits/sec 2.025 ms 14/ 1405 (1%)
[ 3] 9.0-10.0 sec 1.81 MBytes 15.2 Mbits/sec 0.534 ms 10/ 1299 (0.77%)
[ 3] 0.0-10.0 sec 18.3 MBytes 15.3 Mbits/sec 1.554 ms 154/13181 (1.2%)
In the third test, the two routers are connected over the same 2.4 GHz mesh-point link and the multicast test is repeated. The speed is only about 50 KByte/s and the packet loss rate is very high. What is causing this?
****192.168.3.1****
root@OpenWrt:~# iperf -c 224.0.0.1 -p 8080 -u -T -t 10 -i 1 -b 20M
iperf: ignoring extra argument -- 10
------------------------------------------------------------
Client connecting to 224.0.0.1, UDP port 8080
Sending 1470 byte datagrams, IPG target: 560.76 us (kalman adjust)
Setting multicast TTL to 0
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.3.1 port 59175 connected with 224.0.0.1 port 8080
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 1.0- 2.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 2.0- 3.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 3.0- 4.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 4.0- 5.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 5.0- 6.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 6.0- 7.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 7.0- 8.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 8.0- 9.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 0.0-10.0 sec 25.0 MBytes 21.0 Mbits/sec
[ 3] Sent 0 datagrams
****192.168.3.2****
root@OpenWrt:/# iperf -s -u -B 224.0.0.1 -p 8080 -i 1
multicast join failed: No such device
------------------------------------------------------------
Server listening on UDP port 8080
Binding to local address 224.0.0.1
Joining multicast group 224.0.0.1
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
multicast join failed: No such device
[ 3] local 224.0.0.1 port 8080 connected with 192.168.3.1 port 59175
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 54.6 KBytes 447 Kbits/sec 24.013 ms 16/ 54 (30%)
[ 3] 1.0- 2.0 sec 53.1 KBytes 435 Kbits/sec 28.932 ms 1260/ 1297 (97%)
[ 3] 2.0- 3.0 sec 51.7 KBytes 423 Kbits/sec 28.925 ms 1766/ 1802 (98%)
[ 3] 3.0- 4.0 sec 51.7 KBytes 423 Kbits/sec 22.785 ms 1741/ 1777 (98%)
[ 3] 4.0- 5.0 sec 53.1 KBytes 435 Kbits/sec 27.523 ms 1727/ 1764 (98%)
[ 3] 5.0- 6.0 sec 51.7 KBytes 423 Kbits/sec 24.211 ms 1723/ 1759 (98%)
[ 3] 6.0- 7.0 sec 57.4 KBytes 470 Kbits/sec 22.281 ms 1876/ 1916 (98%)
[ 3] 7.0- 8.0 sec 48.8 KBytes 400 Kbits/sec 28.154 ms 1593/ 1627 (98%)
[ 3] 8.0- 9.0 sec 56.0 KBytes 459 Kbits/sec 26.905 ms 1874/ 1913 (98%)
[ 3] 9.0-10.0 sec 51.7 KBytes 423 Kbits/sec 27.160 ms 1671/ 1707 (98%)
[ 3] 10.0-11.0 sec 51.7 KBytes 423 Kbits/sec 21.963 ms 747/ 783 (95%)
[ 3] 11.0-12.0 sec 60.3 KBytes 494 Kbits/sec 13.801 ms 631/ 673 (94%)
[ 3] 12.0-13.0 sec 57.4 KBytes 470 Kbits/sec 15.173 ms 625/ 665 (94%)
[ 3] 0.0-13.2 sec 711 KBytes 440 Kbits/sec 17.047 ms 17338/17833 (97%)
/* This is a second send of the same test */
multicast join failed: No such device
[ 4] local 224.0.0.1 port 8080 connected with 192.168.3.1 port 47463
[ 4] 0.0- 1.0 sec 54.6 KBytes 447 Kbits/sec 24.254 ms 19/ 57 (33%)
[ 4] 1.0- 2.0 sec 50.2 KBytes 412 Kbits/sec 24.307 ms 1237/ 1272 (97%)
[ 4] 2.0- 3.0 sec 53.1 KBytes 435 Kbits/sec 26.490 ms 1715/ 1752 (98%)
[ 4] 3.0- 4.0 sec 54.6 KBytes 447 Kbits/sec 24.730 ms 1676/ 1714 (98%)
[ 4] 4.0- 5.0 sec 53.1 KBytes 435 Kbits/sec 22.588 ms 1838/ 1875 (98%)
[ 4] 5.0- 6.0 sec 53.1 KBytes 435 Kbits/sec 23.496 ms 1786/ 1823 (98%)
[ 4] 6.0- 7.0 sec 53.1 KBytes 435 Kbits/sec 25.347 ms 1708/ 1745 (98%)
[ 4] 7.0- 8.0 sec 53.1 KBytes 435 Kbits/sec 24.269 ms 1711/ 1748 (98%)
[ 4] 8.0- 9.0 sec 53.1 KBytes 435 Kbits/sec 30.231 ms 1772/ 1809 (98%)
[ 4] 9.0-10.0 sec 54.6 KBytes 447 Kbits/sec 24.565 ms 1741/ 1779 (98%)
[ 4] 10.0-11.0 sec 54.6 KBytes 447 Kbits/sec 20.195 ms 842/ 880 (96%)
[ 4] 11.0-12.0 sec 50.2 KBytes 412 Kbits/sec 18.220 ms 547/ 582 (94%)
[ 4] 12.0-13.0 sec 51.7 KBytes 423 Kbits/sec 18.280 ms 634/ 670 (95%)
[ 4] 0.0-13.3 sec 701 KBytes 433 Kbits/sec 19.081 ms 17345/17833 (97%)
Is wireless UDP multicast restricted by the router? Or what have I failed to configure properly?
After flashing the two routers, the only change made was the address from 192.168.1.1 to 192.168.3.x; nothing else was modified.
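As a side note, the repeated "multicast join failed: No such device" message typically means the kernel could not map the group to an interface (for example, there is no route covering 224.0.0.0/4). To separate join problems from wireless loss, a minimal standalone receiver can join the group explicitly on a given local address. A sketch, assuming Python 3 on the receiving router or a LAN host behind it:

import socket
import struct

GROUP, PORT, LOCAL_IP = "224.0.0.1", 8080, "192.168.3.2"

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the group explicitly on the local interface address, so the join
# cannot fail with "No such device" due to a missing multicast route.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton(LOCAL_IP))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

count = 0
while True:
    sock.recvfrom(2048)
    count += 1
    if count % 1000 == 0:
        print("received", count, "datagrams")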

BeautifulSoup and urlopen aren't fetching the right table

I'm trying to practice with BeautifulSoup and urlopen by using Basketball-Reference datasets. When I try to get an individual player's stats, everything works fine, but when I use the same code for team stats, it doesn't fetch the right table.
The following code gets the "headers" from the page.
from urllib.request import urlopen
from bs4 import BeautifulSoup

def fetch_years():
    # Determine the url
    url = "https://www.basketball-reference.com/leagues/NBA_2000.html?sr&utm_source=direct&utm_medium=Share&utm_campaign=ShareTool#team-stats-per_game::none"
    html = urlopen(url)
    soup = BeautifulSoup(html, 'html.parser')
    # Grab the header cells of the first table row on the page
    headers = [th.get_text() for th in soup.find_all('tr')[0].find_all('th')]
    headers = headers[1:]
    print(headers)
I'm trying to get the Team's stats per game data, in a format like:
['Tm', 'G', 'MP', 'FG', ...]
Instead, the header data I'm getting is:
['W', 'L', 'W/L%', ...]
which comes from the very first table on the 1999-2000 season page (the 'Division Standings' table).
If you use that same code for a player's data such as this one, you get the result I'm looking for:
Age Tm Lg Pos G GS MP FG ... DRB TRB AST STL BLK TOV PF PTS
0 20 OKC NBA PG 82 65 32.5 5.3 ... 2.7 4.9 5.3 1.3 0.2 3.3 2.3 15.3
1 21 OKC NBA PG 82 82 34.3 5.9 ... 3.1 4.9 8.0 1.3 0.4 3.3 2.5 16.1
2 22 OKC NBA PG 82 82 34.7 7.5 ... 3.1 4.6 8.2 1.9 0.4 3.9 2.5 21.9
3 23 OKC NBA PG 66 66 35.3 8.8 ... 3.1 4.6 5.5 1.7 0.3 3.6 2.2 23.6
4 24 OKC NBA PG 82 82 34.9 8.2 ... 3.9 5.2 7.4 1.8 0.3 3.3 2.3 23.2
The code to webscrape came originally from here.
The sports-reference.com sites are trickier than your standard ones. The tables are rendered after the page loads (with the exception of a few tables on each page), so you'd need to use Selenium to let the page render first and then pull the HTML source.
However, there is another option: if you look at the HTML source, you'll see those tables are inside comments. You can use BeautifulSoup to pull out the comment tags and then search through those for the table tags.
This will return a list of dataframes, and the Team Per Game stats are the table in index position 1:
import requests
from bs4 import BeautifulSoup
from bs4 import Comment
import pandas as pd
def fetch_years():
#Determine the urls
url = "https://www.basketball-reference.com/leagues/NBA_2000.html?sr&utm_source=direct&utm_medium=Share&utm_campaign=ShareTool#team-stats-per_game::none"
html = requests.get(url)
soup = BeautifulSoup(html.text)
comments = soup.find_all(string=lambda text: isinstance(text, Comment))
tables = []
for each in comments:
if 'table' in each:
try:
tables.append(pd.read_html(each)[0])
except:
continue
return tables
tables = fetch_years()
Output:
print (tables[1].to_string())
Rk Team G MP FG FGA FG% 3P 3PA 3P% 2P 2PA 2P% FT FTA FT% ORB DRB TRB AST STL BLK TOV PF PTS
0 1.0 Sacramento Kings* 82 241.5 40.0 88.9 0.450 6.5 20.2 0.322 33.4 68.7 0.487 18.5 24.6 0.754 12.9 32.1 45.0 23.8 9.6 4.6 16.2 21.1 105.0
1 2.0 Detroit Pistons* 82 241.8 37.1 80.9 0.459 5.4 14.9 0.359 31.8 66.0 0.481 23.9 30.6 0.781 11.2 30.0 41.2 20.8 8.1 3.3 15.7 24.5 103.5
2 3.0 Dallas Mavericks 82 240.6 39.0 85.9 0.453 6.3 16.2 0.391 32.6 69.8 0.468 17.2 21.4 0.804 11.4 29.8 41.2 22.1 7.2 5.1 13.7 21.6 101.4
3 4.0 Indiana Pacers* 82 240.6 37.2 81.0 0.459 7.1 18.1 0.392 30.0 62.8 0.478 19.9 24.5 0.811 10.3 31.9 42.1 22.6 6.8 5.1 14.1 21.8 101.3
4 5.0 Milwaukee Bucks* 82 242.1 38.7 83.3 0.465 4.8 13.0 0.369 33.9 70.2 0.483 19.0 24.2 0.786 12.4 28.9 41.3 22.6 8.2 4.6 15.0 24.6 101.2
5 6.0 Los Angeles Lakers* 82 241.5 38.3 83.4 0.459 4.2 12.8 0.329 34.1 70.6 0.482 20.1 28.9 0.696 13.6 33.4 47.0 23.4 7.5 6.5 13.9 22.5 100.8
6 7.0 Orlando Magic 82 240.9 38.6 85.5 0.452 3.6 10.6 0.338 35.1 74.9 0.468 19.2 26.1 0.735 14.0 31.0 44.9 20.8 9.1 5.7 17.6 24.0 100.1
7 8.0 Houston Rockets 82 241.8 36.6 81.3 0.450 7.1 19.8 0.358 29.5 61.5 0.480 19.2 26.2 0.733 12.3 31.5 43.8 21.6 7.5 5.3 17.4 20.3 99.5
8 9.0 Boston Celtics 82 240.6 37.2 83.9 0.444 5.1 15.4 0.331 32.2 68.5 0.469 19.8 26.5 0.745 13.5 29.5 43.0 21.2 9.7 3.5 15.4 27.1 99.3
9 10.0 Seattle SuperSonics* 82 241.2 37.9 84.7 0.447 6.7 19.6 0.339 31.2 65.1 0.480 16.6 23.9 0.695 12.7 30.3 43.0 22.9 8.0 4.2 14.0 21.7 99.1
10 11.0 Denver Nuggets 82 242.1 37.3 84.3 0.442 5.7 17.0 0.336 31.5 67.2 0.469 18.7 25.8 0.724 13.1 31.6 44.7 23.3 6.8 7.5 15.6 23.9 99.0
11 12.0 Phoenix Suns* 82 241.5 37.7 82.6 0.457 5.6 15.2 0.368 32.1 67.4 0.477 17.9 23.6 0.759 12.5 31.2 43.7 25.6 9.1 5.3 16.7 24.1 98.9
12 13.0 Minnesota Timberwolves* 82 242.7 39.3 84.3 0.467 3.0 8.7 0.346 36.3 75.5 0.481 16.8 21.6 0.780 12.4 30.1 42.5 26.9 7.6 5.4 13.9 23.3 98.5
13 14.0 Charlotte Hornets* 82 241.2 35.8 79.7 0.449 4.1 12.2 0.339 31.7 67.5 0.469 22.7 30.0 0.758 10.8 32.1 42.9 24.7 8.9 5.9 14.7 20.4 98.4
14 15.0 New Jersey Nets 82 241.8 36.3 83.9 0.433 5.8 16.8 0.347 30.5 67.2 0.454 19.5 24.9 0.784 12.7 28.2 40.9 20.6 8.8 4.8 13.6 23.3 98.0
15 16.0 Portland Trail Blazers* 82 241.2 36.8 78.4 0.470 5.0 13.8 0.361 31.9 64.7 0.493 18.8 24.7 0.760 11.8 31.2 43.0 23.5 7.7 4.8 15.2 22.7 97.5
16 17.0 Toronto Raptors* 82 240.9 36.3 83.9 0.433 5.2 14.3 0.363 31.2 69.6 0.447 19.3 25.2 0.765 13.4 29.9 43.3 23.7 8.1 6.6 13.9 24.3 97.2
17 18.0 Cleveland Cavaliers 82 242.1 36.3 82.1 0.442 4.2 11.2 0.373 32.1 70.9 0.453 20.2 26.9 0.750 12.3 30.5 42.8 23.7 8.7 4.4 17.4 27.1 97.0
18 19.0 Washington Wizards 82 241.5 36.7 81.5 0.451 4.1 10.9 0.376 32.6 70.6 0.462 19.1 25.7 0.743 13.0 29.7 42.7 21.6 7.2 4.7 16.1 26.2 96.6
19 20.0 Utah Jazz* 82 240.9 36.1 77.8 0.464 4.0 10.4 0.385 32.1 67.4 0.476 20.3 26.2 0.773 11.4 29.6 41.0 24.9 7.7 5.4 14.9 24.5 96.5
20 21.0 San Antonio Spurs* 82 242.1 36.0 78.0 0.462 4.0 10.8 0.374 32.0 67.2 0.476 20.1 27.0 0.746 11.3 32.5 43.8 22.2 7.5 6.7 15.0 20.9 96.2
21 22.0 Golden State Warriors 82 240.9 36.5 87.1 0.420 4.2 13.0 0.323 32.3 74.0 0.437 18.3 26.2 0.697 15.9 29.7 45.6 22.6 8.9 4.3 15.9 24.9 95.5
22 23.0 Philadelphia 76ers* 82 241.8 36.5 82.6 0.442 2.5 7.8 0.323 34.0 74.8 0.454 19.2 27.1 0.708 14.0 30.1 44.1 22.2 9.6 4.7 15.7 23.6 94.8
23 24.0 Miami Heat* 82 241.8 36.3 78.8 0.460 5.4 14.7 0.371 30.8 64.1 0.481 16.4 22.3 0.736 11.2 31.9 43.2 23.5 7.1 6.4 15.0 23.7 94.4
24 25.0 Atlanta Hawks 82 241.8 36.6 83.0 0.441 3.1 9.9 0.317 33.4 73.1 0.458 18.0 24.2 0.743 14.0 31.3 45.3 18.9 6.1 5.6 15.4 21.0 94.3
25 26.0 Vancouver Grizzlies 82 242.1 35.3 78.5 0.449 4.0 11.0 0.361 31.3 67.6 0.463 19.4 25.1 0.774 12.3 28.3 40.6 20.7 7.4 4.2 16.8 22.9 93.9
26 27.0 New York Knicks* 82 241.8 35.3 77.7 0.455 4.3 11.4 0.375 31.0 66.3 0.468 17.2 22.0 0.781 9.8 30.7 40.5 19.4 6.3 4.3 14.6 24.2 92.1
27 28.0 Los Angeles Clippers 82 240.3 35.1 82.4 0.426 5.2 15.5 0.339 29.9 67.0 0.446 16.6 22.3 0.746 11.6 29.0 40.6 18.0 7.0 6.0 16.2 22.2 92.0
28 29.0 Chicago Bulls 82 241.5 31.3 75.4 0.415 4.1 12.6 0.329 27.1 62.8 0.432 18.1 25.5 0.709 12.6 28.3 40.9 20.1 7.9 4.7 19.0 23.3 84.8
29 NaN League Average 82 241.5 36.8 82.1 0.449 4.8 13.7 0.353 32.0 68.4 0.468 19.0 25.3 0.750 12.4 30.5 42.9 22.3 7.9 5.2 15.5 23.3 97.5
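With the dataframe in hand, the header list the question was originally after can be read straight off its columns (note the team column is named 'Team' in this table rather than 'Tm'; fetch_years() is the function defined above):

tables = fetch_years()
per_game = tables[1]                  # Team Per Game stats
headers = list(per_game.columns)[1:]  # drop the leading 'Rk' column
print(headers)                        # ['Team', 'G', 'MP', 'FG', ...]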

Get per-line timestamps from iperf3 in a bash script

I'm currently getting this output from iperf3
2016-03-03 21:33:50 [ 4] 0.00-1.00 sec 113 MBytes 950 Mbits/sec
2016-03-03 21:33:50 [ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0
2016-03-03 21:33:50 [ 4] 2.00-3.00 sec 113 MBytes 944 Mbits/sec 0
I want to create graphs from this data, and since iperf3 can't update the timestamp on each line (as far as I know), I'm looking for a way to increment the timestamps in the output file line by line.
The result should look like:
2016-03-03 21:33:50 [ 4] 0.00-1.00 sec 113 MBytes 950 Mbits/sec
2016-03-03 21:33:51 [ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0
2016-03-03 21:33:52 [ 4] 2.00-3.00 sec 113 MBytes 944 Mbits/sec 0
So an action (+1 second) has to be done on each line containing Mbits/sec until the end of the file.
I guess that sed and/or the date command may be helpful and a loop may be useful, but I can't see how to build it with time values.
One approach with awk: convert the timestamp to epoch seconds, add a per-line increment, then convert back to HH:MM:SS:
awk '$10=="Mbits/sec"\
{command="date -d "$2" +%s";command |getline $2;close(command)};1' 1txt \
| awk -vi=1 '$10=="Mbits/sec"{$2=$2+i};i=i+1'\
| awk '$10=="Mbits/sec"{command="date -d @"$2" +%T";command|getline $2;close(command)};1'
I tested it on a file 1txt containing:
2016-03-03 21:33:50 [ 4] 0.00-1.00 sec 113 MBytes 950 Mbits/sec
2016-03-03 21:33:50 [ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0
2016-03-03 21:33:50 [ 4] 2.00-3.00 sec 113 MBytes 944 Mbits/sec 0
2016-03-03 21:33:50 [ 4] 2.00-3.00 sec 113 MBytes 944 bits/sec 0
and the output after execution was as expected:
2016-03-03 21:33:51 [ 4] 0.00-1.00 sec 113 MBytes 950 Mbits/sec
2016-03-03 21:33:52 [ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0
2016-03-03 21:33:53 [ 4] 2.00-3.00 sec 113 MBytes 944 Mbits/sec 0
2016-03-03 21:33:50 [ 4] 2.00-3.00 sec 113 MBytes 944 bits/sec 0
P.S.: You can of course make it more compact and efficient by combining the awks into a single command, but this form helps in understanding what's going on.
You can do this using sed, but this is not trivial... It is much easier to do it using perl:
perl -lne 'print $1.($2 + ($.) - 1).$3 if /(.+)(50)(.+)/' file.txt
-l                  enable line-ending processing (specifies the line terminator)
-n                  assume a loop around the program
-e                  one line of program
print               print command
.                   string concatenation
$number             variables containing the parts of the string matched by the capture groups ()
$.                  the current record number
($2 + ($.) - 1)     means: 50 + 'current record number' - 1
if /(.+)(50)(.+)/   statement with the regular expression referred to by print
file.txt            file with your data
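Both one-liners are tied to this exact sample (the awk version to the field positions, the perl version to the literal seconds value 50). A more general sketch in Python, assuming the timestamp is always the first two whitespace-separated fields and only Mbits/sec lines should be shifted (here the first matching line keeps its timestamp, as in the desired output):

import sys
from datetime import datetime, timedelta

offset = 0  # seconds to add to the current line's timestamp
for line in sys.stdin:
    parts = line.split()
    if "Mbits/sec" in parts:
        # Parse the leading "YYYY-MM-DD HH:MM:SS", shift it, re-emit the rest.
        stamp = datetime.strptime(parts[0] + " " + parts[1], "%Y-%m-%d %H:%M:%S")
        stamp += timedelta(seconds=offset)
        offset += 1
        rest = line.split(None, 2)[2]
        line = stamp.strftime("%Y-%m-%d %H:%M:%S") + " " + rest
    sys.stdout.write(line)

Run it as, for example: python3 shift.py < iperf.log > shifted.log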

When using MTR, why do farther hops show lower values?

The mtr report looks like this:
shell> mtr --report ec2-122-248-229-83.ap-southeast-1.compute.amazonaws.com
HOST: macserver.local Loss% Snt Last Avg Best Wrst StDev
1.|-- 192.168.12.1 0.0% 10 1.2 2.9 0.9 7.4 2.3
2.|-- 101.36.89.49 0.0% 10 6.8 5.7 2.1 16.6 4.3
3.|-- 192.168.17.37 0.0% 10 53.8 164.9 4.9 904.4 304.0
4.|-- 220.181.105.25 0.0% 10 5.1 11.1 5.1 26.9 7.1
5.|-- 220.181.0.5 0.0% 10 68.5 15.1 4.9 68.5 19.4
6.|-- 220.181.0.41 0.0% 10 12.6 10.2 5.0 27.1 6.5
7.|-- 202.97.53.82 0.0% 10 7.2 9.9 4.9 28.1 6.7
8.|-- 202.97.58.94 0.0% 10 16.5 10.0 5.2 16.5 3.9
9.|-- 202.97.61.98 0.0% 10 49.2 46.4 39.0 76.7 11.2
10.|-- 202.97.121.98 0.0% 10 41.1 43.5 41.1 46.3 1.6
11.|-- 63-218-213-206.static.pcc 0.0% 10 87.2 77.6 70.3 92.2 7.4
12.|-- 203.83.223.62 0.0% 10 71.9 74.8 69.9 87.2 5.1
13.|-- 203.83.223.77 0.0% 10 73.6 73.8 70.2 80.9 3.0
14.|-- ec2-175-41-128-238.ap-sou 0.0% 10 70.4 73.9 70.4 84.1 4.0
15.|-- ??? 100.0 10 0.0 0.0 0.0 0.0 0.0
16.|-- ec2-122-248-229-83.ap-sou 10.0% 10 82.3 76.0 70.6 88.7 6.1
Why is the average at line 16 lower than at line 11?
Routers are designed to route packets as quickly as possible; they're not designed to generate and transmit ICMP errors as quickly as possible. Apparently, the router at hop 11 is very slow at generating ICMP errors.
When you see a lower time past a hop than at the hop itself, it most likely means that router took a significant amount of time to generate the ICMP error and send it back to you.
And, of course, you have to ignore line 15: you didn't get any replies from that router at all.
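A quick sanity check with the numbers from the report above shows the size of the effect; attributing the difference to ICMP generation delay is the hypothesis here, not a measured quantity:

hop11_avg = 77.6  # ms, intermediate router (line 11 of the report)
hop16_avg = 76.0  # ms, final destination (line 16 of the report)

# Probes to hop 16 pass *through* hop 11 and still return sooner, so the
# excess at hop 11 must come from that router's own ICMP handling,
# not from the path itself.
print(f"hop 11 exceeds the destination by {hop11_avg - hop16_avg:.1f} ms")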
