Get timestamps by line with iperf3 in bash script - linux

I'm currently getting this output from iperf3:
2016-03-03 21:33:50 [ 4] 0.00-1.00 sec 113 MBytes 950 Mbits/sec
2016-03-03 21:33:50 [ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0
2016-03-03 21:33:50 [ 4] 2.00-3.00 sec 113 MBytes 944 Mbits/sec 0
I want to create graphs from this data, and since iperf3 can't update the timestamp on each line (as far as I know), I'm looking for a way to increment the timestamps in the output file line by line.
The result should look like:
2016-03-03 21:33:50 [ 4] 0.00-1.00 sec 113 MBytes 950 Mbits/sec
2016-03-03 21:33:51 [ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0
2016-03-03 21:33:52 [ 4] 2.00-3.00 sec 113 MBytes 944 Mbits/sec 0
So an action (+1) has to be applied to each line containing Mbits/sec, until the end of the file.
I guess that sed and/or the date command may be helpful, and a loop may be useful, but I can't see how to build it with time values.

awk '$10=="Mbits/sec"{command="date -d "$2" +%s"; command | getline $2; close(command)};1' 1txt \
| awk -v i=1 '$10=="Mbits/sec"{$2=$2+i}; i=i+1' \
| awk '$10=="Mbits/sec"{command="date -d @"$2" +%T"; command | getline $2; close(command)};1'
I tested it on a file 1txt with these values:
2016-03-03 21:33:50 [ 4] 0.00-1.00 sec 113 MBytes 950 Mbits/sec
2016-03-03 21:33:50 [ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0
2016-03-03 21:33:50 [ 4] 2.00-3.00 sec 113 MBytes 944 Mbits/sec 0
2016-03-03 21:33:50 [ 4] 2.00-3.00 sec 113 MBytes 944 bits/sec 0
The output after execution was as expected:
2016-03-03 21:33:51 [ 4] 0.00-1.00 sec 113 MBytes 950 Mbits/sec
2016-03-03 21:33:52 [ 4] 1.00-2.00 sec 112 MBytes 941 Mbits/sec 0
2016-03-03 21:33:53 [ 4] 2.00-3.00 sec 113 MBytes 944 Mbits/sec 0
2016-03-03 21:33:50 [ 4] 2.00-3.00 sec 113 MBytes 944 bits/sec 0
P.S.: You can of course make it more compact and efficient by combining the awks into a single command, but this way it is easier to understand what's going on; a combined sketch follows.
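For example, a combined single-awk sketch (same assumptions as above: GNU date and Mbits/sec in field 10; the counter is incremented only on matching lines):
awk '$10=="Mbits/sec"{
    cmd = "date -d " $2 " +%s"; cmd | getline t; close(cmd)    # time of day -> epoch seconds
    t += ++i                                                   # add the running line counter
    cmd = "date -d @" t " +%T"; cmd | getline $2; close(cmd)   # epoch seconds -> time of day
};1' 1txt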

You can do this using sed, but it is not trivial... It is much easier to do with perl:
perl -lne 'print $1.($2 + ($.) - 1).$3 if /(.+?)(50)(.+)/' file.txt
(The non-greedy .+? makes the regex capture the first 50 on the line, i.e. the seconds field of the timestamp, rather than a later one such as the 50 in 950.)
-l enables line-ending processing and specifies the line terminator
-n assumes a loop around the program
-e one line of program
print the print command
. string concatenation
$number these variables contain the parts of the string matched by the capture groups ()
$. the current record number
($2 + ($.) - 1) means: 50 + 'current record number' - 1
if /(.+?)(50)(.+)/ the condition with the regular expression whose captures print uses
file.txt the file with your data
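One caveat: the one-liner adds to the seconds field as a plain number, so it breaks once the value passes 59 (50 + 10 prints 60). A sketch that avoids this (assuming the same line layout; Time::Piece ships with core Perl) parses the time of day and lets the date math carry into minutes:
perl -MTime::Piece -lne '
    if (/^(\S+ )(\d\d:\d\d:\d\d)(.*)/) {
        my $t = Time::Piece->strptime($2, "%H:%M:%S") + ($. - 1);
        print $1 . $t->strftime("%H:%M:%S") . $3;
    } else { print }
' file.txt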

Related

netem rate limit stops limiting after a short time and reverts to the default

I have experienced a couple of times that netem stops working after I apply a qdisc. This happened with both rate limiting and loss.
For example, consider a scenario:
Internet <------>(eth1) A (eth2)<------> (eth3)B
PC A is connected to an internet access point via Ethernet port eth1. PC B is connected to PC A via port eth2 of PC A. So basically PC A is a bridge, which I configure using OvS. I apply the netem rule on eth2 and expect it to be reflected on PC B.
Now, in PC A, I applied a rate limit on eth2 of 30Mbps with a limit of 1000 using the command:
tc qdisc add dev eth2 root handle 1:0 netem rate 30000kbit limit 1000
Then I ran the iperf3 server on PC B and tested the bandwidth by running the iperf3 client on a different PC (say C, connected to the same network). The iperf3 result is:
[ 4] local xx.xx.xx.xx port 54838 connected to yy.yy.yy.yy port 5009
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 3.92 MBytes 32.8 Mbits/sec 0 185 KBytes
[ 4] 1.00-2.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 2.00-3.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 3.00-4.00 sec 3.38 MBytes 28.3 Mbits/sec 0 210 KBytes
[ 4] 4.00-5.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 5.00-6.00 sec 3.38 MBytes 28.3 Mbits/sec 0 210 KBytes
[ 4] 6.00-7.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 7.00-8.00 sec 3.50 MBytes 29.4 Mbits/sec 0 210 KBytes
[ 4] 8.00-9.00 sec 65.6 MBytes 550 Mbits/sec 142 210 KBytes
[ 4] 9.00-10.00 sec 109 MBytes 918 Mbits/sec 0 210 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 203 MBytes 170 Mbits/sec 142 sender
[ 4] 0.00-10.00 sec 203 MBytes 170 Mbits/sec receiver
Initially I am getting around 30 Mbps, but in the last two intervals the throughput is much higher than 30 Mbps. I tried iperf3 again multiple times; then it was fine. Why does netem have this inconsistent behavior?
Another example: I cap the rate to 50 Mbps; the first iperf3 run gave correct rate limiting, but on the second iperf3 attempt I got inconsistent rate limiting (as shown below):
[ ID] Interval Transfer Bandwidth Retr Cwnd
[ 4] 0.00-1.00 sec 6.05 MBytes 50.8 Mbits/sec 0 210 KBytes
[ 4] 1.00-2.00 sec 5.75 MBytes 48.2 Mbits/sec 0 210 KBytes
[ 4] 2.00-3.00 sec 5.75 MBytes 48.2 Mbits/sec 0 210 KBytes
[ 4] 3.00-4.00 sec 5.75 MBytes 48.2 Mbits/sec 0 210 KBytes
[ 4] 4.00-5.00 sec 29.8 MBytes 250 Mbits/sec 143 210 KBytes
[ 4] 5.00-6.00 sec 110 MBytes 920 Mbits/sec 0 210 KBytes
[ 4] 6.00-7.00 sec 109 MBytes 914 Mbits/sec 0 210 KBytes
[ 4] 7.00-8.00 sec 109 MBytes 915 Mbits/sec 0 210 KBytes
[ 4] 8.00-9.00 sec 109 MBytes 914 Mbits/sec 0 210 KBytes
[ 4] 9.00-10.00 sec 109 MBytes 914 Mbits/sec 0 210 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval Transfer Bandwidth Retr
[ 4] 0.00-10.00 sec 599 MBytes 502 Mbits/sec 143 sender
[ 4] 0.00-10.00 sec 598 MBytes 502 Mbits/sec receiver
After the fourth second, it appears that the netem rate limit is simply gone and the network is back to its default rate.
I have seen the same behaviour when I introduce loss using netem. Any help fixing this, or an explanation of this inconsistent netem behaviour, would be appreciated. Thanks.
I found a solution to this problem.
PC A is the one acting as a bridge for the two Ethernet connections. Once the OvS bridge is set up, the internet is no longer available on PC A. By default, the OS (Ubuntu 20 in this case) is configured to attempt to connect to the internet at regular intervals on both eth1 and eth2. Because of the OvS configuration, the connection attempt is bound to fail, and after the failure notification the qdisc is removed from the network devices. This results in the iperf3 behavior described in the question.
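One way to verify this is to watch the qdisc while a test runs; when the connection attempt fails, the netem entry disappears from the listing (tc is part of iproute2):
# Re-list the qdisc on eth2 every second during the test.
watch -n 1 tc -s qdisc show dev eth2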
The fix is simple: go to the network settings, disable auto-connect on both eth1 and eth2, and save the setting. Once done, netem works fine.
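If you prefer the command line, the same setting can likely be flipped with NetworkManager's nmcli; a sketch (the profile names below are placeholders, list yours first):
nmcli connection show    # find the profiles bound to eth1 and eth2
nmcli connection modify "Wired connection 1" connection.autoconnect no
nmcli connection modify "Wired connection 2" connection.autoconnect no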
Hope this helps those who may face this issue in the future.

OpenWrt Wi-Fi UDP multicast packet loss and limited speed

I use iperf between two routers for multicast testing.
In the first test, the two routers' LAN ports are directly connected with a cable. The multicast test is normal.
****192.168.3.1****
root@OpenWrt:~# iperf -c 224.0.0.1 -p 8080 -u -T -t 10 -i 1 -b 20M
iperf: ignoring extra argument -- 10
------------------------------------------------------------
Client connecting to 224.0.0.1, UDP port 8080
Sending 1470 byte datagrams, IPG target: 560.76 us (kalman adjust)
Setting multicast TTL to 0
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.3.1 port 57229 connected with 224.0.0.1 port 8080
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 1.0- 2.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 2.0- 3.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 3.0- 4.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 4.0- 5.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 5.0- 6.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 6.0- 7.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 7.0- 8.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 8.0- 9.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 0.0-10.0 sec 25.0 MBytes 21.0 Mbits/sec
[ 3] Sent 0 datagrams
****192.168.3.2****
root@OpenWrt:~# iperf -s -u -B 224.0.0.1 -p 8080 -i 1
multicast join failed: No such device
------------------------------------------------------------
Server listening on UDP port 8080
Binding to local address 224.0.0.1
Joining multicast group 224.0.0.1
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
multicast join failed: No such device
[ 3] local 224.0.0.1 port 8080 connected with 192.168.3.1 port 60332
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 2.50 MBytes 21.0 Mbits/sec 0.076 ms 0/ 1783 (0%)
[ 3] 1.0- 2.0 sec 2.51 MBytes 21.1 Mbits/sec 0.007 ms 0/ 1790 (0%)
[ 3] 2.0- 3.0 sec 2.50 MBytes 21.0 Mbits/sec 0.007 ms 0/ 1783 (0%)
[ 3] 3.0- 4.0 sec 2.50 MBytes 21.0 Mbits/sec 0.005 ms 0/ 1784 (0%)
[ 3] 4.0- 5.0 sec 2.50 MBytes 21.0 Mbits/sec 0.016 ms 0/ 1783 (0%)
[ 3] 5.0- 6.0 sec 2.50 MBytes 20.9 Mbits/sec 0.032 ms 0/ 1780 (0%)
[ 3] 6.0- 7.0 sec 2.50 MBytes 21.0 Mbits/sec 0.076 ms 0/ 1785 (0%)
[ 3] 7.0- 8.0 sec 2.50 MBytes 21.0 Mbits/sec 0.017 ms 0/ 1784 (0%)
[ 3] 8.0- 9.0 sec 2.50 MBytes 21.0 Mbits/sec 0.007 ms 0/ 1784 (0%)
[ 3] 0.0-10.0 sec 25.0 MBytes 21.0 Mbits/sec 0.028 ms 0/17833 (0%)
In the second test, the two routers are connected over 2.4 GHz as a Mesh Point network; the point-to-point test shows that everything is normal (the packet loss is probably acceptable given signal interference).
****192.168.3.1****
root@OpenWrt:~# iperf -c 192.168.3.2 -u -T -t 100 -i 1 -b 20M
iperf: ignoring extra argument -- 100
------------------------------------------------------------
Client connecting to 192.168.3.2, UDP port 5001
Sending 1470 byte datagrams, IPG target: 560.76 us (kalman adjust)
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.3.1 port 39586 connected with 192.168.3.2 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 1.27 MBytes 10.7 Mbits/sec
[ 3] 1.0- 2.0 sec 2.37 MBytes 19.9 Mbits/sec
[ 3] 2.0- 3.0 sec 1.38 MBytes 11.6 Mbits/sec
[ 3] 3.0- 4.0 sec 2.34 MBytes 19.6 Mbits/sec
[ 3] 4.0- 5.0 sec 1.60 MBytes 13.4 Mbits/sec
[ 3] 5.0- 6.0 sec 2.07 MBytes 17.4 Mbits/sec
[ 3] 6.0- 7.0 sec 1.70 MBytes 14.2 Mbits/sec
[ 3] 7.0- 8.0 sec 1.94 MBytes 16.3 Mbits/sec
[ 3] 8.0- 9.0 sec 1.95 MBytes 16.3 Mbits/sec
[ 3] 0.0-10.0 sec 18.5 MBytes 15.5 Mbits/sec
[ 3] Sent 0 datagrams
[ 3] Server Report:
[ 3] 0.0-10.0 sec 18.3 MBytes 15.3 Mbits/sec 1.554 ms 154/13181 (1.2%)
****192.168.3.2****
root@OpenWrt:~# iperf -s -u -B 192.168.3.2 -i 1
------------------------------------------------------------
Server listening on UDP port 5001
Binding to local address 192.168.3.2
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.3.2 port 5001 connected with 192.168.3.1 port 39586
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 1.17 MBytes 9.85 Mbits/sec 4.352 ms 25/ 863 (2.9%)
[ 3] 1.0- 2.0 sec 2.34 MBytes 19.6 Mbits/sec 0.889 ms 5/ 1671 (0.3%)
[ 3] 2.0- 3.0 sec 1.36 MBytes 11.4 Mbits/sec 1.646 ms 14/ 986 (1.4%)
[ 3] 3.0- 4.0 sec 2.33 MBytes 19.6 Mbits/sec 0.442 ms 5/ 1668 (0.3%)
[ 3] 4.0- 5.0 sec 1.59 MBytes 13.3 Mbits/sec 1.294 ms 8/ 1141 (0.7%)
[ 3] 5.0- 6.0 sec 2.09 MBytes 17.5 Mbits/sec 0.941 ms 10/ 1500 (0.67%)
[ 3] 6.0- 7.0 sec 1.62 MBytes 13.6 Mbits/sec 2.204 ms 31/ 1186 (2.6%)
[ 3] 7.0- 8.0 sec 1.92 MBytes 16.1 Mbits/sec 0.532 ms 32/ 1401 (2.3%)
[ 3] 8.0- 9.0 sec 1.95 MBytes 16.4 Mbits/sec 2.025 ms 14/ 1405 (1%)
[ 3] 9.0-10.0 sec 1.81 MBytes 15.2 Mbits/sec 0.534 ms 10/ 1299 (0.77%)
[ 3] 0.0-10.0 sec 18.3 MBytes 15.3 Mbits/sec 1.554 ms 154/13181 (1.2%)
In the third test, the two routers are connected over 2.4 GHz as a Mesh Point network and run the multicast test; the speed is only about 50 KByte/s and the packet loss rate is very high. What is causing this?
****192.168.3.1****
root@OpenWrt:~# iperf -c 224.0.0.1 -p 8080 -u -T -t 10 -i 1 -b 20M
iperf: ignoring extra argument -- 10
------------------------------------------------------------
Client connecting to 224.0.0.1, UDP port 8080
Sending 1470 byte datagrams, IPG target: 560.76 us (kalman adjust)
Setting multicast TTL to 0
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
[ 3] local 192.168.3.1 port 59175 connected with 224.0.0.1 port 8080
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 1.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 1.0- 2.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 2.0- 3.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 3.0- 4.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 4.0- 5.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 5.0- 6.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 6.0- 7.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 7.0- 8.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 8.0- 9.0 sec 2.50 MBytes 21.0 Mbits/sec
[ 3] 0.0-10.0 sec 25.0 MBytes 21.0 Mbits/sec
[ 3] Sent 0 datagrams
****192.168.3.2****
root@OpenWrt:/# iperf -s -u -B 224.0.0.1 -p 8080 -i 1
multicast join failed: No such device
------------------------------------------------------------
Server listening on UDP port 8080
Binding to local address 224.0.0.1
Joining multicast group 224.0.0.1
Receiving 1470 byte datagrams
UDP buffer size: 160 KByte (default)
------------------------------------------------------------
multicast join failed: No such device
[ 3] local 224.0.0.1 port 8080 connected with 192.168.3.1 port 59175
[ ID] Interval Transfer Bandwidth Jitter Lost/Total Datagrams
[ 3] 0.0- 1.0 sec 54.6 KBytes 447 Kbits/sec 24.013 ms 16/ 54 (30%)
[ 3] 1.0- 2.0 sec 53.1 KBytes 435 Kbits/sec 28.932 ms 1260/ 1297 (97%)
[ 3] 2.0- 3.0 sec 51.7 KBytes 423 Kbits/sec 28.925 ms 1766/ 1802 (98%)
[ 3] 3.0- 4.0 sec 51.7 KBytes 423 Kbits/sec 22.785 ms 1741/ 1777 (98%)
[ 3] 4.0- 5.0 sec 53.1 KBytes 435 Kbits/sec 27.523 ms 1727/ 1764 (98%)
[ 3] 5.0- 6.0 sec 51.7 KBytes 423 Kbits/sec 24.211 ms 1723/ 1759 (98%)
[ 3] 6.0- 7.0 sec 57.4 KBytes 470 Kbits/sec 22.281 ms 1876/ 1916 (98%)
[ 3] 7.0- 8.0 sec 48.8 KBytes 400 Kbits/sec 28.154 ms 1593/ 1627 (98%)
[ 3] 8.0- 9.0 sec 56.0 KBytes 459 Kbits/sec 26.905 ms 1874/ 1913 (98%)
[ 3] 9.0-10.0 sec 51.7 KBytes 423 Kbits/sec 27.160 ms 1671/ 1707 (98%)
[ 3] 10.0-11.0 sec 51.7 KBytes 423 Kbits/sec 21.963 ms 747/ 783 (95%)
[ 3] 11.0-12.0 sec 60.3 KBytes 494 Kbits/sec 13.801 ms 631/ 673 (94%)
[ 3] 12.0-13.0 sec 57.4 KBytes 470 Kbits/sec 15.173 ms 625/ 665 (94%)
[ 3] 0.0-13.2 sec 711 KBytes 440 Kbits/sec 17.047 ms 17338/17833 (97%)
/* This is a second run */
multicast join failed: No such device
[ 4] local 224.0.0.1 port 8080 connected with 192.168.3.1 port 47463
[ 4] 0.0- 1.0 sec 54.6 KBytes 447 Kbits/sec 24.254 ms 19/ 57 (33%)
[ 4] 1.0- 2.0 sec 50.2 KBytes 412 Kbits/sec 24.307 ms 1237/ 1272 (97%)
[ 4] 2.0- 3.0 sec 53.1 KBytes 435 Kbits/sec 26.490 ms 1715/ 1752 (98%)
[ 4] 3.0- 4.0 sec 54.6 KBytes 447 Kbits/sec 24.730 ms 1676/ 1714 (98%)
[ 4] 4.0- 5.0 sec 53.1 KBytes 435 Kbits/sec 22.588 ms 1838/ 1875 (98%)
[ 4] 5.0- 6.0 sec 53.1 KBytes 435 Kbits/sec 23.496 ms 1786/ 1823 (98%)
[ 4] 6.0- 7.0 sec 53.1 KBytes 435 Kbits/sec 25.347 ms 1708/ 1745 (98%)
[ 4] 7.0- 8.0 sec 53.1 KBytes 435 Kbits/sec 24.269 ms 1711/ 1748 (98%)
[ 4] 8.0- 9.0 sec 53.1 KBytes 435 Kbits/sec 30.231 ms 1772/ 1809 (98%)
[ 4] 9.0-10.0 sec 54.6 KBytes 447 Kbits/sec 24.565 ms 1741/ 1779 (98%)
[ 4] 10.0-11.0 sec 54.6 KBytes 447 Kbits/sec 20.195 ms 842/ 880 (96%)
[ 4] 11.0-12.0 sec 50.2 KBytes 412 Kbits/sec 18.220 ms 547/ 582 (94%)
[ 4] 12.0-13.0 sec 51.7 KBytes 423 Kbits/sec 18.280 ms 634/ 670 (95%)
[ 4] 0.0-13.3 sec 701 KBytes 433 Kbits/sec 19.081 ms 17345/17833 (97%)
Is wireless UDP multicast rate-limited by the router, or is there something I have not configured correctly?
After flashing the two routers, only the addresses were changed from 192.168.1.1 to 192.168.3.x; no other modifications were made.

How do I compare values in two dataframes in an efficient way

I am new to Python, pandas, and Stack Overflow, so I will appreciate any help. I have two pandas dataframes: the first is in ascending order (values from 0 to 100 in steps of 0.1); the second has 26,000 values from 2.3 to 38.5, in no particular order, and some values are repeated. For each value in the first dataframe, I am trying to find, in an efficient way, how many values in the second dataframe are less than or equal to it.
My code below does it in 45 seconds, but I'd like to get that down to around 10.
Thanks in advance:
Code:
import numpy as np

def get_CDF2(df1, df2):
    x = df1                   # The first dataframe is already sorted in ascending order
    y = np.sort(df2, axis=0)  # Sort the columns of the second dataframe in ascending order
    df_res = []               # keep the results here
    yi = iter(y)              # Use an iterator to move over y
    yindex = 0
    flag = 0                  # When the flag is set to 1, no comparison is done
    y_val = next(yi)
    for value in x:
        if flag >= 1:
            df_res.append(largest_ind)  # append the number of y_val smaller than value
        else:
            # Search through y to find the index of an item bigger than value
            while y_val <= value and yindex < len(y) - 1:
                y_val = next(yi)  # Point at the next value in df2
                yindex += 1       # Keep track of how many y_val are smaller than value
            # If for some value in df1 we iterate through all of df2 and every entry is
            # smaller, the remaining values of df1 will behave the same (df1 is ascending),
            # so there is no need to iterate again; just set the flag to 1.
            if yindex == len(y) - 1 and y_val <= float(value):
                flag = 1
                largest_ind = yindex + 1
                df_res.append(largest_ind)  # append the number of y_val smaller than value
            else:
                df_res.append(yindex)  # append the number of y_val smaller than value
    return df_res
df1:
0. , 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8,
0.9, 1. , 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7,
1.8, 1.9, 2. , 2.1, 2.2, 2.3, 2.4, 2.5, 2.6,
2.7, 2.8, 2.9, 3. , 3.1, 3.2, 3.3, 3.4, 3.5,
3.6, 3.7, 3.8, 3.9, 4. , 4.1, 4.2, 4.3, 4.4,
4.5, 4.6, 4.7, 4.8, 4.9, 5. , 5.1, 5.2, 5.3,
5.4, 5.5, 5.6, 5.7, 5.8, 5.9, 6. , 6.1, 6.2,
6.3, 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 7. , 7.1,
7.2, 7.3, 7.4, 7.5, 7.6, 7.7, 7.8, 7.9, 8. ,
8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7, 8.8, 8.9,
9. , 9.1, 9.2, 9.3, 9.4, 9.5, 9.6, 9.7, 9.8,
9.9, 10. , 10.1, 10.2, 10.3, 10.4, 10.5, 10.6, 10.7,
10.8, 10.9, 11. , 11.1, 11.2, 11.3, 11.4, 11.5, 11.6,
11.7, 11.8, 11.9, 12. , 12.1, 12.2, 12.3, 12.4, 12.5,
12.6, 12.7, 12.8, 12.9, 13. , 13.1, 13.2, 13.3, 13.4,
13.5, 13.6, 13.7, 13.8, 13.9, 14. , 14.1, 14.2, 14.3,
14.4, 14.5, 14.6, 14.7, 14.8, 14.9, 15. , 15.1, 15.2,
15.3, 15.4, 15.5, 15.6, 15.7, 15.8, 15.9, 16. , 16.1,
16.2, 16.3, 16.4, 16.5, 16.6, 16.7, 16.8, 16.9, 17. ,
17.1, 17.2, 17.3, 17.4, 17.5, 17.6, 17.7, 17.8, 17.9,
18. , 18.1, 18.2, 18.3, 18.4, 18.5, 18.6, 18.7, 18.8,
18.9, 19. , 19.1, 19.2, 19.3, 19.4, 19.5, 19.6, 19.7,
19.8, 19.9, 20. , 20.1, 20.2, 20.3, 20.4, 20.5, 20.6,
20.7, 20.8, 20.9, 21. , 21.1, 21.2, 21.3, 21.4, 21.5,
21.6, 21.7, 21.8, 21.9, 22. , 22.1, 22.2, 22.3, 22.4,
22.5, 22.6, 22.7, 22.8, 22.9, 23. , 23.1, 23.2, 23.3,
23.4, 23.5, 23.6, 23.7, 23.8, 23.9, 24. , 24.1, 24.2,
24.3, 24.4, 24.5, 24.6, 24.7, 24.8, 24.9, 25. , 25.1,
25.2, 25.3, 25.4, 25.5, 25.6, 25.7, 25.8, 25.9, 26. ,
26.1, 26.2, 26.3, 26.4, 26.5, 26.6, 26.7, 26.8, 26.9,
27. , 27.1, 27.2, 27.3, 27.4, 27.5, 27.6, 27.7, 27.8,
27.9, 28. , 28.1, 28.2, 28.3, 28.4, 28.5, 28.6, 28.7,
28.8, 28.9, 29. , 29.1, 29.2, 29.3, 29.4, 29.5, 29.6
df2:
0 12.993
1 12.054
2 21.957
3 10.917
4 33.890
5 10.597
6 22.911
7 7.431
8 10.437
9 19.165
10 12.169
11 14.847
12 10.093
13 10.795
14 14.419
15 27.199
16 15.045
17 12.764
18 7.766
19 18.066
20 10.254
21 16.922
22 7.011
23 10.322
24 11.619
25 25.719
26 18.142
27 14.557
28 26.367
29 13.443
30 17.318
31 10.971
32 6.073
33 20.050
34 11.863
35 25.619
36 18.326
37 30.830
38 13.130
39 11.734
40 14.457
41 22.659
42 16.479
43 17.845
44 23.712
45 16.670
46 10.322
47 16.250
48 20.920
49 17.479
50 15.526
51 15.732
52 19.836
53 10.513
54 24.818
55 10.933
56 14.785
57 25.253
58 15.732
59 14.290
60 23.979
61 24.788
62 12.420
63 21.324
64 9.658
65 24.307
66 17.601
67 12.352
68 18.089
69 23.353
70 12.718
71 18.707
72 9.147
73 17.494
74 8.743
75 22.407
76 16.227
77 15.396
78 16.807
79 26.733
80 14.084
81 19.516
82 15.106
83 21.187
84 13.008
85 13.618
86 16.266
87 19.706
88 6.591
89 14.999
90 16.449
91 18.883
92 15.243
93 15.976
94 18.242
95 16.662
96 6.691
97 16.952
98 25.940
99 23.018
100 29.365
101 14.564
102 15.625
103 9.727
104 7.652
105 12.726
106 7.263
107 19.943
108 17.540
109 7.469
110 10.360
111 17.898
112 20.393
113 7.011
114 15.999
115 12.985
116 16.624
117 18.753
118 12.520
119 13.488
120 17.959
121 16.433
122 14.518
123 12.909
124 19.752
125 9.277
126 25.566
127 19.272
128 10.360
129 22.148
130 20.294
131 18.402
132 17.631
133 17.341
134 13.672
135 19.600
136 20.653
137 15.999
138 15.480
139 30.655
140 15.426
141 16.067
142 29.838
143 13.099
144 12.184
145 15.693
146 26.031
147 16.052
148 8.087
149 16.754
150 17.029
151 16.601
152 9.956
153 20.363
154 11.215
155 15.106
156 13.809
157 23.178
158 21.484
159 13.359
160 31.860
161 14.564
162 19.737
163 19.424
164 29.556
165 15.678
166 22.148
167 28.389
168 21.309
169 22.262
170 11.314
171 8.018
172 24.551
173 14.740
174 15.716
175 24.269
176 20.042
177 15.968
178 11.337
179 27.618
180 22.522
181 19.066
182 9.323
183 20.622
184 13.092
185 15.464
186 21.171
187 11.604
188 19.050
189 15.823
190 33.859
191 15.106
192 13.549
193 17.296
194 13.740
195 12.054
196 10.955
197 21.164
198 14.427
199 9.719
200 12.176
201 9.742
202 21.278
203 20.515
204 18.265
205 9.666
206 13.870
207 15.968
208 13.313
209 16.517
210 18.417
211 15.419
212 20.523
213 15.655
214 26.977
215 13.084
216 31.349
217 29.854
218 13.008
219 11.306
220 22.384
221 20.798
222 17.433
223 12.916
224 11.284
225 20.248
226 9.803
227 10.376
228 9.315
229 14.976
230 16.327
231 9.590
232 16.830
233 23.979
234 11.558
235 13.183
236 18.776
237 20.416
238 9.163
239 10.345
240 28.252
241 22.888
242 20.538
243 6.912
244 24.040
245 8.682
246 31.929
247 14.908
248 19.195
249 17.112
250 18.379
251 15.869
252 13.794
253 14.129
254 12.458
255 10.795
256 25.291
257 26.382
258 20.881
Try this. It will add a column called check to df1. The column will contain the count of the values in df2 that are <= each value in df1.
df1['check'] = df1[0].apply(lambda x: df2[df2[0] <= x].size)
You may have to replace [0] with the actual name of the first column in your dataframes.
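If that is still too slow (the apply rescans df2 once per row of df1), a vectorized sketch with numpy.searchsorted sorts df2 once and binary-searches all of df1 in a single call; the [0] column labels are the same assumption as above:
import numpy as np
# side='right' counts how many sorted values are <= each df1 value.
y = np.sort(df2[0].to_numpy())
df1['check'] = np.searchsorted(y, df1[0].to_numpy(), side='right')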

The average Jitter calculated in Iperf

In RFC 1889 the delay jitter is given by
J = J + (|(Rj - Sj) - (Ri - Si)| - J) / 16
where Si is the timestamp carried in packet i and Ri is its time of arrival.
I use iperf to measure delay jitters between two network nodes. I got
[ 3] 0.0- 1.0 sec 18.7 KBytes 153 Kbits/sec 8.848 ms 0/ 13 (0%)
[ 3] 1.0- 2.0 sec 15.8 KBytes 129 Kbits/sec 8.736 ms 0/ 11 (0%)
[ 3] 2.0- 3.0 sec 15.8 KBytes 129 Kbits/sec 17.437 ms 0/ 11 (0%)
[ 3] 3.0- 4.0 sec 15.8 KBytes 129 Kbits/sec 15.929 ms 0/ 11 (0%)
[ 3] 4.0- 5.0 sec 12.9 KBytes 106 Kbits/sec 11.942 ms 0/ 9 (0%)
[ 3] 0.0- 5.0 sec 80.4 KBytes 131 Kbits/sec 18.451 ms 0/ 56 (0%)
I want to track the jitter every second and compute statistics on it, but I found I cannot derive the average jitter reported at the end of the iperf run from the per-second values.
What is the relationship between the final average and the every-second jitter records?
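The per-second figures are snapshots of one running estimator, not independent averages. A minimal sketch of the RFC 1889/3550 estimator (illustrative only, not iperf's actual source; transit stands for the list of R - S transit times):
# Illustrative sketch of the RFC 1889/3550 interarrival jitter estimator.
def jitter_series(transit):
    j = 0.0
    series = []
    for prev, cur in zip(transit, transit[1:]):
        d = abs(cur - prev)   # |(Rj - Sj) - (Ri - Si)|
        j += (d - j) / 16.0   # exponentially weighted smoothing, gain 1/16
        series.append(j)
    return series             # each interval prints the current value of j
Because J is an exponentially weighted moving average over the whole stream, the figure on the final summary line is just the estimator's state when the test ends; it is not the arithmetic mean of the per-second records, so the two cannot be converted into each other exactly.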

Using the ash shell "for x in y"

First, bash is not installed on the system I am using, so no bash-based answers please. Ash does not do if-tests with regexes.
In an ash shell script I have a list of acceptable responses:
# NB: IFS = the default IFS = <space><tab><newline>
# 802.11 channel names
channels="1 2 3 4 5 5u 6 7 7u 8 8u 9 9u 10 10u 11 11u 2l 36 3l 40 40u 44 48
48u 4l 52 56 56u 5l 60 61 64 64u 7l 100 104 104u 108 112
112u 116 132 136 136u 140 144 144u 149 153 153u 157 161 161u 165 36l
44l 52l 60l 100l 108l 132l 140l 149l 157l 36/80 40/80 44/80 48/80 52/80 56/80 60/80
64/80 100/80 104/80 108/80 112/80 132/80 136/80 140/80 144/80 149/80 153/80 157/80 161/80"
A menu routine has returned MENU_response, containing a possibly matching response.
I want to see whether I got back a valid response.
for t in "$channels"; do
    echo "B MENU_response = \"${MENU_response}\" test = \"${t}\""
    if [ "${MENU_response}" = "${t}" ]; then
        break
    fi
done
The echo in the loop is reporting that $t equals all of $channels, which makes no sense. I have used this technique in several other places and it works fine.
Can someone tell me why this is happening? Do I need to wrap quotes around each individual channel?
Removing the quotes around "$channels" works for me:
$ channels="1 2 3 4 5 5u 6 7 7u 8 8u 9 9u 10 10u 11 11u 2l 36 3l 40 40u 44 48
> 48u 4l 52 56 56u 5l 60 61 64 64u 7l 100 104 104u 108 112
> 112u 116 132 136 136u 140 144 144u 149 153 153u 157 161 161u 165 36l
> 44l 52l 60l 100l 108l 132l 140l 149l 157l 36/80 40/80 44/80 48/80 52/80 56/80 60/80
> 64/80 100/80 104/80 108/80 112/80 132/80 136/80 140/80 144/80 149/80 153/80 157/80 161/80"
$ for t in $channels; do echo $t; done
1
2
3
4
5
# etc.
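As an aside, the membership test can be done without a loop using a case pattern, which ash supports; a sketch (tr flattens the embedded newlines so every channel is space-delimited):
# Collapse newlines/tabs to spaces, then match MENU_response as a whole word.
flat=$(printf '%s' "$channels" | tr '\n\t' '  ')
case " $flat " in
    *" $MENU_response "*) echo "valid channel" ;;
    *) echo "invalid channel" ;;
esac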
