Interpret ntpd configuration used on Raspberry Pi - Linux

I am still very new to the ntpd program, so my question is relatively general. I understand that the program slowly adjusts the device's clock rate and syncs to server time through polling. In the following result, the IP address with a * in front is the current time source, whereas a + marks a candidate (backup) source. The backup has a better offset than the current source. My question is: the configuration file lists 4 servers, so which one is the program actually syncing with?
server 0.us.pool.ntp.org iburst
server 1.us.pool.ntp.org iburst
server 2.us.pool.ntp.org iburst
server 3.us.pool.ntp.org iburst
remote refid st t when poll reach delay offset jitter
==============================================================================
+173.255.215.209 80.72.67.48 3 u 60 64 377 30.713 -0.762 2.808
-64.113.44.55 129.6.15.29 2 u 142 64 274 75.356 3.479 0.970
*144.172.126.201 129.7.1.66 2 u 62 64 375 47.863 2.927 0.418
+108.61.56.35 216.218.254.202 2 u 6 64 377 83.564 1.172 1.423

After some digging: when you run ntpq -pn, the peer the device is currently synchronized to is the one marked with a * (the st column shows that server's stratum).
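If you need the selected server in a script, something like this prints just its address (a small sketch using ntpq and awk):

# print the address of the peer ntpd has currently selected (the line marked '*')
ntpq -pn | awk '/^\*/ { sub(/^\*/, "", $1); print $1 }'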

Related

Run script when chrony steps clock

I need to start a certain service after the system clock has been correctly stepped by chrony.
System time is maintained by chrony (chronyd (chrony) version 3.5 (+CMDMON +NTP +REFCLOCK +RTC -PRIVDROP -SCFILTER -SIGND +ASYNCDNS -SECHASH +IPV6 -DEBUG)).
Chrony setup, if relevant, is:
server 192.168.100.1 trust minpoll 2 maxpoll 4 polltarget 30
refclock PPS /dev/pps0 refid KPPS trust lock GNSS maxdispersion 3 poll 2
refclock SOCK /var/run/chrony.sock refid GNSS maxdispersion 0.2 noselect
makestep 0.1 -1
driftfile /var/lib/chrony/drift
rtcsync
An example of a normal, tracking status is:
/ # chronyc tracking
Reference ID : C0A86401 (192.168.100.1)
Stratum : 2
Ref time (UTC) : Wed Dec 01 11:52:08 2021
System time : 0.000004254 seconds fast of NTP time
Last offset : +0.000000371 seconds
RMS offset : 0.000011254 seconds
Frequency : 17.761 ppm fast
Residual freq : +0.001 ppm
Skew : 0.185 ppm
Root delay : 0.000536977 seconds
Root dispersion : 0.000051758 seconds
Update interval : 16.2 seconds
Leap status : Normal
while the "unsynchronized" (initial) status is:
/ # chronyc tracking
Reference ID : 00000000 ()
Stratum : 0
Ref time (UTC) : Thu Jan 01 00:00:00 1970
System time : 0.000000000 seconds fast of NTP time
Last offset : +0.000000000 seconds
RMS offset : 0.000000000 seconds
Frequency : 0.000 ppm slow
Residual freq : +0.000 ppm
Skew : 0.000 ppm
Root delay : 1.000000000 seconds
Root dispersion : 1.000000000 seconds
Update interval : 0.0 seconds
Leap status : Not synchronised
I seem to remember chrony can call a script whenever the stratum level changes, but I was unable to find references.
In any case:
Is there any way to instruct chrony to run a script/program or otherwise send some signal whenever it acquires/loses tracking with a valid server?
I am currently relying on a rather ugly while chronyc tracking | grep -q "Not synchronised"; do sleep 1; done, but proactive signalling by chronyd would be preferred.
Details:
System is a (relatively) small IoT device running Linux (Yocto)
It has no RTC (it always starts with clock set to Epoch).
System has no connection to the Internet (initially).
System has connection to a device with a GNSS receiver, and the correct time is derived from there.
There may be a (sometimes 'very') long time before GNSS acquires a fix and thus can propagate time.
At a certain point chrony finally gets the right time and steps the system clock. After this is done I need to start a service (or run a script or whatever).
I am currently polling chronyc tracking and parsing status, but that is not really nice.
I was looking to do the same and came up empty-handed.
I did, however, find chronyc waitsync, which appears to be a built-in way to do the polling, without the need to parse and sleep explicitly. This works well enough for my case, since I only need to delay a single start-up action.
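Concretely, the polling loop collapses to something like this (a sketch; the argument values and the script path are illustrative):

# wait with no limit on tries (0) until chronyd reports it is synchronised
# and the remaining correction is under 0.1 s, then run the start-up action
chronyc waitsync 0 0.1 && /path/to/start-my-service.sh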
The existence of this command also hints (albeit by no means proves) that direct triggering may not be supported. If triggering is a hard requirement, rsyslogd can help.
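For instance, a sketch of an rsyslog rule: it assumes chronyd logs its usual "Selected source ..." message to syslog (which it does by default) and that the omprog output module is available; /usr/local/bin/on-sync.sh is a hypothetical script that receives the matching log lines on stdin.

# /etc/rsyslog.d/chrony-sync.conf
module(load="omprog")
if $programname == "chronyd" and $msg contains "Selected source" then {
    # on-sync.sh is invoked and fed each matching log line on stdin
    action(type="omprog" binary="/usr/local/bin/on-sync.sh")
}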
BTW, one can only admire the enthusiasm of systemd fans, spreading the love even when their purported answer is obviously and completely irrelevant.
Clearly, the target system does NOT use systemd. The question is about chronyd, not about systemd-timesyncd, while systemd-time-wait-sync.service applies only to the latter.
It was suggested to investigate systemd-time-wait-sync.service here. The suggested technique is to use a systemd service unit that waits for systemd-time-wait-sync.service to synchronize the kernel clock, using an After= dependency in the service unit file or a pipe. These techniques are described here and here.

What is the minimum packet latency that can be achieved with classic Bluetooth (2.1) devices?

I'm using an RN42 (http://www.microchip.com/wwwproducts/en/RN42) Bluetooth module at 115200 baud (UART SSP mode) to send very small (1-20 byte) serial data messages between a computer and an ATmega328 MCU. I find that the latency for one message is somewhere around 60-100 ms. I need 10 ms or less in my application. I'm wondering if this is even possible with Bluetooth 2.1 devices.
I know that theoretically Bluetooth packets can be sent every 2 × 625 µs from one end, since 625 µs is the frequency-hopping interval. Also, one packet will always have a minimum of 126 + payload bits. If we are sending 10 bytes (80 bits), the minimum latency based on the 115200 baud rate should be (126+80)/115200×1000 + 2×0.625 ≈ 3 ms. However, when I measure the latency with test code, the minimum latency never goes below 60 ms. This suggests that neither the baud rate nor the frequency hopping is the dominant cause of latency, and for some reason the packets are not being sent at the maximum rate.
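For reference, a minimal round-trip test of the kind I mean might look like this (a sketch; it assumes the module is bound to /dev/rfcomm0 and the MCU firmware echoes each byte straight back):

# configure the RFCOMM serial port for raw 115200-baud I/O
stty -F /dev/rfcomm0 115200 raw -echo
start=$(date +%s%N)                     # nanoseconds since the epoch (GNU date)
printf 'x' > /dev/rfcomm0               # send one byte
head -c 1 < /dev/rfcomm0 > /dev/null    # block until the echo comes back
end=$(date +%s%N)
echo "round trip: $(( (end - start) / 1000000 )) ms"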
Does someone know if < 60 ms latency is technically possible with this setup and if so, how to achieve it?

How to make server respond faster on First Byte?

My website has seen ever-decreasing traffic, so I've been working to increase speed and usability. On WebPageTest.org I've worked most of my grades up, but First Byte is still horrible.
F First Byte Time
A Keep-alive Enabled
A Compress Transfer
A Compress Images
A Progressive JPEGs
B Cache static
First Byte Time (back-end processing): 0/100
1081 ms First Byte Time
90 ms Target First Byte Time
I use the Rackspace Cloud Server system,
CentOS 6.4, 2 GB of RAM, 80 GB hard drive,
Next Generation Server
Linux 2.6.32-358.18.1.el6.x86_64
Apache/2.2.15 (CentOS)
MySQL 5.1.69
PHP: 5.3.3 / Zend: 2.3.0
Website system: TomatoCart shopping cart.
Any help would be much appreciated.
Traceroute #1 to 198.61.171.121
Hop Time (ms) IP Address FQDN
0.855 - 199.193.244.67
0.405 - 184.105.250.41 - gige-g2-14.core1.mci3.he.net
15.321 - 184.105.222.117 - 10gigabitethernet1-4.core1.chi1.he.net
12.737 - 206.223.119.14 - bbr1.ord1.rackspace.NET
14.198 - 184.106.126.144 - corea.ord1.rackspace.net
14.597 - 50.56.6.129 - corea-core5.ord1.rackspace.net
13.915 - 50.56.6.111 - core5-aggr1501a-1.ord1.rackspace.net
16.538 - 198.61.171.121 - mail.aboveallhousplans.com
Following JXH's advice, I did a packet capture and analyzed it using Wireshark.
During a hit-and-leave visit to the site I got 6 lines of bad TCP at about lines 28-33 of the capture, warning that I have TCP Retransmission and TCP Dup ACK... 2 of each of these warnings, 3 times.
Under the expanded panel for a Retransmission, the TCP analysis flags show "retransmission suspected", severity level Note, and an RTO of 1.19 seconds.
Under the expanded panel for a TCP Dup ACK, they show "Duplicate ACK", severity level Note, and an RTT of 0.09 seconds.
This is all gibberish to me...
I don't know if this is wise to do or not, but I've uploaded my packet capture dump file, in case anyone cares to take a look at my flags and let me know what they think.
I wonder if the retransmission warnings are saying that the HTTP response is sending duplicate information? I have a few things in twice that seem a little redundant, like the user-agent Vary header being duplicated:
# Set header information for proxies
Header append Vary User-Agent
# Set header information for proxies
Header append Vary User-Agent
The server's retransmissions and dup ACKs were fixed a few days ago, but the lag in the initial server response remains.
http://www.aboveallhouseplans.com/images/firstbyte001.jpg
http://www.aboveallhouseplans.com/images/firstbyte002.jpg
First byte of 600ms remains...
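For ongoing measurement, curl can report the first-byte time directly from the command line, which helps separate back-end processing from page weight (the URL is the one from the question):

# time_starttransfer is the time to first byte, including back-end processing
curl -o /dev/null -s -w "DNS %{time_namelookup}s  connect %{time_connect}s  first byte %{time_starttransfer}s\n" http://www.aboveallhouseplans.com/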

How to automate measuring of bandwidth usage between two hosts

I have an application with a TCP client and a server, set up on separate machines. Now I want to measure how much bandwidth is consumed (bytes sent and received during a single run of the application). I have discovered that Wireshark is one tool that can give me this statistic; however, Wireshark seems to be GUI-dependent. What I want is a way to automate the measuring and reporting of this statistic. I don't care about the per-packet information Wireshark captures; I don't need it. Is there some way to run Wireshark so that all it does is write to a file the total bytes sent and received between two hosts while the application is running on both ends?
Also, is there a better way to capture this statistic? Through netstat, /proc/net/dev, or some other tool?
Both of my machines run Ubuntu 10.04 or later.
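Wireshark's command-line companion, tshark, can do exactly this without the GUI. A sketch (the interface name, peer address, and capture duration are placeholders for your setup):

# run during one application run; -q suppresses per-packet output,
# -z conv,tcp prints per-conversation byte totals when the capture stops
tshark -i eth0 -f "host 192.168.1.2" -a duration:60 -q -z conv,tcp > usage.txt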
Bro is an appropriate tool to measure connection-oriented statistics. You can either record a trace of your application communication or analyze it in realtime:
bro -r <trace>
bro -i <interface>
Thereafter, have a look at the connection log (conn.log) in the same directory for the number of bytes sent and received by the application. Specifically, you're interested in the TCP payload size, which conn.log exposes via the columns orig_bytes and resp_bytes. Here is an example:
bro-cut id.orig_h id.resp_h conn_state orig_bytes resp_bytes < conn.log | head
which yields the following output:
192.168.1.102 192.168.1.1 SF 301 300
192.168.1.103 192.168.1.255 S0 350 0
192.168.1.102 192.168.1.255 S0 350 0
192.168.1.103 192.168.1.255 S0 560 0
192.168.1.102 192.168.1.255 S0 348 0
192.168.1.104 192.168.1.255 S0 350 0
192.168.1.104 192.168.1.255 S0 549 0
192.168.1.103 192.168.1.1 SF 303 300
192.168.1.102 192.168.1.255 S0 - -
192.168.1.104 192.168.1.1 SF 311 300
Each row represents a single connection, with transport-layer ports omitted. The last two columns are the bytes sent by the originator and by the responder, respectively. The conn_state column represents the connection status. Please refer to the documentation for all possible field values. Some important values are:
S0: Connection attempt seen, no reply.
S1: Connection established, not terminated.
SF: Normal establishment and termination. Note that this is the same symbol as for state S1. You can tell the two apart because for S1 there will not be any byte counts in the summary, while for SF there will be.
REJ: Connection attempt rejected.

Traffic shaping with tc is inaccurate with high bandwidth and delay

I'm using tc with kernel 2.6.38.8 for traffic shaping. Limiting bandwidth works, and adding delay works, but when shaping both bandwidth and delay together, the achieved bandwidth is always much lower than the limit if the limit is above roughly 1.5 Mbps.
Example:
tc qdisc del dev usb0 root
tc qdisc add dev usb0 root handle 1: tbf rate 2Mbit burst 100kb latency 300ms
tc qdisc add dev usb0 parent 1:1 handle 10: netem limit 2000 delay 200ms
Yields a delay (from ping) of 201 ms, but a capacity of just 1.66 Mbps (from iperf). If I eliminate the delay, the bandwidth is precisely 2 Mbps. If I specify a bandwidth of 1 Mbps and 200 ms RTT, everything works. I've also tried ipfw + dummynet, which yields similar results.
I've tried rebuilding the kernel with HZ=1000 in Kconfig -- that didn't fix the problem. Other ideas?
It's actually not a problem; it behaves just as it should. Because you've added 200 ms of latency, the full 2 Mbps pipe isn't used to its full potential. I would suggest you study the TCP/IP protocol in more detail, but here is a short summary of what is happening with iperf: your default window size is maybe 3 packets (likely 1500 bytes each). You fill your pipe with 3 packets, but now have to wait until you get an acknowledgement back (this is part of the congestion-control mechanism). Since you delay the sending by 200 ms, this takes a while. Then your window doubles and you can send 6 packets, but again you have to wait 200 ms. The window doubles again, and so on; by the time your window is completely open, the default 10-second iperf test is close to over, and your average bandwidth will obviously be smaller.
Think of it like this:
Suppose you set your latency to 1 hour, and your speed to 2 Mbit/s.
2 Mbit/s requires (for example) 50 Kbit/s for TCP ACKs. Because the ACKs take over an hour to reach the source, the source can't continue sending at 2 Mbit/s, because the TCP window is still stuck waiting on the first acknowledgement.
Latency and bandwidth are more related than you think (in TCP, at least; UDP is a different story).
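To make that concrete with the original numbers (my arithmetic, for illustration): the bandwidth-delay product of a 2 Mbit/s link with a 200 ms RTT is 2,000,000 bit/s × 0.2 s = 400,000 bits ≈ 50 kB, so TCP needs roughly a 50 kB window before it can fill the pipe. One way to check is to force a large window in iperf:

# request a socket buffer comfortably above the ~50 kB bandwidth-delay product
iperf -c <server> -w 128k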
