Are there metrics for REDHAWK performance - redhawksdr

I have a dual-channel radio with two RX_DIGITIZER_CHANNELIZERs and two DDCs. My waveform allocates both channels; it just takes the data from each channel and outputs it to two DataConverters. I am using the snapshot function to capture data. When I start to collect data at higher rates, some of the packets get dropped. Is there a way to measure how long a call such as pushPacket takes? If I used the logging function, it would produce too much output to be useful for timing.

#michael_sw Can you plot the data coming from the device in the IDE instead of saving it to disk?
How are you monitoring the packet drops?
Do you need to go through the DataConverter? If you do, it is possible to set the blocking flag in the SRI in the downstream REDHAWK device (see Chapter 15 in the manual) to cause back pressure and block until the DataConverter is done consuming the previous data. This only helps if the DataConverter is the one dropping packets.
In the IDE there is a port monitoring mode where you can tell when data is being dropped by a component (right-click on the port and select Port Monitoring).
Another option: in the DataConverter you could modify the code to watch the getPacket call for inputQueueFlushed being true.
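A minimal sketch of that check, assuming a REDHAWK 2.x Python component whose input port is named dataShort_in and whose getPacket() returns a transfer object carrying the inputQueueFlushed flag described above (the port names are illustrative, not taken from the actual DataConverter source):

def process(self):
    packet = self.port_dataShort_in.getPacket()
    if packet.dataBuffer is None:
        return NOOP

    # inputQueueFlushed goes true when the port's input queue overflowed and
    # was flushed, i.e. packets were dropped before this component saw them.
    if packet.inputQueueFlushed:
        self._log.warning("Input queue flushed - packets were dropped")

    # ... normal processing of packet.dataBuffer goes here ...

    # To apply back pressure instead (the SRI blocking flag mentioned above),
    # the upstream side would mark its SRI as blocking before pushing data:
    #   sri.blocking = True
    #   self.port_dataShort_out.pushSRI(sri)
    return NORMAL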

I commonly use timestamping: make a call to one of the system clock functions and either log the time or print it to the console. If you do this in the function that calls pushPacket and again in the pushPacket handler, you simply take the difference. If this produces too much data, you can use a counter and log only every 1000 calls, etc., or collect the measurements in an array for a period of time and log/print them after the component is shut down. Calls to the system clock do not affect performance much compared to CORBA calls.
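A minimal sketch of the counter approach in Python, assuming time.monotonic() as the system clock and an output port named dataFloat_out (both illustrative); only a summary line is printed, once every 1000 calls:

import time

class PushTimer:
    # Accumulates pushPacket timings and prints a summary every N calls.
    def __init__(self, log_every=1000):
        self.log_every = log_every
        self.count = 0
        self.total = 0.0
        self.worst = 0.0

    def record(self, elapsed):
        self.count += 1
        self.total += elapsed
        self.worst = max(self.worst, elapsed)
        if self.count % self.log_every == 0:
            print("pushPacket avg %.6f s, worst %.6f s over %d calls"
                  % (self.total / self.count, self.worst, self.count))

timer = PushTimer()

# Around the call that sends data downstream (port name is hypothetical):
#   start = time.monotonic()
#   self.port_dataFloat_out.pushPacket(data, ts, False, stream_id)
#   timer.record(time.monotonic() - start)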

Related

Pymodbus read register continuously in real time fails

I have an intelligent sensor for measuring robot axis movements, and I would like to read the values using Modbus for every single reading position (it reads values every 100 ms).
I tried using pymodbus:
import time
from pymodbus.client.sync import ModbusSerialClient  # pymodbus 2.x import path

slave = ModbusSerialClient(port='/dev/ttyAMA4', parity='N', baudrate=9600, timeout=1)
slave.connect()
while True:
    print(slave.read_input_registers(300013, 2, unit=10))
    time.sleep(0.01)
The problem is that my script starts and reads the first values, but after 5-6 seconds it exits because of too many requests to the device (the device stops responding).
Is there a method to call a Modbus device and get values in "real time", for example every few milliseconds, without problems due to the high volume of continuous calls?
Many thanks in advance.
Your code reads values about every 10 ms rather than every 100 ms. Typo?
Your purpose is to get values in "real time", but how fast you can poll mostly depends on the sensor. If you can find the specs in the sensor's datasheet, e.g. the minimum polling interval, code accordingly; otherwise, you can keep testing with different interval values until you are satisfied.
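A minimal sketch of a paced polling loop, reusing the serial port, register address, and unit id from the question (the 100 ms interval and the pymodbus 2.x import path are assumptions to be adjusted):

import time
from pymodbus.client.sync import ModbusSerialClient  # moved to pymodbus.client in 3.x

POLL_INTERVAL = 0.1  # seconds; set this from the sensor's minimum polling interval

client = ModbusSerialClient(port='/dev/ttyAMA4', parity='N', baudrate=9600, timeout=1)
client.connect()

next_poll = time.monotonic()
try:
    while True:
        result = client.read_input_registers(300013, 2, unit=10)
        if result.isError():
            print("read failed:", result)  # back off instead of hammering the device
        else:
            print(result.registers)
        # Pace the loop so a request goes out every POLL_INTERVAL seconds,
        # regardless of how long each read took.
        next_poll += POLL_INTERVAL
        time.sleep(max(0.0, next_poll - time.monotonic()))
finally:
    client.close()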

ESP32: BLE transmission speed is very slow

I am trying to build an Android app that interfaces with the ESP32 using BLE. I am using the RxBluetoothKotlin library from Vincent Masselis for the Android side. For the ESP32 side, I am using the default Kolban libraries that are included in the Arduino IDE. My phone is a OnePlus 5T and my ESP32 is a MH ET Live ESP32DevKIT. My Android app can be found here, and my ESP32 program here.
The whole system works pretty much perfectly for me in terms of pure functionality. That is to say, every button does what it's supposed to do, and I get the exact behaviour I had expected to get. However, the communication itself is very slow. Around 200 bytes/second. My test button in the Android app requests a bunch of text data from the ESP32, and displays this in a dialog. It also lists a number which represents the time between request and reception in milliseconds. Using this, I get around 2 seconds for 440 bytes of data. When I send less data, the time decreases approximately linearly with data size. 40 bytes of data will take around 200ms, and 20 bytes or under typically takes less than 100ms.
This seems rather slow to me. From what I understand, I should be able to at least get a few kilobytes per second. I have tried to check the speed using nRF Connect, but I get the same 2 seconds timespan for my data transfer. This suggests that the problem is not in my app, since I also have it with a completely different app. I also put the code in my main loop inside of callbacks instead (which I probably should have done in the first place), but this didn't change things at all. I have tried taking the microcontroller and my phone to a few different locations, hoping to eliminate interference.
I have tried to mess with BLEDevice::setPower and BLEDevice::setMTU, as well as setting RxBluetoothGatt.requestMtu(500) on the Android side. Everything so far seems to have had little to no effect. The only thing that did anything was adding the line "pServer->updatePeerMTU(0,500);" in my loop during the connection phase. This caused the first 23 bytes of data to be repeated whenever I pressed the test button in my app, and made the data transfer take about 3 seconds. If I'm lucky, I can get maybe a bit under 1.8 seconds for 440 bytes, but this is a very small change when I'm expecting an order of magnitude of difference, and might even be down to pure chance rather than anything I did.
Does anyone have an idea of how to increase my transfer speed?
The data transmission speed is mainly influenced by the Bluetooth LE connection interval (between 7.5 ms and 4 seconds) and is negotiated between the master (central unit) and the peripheral device. The master establishes a connection with a parameter set and the peripheral can propose to change this parameter set. In the end, however, the central unit decides which parameter set is to be used.
But the Bluetooth connection interval cannot be changed directly by an Android application, which normally acts in the central role. Instead, the application can request a connection priority, which is known to have an influence on the connection interval.
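A rough back-of-the-envelope sketch of why roughly 200 bytes/second is plausible with default settings, assuming the default ATT_MTU of 23 bytes (20 bytes of payload per notification) and one data packet per connection event; the interval values are illustrative:

# Rough BLE throughput estimate: payload per connection event / connection interval.
ATT_PAYLOAD = 23 - 3  # default ATT_MTU minus the 3-byte ATT header

for interval_ms in (7.5, 30.0, 50.0, 100.0):
    throughput = ATT_PAYLOAD / (interval_ms / 1000.0)
    print("interval %6.1f ms -> about %5.0f bytes/s" % (interval_ms, throughput))

# Around a 100 ms interval this lands near the observed 200 bytes/s; a shorter
# interval (requested via connection priority) and/or a larger negotiated MTU
# raises the ceiling.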

set network interface counters (rx, tx, packets, bytes)

Is it possible to SET the interface statistics in Linux after the interface has been brought up? I'm dealing with rrdtool (MRTG), which gets upset by a daily ifdown and ifup that brings the interface counters back to zero. Ideally I would like to continue counting from where I left off, and setting the interface values to what they were before the interface went down seems to be the easiest path.
I checked writing to /sys/class/net/ax0/statistics/rx_packets but that gives a Permission Denied error.
netstat, ifup, ifconfig and friends don't seem to support changing these values either.
Anything else I can try?
You can't set the kernel counters, no - but do you really need to?
MRTG will usually graph a rate, based on the difference between samples. So your MRTG/RRD will store packets-per-second values every cycle (usually 5 min, but maybe 1 min). When your device resets the counters, MRTG will see the value apparently go backwards, which will be discounted as out of range, so you get one failed sample. But the next sample will work, and a new rate will be given.
If you're getting a big spike in the MRTG graph at the point of the reset, this will be due to an incorrect 'counter rollover' detection. You can prevent this by either setting the MRTG AbsMax setting (to prevent this high value from being valid) or (better) by using SNMPv2 counters (where a reset is more obvious).
If you set your RRD file to have a large enough heartbeat and XFF, then this one missing sample will be interpolated, and so your graphs (which, remember, show the rate rather than the total) will continue to look fine.
Should you need the total, it can be derived by sum(rate x interval) which is automatically done by the Routers2 frontend for MRTG/RRD.
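A minimal sketch of that rate logic: derive packets-per-second from successive counter samples and discard a sample when the counter appears to go backwards after a reset. The sysfs path is the one from the question; the 300-second step mirrors MRTG's usual 5-minute cycle:

import time

COUNTER = "/sys/class/net/ax0/statistics/rx_packets"
STEP = 300  # seconds, like MRTG's usual 5-minute polling cycle

def read_counter(path=COUNTER):
    with open(path) as f:
        return int(f.read())

prev_value, prev_time = read_counter(), time.time()
while True:
    time.sleep(STEP)
    value, now = read_counter(), time.time()
    if value < prev_value:
        # Counter went backwards (interface reset): skip this sample,
        # just as MRTG discounts an out-of-range delta.
        print("counter reset detected, skipping one sample")
    else:
        print("rx %.1f packets/s" % ((value - prev_value) / (now - prev_time)))
    prev_value, prev_time = value, now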

What do the ALSA timestamping functions return and how do the results relate to each other?

There are several "hi-res" timestamping functions in ALSA:
snd_pcm_status_get_trigger_htstamp
snd_pcm_status_get_audio_htstamp
snd_pcm_status_get_driver_htstamp
snd_pcm_status_get_htstamp
I would like to understand what points in time the returned timestamps represent.
My current understanding is that trigger_htstamp represents the time when stream was started/stopped/paused. snd_pcm_status_get_trigger_htstamp returns a constant value and when I add audio_htstamp to that value the result is very close to the current system time.
audio_htstamp seems to start from zero on my system and it is incremented by a value that is equal to the period size I use. Hence on my system it is a simple frame counter. If I understand ALSA correctly, audio_htstamp can also work in a different, more accurate way, depending on the system's capabilities.
driver_htstamp I guess by the name is a timestamp generated by the audio driver.
Question 1: When is the timestamp driver_htstamp usually generated?
With htstamp I am really unsure where and when it is generated. I have a hunch that it may be related to DMA.
Question 2: Where is htstamp generated?
Question 3: When is htstamp generated?
Question 4: Is the assumption audio_htstamp < htstamp < driver_htstamp generally correct?
It seems to be that way with a little test program I wrote, but I want to verify my assumption.
I cannot find this information in the ALSA documentation.
I just dug through the code for this stuff for my own purposes, so I figured I would share what I found.
The purpose of these timestamps is to allow you to determine subtle differences in the rate of different clocks; most importantly in this case the main system clock that Linux uses for general timekeeping compared with the different clock that determines the rate at which samples move in and out of the sound device. This can be very important for applications that need to keep audio from different hardware devices in sync, since the rates of different physical clocks are never exactly the same.
The technique used is sometimes called "cross-timestamping"; you capture timestamps from the clocks you want to compare as close to simultaneously as possible, and repeat this at regular intervals. There is usually some measurement error introduced, but some relatively simple filtering can get you a good characterization of the difference in the rate at which the clocks count.
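A generic sketch of that idea (not ALSA-specific), using Python's time.clock_gettime on Linux: sample two clocks back-to-back at regular intervals and estimate their relative rate with a simple least-squares fit; CLOCK_MONOTONIC_RAW merely stands in for the audio clock here:

import time

def cross_timestamp():
    # Capture the two clocks as close together as possible to minimize skew.
    sys_t = time.clock_gettime(time.CLOCK_MONOTONIC)
    other_t = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)  # stand-in for an audio clock
    return sys_t, other_t

samples = []
for _ in range(20):
    samples.append(cross_timestamp())
    time.sleep(0.05)

# Least-squares slope of the second clock versus the system clock ~ rate ratio.
xs = [s for s, _ in samples]
ys = [o for _, o in samples]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
print("estimated rate ratio: %.9f" % slope)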
The core PCM driver arranges to take a system clock timestamp as closely as possible to when an audio stream starts, and then it does a cross-timestamp between the system clock and audio clock (which can be measured in different ways) whenever it is asked to check the state of the hardware pointers for the DMA engine that moves samples around.
The default method of measuring the audio clock is via DMA hardware pointer comparison. This isn't terribly precise, but over longer periods of time you can still get a good measure of the rate difference. At the start of snd_pcm_update_hw_ptr0, a system timestamp is captured; this will end up being htstamp. The DMA pointers are then checked, and if it's determined that they've moved since the last check, audio_htstamp is calculated based on the number of frames DMA has copied and the nominal frequency of the audio clock. Then, once all the DMA pointer update is done and right before snd_pcm_update_hw_ptr0 returns, another system timestamp is captured in driver_htstamp. This isn't meant to be used when you're using the DMA hw_ptr method of calculating the audio_htstamp, though.
If you happen to have an audio device using the HDAudio driver, you can use an alternate and much more precise method of measuring the audio clock. It supplies an extra operation callback called get_time_info that is used instead of the default method of capturing the system and audio timestamps. In the HDAudio case, it takes a system timestamp for htstamp as close as possible to when it reads an internal counter driven by the same clock source as the audio clock; this forms the audio_htstamp. Afterwards, the same DMA hw_ptr bookkeeping is done, but the code that translates the pointer movement into time is skipped. The driver_htstamp is still taken right before the routine ends, though; this is "to let apps detect if the reference tstamp read by low-level hardware was provided with a delay", as the comment in the code says. This is because there's no guarantee that the get_time_info callback is going to take a new system timestamp; it may have previously recorded an audio timestamp along with a system timestamp as part of an interrupt handler. In this case, the timestamps you get might not match with the available frames and delay frames counts calculated by hw_ptr bookkeeping, but the driver_htstamp will let you know the closest system time to when those calculations were made.
In any case, the code is designed in both cases to capture htstamp and audio_htstamp as closely together as possible, and for htstamp - trigger_htstamp to represent the amount of system time that passed during the period measured by audio_htstamp of the audio clock. You mostly shouldn't need to use driver_htstamp, but I guess it might be used with the USB Audio driver, as I think it and HDAudio are the only ones that do anything special with these interfaces right now.
The documentation for this, although it doesn't contain all the details you might want to know, is part of the kernel documentation: http://lxr.free-electrons.com/source/Documentation/sound/alsa/timestamping.txt?v=4.9

Calculate latency for touch screen UI running on ARM controller board running Linux

I have an embedded board with an ARM controller that runs Linux as the OS and has a touch-based screen. The data for the screen is taken from the framebuffer (/dev/fb0). Is there any way to calculate the response time of a switch between two UI screens that occurs when an option is selected by touch?
There are 3 latencies involved in the above scenario
1. Time taken for the touchscreen to register the finger and raise an input-event.
Usually a few milliseconds.
Enable FTRACE and log the following with timestamps
-- ISR
-- Entry of Bottom-half
-- Invoking of input_report()
2. Time taken by the app responsible for the GUI to update it.
Depending upon the app/framework, usually the most significant contributor to latency.
Add normal console logs with timestamps in the GUI app's code (see the sketch after this list):
-- upon receiving the input event
-- just before the command to modify the GUI
3. The time taken by the display to update.
Usually within 15-30 milliseconds
The final latency is the sum of the above 3 latencies.
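For item 2, a minimal sketch of that kind of timestamped logging, with hypothetical pick_screen/draw_screen placeholders standing in for the real GUI code (no particular framework is assumed):

import time

def log_ts(msg):
    # Monotonic timestamps so successive log lines can be compared reliably.
    print("%.6f %s" % (time.clock_gettime(time.CLOCK_MONOTONIC), msg))

def pick_screen(event):   # placeholder for the app's real logic
    return "settings_screen"

def draw_screen(screen):  # placeholder for the real framebuffer/toolkit redraw
    time.sleep(0.02)

def handle_touch_event(event):
    log_ts("input event received")   # (a) upon receiving the input event
    screen = pick_screen(event)
    log_ts("about to modify GUI")    # (b) just before the command to modify the GUI
    draw_screen(screen)

handle_touch_event(event=None)

# The difference between (a) and (b) covers latency 2; combine it with the
# FTRACE timestamps from item 1 and the display update time from item 3.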
