How can I determine the length of time since the last screen refresh on X11? - linux

I'm trying to debug a laggy machine vision camera by writing text timestamps to a terminal window and then observing how long it takes for the camera to 'detect' the screen change. My monitor has a 60 Hz refresh rate, so the screen is updated every ~17 ms. Is there a way for an X11 application to determine at what point within that ~17 ms window the refresh timer currently is?
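For reference, below is a minimal sketch of one possible approach, assuming the driver exposes the GLX_SGI_video_sync extension (this is illustrative only, not something the question relies on): block on the vertical retrace counter and timestamp each retrace, so the offset from the most recent timestamp gives the position inside the current refresh window.

    // Minimal sketch: timestamp vertical retraces using GLX_SGI_video_sync.
    // Assumes the driver exposes this extension and allows a direct context.
    // Build (assumption): g++ vblank_phase.cpp -lGL -lX11
    #include <GL/glx.h>
    #include <X11/Xlib.h>
    #include <cstdio>
    #include <ctime>

    typedef int (*GetVideoSyncSGI)(unsigned int *);
    typedef int (*WaitVideoSyncSGI)(int, int, unsigned int *);

    int main() {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) { std::fprintf(stderr, "cannot open display\n"); return 1; }

        // A current (ideally direct) GLX context is required for the SGI calls.
        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);
        if (!vi) { std::fprintf(stderr, "no GLX visual\n"); return 1; }

        XSetWindowAttributes swa = {};
        swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                       vi->visual, AllocNone);
        Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen), 0, 0, 64, 64, 0,
                                   vi->depth, InputOutput, vi->visual, CWColormap, &swa);
        XMapWindow(dpy, win);

        GLXContext ctx = glXCreateContext(dpy, vi, nullptr, True);
        glXMakeCurrent(dpy, win, ctx);

        GetVideoSyncSGI getSync = (GetVideoSyncSGI)
            glXGetProcAddress((const GLubyte *)"glXGetVideoSyncSGI");
        WaitVideoSyncSGI waitSync = (WaitVideoSyncSGI)
            glXGetProcAddress((const GLubyte *)"glXWaitVideoSyncSGI");
        if (!getSync || !waitSync) {
            std::fprintf(stderr, "GLX_SGI_video_sync not available\n");
            return 1;
        }

        unsigned int count = 0;
        getSync(&count);
        for (int i = 0; i < 120; ++i) {           // ~2 seconds at 60 Hz
            waitSync(2, (count + 1) % 2, &count); // block until the next retrace
            timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            std::printf("retrace %u at %ld.%09ld\n",
                        count, (long)ts.tv_sec, ts.tv_nsec);
            // The phase within the ~17 ms window at any later instant is then
            // (now - this timestamp) modulo the measured refresh period.
        }
        return 0;
    }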
EDIT: After wrestling with the problem for nearly a day, I think the real question I should have asked was how to generate a visual signal fast enough to test the camera images against. My working hypothesis was that the camera was buffering frames before transmitting them, as the video stream seemed to lag behind other synchronised digital events (in this case, output signals to a robotic controller).

'xrefresh' is a tool which can trigger a refresh event on an X server. It does this by painting a global window of a specified color and then removing it, causing the windows underneath to repaint. Even with this, I was still getting very inconsistent results when trying to correlate the captured frames against the monitor output: no matter what I tried, the video stream seemed to lag behind what I expected the monitor state to be. This could mean that either the camera was slow to capture or the monitor was slow to update. Fortunately, I eventually hit upon the idea of using the keyboard LEDs to verify the synchronicity of the camera frames ('xset led' and 'xset -led'). This showed me immediately that my computer monitor was in fact slow to update, rather than the camera lagging behind.
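For anyone wanting to drive the same trick programmatically rather than via 'xset', here is a minimal Xlib sketch (an illustration, not the code used here) that flips all keyboard LEDs and prints a CLOCK_MONOTONIC timestamp at each transition; building with -lX11 is assumed.

    // Toggle all keyboard LEDs, roughly equivalent to `xset led` / `xset -led`,
    // and log a timestamp at each transition for correlating camera frames.
    #include <X11/Xlib.h>
    #include <cstdio>
    #include <ctime>
    #include <unistd.h>

    int main() {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy) { std::fprintf(stderr, "cannot open display\n"); return 1; }

        XKeyboardControl ctrl = {};
        for (int i = 0; i < 20; ++i) {
            ctrl.led_mode = (i % 2 == 0) ? LedModeOn : LedModeOff;
            // With only KBLedMode in the mask (no KBLed), all LEDs change at once.
            XChangeKeyboardControl(dpy, KBLedMode, &ctrl);
            XFlush(dpy);

            timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            std::printf("LEDs %s at %ld.%09ld\n",
                        ctrl.led_mode == LedModeOn ? "on " : "off",
                        (long)ts.tv_sec, ts.tv_nsec);
            usleep(100 * 1000);  // 100 ms between transitions
        }
        XCloseDisplay(dpy);
        return 0;
    }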

Related

How can I briefly display a graphic on the screen without a dialog in AppleScript?

I have two AppleScripts (saved as apps) that make webhook calls in a loop to control the volume of my stereo. Each script displays a dialog that asks for a number of ticks to tick the volume up or down, and then loops to make the webhook call each time.
Background: I wrote a program called pi_bose that runs on my Raspberry Pi to send commands to my Bose Series 12 stereo. It sends codes on the 28 MHz band using a wire as an antenna plugged into one of the GPIO ports. Node-RED receives the webhook calls and runs that script. But there are various things that can make it fail. The antenna can be loose because the Pi has been bumped. Node-RED isn't running. The program has a small memory leak that causes a problem after it has been used for about 6 months. And sometimes there's background interference that means not every transmission works (I could probably use a longer antenna to address that, I guess). But sometimes, whatever is playing on the stereo is just so soft that it's hard to detect the subtle change in volume. And sometimes it seems that the webhook calls happen slowly and the volume does change, just over the course of 20-30 seconds. So...
I know I could do the loop on the Pi itself instead of repeating the webhook call, but I would like to see progress on the Mac itself.
I'd like some sort of cue that gives me some feedback to let me know each time the webhook call happens. Like, a red dot on the AppleScript app icon or something in the corner of the screen that appears for a fraction of a second each time the webhook call is made.
Alternatively, I could make the script play some sort of sound, but I would rather not audibly disrupt whatever is playing at the time.
Does anyone know how to do that? Is it even possible to display an icon without a dialog window in AppleScript?

Kinect DK camera not showing color image

The situation is very weird: when I got it over a year ago, it was fine. I have since updated the SDK, updated Windows 10, and updated the firmware; all are the latest.
The behavior in the official "Azure Kinect Viewer" is the following:
1. The device is shown and opens fine.
2. If I hit Start on the defaults, I get a "camera failed: timed out" error after a few seconds, and all other sensors (depth, IR) show nothing. I think they are not the issue here (see 3).
3. If I de-select the color camera (after stop and start), the depth and IR are shown, getting 30 fps as expected.
So my problem is with the color camera connection. Since everything runs on the same wires, I'm assuming the device is fine and that the cable works.
I have tried the color camera with and without the depth, with every combination of frame rate and resolution; none work at all, the camera fails. In some cases I get a cryptic error message in the log, something with libUSB; I get it rarely, so I'm not including the exact output.
When I open the "Camera" app (standard Windows 10), just to see if that app sees the color camera, I do get an image (Eureka!), but the refresh rate is like one frame every 5 seconds or more (!!!!). I've never seen anything like that.
I'm very puzzled by this behavior. I'm also including my Device Manager layout for the device, in case that gives someone a hint; as far as I can see, this should be a supported specification (I tried switching ports, but in most cases the device was either never detected or had the same color image behavior).
Any hints on how to move forward, much appreciated.
Bonus question:
If I solve this issue with the color camera, is there a way to work with the DK camera from a Remote Desktop session? (The microphones do not seem to be detected when using Remote Desktop.) My USB device manager looks fine as far as I can see.
PS: I had also tried "disable streaming LED", and instead of "camera failed" in the DK viewer, I can get a color image, but with a frame rate of about one frame per 4 seconds or more.

Custom player using NDK/C++/MediaCodec - starvation/buffering in decoder

I have a very interesting problem.
I am running a custom movie player based on an NDK/C++/CMake toolchain that opens a streaming URL (MP4, H.264 and stereo audio). In order to restart from a given position, the player opens the stream, buffers frames to some length, then seeks to the new position and starts decoding and playing. This works fine every time except if we power-cycle the device and follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22..26) and hardware (LG G6, G5 and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have played with the decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and then played only video instead of video+audio. This resulted in constant starvation and hence buffering. The behavior appears to vary across Android versions (no firm data here). I do believe that I am running into decoder starvation. Previously I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer; I changed them on the input side to 1000 and 10000, but it does not make much difference.
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON" and -DANDROID_NATIVE_API_LEVEL="android-22", and uses C++_static.
Can anyone share what timeouts they have used with success, or anything else that would help avoid the starvation and resulting buffering?
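For context, the sketch below is a condensed version of the kind of synchronous buffer-processing loop referred to above. It is not the actual player code (which the question does not show), and the non-zero timeout values are purely illustrative placeholders, not recommendations.

    #include <media/NdkMediaCodec.h>
    #include <media/NdkMediaExtractor.h>
    #include <cstdint>

    // Condensed synchronous decode loop; 'extractor' and 'codec' are assumed to
    // be configured and started elsewhere in the player.
    void decodeLoop(AMediaExtractor *extractor, AMediaCodec *codec) {
        const int64_t kInputTimeoutUs  = 1000;   // illustrative only
        const int64_t kOutputTimeoutUs = 10000;  // illustrative only
        bool inputDone = false, outputDone = false;

        while (!outputDone) {
            if (!inputDone) {
                ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, kInputTimeoutUs);
                if (inIdx >= 0) {
                    size_t capacity = 0;
                    uint8_t *buf = AMediaCodec_getInputBuffer(codec, inIdx, &capacity);
                    ssize_t sampleSize = AMediaExtractor_readSampleData(extractor, buf, capacity);
                    if (sampleSize < 0) {
                        inputDone = true;
                        AMediaCodec_queueInputBuffer(codec, inIdx, 0, 0, 0,
                                                     AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM);
                    } else {
                        int64_t pts = AMediaExtractor_getSampleTime(extractor);
                        AMediaCodec_queueInputBuffer(codec, inIdx, 0, sampleSize, pts, 0);
                        AMediaExtractor_advance(extractor);
                    }
                }
            }

            AMediaCodecBufferInfo info;
            ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, kOutputTimeoutUs);
            if (outIdx >= 0) {
                // Pace the release against the presentation timestamp here;
                // this is where the clock choice discussed below matters.
                AMediaCodec_releaseOutputBuffer(codec, outIdx, true /* render */);
                if (info.flags & AMEDIACODEC_BUFFER_FLAG_END_OF_STREAM) outputDone = true;
            } else if (outIdx == AMEDIACODEC_INFO_OUTPUT_FORMAT_CHANGED) {
                // Query AMediaCodec_getOutputFormat(codec) if the new format is needed.
            }
            // AMEDIACODEC_INFO_TRY_AGAIN_LATER simply means the timeout expired.
        }
    }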
This is solved for now. The starvation was not caused on the decoding side; rather, images were being consumed at a faster pace because the clock values returned were not in sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended way, but it was always faster for the first 5-10 minutes after restarting the device. (This device only had a Wi-Fi connection.) Changing the clock id to CLOCK_REALTIME ensures correct presentation of images and no starvation.
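To make the fix concrete, here is a minimal sketch of the kind of pacing check involved; the helper names are hypothetical and only illustrate swapping the clock id passed to clock_gettime.

    #include <ctime>
    #include <cstdint>

    // Returns "now" in microseconds for the given clock id.
    static int64_t now_us(clockid_t clk) {
        timespec ts;
        clock_gettime(clk, &ts);
        return int64_t(ts.tv_sec) * 1000000 + ts.tv_nsec / 1000;
    }

    // Hypothetical pacing check: render a frame only once elapsed wall-clock
    // time has caught up with its presentation timestamp (both in microseconds).
    // The fix described above amounts to using CLOCK_REALTIME here instead of
    // CLOCK_MONOTONIC.
    static bool ready_to_render(int64_t frame_pts_us, int64_t stream_start_us) {
        return now_us(CLOCK_REALTIME) - stream_start_us >= frame_pts_us;
    }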

WebRTC audio distortion -- Intermittent and Repeated

When engaging in a Hangout (or a custom WebRTC application) on a video and audio call, one caller will observe distortion in the affected party's audio. The distortion correlates directly with packetsSentPerSecond, as observed in chrome://webrtc-internals on the affected side.
I've observed this pattern on two separate machines, both Windows 7, (8 to 10) with Chrome 45.0.2454.93 m. The audio interface has varied, with both internal and USB interfaces tested. Once it occurs, it continues to occur more and more frequently. The pattern also seems to reset: in the above figure, the valleys increase in frequency, seemingly reset (~9:13), and then repeat that pattern.
Wondering if anyone has seen a similar problem or has any thoughts on how this can be further diagnosed.

Calculate latency for touch screen UI running on ARM controller board running Linux

I have an embedded board with an ARM controller that runs Linux as its OS and has a touch-based screen. The data for the screen is taken from the framebuffer (/dev/fb0). Is there any way to calculate the response time between two UI screens switching when an option is selected by touch?
There are three latencies involved in the above scenario:
1. Time taken for the touchscreen to register the finger and raise an input event.
Usually a few milliseconds.
Enable FTRACE and log the following with timestamps:
-- ISR
-- Entry of the bottom half
-- Invocation of input_report()
2. Time taken by the app responsible for the GUI to update it.
Depending upon the app/framework, usually the most significant contributor to latency.
Add normal console logs with timestamps in the GUI app's code (see the sketch after this list):
-- upon receiving the input event
-- just before the command to modify the GUI
3. The time taken by the display to update.
Usually within 15-30 milliseconds
The final latency is the sum of the above three latencies.
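As a standalone illustration of the timestamped logging suggested in point 2, the sketch below reads touch events straight from evdev and prints a timestamp for each touch-down. The device path and event codes are assumptions for a typical touchscreen; logging the same kind of timestamp again just before the framebuffer is updated brackets latency 2.

    #include <fcntl.h>
    #include <unistd.h>
    #include <linux/input.h>
    #include <cstdio>
    #include <ctime>

    int main(int argc, char **argv) {
        // Path is an assumption; find the real one via /proc/bus/input/devices.
        const char *dev = (argc > 1) ? argv[1] : "/dev/input/event0";
        int fd = open(dev, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        input_event ev;
        while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
            if (ev.type == EV_KEY && ev.code == BTN_TOUCH && ev.value == 1) {
                timespec now;
                clock_gettime(CLOCK_MONOTONIC, &now);
                // ev.time is stamped by the kernel when the event was queued;
                // print both it and a user-space timestamp taken on receipt.
                std::printf("touch-down: kernel=%ld.%06ld user=%ld.%09ld\n",
                            (long)ev.time.tv_sec, (long)ev.time.tv_usec,
                            (long)now.tv_sec, now.tv_nsec);
            }
        }
        close(fd);
        return 0;
    }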
