Unexpected Qt 5 QTimer duration on ARM Linux

I am working on a Qt console application that runs on an ARM CPU, and I have hit a very strange behavior of QTimer: instead of the planned 100 ms, the timer expires after 1946 ms. Changing the duration does not change the observed behavior (apart from a few milliseconds, e.g. 1958 ms instead of 40 ms).
When the same code is executed on x86_AMD64 (I stubbed the call to a specific HW API function; executing this function outside of the QTimer slot takes less than 3 ms), the timer duration is as expected (+/- 100 ms).
Note: the embedded Qt version is 5.4.1; the PC Qt version is 5.9.5.
I tried different durations, including 0; the timer always expires after roughly the same delay.
I monitored the CPU usage (less than 30%) and the load average (less than 0.15).
I also wrote a small Qt console application which starts several timers of different durations and logs the elapsed times. The results are correct (the elapsed times drift, as "expected" ;), so I think the toolchain and the embedded Qt installation are fine.
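For reference, here is a minimal sketch of that kind of standalone timer test (illustrative; not my exact test code):

// Minimal sketch of a standalone timer test: start a few timers and log how
// late each tick fires relative to a QElapsedTimer (illustrative code).
#include <QCoreApplication>
#include <QElapsedTimer>
#include <QTimer>
#include <QDebug>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QElapsedTimer clock;
    clock.start();

    for (int ms : {0, 40, 100, 500})
    {
        QTimer *timer = new QTimer(&app);
        QObject::connect(timer, &QTimer::timeout, [&clock, ms]() {
            qDebug() << ms << "ms timer fired at" << clock.elapsed() << "ms";
        });
        timer->start(ms);
    }

    QTimer::singleShot(5000, &app, SLOT(quit()));   // stop the test after 5 s
    return app.exec();
}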
I added a QElapsedTimer to my initial code and logged the elapsed time in the slot of the 40 ms QTimer.
On the PC I obtained this trace:
mDebugMessage = ("elapsed time = 42 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=true - time = 46", "elapsed time = 81 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=true - time = 81", "elapsed time = 122 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=true - time = 122", "elapsed time = 162 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 163", "elapsed time = 201 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 201", "elapsed time = 242 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=true - time = 242", "elapsed time = 281 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 281", ...
On ARM, the trace is different: instead of the expected ~40 ms, each interval is about 2 seconds:
mDebugMessage = ("elapsed time = 1958 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 1961", "elapsed time = 3916 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 3919", "elapsed time = 5873 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 5876", "elapsed time = 7830 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 7833", "elapsed time = 9787 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 9790", "elapsed time = 11744 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 11747", "elapsed time = 13700 ms - INPUT_DOOR_LOCKED_SENSOR=false - INPUT_DOOR_UNLOCKED_SENSOR=false - time = 13705", ...
I need your help to understand why my QTimer does not expire as expected, or any clue about what to investigate on the target that may prevent the timer from expiring.
Thank you for your ideas.
Best regards,
EDIT: as requested, the code:
const int CDoorManagement::I_DOOR_LOCKING_DURATION_MS = 40;
const int CDoorManagement::I_DOOR_LOCKING_ALARM_DURATION_MS = 12000;

CDoorManagement::CDoorManagement(CInputOutputManagerPtr ioPtr)
    : QObject(nullptr)
    , mIOManagerPtr(ioPtr)
    , mOperationElapsedTimer()
    , mDoorLockingTimer()
    , mDebugMessages()
{
    connect(&mDoorLockingTimer, SIGNAL(timeout()), this, SLOT(slotDoorLocking()), Qt::UniqueConnection);
}

void CDoorManagement::slotDoorLocking()
{
    const auto elapsedTime = mOperationElapsedTimer.elapsed();
    if (elapsedTime > I_DOOR_LOCKING_ALARM_DURATION_MS)
    {
        mDoorLockingTimer.stop();
        mIOManagerPtr->setActuator(OUTPUT_DOOR_LOCKING_ACTUATOR, false);
        mDebugMessages << QString("elapsed time = %1 ms - INPUT_DOOR_LOCKED_SENSOR=%2 - INPUT_DOOR_UNLOCKED_SENSOR=%3 - time = %4")
                          .arg(elapsedTime)
                          .arg(mIOManagerPtr->getTorInputState(INPUT_DOOR_LOCKED_SENSOR) ? "true" : "false")
                          .arg(mIOManagerPtr->getTorInputState(INPUT_DOOR_UNLOCKED_SENSOR) ? "true" : "false")
                          .arg(mOperationElapsedTimer.elapsed());
        qDebug() << "door locking - mDebugMessage =" << mDebugMessages;
        abort(QSTR_LOCKING_ABORTED);
    }
    if (mIOManagerPtr->getTorInputState(INPUT_DOOR_LOCKED_SENSOR))
    {
        mDoorLockingTimer.stop();
        mIOManagerPtr->setActuator(OUTPUT_DOOR_LOCKING_ACTUATOR, false);
        syslog(LOG_INFO, "%s::%s() - locked: elapsedTime = %lld, max time=%d",
               LOG_PREFIX, __FUNCTION__, elapsedTime, I_DOOR_LOCKING_ALARM_DURATION_MS);
        mDebugMessages << QString("elapsed time = %1 ms - INPUT_DOOR_LOCKED_SENSOR=%2 - INPUT_DOOR_UNLOCKED_SENSOR=%3 - time = %4")
                          .arg(elapsedTime)
                          .arg(mIOManagerPtr->getTorInputState(INPUT_DOOR_LOCKED_SENSOR) ? "true" : "false")
                          .arg(mIOManagerPtr->getTorInputState(INPUT_DOOR_UNLOCKED_SENSOR) ? "true" : "false")
                          .arg(mOperationElapsedTimer.elapsed());
        qDebug() << "door locking - mDebugMessage =" << mDebugMessages;
        emit signalDoorLocked();
    }
    else
    {
        mDebugMessages << QString("elapsed time = %1 ms - INPUT_DOOR_LOCKED_SENSOR=%2 - INPUT_DOOR_UNLOCKED_SENSOR=%3 - time = %4")
                          .arg(elapsedTime)
                          .arg(mIOManagerPtr->getTorInputState(INPUT_DOOR_LOCKED_SENSOR) ? "true" : "false")
                          .arg(mIOManagerPtr->getTorInputState(INPUT_DOOR_UNLOCKED_SENSOR) ? "true" : "false")
                          .arg(mOperationElapsedTimer.elapsed());
    }
}

void CDoorManagement::startLocking()
{
    mDebugMessages.clear();
    qDebug() << "start of mDoorLockingTimer using " << I_DOOR_LOCKING_DURATION_MS << " ms delay";
    mOperationElapsedTimer.start();
    mDoorLockingTimer.start(I_DOOR_LOCKING_DURATION_MS);
    if (!mIOManagerPtr->setActuator(OUTPUT_DOOR_LOCKING_ACTUATOR, true))
    {
        mIOManagerPtr->setActuator(OUTPUT_DOOR_LOCKING_ACTUATOR, false);
        syslog(LOG_WARNING, "%s::%s() - failed to activate OUTPUT_DOOR_LOCKING_ACTUATOR", LOG_PREFIX, __FUNCTION__);
        abort(QSTR_LOCKING_ACTIVATION_FAILURE);
    }
}

I found the root cause of the observed behavior: in the slot shown above, I read a digital input, and this reading takes about 3 ms. In another slot, I read two RTD inputs, and those readings take up to 2000 ms. The digital and RTD readings go through the same library, which holds a mutex around every hardware access, whether the access is digital or RTD :(
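One way out is to keep the slow readings off the thread that runs the 40 ms timer. A minimal sketch (CRtdReader and readInputs() are hypothetical names, not the real code), assuming the I/O library tolerates being called from another thread:

// Hypothetical sketch: move the slow, mutex-protected RTD readings to a worker
// thread so the 40 ms door-locking QTimer and the rest of the event loop are
// not blocked for up to 2 s.
#include <QObject>
#include <QThread>
#include <QTimer>

class CRtdReader : public QObject
{
    Q_OBJECT
public slots:
    void readInputs()
    {
        // Slow library calls (up to ~2000 ms behind the shared HW mutex) go here.
    }
};

// Wiring, e.g. in the owning class:
//   auto *thread = new QThread(this);
//   auto *reader = new CRtdReader;                  // no parent, so it can be moved
//   reader->moveToThread(thread);
//   connect(thread, &QThread::finished, reader, &QObject::deleteLater);
//   thread->start();
//
//   auto *rtdTimer = new QTimer(this);              // lives in the main thread...
//   connect(rtdTimer, &QTimer::timeout, reader, &CRtdReader::readInputs);
//   rtdTimer->start(2000);                          // ...but the slot runs in the worker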

Related

Terminal limiting CPU usage while running Lua

I was learning Lua (specifically loops), and I needed to run my code from cmd to be able to use LuaJIT. Doing so, I noticed that the loops were too slow. After that, I recreated the loop in JS in VS Code and, when I ran it there, everything was normal. Then I tried the same code, but run from cmd, and not surprisingly it was also slow. So I think something is limiting the CPU usage while running code in the terminal, but I have no idea what. If someone knows how to fix it, I would be delighted.
All I did was open the terminal and run these commands:
luajit <path-to-the-code>
node <path-to-the-code>
Lua:
vscode: average 106 ms per test | 1.1 sec total
cmd: average 10 secs per test | 100 sec total
Js:
vscode: average 288 ms per test | 3 sec total
cmd: average 10 secs per test | 100 sec total
Lua code:
function test()
    for x = 1, 100000 do
        print(x / 100)
    end
end

totalTime = 0
for x = 1, 10 do
    start = os.clock()
    test()
    totalTime = totalTime + os.clock() - start
end
print(totalTime / 10)
Js code:
function test() {
    for (let x = 1; x < 100000; x++) {
        console.log(x / 100)
    }
}

let totalTime = 0
for (x = 1; x != 10; x++) {
    var start = Date.now()
    test()
    totalTime += Date.now() - start;
}
console.log(totalTime / 10)

How to execute a while loop precisely every 10 seconds in Windows VC++

Please help me run the following loop precisely every 10 seconds in Windows VC++.
Initially it should start at something like, say, 12:12:40:000; it should neglect the milliseconds taken by the work in the commented section and restart the next iteration at 12:12:50:000, and so on, every 10 seconds precisely.
void controlloop()
{
    struct timeb start, end;
    int elapsedtime = 0;
    int sleeptime = 0;

    // wait for the next multiple of 10 seconds
    while (1)
    {
        ftime(&start);
        if (start.time % 10 == 0)
            break;
        else
            Sleep(100);
    }

    while (1)
    {
        ftime(&start);
        if (start.time % 10 == 0)
        {
            // some work here which will roughly take 100 ms
            ftime(&end);
            elapsedtime = (int)(1000.0 * (end.time - start.time) + (end.millitm - start.millitm));
            if (elapsedtime > 10000)
            {
                sleeptime = 0;
            }
            else
            {
                sleeptime = 10000 - elapsedtime;
            }
        }
        Sleep(sleeptime);
    } // 1
}
The Sleep approach only guarantees that you sleep at least 10 seconds. After that, your thread is considered eligible for scheduling, and on the next quantum it will be considered again. You are still subject to the priority of any other threads on the system, the number of logical cores, etc. You are also still subject to the resolution of the scheduling quantum, which is ~15 ms by default. You can change it with timeBeginPeriod, but that has system-wide power implications.
For more information on Windows scheduling see Microsoft Docs. For more on the power issues, see this blog post.
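For completeness, here is what the timeBeginPeriod approach looks like (a sketch; always pair it with timeEndPeriod, and keep the window short because of the system-wide effects mentioned above):

// Sketch: temporarily raise the system timer resolution so Sleep() wakes up
// with ~1 ms granularity instead of ~15 ms. This affects the whole system
// (and power usage), so always pair timeBeginPeriod with timeEndPeriod.
#include <windows.h>
#include <timeapi.h>
#pragma comment(lib, "winmm.lib")

void sleepWithFinerResolution(DWORD ms)
{
    timeBeginPeriod(1);   // request 1 ms timer resolution
    Sleep(ms);            // now accurate to roughly +/- 1 ms
    timeEndPeriod(1);     // restore the previous resolution
}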
For Windows the best option is to use the high-frequency performance counter via QueryPerformanceCounter. You use QueryPerformanceFrequency to convert between cycles and seconds.
LARGE_INTEGER qpcFrequency;
QueryPerformanceFrequency(&qpcFrequency);

LARGE_INTEGER startTime;
QueryPerformanceCounter(&startTime);

LARGE_INTEGER tenSeconds;
tenSeconds.QuadPart = startTime.QuadPart + qpcFrequency.QuadPart * 10;

while (true)
{
    LARGE_INTEGER currentTime;
    QueryPerformanceCounter(&currentTime);
    if (currentTime.QuadPart >= tenSeconds.QuadPart)
        break;
}
The timer resolution for QPC is typically close to the cycle speed of your CPU.
If you want to run a thread for as close to 10 seconds as you can while still yielding the processor use:
LARGE_INTEGER qpcFrequency;
QueryPerformanceFrequency(&qpcFrequency);

LARGE_INTEGER startTime;
QueryPerformanceCounter(&startTime);

LARGE_INTEGER tenSeconds;
tenSeconds.QuadPart = startTime.QuadPart + qpcFrequency.QuadPart * 10;

while (true)
{
    LARGE_INTEGER currentTime;
    QueryPerformanceCounter(&currentTime);
    if (currentTime.QuadPart >= tenSeconds.QuadPart)
    {
        // do a thing
        tenSeconds.QuadPart = currentTime.QuadPart + qpcFrequency.QuadPart * 10;
    }
    SwitchToThread();
}
This is not really the most efficient way to do a periodic timer, but you asked for precision not efficiency.
If you are using VS 2015 or later, you can use the C++11 type high_resolution_clock, which uses QPC for its implementation. Older versions of Visual C++ used 'file system time', which brings you back to the same resolution problem you have with ftime.
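A rough equivalent of the second loop using std::chrono (a sketch, assuming VS 2015 or later; on that toolchain steady_clock, which high_resolution_clock aliases, is implemented on top of QueryPerformanceCounter):

// Sketch: "do a thing every ~10 seconds" using std::chrono instead of raw QPC.
#include <chrono>
#include <thread>

void periodicLoop()
{
    using clock = std::chrono::steady_clock;
    auto next = clock::now() + std::chrono::seconds(10);

    while (true)
    {
        if (clock::now() >= next)
        {
            // do a thing
            next = clock::now() + std::chrono::seconds(10);
        }
        std::this_thread::yield();   // plays the role of SwitchToThread()
    }
}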

Display ICU UDate formatted for different timezones

In the following example, I would like to format EPOCH (1/1/1970) in different time zones. For example, I may wish to format EPOCH using the Los Angeles time zone and/or format EPOCH using the New York timezone.
UErrorCode uErrorCode = U_ZERO_ERROR;
UnicodeString unicodeString;
UDate uDate;
icu::Locale locale = icu::Locale("en");
TimeZone* timeZone = TimeZone::createTimeZone("America/Los_Angeles");
Calendar* calendar = Calendar::createInstance(timeZone, uErrorCode);
// setting calendar to EPOCH, e.g. zero MS from 1/1/1970
calendar->setTime(0, uErrorCode);
// get calendar time as milliseconds (UDate)
uDate = calendar->getTime(uErrorCode);
DateFormat* dateFormat = DateFormat::createDateTimeInstance(
    icu::DateFormat::MEDIUM,  // date style
    icu::DateFormat::SHORT,   // time style
    locale);
unicodeString = dateFormat->format(uDate, unicodeString, uErrorCode);
std::string str;
unicodeString.toUTF8String(str);
std::cout << "Date: " << str;
// Use getOffset to get the stdOffset and dstOffset for the given time
int32_t stdOffset, dstOffset;
timeZone->getOffset(uDate, true, stdOffset, dstOffset, uErrorCode);
std::cout << " | ";
std::cout << "Time zone STD offset: " << stdOffset / (1000 * 60 * 60) << " | ";
std::cout << "Time zone DST offset: " << dstOffset / (1000 * 60 * 60) << std::endl;
The problem I have is that the output is not formatted with respect to the time zone.
Here is the output when using the Los Angeles time zone:
Date: Dec 31, 1969, 6:00 PM | Time zone STD offset: -8 | Time zone DST offset: 0
Here is the output when using the New York time zone:
Date: Dec 31, 1969, 6:00 PM | Time zone STD offset: -5 | Time zone DST offset: 0
Please notice that the date is not EPOCH and secondly notice that the dates and times for both outputs are identical. The offsets are correct, but the date/time display is not.
UPDATE
It is important to note that the displayed date/time is 6 hours behind because I am currently at UTC-6, meaning that you ADD 6 hours to Dec 31, 1969 6:00 PM, which then equals the epoch, Jan 1, 1970 12:00 AM.
ICU is using my PC's time zone automatically, since I have found no way to specify the time zone when formatting the date/time with DateFormat::format(...). If format() accepted a time zone argument to override my PC's local time zone, I would not be having this issue.
You should call dateFormat->setTimeZone(*timeZone) to specify the time zone.
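For example (a sketch based on the code in the question; setTimeZone() copies the zone and replaces the format's default, which is otherwise the machine's local zone):

// Sketch: tell the DateFormat which time zone to format in, instead of
// letting it default to the PC's local zone.
DateFormat* dateFormat = DateFormat::createDateTimeInstance(
    icu::DateFormat::MEDIUM,
    icu::DateFormat::SHORT,
    locale);
dateFormat->setTimeZone(*timeZone);   // e.g. America/Los_Angeles
unicodeString = dateFormat->format(uDate, unicodeString, uErrorCode);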
#earts had it right, because you are formatting based on the scalar time value and not the calendar.
Alternatively, you can format the Calendar object itself, which will use the timezone and time from that calendar:
unicodeString = dateFormat->format(*calendar,
                                   unicodeString,
                                   (FieldPositionIterator*) nullptr, // ignored
                                   uErrorCode);
Note, though, that using the above function, the calendar type had better match that of the dateformat. An easy way to do that is to make sure you pass in the locale parameter when creating the calendar:
// above:
Calendar* calendar = Calendar::createInstance(timeZone, locale, uErrorCode);

Systematic offset on V4L2 frames

I'm grabbing frames from a UVC device using the V4L2 API. I want to measure the exposure time by computing the offset between the timestamp of the frame and the current clock time. This is the code I'm using:
/* Control code snipped */
struct v4l2_buffer buf = {0};
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
ioctl(fd, VIDIOC_DQBUF, &buf);

switch( buf.flags & V4L2_BUF_FLAG_TIMESTAMP_MASK )
{
    case V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC:
    {
        struct timespec uptime = {0};
        clock_gettime(CLOCK_MONOTONIC, &uptime);

        float const secs =
            (buf.timestamp.tv_sec - uptime.tv_sec) +
            (buf.timestamp.tv_usec - uptime.tv_nsec/1000.0f)/1000.0f;

        if( V4L2_BUF_FLAG_TSTAMP_SRC_SOE == (buf.flags & V4L2_BUF_FLAG_TSTAMP_SRC_MASK) )
            printf("%s: frame exposure started %.03f seconds ago\n", __FUNCTION__, -secs);
        else if( V4L2_BUF_FLAG_TSTAMP_SRC_EOF == (buf.flags & V4L2_BUF_FLAG_TSTAMP_SRC_MASK) )
            printf("%s: frame finished capturing %.03f seconds ago\n", __FUNCTION__, -secs);
        else
            printf("%s: unsupported timestamp in frame\n", __FUNCTION__);
        break;
    }
    case V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN:
    case V4L2_BUF_FLAG_TIMESTAMP_COPY:
    default:
        printf("%s: no usable timestamp found in frame\n", __FUNCTION__);
}
Examples of what this returns for an exposure time of 1 second set with VIDIOC_S_CTRL:
read_frame: frame exposure started 28.892 seconds ago
read_frame: frame exposure started 28.944 seconds ago
read_frame: frame exposure started 28.895 seconds ago
read_frame: frame exposure started 29.037 seconds ago
I'm getting that weird 30-second offset between the SRC_SOE timestamp and the monotonic clock, with the 1-second exposure included. The V4L2/UVC timestamp is supposed to be computed from the result of ktime_get_ts(). Any idea what I am doing wrong?
This runs on a Linux 4.4 Gentoo system. The webcam is a DMK21AU04.AS, recognized as a standard UVC device.
the thing is...
1 s = 1000 ms,
1 ms = 1000 us,
1 us = 1000 ns.
so...
it should be like...
float const secs =
    (buf.timestamp.tv_sec - uptime.tv_sec) +
    (buf.timestamp.tv_usec - uptime.tv_nsec/1000.0f)/1000000.0f;
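In other words, the microsecond difference has to be divided by 1 000 000, not 1 000. A small helper that does the subtraction in integer microseconds first and only converts to seconds at the end (a sketch; age_seconds is an illustrative name, not part of the question's code):

/* Sketch: compute how long ago a V4L2 timestamp was taken, doing the
 * subtraction in integer microseconds before converting to seconds. */
static double age_seconds(const struct timeval *stamp, const struct timespec *now)
{
    long long stamp_us = (long long)stamp->tv_sec * 1000000LL + stamp->tv_usec;
    long long now_us   = (long long)now->tv_sec   * 1000000LL + now->tv_nsec / 1000;
    return (now_us - stamp_us) / 1e6;   /* positive when the stamp is in the past */
}

/* Usage with the variables from the question:
 *   printf("%s: frame exposure started %.03f seconds ago\n",
 *          __FUNCTION__, age_seconds(&buf.timestamp, &uptime));
 */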

How to use pthread_cond_timedwait with milliseconds

I am trying to use pthread_cond_timedwait with a millisecond sleep interval, but I am not getting the requested sleep duration: my thread sleeps for longer than I specified. Below is my implementation; let me know if I am doing anything wrong.
struct timeval tp;
struct timespec ts;
int rc = gettimeofday(&tp, NULL);
ts.tv_sec = tp.tv_sec;
ts.tv_nsec = tp.tv_usec * 1000;
ts.tv_nsec += 30 * 1000000; //30 is my milliseconds
pthread_mutex_lock(&mtxPlaybackWait);
pthread_cond_timedwait(&playbackSignal, &mtxPlaybackWait, &ts);
pthread_mutex_unlock(&mtxPlaybackWait);
The timespec's tv_nsec field might exceed one second (it is never normalized), causing a wrong timeout.
Try the following:
ts.tv_sec = tp.tv_sec;
ts.tv_nsec = tp.tv_usec * 1000;
ts.tv_nsec += 30 * 1000000;
ts.tv_sec += ts.tv_nsec / 1000000000L;
ts.tv_nsec = ts.tv_nsec % 1000000000L;
You have an addition of seconds and microseconds on one side, and milliseconds on the other. The result is in seconds and nanoseconds.
If you try to express seconds in nanoseconds, this may overflow quickly: 1 second = 1,000,000,000 nanoseconds, which already takes up ~30 bits. A 32-bit integer can therefore hold only about 4 seconds' worth of nanoseconds if unsigned (about 2 seconds if signed) and will overflow beyond that.
Also, I am not sure if all functions behave correctly under all circumstances when passed a struct where the fractional seconds amount to more than a second. I’d expect widely used standard libraries to have done their homework and normalize first (or otherwise ensure correct behavior), but some quickly assembled niche product might not handle such cases properly.
To prevent both the overflow and strange side effects of anomalies, shave off integer seconds wherever you can and store them in the seconds part rather than in the fractional seconds.
Here is a version of your calculation which avoids both these things:
gettimeofday(&tp, NULL);
/* if msec is 1 s or more, add its integer part to tv_sec */
ts.tv_sec = tp.tv_sec + floor(msec / 1000);
/* for now, these are really µsec, not nsec, to prevent overflow */
ts.tv_nsec = tp.tv_usec + (msec % 1000) * 1000;
/* if tv_nsec is 1s or more, move integer second part to tv_sec */
ts.tv_sec += floor(ts.tv_nsec / 1000000);
ts.tv_nsec %= 1000000;
/* and finally, convert µsec to nsec */
ts.tv_nsec *= 1000;
You might not need floor if you are certain that you are operating on integer types (i.e. for msec and ts.tv_nsec)—in that case, a simple division will do.
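For completeness, here is how such a deadline is typically used: wrapped in a predicate loop, so that spurious wakeups do not end the wait early. This is a sketch; timed_wait_ms and the flag parameter are illustrative, not from the question.

#include <errno.h>
#include <pthread.h>
#include <sys/time.h>

/* Sketch: wait up to `msec` milliseconds on `cond`, re-checking a flag so a
 * spurious wakeup does not end the wait early. The signaling thread must set
 * *flag under the same mutex before calling pthread_cond_signal(). */
static int timed_wait_ms(pthread_cond_t *cond, pthread_mutex_t *mtx,
                         const int *flag, long msec)
{
    struct timeval tp;
    struct timespec ts;
    int rc = 0;

    gettimeofday(&tp, NULL);
    ts.tv_sec  = tp.tv_sec + msec / 1000;
    ts.tv_nsec = tp.tv_usec * 1000L + (msec % 1000) * 1000000L;
    ts.tv_sec += ts.tv_nsec / 1000000000L;   /* carry whole seconds */
    ts.tv_nsec %= 1000000000L;

    pthread_mutex_lock(mtx);
    while (!*flag && rc == 0)
        rc = pthread_cond_timedwait(cond, mtx, &ts);
    pthread_mutex_unlock(mtx);
    return rc;   /* 0 if the flag was set, ETIMEDOUT (or an error code) otherwise */
}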
