Systematic offset on V4L2 frames - linux

I'm grabbing frames from a UVC device using the V4L2 API. I want to measure the exposure time by calculating the offset between the timestamp of the frame and the current clock time. This is the code I'm using:
/* Control code snipped */
struct v4l2_buffer buf = {0};
buf.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
buf.memory = V4L2_MEMORY_MMAP;
ioctl(fd, VIDIOC_DQBUF, &buf);
switch (buf.flags & V4L2_BUF_FLAG_TIMESTAMP_MASK)
{
case V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC:
{
    struct timespec uptime = {0};
    clock_gettime(CLOCK_MONOTONIC, &uptime);
    float const secs =
        (buf.timestamp.tv_sec - uptime.tv_sec) +
        (buf.timestamp.tv_usec - uptime.tv_nsec/1000.0f)/1000.0f;
    if (V4L2_BUF_FLAG_TSTAMP_SRC_SOE == (buf.flags & V4L2_BUF_FLAG_TSTAMP_SRC_MASK))
        printf("%s: frame exposure started %.03f seconds ago\n", __FUNCTION__, -secs);
    else if (V4L2_BUF_FLAG_TSTAMP_SRC_EOF == (buf.flags & V4L2_BUF_FLAG_TSTAMP_SRC_MASK))
        printf("%s: frame finished capturing %.03f seconds ago\n", __FUNCTION__, -secs);
    else
        printf("%s: unsupported timestamp in frame\n", __FUNCTION__);
    break;
}
case V4L2_BUF_FLAG_TIMESTAMP_UNKNOWN:
case V4L2_BUF_FLAG_TIMESTAMP_COPY:
default:
    printf("%s: no usable timestamp found in frame\n", __FUNCTION__);
}
Examples of what this returns for an exposure time of 1 second set with VIDIOC_S_CTRL:
read_frame: frame exposure started 28.892 seconds ago
read_frame: frame exposure started 28.944 seconds ago
read_frame: frame exposure started 28.895 seconds ago
read_frame: frame exposure started 29.037 seconds ago
I'm getting that weird 30-second offset between the SRC_SOE timestamp and the monotonic clock, with the 1-second exposure mixed in. The V4L2/UVC timestamp is supposed to be computed from the result of ktime_get_ts(). Any idea what I am doing wrong?
This runs on a Linux 4.4 Gentoo system. The webcam is a DMK21AU04.AS, recognized as a standard UVC device.

the thing is...
1 s = 1000 ms,
1 ms = 1000 us,
1 us = 1000 ns.
so tv_nsec/1000.0f turns nanoseconds into microseconds, but the outer division then has to take microseconds to seconds, i.e. divide by one million, not one thousand.
it should be like...
float const secs =
    (buf.timestamp.tv_sec - uptime.tv_sec) +
    (buf.timestamp.tv_usec - uptime.tv_nsec/1000.0f)/1000000.0f;
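For reference, here is the same computation wrapped in a helper, using double so the microsecond terms don't lose precision (a minimal sketch; buf is assumed to be a freshly dequeued buffer carrying a MONOTONIC timestamp):

#include <time.h>
#include <linux/videodev2.h>

/* Signed offset in seconds between a buffer's MONOTONIC timestamp
   (a struct timeval) and the current CLOCK_MONOTONIC time.
   Negative means the timestamp lies in the past. */
static double frame_age_secs(const struct v4l2_buffer *buf)
{
    struct timespec now = {0};
    clock_gettime(CLOCK_MONOTONIC, &now);
    /* tv_usec - tv_nsec/1000 is a microsecond difference;
       dividing by 1e6 (not 1e3) turns it into seconds */
    return (double)(buf->timestamp.tv_sec - now.tv_sec) +
           (buf->timestamp.tv_usec - now.tv_nsec / 1000.0) / 1000000.0;
}

With a 1-second exposure and an SOE timestamp, -frame_age_secs(&buf) should then hover around 1.0 rather than 29.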

Related

Infinite loop reading packet data in 1sec loopback recording

I ported the MSDN capture stream example - https://msdn.microsoft.com/en-us/library/windows/desktop/dd370800
and modified it for loopback, exactly as seen here -
https://github.com/slavanap/WaveRec/blob/754b0cfdeec8d1edc59c60d179867ca6088bbfaa/wavetest.cpp
So I am requesting a recording duration of 1 second, and the actual duration verifies that it is 1 second.
However, I am stuck in an infinite loop in the packet-reading code here: packetLength is always 448 (numFramesAvailable is also 448), and I'm not sure why it never becomes 0 as the while loop expects.
https://github.com/slavanap/WaveRec/blob/754b0cfdeec8d1edc59c60d179867ca6088bbfaa/wavetest.cpp#L208-L232
Code is -
while (packetLength != 0)
{
    // Get the available data in the shared buffer.
    hr = pCaptureClient->GetBuffer(
        &pData,
        &numFramesAvailable,
        &flags, NULL, NULL);
    EXIT_ON_ERROR(hr)
    if (flags & AUDCLNT_BUFFERFLAGS_SILENT)
    {
        pData = NULL; // Tell CopyData to write silence.
    }
    // Copy the available capture data to the audio sink.
    // hr = pMySink->CopyData(pData, numFramesAvailable, &bDone);
    // EXIT_ON_ERROR(hr)
    hr = pCaptureClient->ReleaseBuffer(numFramesAvailable);
    EXIT_ON_ERROR(hr)
    hr = pCaptureClient->GetNextPacketSize(&packetLength);
    EXIT_ON_ERROR(hr)
}
My ported code is in ctypes and is here - https://github.com/Noitidart/ostypes_playground/blob/audio-capture/bootstrap.js#L71-L180
I guess there is a hidden bug in the Microsoft example that shows up in your case. One pass through your inner loop takes long enough that new data has already arrived in the incoming audio buffer, so you keep spinning in the inner loop forever, or until you can process the incoming data at a decent speed. (448 frames at a typical 44.1 kHz mix rate is only about 10 ms of audio, so the loop body doesn't have to be slow for this to happen.)
Thus, I suggest adding an && bDone == FALSE condition to the inner loop.
while (bDone == FALSE)
{
    ...
    while (packetLength != 0 && bDone == FALSE)
    {
        ...
    }
}
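For reference, in the MSDN example bDone is flipped inside the sink's CopyData once enough data has been captured, roughly like this (a sketch; framesCopied and framesWanted are placeholder names, not part of the API):

/* Sketch: accumulate captured frames and flip bDone after the
   requested duration (e.g. one second's worth of frames). */
HRESULT CopyData(BYTE *pData, UINT32 numFrames, BOOL *pbDone)
{
    static UINT32 framesCopied = 0;
    framesCopied += numFrames;          /* frames captured so far */
    if (framesCopied >= framesWanted)   /* placeholder frame budget */
        *pbDone = TRUE;                 /* lets both loops exit */
    return S_OK;
}

Since the CopyData call is commented out in your port, nothing ever sets bDone, so you will need to set it yourself once the second has elapsed.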

How to convert the Arduino's digitized output into a frequency

I am using an analog-output sound sensor module. The output of the sensor module is connected to the Arduino, and I can see the Arduino doing the A-to-D conversion and displaying integers in the range 0 to 1023.
But I need to calculate the frequency of the sound being measured by the sensor.
So could you help me: how do I calculate the frequency from these A-to-D converted values from the Arduino?
You don't really need to do ADC conversions, do you? All you need to do is detect a rising edge on the input and then count those. Since your sensor will output low-high-low sequences, and since the Arduino will register HIGH above a certain voltage, that should be adequate.
This code will measure up to around 200 kHz, from an input connected to digital pin 8 on the board:
// Input: Pin D8

volatile boolean first;
volatile boolean triggered;
volatile unsigned long overflowCount;
volatile unsigned long startTime;
volatile unsigned long finishTime;

// timer overflows (every 65536 counts)
ISR (TIMER1_OVF_vect)
{
  overflowCount++;
}  // end of TIMER1_OVF_vect

ISR (TIMER1_CAPT_vect)
{
  // grab counter value before it changes any more
  unsigned int timer1CounterValue;
  timer1CounterValue = ICR1;  // see datasheet, page 117 (accessing 16-bit registers)
  unsigned long overflowCopy = overflowCount;

  // if just missed an overflow
  if ((TIFR1 & bit (TOV1)) && timer1CounterValue < 0x7FFF)
    overflowCopy++;

  // wait until we noticed last one
  if (triggered)
    return;

  if (first)
  {
    startTime = (overflowCopy << 16) + timer1CounterValue;
    first = false;
    return;
  }

  finishTime = (overflowCopy << 16) + timer1CounterValue;
  triggered = true;
  TIMSK1 = 0;  // no more interrupts for now
}  // end of TIMER1_CAPT_vect

void prepareForInterrupts ()
{
  noInterrupts ();  // protected code
  first = true;
  triggered = false;  // re-arm for next time
  // reset Timer 1
  TCCR1A = 0;
  TCCR1B = 0;
  TIFR1 = bit (ICF1) | bit (TOV1);  // clear flags so we don't get a bogus interrupt
  TCNT1 = 0;          // Counter to zero
  overflowCount = 0;  // Therefore no overflows yet
  // Timer 1 - counts clock pulses
  TIMSK1 = bit (TOIE1) | bit (ICIE1);  // interrupt on Timer 1 overflow and input capture
  // start Timer 1, no prescaler
  TCCR1B = bit (CS10) | bit (ICES1);   // plus Input Capture Edge Select (rising on D8)
  interrupts ();
}  // end of prepareForInterrupts

void setup ()
{
  Serial.begin(115200);
  Serial.println("Frequency Counter");
  // set up for interrupts
  prepareForInterrupts ();
}  // end of setup

void loop ()
{
  // wait till we have a reading
  if (!triggered)
    return;

  // period is elapsed time
  unsigned long elapsedTime = finishTime - startTime;
  // frequency is inverse of period, adjusted for clock period
  float freq = F_CPU / float (elapsedTime);  // each tick is 62.5 ns at 16 MHz

  Serial.print ("Took: ");
  Serial.print (elapsedTime);
  Serial.print (" counts. ");
  Serial.print ("Frequency: ");
  Serial.print (freq);
  Serial.println (" Hz. ");

  // so we can read it
  delay (500);
  prepareForInterrupts ();
}  // end of loop
More discussion and information at Timers and counters.
As I suggested in the other thread, the best approach is to:
1. filter
2. amplify
3. apply a threshold
4. measure the time between edges
Steps 1, 2 and 3 can be performed in software, but it is MUCH better to perform them in hardware. The fourth step is what Nick Gammon's solution above is about... But you have to do steps 1, 2 and 3 in hardware first, otherwise you will get a lot of "noisy" readings; a software stopgap is sketched below.
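A rough sketch of that software stopgap, with hysteresis so noise around the threshold doesn't register as extra edges (A0 and both threshold values are placeholders you would tune for your sensor; analogRead itself takes roughly 100 µs, so this only works for low frequencies):

// Software threshold with hysteresis; HIGH_THRESH/LOW_THRESH are made up.
const int HIGH_THRESH = 600;   // go "high" above this ADC value
const int LOW_THRESH  = 420;   // go back "low" below this one

bool state = false;
unsigned long lastEdge = 0;    // micros() at the previous rising edge

void setup ()
{
  Serial.begin (115200);
}

void loop ()
{
  int v = analogRead (A0);
  if (!state && v > HIGH_THRESH)
  {
    // rising edge: period is the time since the previous one
    unsigned long now = micros ();
    if (lastEdge != 0)
      Serial.println (1000000.0 / (now - lastEdge));  // frequency in Hz
    lastEdge = now;
    state = true;
  }
  else if (state && v < LOW_THRESH)
    state = false;
}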

Linux kernel: Why is add_timer() modifying my "expires" value?

I am trying to set up a periodic timer triggering a function every second, but there is a small drift between each call. After some investigation, I found that it is the add_timer() call which adds an offset of 2 to the expires field (~2 ms in my case).
Why is this drift added? Is there a clean way to prevent it? I am not trying to get accurate millisecond precision; I have a vague understanding of the kernel's real-time limitations, but I'd at least like to avoid this intentional delay at each call.
Here is the output from a test module. Each pair of numbers is the value of the expires field just before and after the call:
[100047.127123] Init timer 1000
[100048.127986] Expired timer 99790884 99790886
[100049.129578] Expired timer 99791886 99791888
[100050.131146] Expired timer 99792888 99792890
[100051.132728] Expired timer 99793890 99793892
[100052.134315] Expired timer 99794892 99794894
[100053.135882] Expired timer 99795894 99795896
[100054.137411] Expired timer 99796896 99796898
[...]
[100071.164276] Expired timer 99813930 99813932
[100071.529455] Exit timer
And here is the source:
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/jiffies.h>
#include <linux/time.h>

static struct timer_list t;

static void timer_func(unsigned long data)
{
    unsigned long pre, post;

    t.expires = jiffies + HZ;
    pre = t.expires;
    add_timer(&t);
    post = t.expires;
    printk("Expired timer %lu %lu\n", pre, post);
}

static int __init timer_init(void)
{
    init_timer(&t);
    t.function = timer_func;
    t.expires = jiffies + HZ;
    add_timer(&t);
    printk("Init timer %d\n", HZ);
    return 0;
}

static void __exit timer_exit(void)
{
    del_timer(&t);
    printk("Exit timer\n");
}

module_init(timer_init);
module_exit(timer_exit);
I found the cause. Let's trace the add_timer function:
The add_timer function calls:
mod_timer(timer, timer->expires);
The mod_timer function calls:
expires = apply_slack(timer, expires);
and then goes on to actually modify the timer.
The apply_slack function says:
/*
* Decide where to put the timer while taking the slack into account
*
* Algorithm:
* 1) calculate the maximum (absolute) time
* 2) calculate the highest bit where the expires and new max are different
* 3) use this bit to make a mask
* 4) use the bitmask to round down the maximum time, so that all last
* bits are zeros
*/
Before continuing, let's see what the timer's slack is. The init_timer macro eventually calls do_init_timer, which by default sets the slack to -1 (timer->slack = -1;).
With this knowledge, let's reduce apply_slack and see what remains of it:
static inline
unsigned long apply_slack(struct timer_list *timer, unsigned long expires)
{
    unsigned long expires_limit, mask;
    int bit;

    if (timer->slack >= 0) {
        expires_limit = expires + timer->slack;
    } else {
        long delta = expires - jiffies;

        if (delta < 256)
            return expires;

        expires_limit = expires + delta / 256;
    }
    mask = expires ^ expires_limit;
    if (mask == 0)
        return expires;

    bit = find_last_bit(&mask, BITS_PER_LONG);

    mask = (1 << bit) - 1;

    expires_limit = expires_limit & ~(mask);

    return expires_limit;
}
The first if, checking for timer->slack >= 0, fails, so the else branch is taken. In that branch, the difference between expires and jiffies is about HZ (you just did t.expires = jiffies + HZ). The expires values in your log advance by roughly 1000 per second, so HZ = 1000, delta is about 1000, and delta / 256 is non-zero (about 3).
This in turn implies that mask (which is expires ^ expires_limit) is not zero. The rest really depends on the exact value of expires, but for sure, it gets changed.
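To put numbers on it (assuming HZ = 1000, which matches the ~1000-jiffy steps in your log):

delta         = expires - jiffies        /* about 1000 */
expires_limit = expires + 1000 / 256     /* expires + 3 */
mask          = expires ^ expires_limit  /* non-zero: the low bits differ */

find_last_bit then picks the highest differing bit, and expires_limit is rounded down to that boundary, so the stored expiry lands a couple of jiffies away from what was requested: the +2 in your output.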
So there you have it: since slack is automatically set to -1, the apply_slack function is changing your expires time, to align it with, I guess, the timer ticks.
If you don't want this slack, you can set t.slack = 0; when you are initializing the timer in timer_init.
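That is (a sketch against the same pre-4.8 timer API the module already uses; both init_timer and the slack field were removed from later kernels):

static int __init timer_init(void)
{
    init_timer(&t);
    t.function = timer_func;
    t.slack = 0;              /* opt out of apply_slack() rounding */
    t.expires = jiffies + HZ;
    add_timer(&t);
    printk("Init timer %d\n", HZ);
    return 0;
}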
This is the old answer! It doesn't address the issue in your question, but it does address another problem with what you are trying to achieve: having a periodic function.
Let's visualize your program on a timeline (assuming a start time of 1000 and HZ = 50, with imaginary time units):
time (jiffies)   event
1000             in timer_init(): t.expires = jiffies + HZ;  // t.expires == 1050
1050             timer_func() is called by the timer
1052             in timer_func(): t.expires = jiffies + HZ;  // t.expires == 1102
1102             timer_func() is called by the timer
1104             in timer_func(): t.expires = jiffies + HZ;  // t.expires == 1154
I hope you see where this is going! The problem is that there is a delay between the time the timer expires and the time you calculate when the next expiration should be. That's where the drift comes from. The drift can get even larger, by the way, if the system is busy and your function call is delayed.
The way to fix it is very, very easy. The problem is that you update t.expires from jiffies, which is the current time. What you should do instead is update t.expires from the last time it expired (which is already in t.expires!).
So, in your timer_func function, instead of:
t.expires = jiffies + HZ;
simply do:
t.expires += HZ;
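So the handler ends up as (same module as above, with the one-line change applied):

static void timer_func(unsigned long data)
{
    t.expires += HZ;    /* relative to the previous deadline, not "now" */
    add_timer(&t);
    printk("Expired timer %lu\n", t.expires);
}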

What is the smallest audio buffer needed to produce a tone without distortion with the WaveOut API

Does the WaveOut API have some internal limitation on the size of the buffer currently being played? I mean, if I provide a very small buffer, does it somehow affect the sound sent to the speakers? I am experiencing very strange noise when I generate and play a sine wave with a small buffer. Something like a peak, or a "BUMP".
The complete story:
I made a program that can generate a sine wave signal in real time.
The variable parameters are frequency and volume. The project requirement was a maximum latency of 50 ms, so the program must be able to produce sine signals with a manually adjustable frequency in real time.
I used the Windows WaveOut API, C#, and P/Invoke to access the API.
Everything works fine when the sound buffer is 1000 ms long. If I shrink the buffer to 50 ms, as the latency requirement demands, then for certain frequencies I hear a noise or "BUMP" at the end of every buffer. I cannot tell whether the generated sound is malformed (I checked, and it is not), or something happens with the audio chip, or there is some delay in initializing and playing.
When I save the produced audio to a .wav file, everything is perfect.
This means there must be some bug in my code, or the audio subsystem has a limitation on the size of buffer chunks sent to it.
For those who don't know: WaveOut must be initialized first, and then each buffer must be prepared with an audio header containing the number of bytes to be played and a pointer to the memory holding the audio to be played.
UPDATE
Noise happens with the following combination: 44100 Hz sampling rate, 16 bits, 2 channels, 50 ms buffer, and a generated sine signal of 201 Hz, 202 Hz, 203 Hz, 204 Hz, 205 Hz ... 219 Hz.
220 Hz and 240 Hz are OK.
Why this spacing of 20 Hz, I do not know.
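In fact the 20 Hz spacing is a strong hint: 1 / 50 ms = 20 Hz, so only multiples of 20 Hz complete a whole number of cycles within one buffer. If the generator restarts the sine phase at every buffer boundary (an assumption about the C# code, which isn't shown), every other frequency leaves a phase jump at the seam, audible as exactly this kind of "BUMP":

201 Hz * 0.050 s = 10.05 cycles per buffer  /* discontinuity at the seam */
219 Hz * 0.050 s = 10.95 cycles per buffer  /* discontinuity at the seam */
220 Hz * 0.050 s = 11.00 cycles per buffer  /* clean join */
240 Hz * 0.050 s = 12.00 cycles per buffer  /* clean join */

The cure is to carry the phase over from one buffer to the next instead of recomputing it from zero, which is what the global fAngle accumulator in the program further down does.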
There are a few things to keep in mind when you need to output audio smoothly:
the waveOutXxxx API is a legacy/compatibility layer on top of a lower-level API, and as such it has greater overhead and is not recommended when you need minimal latency. Note that this is unlikely to be your primary problem, but it is a piece of general knowledge helpful for understanding
because Windows is not a real-time OS and its audio subsystem is not real-time either, you have no control over the random latency between when you queue audio data for output and when the data is really played back; the key is to keep a certain level of buffer fullness, which protects you from playback underflows and delivers smooth playback
with waveOutXxxx you are not limited to a single buffer; you can allocate multiple reusable buffers and recycle them
All in all, the waveOutXxxx, DirectSound and DirectShow APIs work well with latencies of 50 ms and up. With WASAPI exclusive-mode streams you can get 5 ms latencies and even lower.
EDIT: I seem to have spoken too soon about 20 ms latencies. To compensate for this, here is a simple tool, LowLatencyWaveOutPlay (Win32, x64), to estimate the latency you can achieve. With sufficient buffering playback is smooth; otherwise you hear stuttering.
My understanding is that buffers might be returned late, and the optimal design in terms of smallest latency lies along the lines of having more, smaller buffers, so that they are given back to you as early as possible. For example, 10 buffers of 3 ms each rather than 3 buffers of 10 ms each.
D:\>LowLatencyWaveOutPlay.exe 48000 10 3
Format: 48000 Hz, 1 channels, 16 bits per sample
Buffer Count: 10
Buffer Length: 3 ms (288 bytes)
Signal Frequency: 1000 Hz
^C
So I came here because I wanted to find the basic latency of waveOutWrite() as well. I got to around 25-26 ms of latency before I reached a smooth sine tone.
This is for:
AMD Phenom(tm) 9850 Quad-Core Processor 2.51 GHz
4.00 GB ram
64-bit operating system, x64-based processor
Windows 10 Enterprise N
The code follows. It is a modified version of Petzold's sine wave program, refactored to run on the command line. I also changed the polling of buffers to a callback on buffer completion, with the idea that this would make the program more efficient, but it didn't make a difference.
It also has a setup for elapsed timing, which I used to probe various timings for operations on the buffers. Using those I get:
Sine wave output program
Channels: 2
Sample rate: 44100
Bytes per second: 176400
Block align: 4
Bits per sample: 16
Time per buffer: 0.025850
Total time prepare header: 87.5000000000 usec
Total time to fill: 327.9000000000 usec
Total time for waveOutWrite: 90.8000000000 usec
Program:
/*******************************************************************************

WaveOut example program

Based on C. Petzold's sine wave example, outputs a sine wave via the waveOut
API in Win32.

*******************************************************************************/

#include <stdio.h>
#include <stdlib.h>
#include <windows.h>
#include <math.h>
#include <limits.h>
#include <unistd.h>

#define SAMPLE_RATE     44100
#define FREQ_INIT       440
#define OUT_BUFFER_SIZE 570*4
#define PI              3.14159
#define CHANNELS        2
#define BITS            16
#define MAXTIM          1000000000

double fAngle;
LARGE_INTEGER perffreq;
PWAVEHDR pWaveHdr1, pWaveHdr2;
int iFreq = FREQ_INIT;

VOID FillBuffer (short* pBuffer, int iFreq)
{
    int i;
    int c;

    for (i = 0 ; i < OUT_BUFFER_SIZE ; i += CHANNELS) {
        for (c = 0; c < CHANNELS; c++)
            pBuffer[i+c] = (short)(SHRT_MAX*sin (fAngle));
        fAngle += 2*PI*iFreq/SAMPLE_RATE;
        if (fAngle > 2 * PI) fAngle -= 2*PI;
    }
}

double elapsed(LARGE_INTEGER t)
{
    LARGE_INTEGER rt;
    long tt;

    QueryPerformanceCounter(&rt);
    tt = rt.QuadPart-t.QuadPart;
    return (tt*(1.0/(double)perffreq.QuadPart));
}

void CALLBACK waveOutProc(HWAVEOUT hwo, UINT uMsg, DWORD_PTR dwInstance, DWORD_PTR dwParam1, DWORD_PTR dwParam2)
{
    if (uMsg == WOM_DONE) {
        if (pWaveHdr1->dwFlags & WHDR_DONE) {
            FillBuffer((short*)pWaveHdr1->lpData, iFreq);
            waveOutWrite(hwo, pWaveHdr1, sizeof(WAVEHDR));
        }
        if (pWaveHdr2->dwFlags & WHDR_DONE) {
            FillBuffer((short*)pWaveHdr2->lpData, iFreq);
            waveOutWrite(hwo, pWaveHdr2, sizeof(WAVEHDR));
        }
    }
}

int main()
{
    HWAVEOUT hWaveOut ;
    short* pBuffer1;
    short* pBuffer2;
    short* pBuffer3;
    WAVEFORMATEX waveformat;
    UINT wReturn;
    int bytes;
    long t;
    LARGE_INTEGER rt;
    double timprep;
    double filtim;
    double waveouttim;

    printf("Sine wave output program\n");
    fAngle = 0; /* start sine angle */
    QueryPerformanceFrequency(&perffreq);

    pWaveHdr1 = malloc (sizeof (WAVEHDR));
    pWaveHdr2 = malloc (sizeof (WAVEHDR));
    pBuffer1  = malloc (OUT_BUFFER_SIZE*sizeof(short));
    pBuffer2  = malloc (OUT_BUFFER_SIZE*sizeof(short));
    pBuffer3  = malloc (OUT_BUFFER_SIZE*sizeof(short));
    if (!pWaveHdr1 || !pWaveHdr2 || !pBuffer1 || !pBuffer2 || !pBuffer3) {
        /* free whatever did get allocated */
        if (pWaveHdr1) free (pWaveHdr1) ;
        if (pWaveHdr2) free (pWaveHdr2) ;
        if (pBuffer1) free (pBuffer1) ;
        if (pBuffer2) free (pBuffer2) ;
        if (pBuffer3) free (pBuffer3) ;
        fprintf(stderr, "*** Error: No memory\n");
        exit(1);
    }

    // Load prime parameters to format
    waveformat.wFormatTag = WAVE_FORMAT_PCM;
    waveformat.nChannels = CHANNELS;
    waveformat.nSamplesPerSec = SAMPLE_RATE;
    waveformat.wBitsPerSample = BITS;
    waveformat.cbSize = 0;

    // Calculate other parameters
    bytes = waveformat.wBitsPerSample/8;      /* find bytes per sample */
    if (waveformat.wBitsPerSample&7) bytes++; /* round up if not a whole byte */
    bytes *= waveformat.nChannels;            /* find total channels size */
    waveformat.nBlockAlign = bytes;           /* set block align */
    /* find average bytes/sec */
    waveformat.nAvgBytesPerSec = bytes*waveformat.nSamplesPerSec;

    printf("Channels: %d\n", waveformat.nChannels);
    printf("Sample rate: %d\n", waveformat.nSamplesPerSec);
    printf("Bytes per second: %d\n", waveformat.nAvgBytesPerSec);
    printf("Block align: %d\n", waveformat.nBlockAlign);
    printf("Bits per sample: %d\n", waveformat.wBitsPerSample);
    printf("Time per buffer: %f\n",
           OUT_BUFFER_SIZE*sizeof(short)/(double)waveformat.nAvgBytesPerSec);

    if (waveOutOpen (&hWaveOut, WAVE_MAPPER, &waveformat, (DWORD_PTR)waveOutProc, 0, CALLBACK_FUNCTION)
        != MMSYSERR_NOERROR) {
        free (pWaveHdr1) ;
        free (pWaveHdr2) ;
        free (pBuffer1) ;
        free (pBuffer2) ;
        free (pBuffer3) ;
        hWaveOut = NULL ;
        fprintf(stderr, "*** Error: waveOutOpen failed\n");
        exit(1);
    }

    // Set up headers and prepare them
    pWaveHdr1->lpData = (LPSTR)pBuffer1;
    pWaveHdr1->dwBufferLength = OUT_BUFFER_SIZE*sizeof(short);
    pWaveHdr1->dwBytesRecorded = 0;
    pWaveHdr1->dwUser = 0;
    pWaveHdr1->dwFlags = WHDR_DONE;
    pWaveHdr1->dwLoops = 1;
    pWaveHdr1->lpNext = NULL;
    pWaveHdr1->reserved = 0;

    QueryPerformanceCounter(&rt);
    waveOutPrepareHeader(hWaveOut, pWaveHdr1, sizeof (WAVEHDR));
    timprep = elapsed(rt);

    pWaveHdr2->lpData = (LPSTR)pBuffer2;
    pWaveHdr2->dwBufferLength = OUT_BUFFER_SIZE*sizeof(short);
    pWaveHdr2->dwBytesRecorded = 0;
    pWaveHdr2->dwUser = 0;
    pWaveHdr2->dwFlags = WHDR_DONE;
    pWaveHdr2->dwLoops = 1;
    pWaveHdr2->lpNext = NULL;
    pWaveHdr2->reserved = 0;

    waveOutPrepareHeader(hWaveOut, pWaveHdr2, sizeof (WAVEHDR));

    // Send two buffers to waveform output device
    QueryPerformanceCounter(&rt);
    FillBuffer (pBuffer1, iFreq);
    filtim = elapsed(rt);
    QueryPerformanceCounter(&rt);
    waveOutWrite (hWaveOut, pWaveHdr1, sizeof (WAVEHDR));
    waveouttim = elapsed(rt);
    FillBuffer (pBuffer2, iFreq);
    waveOutWrite (hWaveOut, pWaveHdr2, sizeof (WAVEHDR));

    // Run waveform loop
    sleep(10);

    printf("Total time prepare header: %.10f usec\n", timprep*1000000);
    printf("Total time to fill: %.10f usec\n", filtim*1000000);
    printf("Total time for waveOutWrite: %.10f usec\n", waveouttim*1000000);

    waveOutUnprepareHeader(hWaveOut, pWaveHdr1, sizeof (WAVEHDR));
    waveOutUnprepareHeader(hWaveOut, pWaveHdr2, sizeof (WAVEHDR));

    // Close waveform device and clean up
    waveOutClose (hWaveOut);
    free (pWaveHdr1) ;
    free (pWaveHdr2) ;
    free (pBuffer1) ;
    free (pBuffer2) ;
    free (pBuffer3) ;
}

Does Linux RTC alarm use relative or absolute time?

I'm trying to configure RTC alarm on a Linux device. I've used an example from the RTC documentation:
int retval;
struct rtc_time rtc_tm;

/* .... */

/* Read the RTC time/date */
retval = ioctl(fd, RTC_RD_TIME, &rtc_tm);
if (retval == -1) {
    exit(errno);
}

/* Set the alarm to 5 sec in the future, and check for rollover */
rtc_tm.tm_sec += 5;
if (rtc_tm.tm_sec >= 60) {
    rtc_tm.tm_sec %= 60;
    rtc_tm.tm_min++;
}
if (rtc_tm.tm_min == 60) {
    rtc_tm.tm_min = 0;
    rtc_tm.tm_hour++;
}
if (rtc_tm.tm_hour == 24)
    rtc_tm.tm_hour = 0;

retval = ioctl(fd, RTC_ALM_SET, &rtc_tm);
if (retval == -1) {
    exit(errno);
}
This code snippet uses absolute time (counted from the start of the epoch) and it did not work for me. I thought this was due to a bug in the hardware, but after some seemingly random time the alarm did fire. The only other piece of documentation that I've managed to find was a comment in the kernel's rtc.c:
case RTC_ALM_SET: /* Store a time into the alarm */
{
    /*
     * This expects a struct rtc_time. Writing 0xff means
     * "don't care" or "match all". Only the tm_hour,
     * tm_min and tm_sec are used.
     */
The fact that only hours, minutes and seconds are used suggests that the time is relative to the moment when the ioctl is called.
Should time passed to ioctl(fd, RTC_ALM_SET, &rtc_tm) be relative or absolute?
The RTC alarm works off absolute time; in other words, if you want the alarm to go off in 5 minutes, you should read the current time, add 5 minutes to it, and use the result to set the alarm time.
Here is a snippet of text from a TI RTC chip datasheet (http://www.ti.com/lit/ds/symlink/bq3285ld.pdf):
During each update cycle, the RTC compares the day-of-the-month, hours, minutes, and seconds bytes with the four corresponding alarm bytes. If a match of all bytes is found, the alarm interrupt event flag bit, AF in register C, is set to 1. If the alarm event is enabled, an interrupt request is generated.
I believe this to be pretty standard across RTCs out there...
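To make the absolute semantics concrete, here is a minimal sketch of the usual sequence (error handling omitted; /dev/rtc0 and the 5-minute offset are just for illustration):

#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/rtc.h>

int main(void)
{
    int fd = open("/dev/rtc0", O_RDONLY);
    struct rtc_time tm;
    unsigned long data;

    ioctl(fd, RTC_RD_TIME, &tm);     /* current absolute time */
    tm.tm_min += 5;                  /* alarm = now + 5 minutes */
    if (tm.tm_min >= 60) {           /* handle rollover */
        tm.tm_min -= 60;
        tm.tm_hour = (tm.tm_hour + 1) % 24;
    }
    ioctl(fd, RTC_ALM_SET, &tm);     /* store the absolute alarm time */
    ioctl(fd, RTC_AIE_ON, 0);        /* enable the alarm interrupt */
    read(fd, &data, sizeof data);    /* blocks until the alarm fires */
    ioctl(fd, RTC_AIE_OFF, 0);
    close(fd);
    return 0;
}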
