Understanding the code in rtc_interrupt - linux

I need to understand the code in the "Real time clock" driver function rtc_interrupt. The code is:
rtc_irq_data += 0x100;
rtc_irq_data &= ~0xff;
rtc_irq_data |= (CMOS_READ(RTC_INTR_FLAGS) & 0xF0);
I am unable to understand why it is += 0x100, and what the rest of the code does.

From the book "Linux Kernel Development" by Robert Love, that snippet of code has the following comment:
/*
* Can be an alarm interrupt, update complete interrupt,
* or a periodic interrupt. We store the status in the
* low byte and the number of interrupts received since
* the last read in the remainder of rtc_irq_data.
*/
As for rtc_irq_data += 0x100;: we know the number of interrupts received is counted above the low byte, hence the 0x100. Viewing rtc_irq_data as a 16-bit hexadecimal value, the high byte is being incremented by 1 (one more interrupt on the counter).
As for the second line, rtc_irq_data &= ~0xff;: rtc_irq_data is bitwise ANDed with the complement of 0xff, i.e. with 0xff00 in the 16-bit view. The high part of the integer is kept and the low byte is discarded. So, supposing this were the first call, the value would now be guaranteed to be 0x0100.
The last part, rtc_irq_data |= (CMOS_READ(RTC_INTR_FLAGS) & 0xF0);, ORs (|=) the low byte (which is now 0x00) with the RTC's current status. Hence the comment "We store the status in the low byte".
As for the AND with 0xF0 in (CMOS_READ(RTC_INTR_FLAGS) & 0xF0): consulting the original AT-compatible RTC datasheet, INTR_FLAGS is Register C, a register byte where only the upper 4 bits are used: b7 = IRQF, b6 = PF, b5 = AF, b4 = UF, while b3 to b0 are unused. From the RTC datasheet:
The unused bits of Status Register 1 are read as "0s". They cannot be written.
Hence, as good standard coding practice, the AND with 0xF0 makes sure the lower 4 bits are ignored.
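To make the layout concrete, here is a minimal sketch of how a consumer of such a value could split it apart (illustrative only; the field layout is taken from the kernel comment above, and decode_rtc_irq_data is a hypothetical helper, not kernel code):

#include <stdio.h>

/* Decode an rtc_irq_data-style value: the interrupt count lives above
 * the low byte, the latest Register C flags live in the low byte. */
static void decode_rtc_irq_data(unsigned long data)
{
    unsigned long count  = data >> 8;   /* interrupts since the last read */
    unsigned char status = data & 0xF0; /* IRQF, PF, AF, UF flag bits */

    printf("interrupts: %lu IRQF:%d PF:%d AF:%d UF:%d\n",
           count,
           !!(status & 0x80), !!(status & 0x40),
           !!(status & 0x20), !!(status & 0x10));
}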

Related

Microphone has too large component of lower frequency

I use a Knowles SPH0645LM4H-B microphone to acquire data, which arrives in 24-bit PCM format with 18 bits of precision. The 24-bit PCM data is then truncated to 18 bits, because the last 6 bits are always 0 according to the specification. After that, the 18-bit data is stored as a 32-bit unsigned integer: when the MSB is 0 it represents a positive integer, and when the MSB is 1 it represents a negative integer.
After that, I find all the data is positive, no matter which sound I use to test. I tested with a dual-frequency tone and did an FFT; the result was almost right, except that the low-frequency component around 0-100Hz was too large. I also reconstructed the sound from the same data I used for the FFT; the reconstructed sound is almost right, but noisy.
I use a buffer to store the microphone data, which is transmitted using DMA. The buffer is
uint16_t fft_buffer[FFT_LENGTH*4];
The DMA configuration is done as follows:
DMA_InitStructure.DMA_Channel = DMA_Channel_0;
DMA_InitStructure.DMA_PeripheralBaseAddr = (uint32_t)&(SPI2->DR);
DMA_InitStructure.DMA_Memory0BaseAddr = (uint32_t)fft_buffer;
DMA_InitStructure.DMA_DIR = DMA_DIR_PeripheralToMemory;
DMA_InitStructure.DMA_PeripheralInc = DMA_PeripheralInc_Disable;
DMA_InitStructure.DMA_MemoryInc = DMA_MemoryInc_Enable;
DMA_InitStructure.DMA_PeripheralDataSize = DMA_PeripheralDataSize_HalfWord;
DMA_InitStructure.DMA_MemoryDataSize = DMA_MemoryDataSize_HalfWord;
DMA_InitStructure.DMA_BufferSize = FFT_LENGTH*4;
DMA_InitStructure.DMA_Mode = DMA_Mode_Normal;
DMA_InitStructure.DMA_Priority = DMA_Priority_VeryHigh;
DMA_InitStructure.DMA_FIFOMode = DMA_FIFOMode_Disable;
DMA_InitStructure.DMA_FIFOThreshold = DMA_FIFOThreshold_Full;
DMA_InitStructure.DMA_MemoryBurst = DMA_MemoryBurst_Single;
DMA_InitStructure.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
The data is extracted from the buffer, truncated to 18 bits, sign-extended to 32 bits and then stored in fft_integer:
int32_t fft_integer[FFT_LENGTH];
fft_buffer stores the original data from one channel and redundant data from the other channel. Each original sample is stored in two elements of the array, e.g. fft_buffer[4] and fft_buffer[5], which are both 16 bits. fft_integer stores just the data from one channel, and each sample takes 32 bits. This is why the size of the fft_buffer array is FFT_LENGTH*4: two elements hold the data from one channel and two elements hold the data from the other channel. The fft_integer array has size FFT_LENGTH, because only one channel is stored and an 18-bit sample fits in a single int32_t element.
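In other words, the intended layout is (an illustrative sketch of the description above):

/* fft_buffer (uint16_t), repeating frames of 4 halfwords:
 *   [t]       [t+1]     [t+2]     [t+3]
 *   ch0 high  ch0 low   ch1 high  ch1 low
 *
 * fft_integer (int32_t):
 *   one sign-extended 18-bit ch0 sample per frame
 */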
for (t = 0; t < FFT_LENGTH*4; t = t+4) {
    uint8_t first_8_bits, second_8_bits, last_2_bits;
    uint32_t store_int;
    /* get the first 8 bits, middle 8 bits and last 2 bits, and combine them into a new value */
    first_8_bits = fft_buffer[t] >> 8;
    second_8_bits = fft_buffer[t] & 0xFF;
    last_2_bits = (fft_buffer[t+1] >> 8) >> 6;
    store_int = (first_8_bits << 10) + (second_8_bits << 2) + last_2_bits;
    /* convert it to a signed integer according to the MSB of the value:
     * if the MSB is 1, set all the bits above the MSB to 1
     */
    const uint8_t negative = ((store_int & (1 << 17)) != 0);
    int32_t nativeInt;
    if (negative)
        nativeInt = store_int | ~((1 << 18) - 1);
    else
        nativeInt = store_int;
    fft_integer[cnt] = nativeInt;
    cnt++;
}
The microphone uses an I2S interface and is a single mono microphone, which means that only half of the data is valid, occupying half of each transmission frame. It runs for about 128ms and then stops working.
This picture shows the data after I convert it to an integer.
My question is: why is there a large low-frequency component, even though the data can reconstruct a similar sound? I'm sure there is no problem in the hardware configuration.
I have done an experiment to see which original data is stored in the buffer. I ran the following test:
uint8_t a, b, c, d;
for (t = 0; t < FFT_LENGTH*4; t = t+4) {
    a = (fft_buffer[t]&0xFF00)>>8;
    b = fft_buffer[t]&0x00FF;
    c = (fft_buffer[t+1]&0xFF00)>>8;
    /* set the tri-state to 0 */
    d = fft_buffer[t+1]&0x0000;
    printf("%.2x", a);
    printf("%.2x", b);
    printf("%.2x", c);
    printf("%.2x\n", d);
}
The PCM data looks like the following:
0ec40000
0ec48000
0ec50000
0ec60000
0ec60000
0ec5c000
...
0cf28000
0cf20000
0cf10000
0cf04000
0cef8000
0cef0000
0cedc000
0ced4000
0cee4000
0ced8000
0cec4000
0cebc000
0ceb4000
....
0b554000
0b548000
0b538000
0b53c000
0b524000
0b50c000
0b50c000
...
Raw data in Memory:
c4 0e ff 00
c5 0e ff 40
...
52 0b ff c0
50 0b ff c0
I use it as little endian.
The large low-frequency component starting from DC in the original data is due to the large DC offset caused by incorrectly translating the 24-bit two's complement samples to int32_t. A DC offset is inaudible unless it causes clipping or arithmetic overflow. There are not really any low frequencies up to 100Hz; that is merely an artefact of the FFT's response to the strong DC (0Hz) element, which is why you cannot hear any low frequencies.
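As an aside, if you did want to remove a residual DC offset before the FFT, subtracting the mean is enough. This is an illustrative sketch only, and unnecessary once the samples are translated correctly:

int64_t sum = 0;
for( t = 0; t < FFT_LENGTH; t++ )
    sum += fft_integer[t];
const int32_t mean = (int32_t)(sum / FFT_LENGTH);

// Subtract the mean so the FFT's 0Hz bin is (near) zero
for( t = 0; t < FFT_LENGTH; t++ )
    fft_integer[t] -= mean;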
Below I have stated a number of assumptions as clearly as possible so that the answer may perhaps be adapted to match the actualité.
Given:
Raw data in Memory:
c4 0e ff 00
c5 0e ff 40
...
52 0b ff c0
50 0b ff c0
I use it as little endian.
and
2 elements are used for data from one channel and 2 element is used for the other channel
and given the subsequent comment:
fft_buffer[0] stores the higher 16 bits, fft_buffer[1] stores the lower 16 bits
Then the data is in fact cross-endian such that for example, for:
c4 0e ff 00
then
fft_buffer[n] = 0x0ec4 ;
fft_buffer[n+1] = 0x00ff ;
and the reconstructed sample should be:
0x00ff0ec4
then the translation is a matter of reinterpreting fft_buffer as a 32-bit array, swapping the 16-bit word order, then a shift to move the sign bit into the int32_t sign-bit position and (optionally) a rescale, e.g.:
c4 0e ff 00 => 0x00ff0ec4
0x00ff0ec4 << 8 = 0xff0ec400
0xff0ec400 / 16384 = 0xfffffc3c (-964, the sample at its original 18-bit magnitude)
thus:
// Reinterpret the DMA buffer as 32-bit samples. An unsigned type is
// used so the right shift in the word swap below does not sign-extend.
uint32_t* fft_buffer32 = (uint32_t*)fft_buffer ;

// For each even numbered DMA buffer sample...
for( t = 0; t < FFT_LENGTH * 2; t += 2 )
{
    // ... swap 16 bit word order
    int32_t sample = (int32_t)( fft_buffer32[t] << 16 |
                                fft_buffer32[t] >> 16 ) ;

    // ... from 24 to 32 bit 2's complement and rescale to
    // maintain the original 18-bit magnitude. Copy to the single
    // channel fft_integer array.
    fft_integer[t / 2] = (sample << 8) / 16384 ;
}
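If the assumptions above hold, the translated samples should centre on zero, the FFT's 0Hz bin should shrink accordingly, and the spurious 0-100Hz skirt should disappear along with it.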

GBZ80 - ADC instructions fail test

I've been running Blargg's CPU tests through my Game Boy emulator, and the op r,r test shows that my ADC instruction is not working properly, but that ADD is. My understanding is that the only difference between the two is adding the existing carry flag to the second operand before the addition. As such, my ADC code is the following:
void Emu::add8To8Carry(BYTE &a, BYTE b) //4 cycles - 1 byte
{
    if((Flags >> FLAG_CARRY) & 1)
        b++;
    FLAGCLEAR_N;
    halfCarryAdd8_8(a, b); //generates H flag based on addition
    carryAdd8_8(a, b);     //generates C flag appropriately
    a += b;
    if(a == 0)
        FLAGSET_Z;
    else
        FLAGCLEAR_Z;
}
I entered the following into a test ROM:
06 FE 3E 01 88
Which leaves A with the value 0 (Flags = B0) when the carry flag is set, and FF (Flags = 00) when it is not. This is how it should work, as far as my understanding goes. However, it still fails the test.
From my research, I believe that flags are affected in an identical manner to ADD. Literally the only change in my code from the working ADD instruction is the addition of the flag check/potential increment in the first two lines, which my test code seems to prove works.
Am I missing something? Perhaps there's a peculiarity with flag states between ADD/ADC? As a side note, SUB instructions also pass, but SBC fails in the same way.
Thanks
The problem is that b is an 8-bit value. If b is 0xff and carry is set, then adding 1 to b wraps it to 0, and the subsequent addition won't generate carry when a >= 1. You get similar problems with the half-carry flag if the lower nybble is 0xf.
This might be fixed if you call halfCarryAdd8_8(a, b + 1); and carryAdd8_8(a, b + 1); when carry is set. However, I suspect that those routines also take byte operands, so you may have to change them internally, perhaps by adding the carry as a separate argument so that you can do tmp = a + b + carry; without overflowing b. But I can only speculate without the source to those functions.
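As a minimal sketch of that idea, assuming hypothetical FLAGSET_H/FLAGCLEAR_H/FLAGSET_C/FLAGCLEAR_C macros alongside the ones shown in the question (adapt to your actual flag helpers):

void Emu::add8To8Carry(BYTE &a, BYTE b)
{
    const BYTE carry = (Flags >> FLAG_CARRY) & 1;
    const int sum = a + b + carry;   // wide arithmetic: the carry-in cannot wrap b

    FLAGCLEAR_N;
    // H: carry out of bit 3, computed on the low nybbles plus carry-in
    if (((a & 0x0F) + (b & 0x0F) + carry) > 0x0F) FLAGSET_H; else FLAGCLEAR_H;
    // C: carry out of bit 7
    if (sum > 0xFF) FLAGSET_C; else FLAGCLEAR_C;

    a = (BYTE)sum;
    if (a == 0) FLAGSET_Z; else FLAGCLEAR_Z;
}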
On a somewhat related note, there's a fairly simple way to check for carry over all the bits:
int sum = a + b;
int no_carry_sum = a ^ b;
int carry_into = sum ^ no_carry_sum;
int half_carry = carry_into & 0x10;
int carry = carry_into & 0x100;
How does that work? Consider that bitwise "xor" gives the expected result of each bit if there is no carry going into that bit: 0 ^ 0 == 0, 1 ^ 0 == 0 ^ 1 == 1 and 1 ^ 1 == 0. By xoring sum with no_carry_sum we get the bits where the sum differs from the bit-by-bit addition; sum differs exactly where there is a carry into a particular bit position. Thus both the half-carry and carry bits can be obtained with almost no overhead.
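The same trick extends directly to ADC, since the carry-in only affects bit 0 while the flag bits are extracted higher up. A sketch using the names above (Flags and FLAG_CARRY as in the question):

int carry_in = (Flags >> FLAG_CARRY) & 1;
int sum = a + b + carry_in;
int carry_into = sum ^ a ^ b;        // carries that propagated into each bit
int half_carry = carry_into & 0x10;  // carry into bit 4 -> H flag
int carry = carry_into & 0x100;      // carry out of bit 7 -> C flag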

Atomicity of a read on SPARC

I'm writing a multithreaded application and having a problem on the SPARC platform. Ultimately my question comes down to atomicity of this platform and how I could be obtaining this result.
Some pseudocode to help clarify my question:
// Global variable
typedef struct pkd_struct {
    uint16_t a;
    uint16_t b;
} __attribute__((packed)) pkd_struct_t;
pkd_struct_t shared;
Thread 1:
swap_value() {
    pkd_struct_t prev = shared;
    printf("%d%d\n", prev.a, prev.b);
    ...
}
Thread 2:
use_value() {
    pkd_struct_t next;
    next.a = 0; next.b = 0;
    shared = next;
    printf("%d%d\n", shared.a, shared.b);
    ...
}
Thread 1 and Thread 2 access the shared variable "shared"; one is setting it, the other is getting it. If Thread 2 sets "shared" to zero, I'd expect Thread 1 to read the value either before OR after the store, since "shared" is aligned on a 4-byte boundary. However, I occasionally see Thread 1 read a value of the form 0xFFFFFF00: the high-order 24 bits are OLD, but the low-order byte is NEW. It appears I've gotten an intermediate value.
Looking at the disassembly, the use_value function simply does an "ST" instruction. Given that the data is aligned and isn't crossing a word boundary, is there any explanation for this behavior? If ST is indeed NOT atomic to use this way, does this explain the result I see (only 1 byte changed?!?)? There is no problem on x86.
UPDATE 1:
I've found the problem, but not the cause. GCC appears to be generating assembly that reads the shared variable byte-by-byte (thus allowing a partial update to be observed). Comments added, but I am not terribly comfortable with SPARC assembly. %i0 is a pointer to the shared variable.
xxx+0xc: ldub [%i0], %g1 // ld unsigned byte g1 = [i0] -- 0 padded
xxx+0x10: ...
xxx+0x14: ldub [%i0 + 0x1], %g5 // ld unsigned byte g5 = [i0+1] -- 0 padded
xxx+0x18: sllx %g1, 0x18, %g1 // g1 = [i0+0] left shifted by 24
xxx+0x1c: ldub [%i0 + 0x2], %g4 // ld unsigned byte g4 = [i0+2] -- 0 padded
xxx+0x20: sllx %g5, 0x10, %g5 // g5 = [i0+1] left shifted by 16
xxx+0x24: or %g5, %g1, %g5 // g5 = g5 OR g1
xxx+0x28: sllx %g4, 0x8, %g4 // g4 = [i0+2] left shifted by 8
xxx+0x2c: or %g4, %g5, %g4 // g4 = g4 OR g5
xxx+0x30: ldub [%i0 + 0x3], %g1 // ld unsigned byte g1 = [i0+3] -- 0 padded
xxx+0x34: or %g1, %g4, %g1 // g1 = g4 OR g1
xxx+0x38: ...
xxx+0x3c: st %g1, [%fp + 0x7df] // store g1 on the stack
Any idea why GCC is generating code like this?
UPDATE 2: Adding more info to the example code. Apologies -- I'm working with a mix of new and legacy code and it's difficult to separate what's relevant. Also, I understand that sharing a variable like this is highly discouraged in general. However, this is actually part of a lock implementation, where higher-level code will use it to provide atomicity, and using pthreads or platform-specific locking is not an option.
Because you've declared the type as packed, it gets one-byte alignment, which means it must be read and written one byte at a time, as SPARC does not allow unaligned loads/stores. You need to give it 4-byte alignment if you want the compiler to use word load/store instructions:
typedef struct pkd_struct {
    uint16_t a;
    uint16_t b;
} __attribute__((packed, aligned(4))) pkd_struct_t;
Note that packed is essentially meaningless for this struct, so you could leave that out.
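If your toolchain supports C11, the effect of the attributes can be checked at compile time (illustrative only; adjust to your compiler):

_Static_assert(_Alignof(pkd_struct_t) == 4,
               "pkd_struct_t must be word-aligned for single ld/st access");
_Static_assert(sizeof(pkd_struct_t) == 4,
               "pkd_struct_t must occupy exactly one 32-bit word");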
Answering my own question here -- this has bugged me for too long and hopefully I can save someone a bit of frustration at some point.
The problem is that although the shared data is aligned, because it is packed GCC reads it byte-by-byte.
There is some discussion here of how packing leads to load/store bloat on SPARC (and, I'd assume, other RISC platforms), but in my case it led to a race.

Bluetooth data shown as 0X80 instead of 0X00

I have been using a Bluetooth module (HC-05) with an Atmega8 (both A and L) microcontroller to transmit data to my Android device. In the following code an 8-bit signed (or unsigned, it makes no difference) value is sent over Bluetooth to be displayed on the device; this value starts at 0x00 and is incremented on every iteration:
#define F_CPU 1000000
#define BAUD 9600
#define MYUBRR (F_CPU/16/BAUD-1)
#include <avr/io.h>
#include <util/delay.h>
int main (void)
{
    uint8_t data = 0;
    UBRRH = (MYUBRR >> 8); // set the higher bits of UBRR
    UBRRL = MYUBRR;        // set the lower bits of UBRR
    UCSRB = (1 << TXEN);   // transmit enable
    UCSRC = ((1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0)); // URSEL = USART register selection (R/W); UCSZ = 011, i.e. 8-bit data size
    while (1)
    {
        UDR = data;                     // load data into the USART Data Register (8-bit); it is transmitted immediately
        while(!(UCSRA & (1 << UDRE)));  // wait until the data register is empty (UDRE flag set)
        _delay_ms(200);                 // after some time
        data++;                         // increment data
    }
    return 0;
}
On the Android device end, the "Bluetooth spp Pro" app displays the received data on screen.
The following is the configuration of receive mode (data is displayed as hex values):
The data received should start at 0x00 and go up to 0xFF; instead it starts at 0x80 and increments up to 0xFF in a very unfamiliar manner.
Referring to the above image: the pattern I observed is that the tens place starts at 8 while the units place counts from 0 to F; in the next cycle the tens place becomes 9 and the units place again counts 0 to F. After that, instead of incrementing as expected, the tens place goes back to 8, then 9 again. After these four cycles of two repeating digits, the tens place increments to A and the units place counts 0 to F, and the same strange tens-place pattern then reappears for A and B, then for C and D, and later for E and F.
So my concern is:
Why is the device showing 0x80 for 0x00? Since the units place works correctly, why does the tens place not behave as expected?
Thanks!!!
Edit:
This problem is neither Android-version nor device-manufacturer specific.
The problem was with voltage levels. Operating the microcontroller circuit at 3.2V and the Bluetooth module at 3.8V solved the problem, and the data is transmitted as expected. However, I am unable to come up with an explanation for this.
Please help.
It can be observed clearly by varying the potentiometer of the voltage regulator: when I keep it below 3.20V the data is transmitted smoothly, and as the voltage level crosses 3.20V the tens place of the data starts getting corrupted, up to the point of complete corruption where the output becomes a constant 0xFE at 3.8V.

infinite loop receive from serial port

From this I copied the example serial port configuration:
tcgetattr (serialfd, &tty);
cfsetospeed(&tty,B115200);
cfsetispeed(&tty,B115200);
tty.c_cflag = (tty.c_cflag & ~CSIZE) | CS8;
tty.c_iflag &= ~IGNBRK;
tty.c_lflag = 0;
tty.c_oflag = 0;
tty.c_cc[VMIN] = 0;
tty.c_cc[VTIME] = 5;
tty.c_iflag &= ~(IXON | IXOFF | IXANY);
tty.c_cflag |= (CLOCAL | CREAD);
tty.c_cflag &= ~(PARENB | PARODD);
tty.c_cflag |= 0;
tty.c_cflag &= ~CSTOPB;
tty.c_cflag &= ~CRTSCTS;
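For these settings to take effect they must be written back with tcsetattr (presumably done in the full program, as in the example this configuration was copied from):

if (tcsetattr(serialfd, TCSANOW, &tty) != 0)
    perror("tcsetattr");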
My actual code is like this:
char buf[100];
write(serialfd, "PING", strlen("PING"));
fsync(serialfd);
while (1)
{
    read(serialfd, buf, sizeof(buf));
    printf("length: %d\n", strlen(buf));
}
In this case it prints length: 6 infinitely, without stopping. When I change to tty.c_cc[VMIN] = 1 and tty.c_cc[VTIME] = 0, it doesn't read at all (it blocks in read()).
I'm using Debian 6.0.5 with a USB-to-serial converter. I open the serial port like this:
serialfd = open("/dev/ttyUSB0", O_RDWR | O_NOCTTY | O_SYNC);
Look at your code:
while (1)
{
    read(serialfd, buf, sizeof(buf));
    printf("length: %d\n", strlen(buf));
}
You have written a packet prior to this loop, so on the first iteration the available data gets read into your buffer. You need to either memset your buffer to zeros each time OR zero-terminate your buffer using the byte count returned by your call to read. You then loop infinitely, each time reading again, but subsequent reads will not copy any more data as there is none to read; your buffer remains unchanged by those calls, and therefore your printed output remains the same on every iteration.
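A minimal sketch of the corrected loop, using read()'s return value both to terminate the buffer and to distinguish a timeout from data:

while (1)
{
    ssize_t n = read(serialfd, buf, sizeof(buf) - 1);
    if (n > 0)
    {
        buf[n] = '\0';              /* terminate what was actually read */
        printf("length: %zd\n", n);
    }
    /* n == 0 means VTIME expired with no data; n < 0 is an error (see errno) */
}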
As for the blocking aspect, you should read the following guide (which has been recommended on SO before and is very good as an introduction to serial port programming)
http://www.easysw.com/~mike/serial/serial.html
This section describes the behaviour you get when setting VMIN and VTIME to various values. In particular the last paragraph explains the blocking behaviour you see.
VMIN specifies the minimum number of characters to read. If it is set
to 0, then the VTIME value specifies the time to wait for every
character read. Note that this does not mean that a read call for N
bytes will wait for N characters to come in. Rather, the timeout will
apply to the first character and the read call will return the number
of characters immediately available (up to the number you request).
If VMIN is non-zero, VTIME specifies the time to wait for the first
character read. If a character is read within the time given, any read
will block (wait) until all VMIN characters are read. That is, once
the first character is read, the serial interface driver expects to
receive an entire packet of characters (VMIN bytes total). If no
character is read within the time allowed, then the call to read
returns 0. This method allows you to tell the serial driver you need
exactly N bytes and any read call will return 0 or N bytes. However,
the timeout only applies to the first character read, so if for some
reason the driver misses one character inside the N byte packet then
the read call could block forever waiting for additional input
characters.
VTIME specifies the amount of time to wait for incoming characters in
tenths of seconds. If VTIME is set to 0 (the default), reads will
block (wait) indefinitely unless the NDELAY option is set on the port
with open or fcntl.
