FreeRTOS cannot poll input pins when using vTaskDelayUntil()

I'm seeing weird behavior in my FreeRTOS code, specifically when using vTaskDelayUntil() and vTaskDelay().
I'm trying to read an input pin connected to my PIR sensor.
On the scope I can see that the PIR holds the pin at 3.3 V for at least one second.
The code below only reads my PIR input when I comment out the vTaskDelayUntil() line. As soon as I re-enable that line, the PINC register always reads 0, even when I'm sure there is 3.3 V on the input pin.
static void TaskStatemachine(void *pvParameters)
{
    (void) pvParameters;

    TickType_t xLastWakeTime;
    const TickType_t xFrequency = 100;

    xLastWakeTime = xTaskGetTickCount();

    for(;;)
    {
        printf("PINC.1 = %d\n", (PINC & (1<<1)));
        vTaskDelayUntil(&xLastWakeTime, (xFrequency / portTICK_PERIOD_MS));
    }
}
What is happening here?
I changed xFrequency to different values, but without any luck.

As an experiment, simplify the output thus:
putchar((PINC & (1<<1)) == 0 ? '0' : '1');
You will then get a continuous stream of 1s or 0s.
If that works both with and without the delay, then it seems likely that the task has too small a stack to support printf(). Try increasing the stack and putting the printf() back in.
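For example, a sketch of what that might look like where the task is created (the stack depth and priority here are assumptions, since the original creation call isn't shown):

// Hypothetical creation call -- increase the extra stack words until
// printf() no longer overflows the task's stack.
xTaskCreate( TaskStatemachine,                  /* task function      */
             "StateMachine",                    /* task name          */
             configMINIMAL_STACK_SIZE + 256,    /* larger stack depth */
             NULL,                              /* no parameters      */
             tskIDLE_PRIORITY + 1,              /* assumed priority   */
             NULL );                            /* no handle needed   */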

Related

Configure SAI peripheral on STM32H7

I'm trying to play a sound on a single speaker (mono) from a .wav file on an SD card, using an STM32H7 controller in a FreeRTOS environment.
I currently manage to generate sound, but it is very dirty and jerky.
I'd like to show the parsed header content of my wav file, but my reputation score is below 10. The most important data are:
format: PCM
1 channel
sample rate: 44100
bits per sample: 16
I initialize the SAI2 block A this way :
void MX_SAI2_Init(void)
{
    /* USER CODE BEGIN SAI2_Init 0 */
    /* USER CODE END SAI2_Init 0 */

    /* USER CODE BEGIN SAI2_Init 1 */
    /* USER CODE END SAI2_Init 1 */

    hsai_BlockA2.Instance = SAI2_Block_A;
    hsai_BlockA2.Init.AudioMode = SAI_MODEMASTER_TX;
    hsai_BlockA2.Init.Synchro = SAI_ASYNCHRONOUS;
    hsai_BlockA2.Init.OutputDrive = SAI_OUTPUTDRIVE_DISABLE;
    hsai_BlockA2.Init.NoDivider = SAI_MASTERDIVIDER_ENABLE;
    hsai_BlockA2.Init.FIFOThreshold = SAI_FIFOTHRESHOLD_EMPTY;
    hsai_BlockA2.Init.AudioFrequency = SAI_AUDIO_FREQUENCY_44K;
    hsai_BlockA2.Init.SynchroExt = SAI_SYNCEXT_DISABLE;
    hsai_BlockA2.Init.MonoStereoMode = SAI_MONOMODE;
    hsai_BlockA2.Init.CompandingMode = SAI_NOCOMPANDING;
    hsai_BlockA2.Init.TriState = SAI_OUTPUT_NOTRELEASED;

    if (HAL_SAI_InitProtocol(&hsai_BlockA2, SAI_I2S_STANDARD, SAI_PROTOCOL_DATASIZE_16BIT, 2) != HAL_OK)
    {
        Error_Handler();
    }

    /* USER CODE BEGIN SAI2_Init 2 */
    /* USER CODE END SAI2_Init 2 */
}
I think I set the clock frequency correctly, as I measure a frame sync clock of 43 kHz (the closest I can get to 44.1 kHz).
The file indicates it uses PCM. My init function specifies SAI_I2S_STANDARD, but only because I was curious about the result with this parameter value; I get bad results in both cases.
And here is the part where I read the file and send the data to the SAI through DMA:
// Before the infinite loop I extract the overall file size in bytes.

// Infinite loop
for(;;)
{
    if(drv_sdcard_getDmaTransferComplete() == true)
    {
        // BufferRead[0]=0xAA;
        // BufferRead[1]=0xAA;
        //
        // ret = HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t*)BufferRead, 2);
        // drv_sdcard_resetDmaTransferComplete();

        if((firstBytesDiscarded == true) && (remainingBytes > 0))
        {
            // read the next BufferAudio-sized chunk of audio samples
            if(remainingBytes < sizeof(BufferAudio))
            {
                remainingBytes -= drv_sdcard_readDataNoRewind(file_audio1_index, BufferAudio, remainingBytes);
            }
            else
            {
                remainingBytes -= drv_sdcard_readDataNoRewind(file_audio1_index, BufferAudio, sizeof(BufferAudio));
            }
            // send them to the SAI through DMA
            ret = HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t*)BufferAudio, sizeof(BufferAudio));
            // reset the transmit flag to forbid the next transmit
            drv_sdcard_resetDmaTransferComplete();
        }
        else
        {
            // discard the header-sized first bytes
            // I removed this part here because it works properly on my side
            firstBytesDiscarded = true;
        }
    }
}
One avenue of sound-quality improvement I have is to filter the speaker input. Yesterday I tried cutting at around 20 kHz and 44 kHz, but that cut too much of the signal... So I want to try different cut-off frequencies until I find one where the sound quality is good. It is a simple RC filter.
But I don't know what to do to fix the jerky part. To give you an idea of how the sound comes out, I would describe it like this:
we can hear a bit of melody,
then a scratchy sound [krrrrrrr],
then a short silence,
and this loops until the end of the file.
The audio buffer (BufferAudio) size is 16*1024 bytes.
Thank you for your help.
Problems
No double-buffering. You are reading data from the SD card into the same buffer that you are playing from, so you'll get some samples from the previous read and some samples from the new read.
Not checking when the DMA is complete. HAL_SAI_Transmit_DMA() returns immediately, and you cannot call it again until the previous DMA has completed.
Not checking the return values of HAL functions. You assign ret = HAL_SAI_Transmit_DMA(...) but then never check what ret is. You should check for an error and take appropriate action.
You seem to be driving things from how fast the SD card can DMA the data. It needs to be based on how fast the SAI is consuming it, otherwise you will get glitches.
Possible solution
The STM32's DMA controller can be configured to run in circular-buffer mode. In this mode, it will DMA all the data given to it, and then start again from the beginning.
It also provides interrupts for when the DMA is half complete, and when it is fully complete.
These two things together can provide a smooth data transfer with no gaps and glitches, if used with the SAI DMA. You'd read data into the entire buffer to start with, and kick off the DMA. When you get the half-complete interrupt, read half a buffer's worth of data into the first half of the buffer. When you get a fully complete interrupt, read half a buffer's worth of data into the second half of the buffer.
This is pseudo-code-ish, but hopefully shows what I mean:
enum { buff_len = 16 * 1024 };   // buffer length in samples
static uint16_t buff[buff_len];

void start_playback(void)
{
    read_from_file(buff, buff_len);

    if (HAL_SAI_Transmit_DMA(&hsai_BlockA2, (uint8_t *)buff, buff_len) != HAL_OK)
    {
        // Handle error
    }
}

void sai_dma_tx_half_complete_interrupt(void)
{
    read_from_file(buff, buff_len / 2u);
}

void sai_dma_tx_full_complete_interrupt(void)
{
    read_from_file(buff + buff_len / 2u, buff_len / 2u);
}
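With the STM32 HAL specifically, those two interrupts arrive through weak callback functions that you override; a sketch of how the pseudo-code above might map onto them (assuming the same buff, buff_len and read_from_file()):

// The HAL SAI driver invokes these weak functions from its DMA IRQ handling.
void HAL_SAI_TxHalfCpltCallback(SAI_HandleTypeDef *hsai)
{
    (void)hsai;
    read_from_file(buff, buff_len / 2u);                   // refill first half
}

void HAL_SAI_TxCpltCallback(SAI_HandleTypeDef *hsai)
{
    (void)hsai;
    read_from_file(buff + buff_len / 2u, buff_len / 2u);   // refill second half
}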
You'd need to detect when you have consumed the entire file, and then stop the DMA (with something like HAL_SAI_DMAStop()).
You might want to read this similar question where I gave a similar answer. They were recording to SD-card rather than playing back, but the same principles apply. They also supplied their actual code for the solution they employed.

How to trim unknown first characters of a string in CodeVision

I set up an ATmega16 (an 8-bit AVR microcontroller) to receive data from the serial port, which is connected to a Bluetooth module (HC-05), in order to obtain a number sent by my Android app. The Android application sends the number in the form of a string array whose maximum length is equal to 10 digits. The problem arises while receiving the data: one or two unknown characters (?) appear at the beginning of the received string. I have to remove these unknown characters from the beginning of the string when they are present.
This problem only occurs with the HC-05; I had no problem when sending numbers from another microcontroller instead of the Android application.
Here is what I send from the mobile:
"430102030405060\r"
and what is received on the serial port of the microcontroller:
"??430102030405060\r"
or
"?430102030405060\r"
Here is the USART receiver interrupt code:
//-------------------------------------------------------------------------
// USART Receiver interrupt service routine
interrupt [USART_RXC] void usart_rx_isr(void)
{
    char status, data;
    status = UCSRA;
    data = UDR;

    if (data == 0x0D)
    {
        puts(ss);
        printf("\r");
        a = 0;
        memset(ss, '\0', sizeof(ss));
    }
    else
    {
        ss[a] = data;
        a += 1;
    }

    if ((status & (FRAMING_ERROR | PARITY_ERROR | DATA_OVERRUN)) == 0)
    {
        rx_buffer[rx_wr_index++] = data;
#if RX_BUFFER_SIZE == 256
        // special case for receiver buffer size=256
        if (++rx_counter == 0) rx_buffer_overflow = 1;
#else
        if (rx_wr_index == RX_BUFFER_SIZE) rx_wr_index = 0;
        if (++rx_counter == RX_BUFFER_SIZE)
        {
            rx_counter = 0;
            rx_buffer_overflow = 1;
        }
#endif
    }
}
//-------------------------------------------------------------------------
//-------------------------------------------------------------------------
How can I remove the extra characters (?) from the beginning of the received data in CodeVision?
You do not need to remove them; just do not pass them on to your processing.
You may either test each data character before putting it into your line buffer (ss), or, after the complete line has been received, look for the first relevant character and only pass the string starting from that position to your processing functions.
Var 1:
BOOL isGarbage(char c){
return c<'0' || c > '9';
}
if (data==0x0D)
{
puts(ss);printf("\r")
a=0;
memset(ss, '\0', sizeof(ss));
} else {
if(!isGarbage(data))
{
ss[a]=data;
a+=1;
}
}
Var 2:
if (data == 0x0D)
{
    const char *actualString = ss;
    while (isGarbage(*actualString))
    {
        actualString++;
    }
    puts(actualString);
    printf("\r");
    a = 0;
    memset(ss, '\0', sizeof(ss));
}
else
{
    ss[a] = data;
    a += 1;
}
However:
Maybe you should try to solve the underlying issue rather than just fixing the symptoms (suppressing the '?' characters).
What is the exact value of the questionable characters? I suspect that '?' is only used to represent non-printable data.
Maybe your interface configuration is wrong, the sender uses software flow control on the line, and the suspicious characters are XON/XOFF bytes.
One additional note:
You may run into trouble if you use more complex functions or even peripheral devices from within your interrupt service routine (ISR).
I would strongly suggest only filling buffers in the ISR and doing everything else in the main loop, triggered by some volatile flags and shared data buffers, as in the sketch below.
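A minimal sketch of that flag-based pattern (the flag and function names are illustrative, not from the original code):

volatile unsigned char line_ready = 0;   // set by the ISR, cleared by main()

// In the ISR, instead of calling puts()/printf(), just mark the line done:
//     if (data == 0x0D) line_ready = 1;

// In main(), do the actual printing/processing outside interrupt context:
while (1)
{
    if (line_ready)
    {
        process_line(ss);   // hypothetical processing function
        line_ready = 0;
    }
}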
Also, I do not see why you are using an additional buffer (ss) in the ISR, since there already is an RX buffer. The implementation looks like a solid RX-receive-buffer implementation that should offer functions to get the buffer contents from within the main loop, so you should not need to add your own code to the ISR.
Additional additional notes:
string array whose maximum length is equal to 10 digits.
I count more digits than that in your example, so I hope your ss array is larger than that. You should also consider that something may go wrong during transmission and you may receive many more characters before the next 0x0D. Currently you would overwrite all of your RAM.
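A minimal bounds check in the ISR would prevent that (assuming ss is a plain char array, so sizeof(ss) is its capacity):

else
{
    if (a < sizeof(ss) - 1)   // leave room for the terminating '\0'
    {
        ss[a] = data;
        a += 1;
    }
    // else: silently drop the character -- the line is already too long
}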

Flushing on UART doesn't work as expected

I need to write a sequence of values (a buffer, ~10 bytes) via UART.
The sequence needs to start with a BREAK delimiter, which in my case I generate by temporarily decreasing the baud rate.
Details about my environment:
Development board: BeagleBone Black.
Linux Kernel version: 3.8.13-bone70.
Serial driver used by the tty discipline: omap-serial.
What I finally get is something like this:
UARTIOHandler->setBaudRate(B9600);
unsigned char breakChar[] = { 0 };
UARTIOHandler->write(breakChar, 1);
UARTIOHandler->setBaudRate(B19200);
UARTIOHandler->write({1, 2, 3, 4, 5, 6, 7, 8, 9, 10});
The write method is implemented this way:
int UARTIOHandler::write(const std::initializer_list<uchar8> &data) {
    uchar8 buffer[data.size()];
    int counter = 0;
    for (auto i : data) {
        buffer[counter++] = i;
    }
    auto output = ::write(this->fd_write, buffer, data.size());
    this->flush();
    return output;
}
And finally the flush() method:
void UARTIOHandler::flush() {
    tcflush(this->fd_write, TCIOFLUSH);
}
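(Aside: per POSIX, tcflush() discards data that has been written but not yet transmitted, while tcdrain() blocks until pending output has actually been sent; tcsendbreak() can also generate a real BREAK condition. A minimal sketch of a draining write, in plain C with a hypothetical fd:)

#include <termios.h>
#include <unistd.h>

// Write a buffer and wait until it has physically left the UART,
// instead of discarding queued data as tcflush(fd, TCIOFLUSH) would.
void uart_write_and_drain(int fd, const unsigned char *buf, size_t len)
{
    if (write(fd, buf, len) == (ssize_t)len)
        tcdrain(fd);
}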
The problem with this code is that the flushing doesn't always work: sometimes the distance between the BREAK and the first byte of data (observed on a scope) is ~500 µs (which is fine for my application), and sometimes it is up to ~3 ms.
EDIT: This is the actual behavior:
For the first five seconds everything works fine (the distance between the BREAK and the rest of the message doesn't exceed ~1 ms); then, after five seconds, there are some frames that exceed this inter-byte timing (by up to ~3 ms).
The code that I posted is always executed, so there's no way that I somehow forget to flush the buffers.
Why do these variations happen?
I have searched for relevant problems and found this; one workaround described there is to use a delay in front of the tcflush(...) call. I can't use this method in my application because it will affect the functionality.
Another comment in that topic suggests that this was a bug in the Linux kernel; could this also be the case here?

Controlling TI OMAP l138 frequency leads to "Division by zero in kernel"

My team is trying to control the frequency of a Texas Instruments OMAP-L138. The default frequency is 300 MHz and we want to raise it to 372 MHz in a "complete" way: we would like not only to change the default value to the desired one (or at least configure it at startup), but also to be able to change the value at run time.
Searching the web for how to do this, we found an article which says that one way is an "echo" command:
echo 372000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
We did some tests with this command and it runs fine, with one problem: sometimes the first call to this echo command leads to a "Division by zero in kernel" error message.
In my personal tests, this error always appeared on the first call to the echo command. All later calls worked without error. If I then reset my processor and call the command again, the same problem occurs: the first call leads to this error and later calls work without problem.
So my questions are: what is causing this problem? And how could I solve it? (Obviously the answer "always type it twice" doesn't count!)
(Feel free to mention other ways of controlling the OMAP-L138's frequency at run time as well!)
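(For reference, the same sysfs write can be done from C; a minimal sketch, assuming the userspace cpufreq governor is active, which the scaling_setspeed file requires:)

#include <stdio.h>

// Write a frequency in kHz to cpu0's scaling_setspeed file.
// Returns 0 on success, -1 on failure.
int set_cpu0_freq_khz(unsigned long khz)
{
    FILE *f = fopen("/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed", "w");
    if (f == NULL)
        return -1;

    int ok = fprintf(f, "%lu\n", khz) > 0;
    fclose(f);
    return ok ? 0 : -1;
}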
It looks to me like you have a division by zero in the davinci_spi_cpufreq_transition() function. Somewhere in this function (or in some function called from davinci_spi_cpufreq_transition()) there is a buggy division operation which divides by a variable that (in your case) has a value of 0. This is obviously an error case which should be handled properly in the code, but in fact it isn't.
It's hard to tell exactly which code leads to this, because I don't know which kernel you are using. It would be much easier if you could provide a link to your kernel repository. Although I couldn't find davinci_spi_cpufreq_transition in the upstream kernel, I found it here.
The davinci_spi_cpufreq_transition() function appears to be in drivers/spi/davinci_spi.c. It calls the davinci_spi_calc_clk_div() function. There are two division operations there. The first is:
prescale = ((clk_rate / hz) - 1);
and the second is:
if (hz < (clk_rate / (prescale + 1)))
One of them is probably causing the "division by zero" error. I propose you trace which one it is by modifying the davinci_spi_calc_clk_div() function as follows (just add the lines marked with "+"):
static void davinci_spi_calc_clk_div(struct davinci_spi *davinci_spi)
{
        struct davinci_spi_platform_data *pdata;
        unsigned long clk_rate;
        u32 hz, cs_num, prescale;

        pdata = davinci_spi->pdata;
        cs_num = davinci_spi->cs_num;
        hz = davinci_spi->speed;
        clk_rate = clk_get_rate(davinci_spi->clk);
+       printk(KERN_ERR "### hz = %u\n", hz);
        prescale = ((clk_rate / hz) - 1);
        if (prescale > 0xff)
                prescale = 0xff;
+       printk(KERN_ERR "### prescale + 1 = %lu\n", prescale + 1UL);
        if (hz < (clk_rate / (prescale + 1)))
                prescale++;
        if (prescale < 2) {
                pr_info("davinci SPI controller min. prescale value is 2\n");
                prescale = 2;
        }
        clear_fmt_bits(davinci_spi->base, 0x0000ff00, cs_num);
        set_fmt_bits(davinci_spi->base, prescale << 8, cs_num);
}
My guess is that it's the "hz" variable which is 0 in your case. If so, you may also want to add the following debug line to the davinci_spi_setup_transfer() function:
        if (!hz)
                hz = spi->max_speed_hz;
+       printk(KERN_ERR "### setup_transfer: setting speed to %u\n", hz);
        davinci_spi->speed = hz;
        davinci_spi->cs_num = spi->chip_select;
With all those modifications made, rebuild your kernel and you will probably get a clue as to why you have that "div by zero" error. Just look for lines starting with "###" in your kernel boot log. If you don't know what to do next, post those debug lines and I will try to help you.

How to read from a Linux serial port

I am working on a robot that has to be controlled over a wireless serial link. The robot runs on a microcontroller (programmed by burning a .hex file onto it), and I want to control it from my Linux (Ubuntu) PC. I am new to serial port programming. I am able to send data, but I am not able to read data.
A few pieces of the code running on the microcontroller:
The function to send data:
void TxData(unsigned char tx_data)
{
    SBUF = tx_data;     // Transmit data that is passed to this function
    while (TI == 0)     // Wait while data is being transmitted
        ;
}
I am sending data through an array of characters data_array[i]:
for (i = 4; i <= 6; i++)
{
    TxData(data_array[i]);
    RI = 0;     // Clear receive interrupt. Must be cleared by the user.
    TI = 0;     // Clear transmit interrupt. Must be cleared by the user.
}
Now the piece of code from the C program running on Linux...
while (flag == 0) {
    int res = read(fd, buf, 255);
    buf[res] = 0;               /* Set end of string, so we can printf */
    printf(":%s:%d\n", buf, res);
    if (buf[0] == '\0')
        flag = 1;
}
It prints out a value of res = 0.
Actually, I want to read the data character by character to perform calculations and make further decisions. Is there another way of doing this?
Note: Is there good study material (code) for serial port programming on Linux?
This is a good guide: Serial Programming Guide for POSIX Operating Systems.
The read() call may return with no data and errno set to EAGAIN (for example, when the port is opened non-blocking). You need to check the return value and loop around to read again if you're expecting data to arrive.
First, take a look at /proc/tty/driver/serial to see that everything is set up correctly (i.e., you see the signals you should see). Then have a look at the manual page for termios(3); you may be interested in the explanation of VMIN and VTIME.
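A minimal sketch of such a setup (the device path, baud rate, and the VMIN/VTIME values are assumptions chosen to illustrate the termios fields):

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open a serial port for blocking reads and set VMIN/VTIME so that read()
   waits for at least one byte, then allows up to 1 s between bytes. */
int open_serial(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);  /* no O_NONBLOCK: blocking reads */
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) {
        close(fd);
        return -1;
    }

    cfmakeraw(&tio);            /* raw mode: no echo, no line editing */
    cfsetispeed(&tio, B9600);   /* assumed baud rate */
    cfsetospeed(&tio, B9600);
    tio.c_cc[VMIN]  = 1;        /* block until at least one byte arrives */
    tio.c_cc[VTIME] = 10;       /* then allow 1 s between bytes (tenths) */

    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}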
