Choppy SDL+OpenGL animation when vsync is on - linux

Uint32 prev = SDL_GetTicks();
while ( true )
{
    Draw();                            // render the frame and swap buffers
    Uint32 now = SDL_GetTicks();
    Uint32 delta = now - prev;         // milliseconds spent on the last frame
    printf( "%u\n" , delta );
    Update( delta / 1000.0f );         // advance the animation by delta seconds
    prev = now;
    ProcessEvents();
}
The application is a simple moving square. My loop looks like the above, and with vsync off the whole thing runs quite smoothly; turning vsync on instead causes jumps in the animation. I've inserted some prints and here's what I've found:
[...]
16
15
16
66 #
2 #
0 #
0 #
16
16
21
[...]
I know there are several issues with this kind of loop but none of them seem to apply to this simple example (am I wrong?). What causes this behavior and how can I overcome it?
I'm using an ATI card on a Linux system, but I'm expecting a portable explanation/solution.

It seems that it was a lack of glFinish(). I've read somewhere that calls to that function are in most cases useless (here or here, for example). Maybe I'm misunderstanding some fundamental concepts, but adding it worked for me, and now the Draw() function ends with:
[...]
    glFinish();
    SDL_GL_SwapBuffers();
}
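For reference, here's a minimal sketch of what the whole Draw() might look like with that fix in place; the clear call and the DrawSquare() helper are placeholders, not code from the question:

void Draw()
{
    glClear( GL_COLOR_BUFFER_BIT );

    DrawSquare();            // hypothetical helper that renders the moving square

    // Wait until the GPU has actually finished the frame before swapping,
    // so SDL_GetTicks() in the main loop measures whole frames instead of
    // just the time needed to queue the GL commands.
    glFinish();
    SDL_GL_SwapBuffers();
}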

Related

FreeRTOS cannot poll input pins when using vTaskDelayUntil()

I'm facing weird behavior with FreeRTOS code, especially when using vTaskDelayUntil() and vTaskDelay().
I'm trying to read an input pin from my PIR sensor, and on the scope I can see that the PIR holds 3.3 V high for at least one second.
The code below only reads my PIR input when I comment out the vTaskDelayUntil() line. As soon as I enable that line, the PINC register always reads 0, even though I'm sure there is 3.3 V on my input pin.
static void TaskStatemachine(void *pvParameters)
{
    (void) pvParameters;

    TickType_t xLastWakeTime;
    const TickType_t xFrequency = 100;

    xLastWakeTime = xTaskGetTickCount();

    for(;;)
    {
        printf("PINC.1 = %d\n", (PINC & (1<<1)) );
        vTaskDelayUntil( &xLastWakeTime, ( xFrequency / portTICK_PERIOD_MS ) );
    }
}
What is happening here?
I changed xFrequency to different values, but without any luck.
As an experiment, simplify the output thus:
putchar( (PINC & (1<<1)) == 0 ? '0' : '1' ) ;
You will then get a continuous stream of 1s or 0s.
If that works both with and without the delay, then it seems likely that the task has too small a stack to support printf(). Try increasing the stack and putting the printf() back in.
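A minimal sketch of what that could look like where the task is created; the stack depth of 512 words, the task name and the priority are assumptions, not values from the question:

// Give the task enough stack for printf()'s formatting machinery.
// 512 words is a guess; tune it and check uxTaskGetStackHighWaterMark()
// at run time to see how much is actually used.
xTaskCreate( TaskStatemachine,          // the task function from above
             "StateMachine",            // name, used for debugging only
             512,                       // stack depth in words, not bytes
             NULL,                      // no parameters
             tskIDLE_PRIORITY + 1,      // priority (assumption)
             NULL );                    // no task handle needed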

Converting 24 bit USB audio stream into 32 bit stream

I'm trying to convert a 24-bit USB audio stream into a 32-bit stream so my microcontroller's peripherals can play happily with it (they can only handle 16- or 32-bit data, like most MCUs...).
The following code is what I got from the MCU vendor... it didn't work as expected and I ended up with really distorted audio.
// Function takes usb stream and processes the data for our peripherals
// #data - usb stream data
// #byte_count - size of stream
void process_usb_stream(uint8_t *data, uint16_t byte_count) {
    // Etc code that gets buffers ready to read the stream...

    // Conversion here!
    int32_t *buffer;
    int sample_count = 0;

    for (int i = 0; i < byte_count; i += 3) {
        buffer[sample_count++] = data[i] | data[i+1] << 8 | data[i+2] << 16;
    }

    // Send buffer to peripherals for them to use...
}
Any help with converting the data from a 24 bit stream to 32 bit stream would be super awesome! This area of work is very hard for me :(
data[...] is a uint8_t. You need to cast that before shifting, because data[...]<<8 and data[...]<<16 are undefined. They'll either be 0 or unchanged, neither of which is what you want.
Also, you need to shift by another 8 bits to get the full range and put the sign bit in the right place.
Also, you're treating the data as if it were in little-endian format. Make sure it is. I'll assume that's correct, so something like this works:
int32_t *buffer;            // must point at storage for at least byte_count/3 samples
int sample_count = 0;

for (int i = 0; i+3 <= byte_count; ) {
    int32_t v = ((int32_t)data[i++]) << 8;
    v |= ((int32_t)data[i++]) << 16;
    v |= ((int32_t)data[i++]) << 24;
    buffer[sample_count++] = v;
}
Finally, note that this assumes that byte_count is divisible by 3 -- make sure that's true!
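Note also that buffer in both snippets is still an uninitialized pointer, carried over from the elided setup code in the question; it has to point at real storage before the loop runs. A self-contained sketch under that assumption, with a hypothetical fixed maximum packet size and the shifts done on an unsigned type so nothing is shifted into the sign bit of a signed int:

#include <stdint.h>

#define MAX_PACKET_BYTES 192                      /* assumption: 64 samples * 3 bytes */

static int32_t sample_buffer[MAX_PACKET_BYTES / 3];

/* Convert packed little-endian 24-bit samples into left-justified 32-bit
   samples; returns the number of samples written to sample_buffer. */
static int convert_24_to_32(const uint8_t *data, uint16_t byte_count)
{
    int sample_count = 0;
    for (uint16_t i = 0; i + 3 <= byte_count; i += 3) {
        uint32_t u = ((uint32_t)data[i]     << 8)  |
                     ((uint32_t)data[i + 1] << 16) |
                     ((uint32_t)data[i + 2] << 24);
        sample_buffer[sample_count++] = (int32_t)u;   /* reinterpret as signed PCM */
    }
    return sample_count;
}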
This is DSP stuff; consider also posting this question on http://dsp.stackexchange.com.
In DSP the process of changing the bit depth is called scaling.
16-bit resolution has 65536 possible values, 24-bit resolution has 16777216 possible values, and 32-bit has 4294967296 values, so the factor between adjacent bit depths is 256.
According to https://electronics.stackexchange.com/questions/229268/what-is-name-of-process-used-to-change-sample-bit-depth/229271, reduction from 24 bit to 16 bit is called scaling down and is done by dividing each value by 256. This can be done with a bitwise right shift by 8:
y = x >> 8;
When scaling down this way the lowest 8 bits are lost.
Scaling up to 32 bit is more complicated and there are several approaches. It may work by multiplying the value by a factor between 2⁰ and 2⁸; for the full-range case of 2⁸, push the 24-bit value into a 32-bit register and left-shift it by 8, so that (in bit notation)
data32[31] = data24[23];
data32[30] = data24[22];
...
data32[8] = data24[0];
and interpolate the low bits you do not get with this (linear interpolation).
Maybe there are much better scaling-up algorithms; ask on http://dsp.stackexchange.com.
See also http://blog.bjornroche.com/2013/05/the-abcs-of-pcm-uncompressed-digital.html for the scaling-up problem...
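In code, the plain shift-by-8 approach (no interpolation of the low bits) boils down to something like the sketch below; audio24 here is assumed to already hold one assembled 24-bit sample in an int32_t, and the names are illustrative, not from the question:

#include <stdint.h>

/* Scale one signed 24-bit sample up to 32-bit: shift it into the top 24 bits,
   the low 8 bits stay zero. The shift is done unsigned to avoid shifting a
   one into the sign bit of a signed type. */
int32_t scale_24_to_32(int32_t audio24)
{
    return (int32_t)((uint32_t)audio24 << 8);
}

/* Scaling down again is the reverse shift (y = x >> 8); the low 8 bits are lost. */
int32_t scale_32_to_24(int32_t audio32)
{
    return audio32 >> 8;     /* arithmetic shift on virtually all compilers */
}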

Flushing on UART doesn't work as expected

I need to write a sequence of values (a buffer of ~10 bytes) via UART.
This sequence needs to start with a BREAK delimiter, and in my case I need to decrease the baud rate to a lower value.
Details about my environment:
Development board: BeagleBone Black.
Linux Kernel version: 3.8.13-bone70.
Serial driver used by the tty discipline: omap-serial.
What I finally get is something like this:
UARTIOHandler->setBaudRate(B9600);
unsigned char breakChar[] = { 0 };
UARTIOHandler->write(breakChar, 1);
UARTIOHandler->setBaudRate(B19200);
UARTIOHandler->write({1, 2, 3, 4, 5, 6, 7, 8, 9, 10});
The write method is implemented this way:
int UARTIOHandler::write(const std::initializer_list<uchar8> &data) {
    uchar8 buffer[data.size()];
    int counter = 0;
    for(auto i : data) {
        buffer[counter++] = i;
    }
    auto output = ::write(this->fd_write, buffer, data.size());
    this->flush();
    return output;
}
And finally the flush() method:
void UARTIOHandler::flush() {
    tcflush(this->fd_write, TCIOFLUSH);
}
The problem with this code is that the flushing doesn't always work: sometimes the distance between the BREAK and the first byte of data (observed on a scope) is ~500 us, which is fine for my application, and sometimes it is up to ~3 ms.
EDIT: This is the actual behavior:
For the first five seconds everything works fine (the distance between the BREAK and the rest of the message doesn't exceed ~1 ms); then, after five seconds, some frames exceed this inter-byte timing (by up to ~3 ms).
The code I posted is always the code that executes, so there's no way I'm somehow forgetting to flush the buffers.
Why do these variations happen?
I have searched for relevant problems and found this; one workaround described there is to add a delay before the tcflush(...) call. I can't use that approach in my application because it would affect the functionality.
Another comment in that thread suggests that this was a bug in the Linux kernel; could that also be the case here?

Controlling TI OMAP l138 frequency leads to "Division by zero in kernel"

My team is trying to control the frequency of a Texas Instruments OMAP L138. The default frequency is 300 MHz and we want to raise it to 372 MHz in a "complete" way: we would like not only to change the default value to the desired one (or at least configure it at startup), but also to be able to change the value at run time.
Searching the web for how to do this, we found an article saying that one way to do it is with an "echo" command:
echo 372000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
We did some tests with this command and it works fine, with one problem: sometimes the first call to this echo command leads to an error message, "Division by zero in kernel":
In my personal tests, this error always appeared on the first call to the echo command; all later calls worked without error. If I then reset my processor and call the command again, the same problem occurs: the first call leads to this error and later calls work without problem.
So my questions are: what is causing this problem? And how could I solve it? (Obviously the answer "always type it twice" doesn't count!)
(Feel free to mention other ways of controlling the OMAP L138's frequency at run time as well!)
Looks to me like you have a division by zero in the davinci_spi_cpufreq_transition() function. Somewhere in this function (or in some function called from davinci_spi_cpufreq_transition) there is a buggy division operation that tries to divide by a variable which (in your case) has a value of 0. This is obviously an error case that should be handled properly in the code, but in fact it isn't.
It's hard to tell which code exactly leads to this, because I don't know which kernel you are using; it would be much easier if you could provide a link to your kernel repository. I couldn't find davinci_spi_cpufreq_transition in the upstream kernel, but I found it here.
The davinci_spi_cpufreq_transition() function appears to be in drivers/spi/davinci_spi.c. It calls the davinci_spi_calc_clk_div() function, which contains two division operations. The first is:
prescale = ((clk_rate / hz) - 1);
And second is:
if (hz < (clk_rate / (prescale + 1)))
One of them is probably causing the "division by zero" error. I propose you trace which one it is by modifying the davinci_spi_calc_clk_div() function as follows (just add the lines marked with "+"):
static void davinci_spi_calc_clk_div(struct davinci_spi *davinci_spi)
{
        struct davinci_spi_platform_data *pdata;
        unsigned long clk_rate;
        u32 hz, cs_num, prescale;

        pdata = davinci_spi->pdata;
        cs_num = davinci_spi->cs_num;
        hz = davinci_spi->speed;
        clk_rate = clk_get_rate(davinci_spi->clk);

+       printk(KERN_ERR "### hz = %u\n", hz);
        prescale = ((clk_rate / hz) - 1);
        if (prescale > 0xff)
                prescale = 0xff;

+       printk("### prescale + 1 = %u\n", prescale + 1UL);
        if (hz < (clk_rate / (prescale + 1)))
                prescale++;

        if (prescale < 2) {
                pr_info("davinci SPI controller min. prescale value is 2\n");
                prescale = 2;
        }

        clear_fmt_bits(davinci_spi->base, 0x0000ff00, cs_num);
        set_fmt_bits(davinci_spi->base, prescale << 8, cs_num);
}
My guess is that it's the hz variable that is 0 in your case. If so, you may also want to add the following debug line to the davinci_spi_setup_transfer() function:
        if (!hz)
                hz = spi->max_speed_hz;
+       printk(KERN_ERR "### setup_transfer: setting speed to %u\n", hz);

        davinci_spi->speed = hz;
        davinci_spi->cs_num = spi->chip_select;
With all those modifications made, rebuild your kernel and you will probably get a clue as to why you have that "div by zero" error. Just look for lines starting with "###" in your kernel boot log. In case you don't know what to do next, attach those debug lines and I will try to help you.

Explanation of fuzz and flat in input_absinfo struct in input.h

I'm trying to tweak the sensitivity of a joystick which does not work correctly with SDL, using the EVIOCSABS call from input.h. I think that the fuzz and flat members of the input_absinfo struct affect the sensitivity of the axes, but after not a few shots in the dark I'm still feeling stumped as to how they work exactly. I am hoping someone can point me in the right direction.
Thank you for considering my question! Here's the code I have written in my Joystick class:
int Joystick::configure_absinfo(int axis, int fuzz, int flat)
{
    struct input_absinfo jabsx;
    int result_code = ioctl(joystick_fd, EVIOCGABS(axis), &jabsx);
    if (result_code < 0)
    {
        perror("ioctl GABS failed");
    }
    else
    {
        jabsx.fuzz = fuzz;
        jabsx.flat = flat;
        result_code = ioctl(joystick_fd, EVIOCSABS(axis), &jabsx);
        if (result_code < 0)
        {
            perror("ioctl SABS failed");
        }
    }
    return result_code;
}
Regarding the fuzz value, it seems to be a value used for absolute input devices. Looking at the documentation of input_absinfo in input.h
(link to input.h at lxr.linux.no)
you can find:
fuzz: specifies fuzz value that is used to filter noise from the event stream.
This means that the input system in Linux will drop events generated by the device driver if the difference from the last value is smaller than the fuzz. This is done in the input layer.
The flat value determines the size of the dead zone (source: https://wiki.archlinux.org/index.php/Gamepad#evdev_API_deadzones). Good documentation for fuzz is far harder to find, but the best I can find are these docs:
Filtering (fuzz value): Tiny changes are not reported to reduce noise
So it would seem that any changes smaller than fuzz should be filtered out / ignored.
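As a usage sketch, one might call the configure_absinfo() method above with values chosen relative to the axis range; the percentages and the choice of ABS_X are illustrative assumptions, not documented recommendations:

#include <linux/input.h>

// Assumes 'joystick' is an instance of the Joystick class from the question
// and that the axis spans a typical 16-bit range of -32768..32767.
int range = 32767 - (-32768);                 // 65535

joystick.configure_absinfo(ABS_X,
                           range / 100,       // fuzz: ignore ~1% jitter
                           range / 10);       // flat: ~10% dead zone around center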
