Arduino - Use serial print in the background - multithreading

I would like to know if it would be somehow possible to handle a Serial.println() on an Arduino Uno without holding up the main program.
Basically, I'm using the Arduino to charge a 400 V capacitor. The main program opens and closes the gate of a MOSFET for 15 and 20 microseconds respectively. I also have a voltage divider connected to the capacitor, which I use to measure the voltage on the capacitor while it's being charged by the Arduino. I use analogRead() to get the raw value on the pin, multiply the value by the required ratio, and try to print that value to the serial console at the end of each cycle. The problem, though, is that for the capacitor to charge quickly the delays need to be very small (in the range of microseconds), and the serial print command takes much longer than that to execute, so it holds up the entire program.
My question therefore is whether it would somehow be possible to get the command to execute on something like a different "thread" without holding up the main cycle. Is the 8-bit AVR capable of something like this?
Thanks

Arduino does not support multithreading, but there are a few libraries that allow you to do the equivalent:
Arduino Multi-Threading Library
ArduinoThread
For more information, you can also visit this question: Does Arduino support threading?

The basic Arduino serial print functions are blocking. They watch the TX-ready flag, then load the next byte to transmit. This means that if you send "Hello World", the print function will block for as long as it takes the UART to send those 11 characters at your selected baud rate.
One way to deal with this is to use a high baud rate so you don't wait long.
The better way is to use interrupt-driven transmit. This way your string of data is buffered, and each character is sent when the UART can accept it. The call to send data just loads the TX buffer, loads the first character into the UART to start the process, and then returns.
This buffered approach will still block if you fill the TX buffer; the send method then has to wait for room before loading more into the buffer. A good serial library might give you an easy way to check the buffer status, so you could implement your own extra buffering.
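Newer Arduino cores expose exactly that check as Serial.availableForWrite(). As a rough sketch of how it could be used here (the message size and analog pin are examples, not from the question):

// Only print when the whole message fits in the outgoing buffer, so
// Serial.println() returns immediately and never blocks the
// time-critical charging loop.
const int MSG_MAX = 16;   // assumed worst-case length of one report

void setup() {
  Serial.begin(115200);   // a high baud rate also shortens any wait
}

void loop() {
  // ... time-critical MOSFET gate toggling would go here ...

  int raw = analogRead(A0);             // example input pin

  if (Serial.availableForWrite() >= MSG_MAX) {
    Serial.println(raw);                // queued; the TX interrupt drains it
  }
  // else: skip this report and try again next cycle
}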
Here is a library that claims to be interrupt driven. I've not used it myself. Let us know if it works for you.
https://code.google.com/p/arduino-buffered-serial/

I found this post: "Serial output is now interrupt driven. Serial.print() just stuffs the data in a buffer. Interrupts have to happen for that data to actually be shifted out." (posted Nov 11, 2012). I have found that to be true, BUT if you try to print more than the internal serial buffer will hold, it will block until the buffer is at least partially emptied.

Related

Is interrupt jitter causing the annoying wobble in audio using the MCU's DAC?

I had an assignment for college where we needed to play a precompiled WAV (as an integer array) through PWM and the DAC. Now, I wanted more of a challenge, so I went out of my way and created an audio DAC over USB using the microcontroller in question: the STM32F051. It basically listens to my sound card output using a WASAPI loopback recorder, changes the resolution from 16 to 12 bits (since the DAC on the STM32 only has 12-bit resolution) and sends it over USART using 10x the sample rate as the baud rate (in my case 960000). All done in C#.
On the microcontroller I simply use an interrupt for the USART and push the received data to the DAC.
It works pretty well, much better than PWM, and at a decent sample frequency of 48 kHz.
But... here it comes: when there is some (mostly) high-pitched symphonic melody, it starts to sound "wobbly".
Here a video where you can hear it: https://youtu.be/xD3uTP9etuA?t=88
I read up on the internet a bit about DIY DACs, and someone somewhere (I don't remember where) mentioned that MCUs in general have interrupt jitter. So my basic question is: is interrupt jitter actually causing this? If so, are there ways to limit the jitter?
Or is this something entirely different?
I am thinking of trying to compact the PCM data sent over serial. (As said before, the resolution is 12 bits, but samples are sent in packets of two 8-bit bytes forming 16 bits, hence the baud rate being twice the sample rate. My plan is to shift 12 bits to the MSB and add four bits of the next 12-bit value to the current 16-bit variable, so only 12 transfers are needed instead of 16 per 8 samples. I might read up on more efficient ways of compacting data for transport.) Then I would put the samples in a buffer and use another timer that triggers at 48 kHz to send the samples to the DAC. Would this concept work? Or would I just be wasting my time?
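To make the packing idea concrete, here is a rough standalone sketch of what I mean (values and names are made up; two 12-bit samples squeezed into three bytes instead of four):

#include <cstdint>
#include <cstdio>

// Pack two 12-bit samples into 3 bytes (24 bits) instead of the
// 4 bytes used when each sample travels as two 8-bit transfers.
void pack12(uint16_t s0, uint16_t s1, uint8_t out[3]) {
    s0 &= 0x0FFF;                                        // keep 12 bits
    s1 &= 0x0FFF;
    out[0] = (uint8_t)(s0 >> 4);                         // top 8 bits of s0
    out[1] = (uint8_t)(((s0 & 0x0F) << 4) | (s1 >> 8));  // low 4 of s0 + top 4 of s1
    out[2] = (uint8_t)(s1 & 0xFF);                       // low 8 bits of s1
}

int main() {
    uint8_t buf[3];
    pack12(0xABC, 0x123, buf);
    printf("%02X %02X %02X\n", buf[0], buf[1], buf[2]);  // prints: AB C1 23
    return 0;
}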
For code, here is the project: https://github.com/EldinZenderink/SoundOverSerial

What is the best way to periodically sample GPIO pins in Linux?

I'd like to sample a signal which is generated by the pins of my Raspberry Pi. I have found that high sample rates are difficult to achieve.
First I did a quick approach with Python (super slow). Then I changed to ANSI C plus the bcm2835.h library and gained significant performance.
Now I am asking myself: what is the best way to do this sampling under Linux?
My attempts were undertaken in user space. But what about switching to kernel space? I could write a simple character-device kernel module in which the pins are periodically checked; if the state changes, some information is put into a buffer. This I/O buffer is then polled by a synchronous file read from an application in user space. The best solution for me would be if the pin checking could be done at a fixed frequency (the sample period should be constant for signal processing).
The setup for this could be:
#kernel: character module + kernel thread + gpio device tree interface + DSP at constant sampling time
#user space: i/o app reading synchronous from character module
Ideas/tips?
I have a solution for you.
I have written such a module:
https://github.com/Appyx/gpio-reflect
You can synchronously read any signal from the GPIO pins.
You can use the output and calculate the signal with your sampling rate.
Just divide the periods.
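To give a feel for the user-space side, a minimal synchronous reader might look like the sketch below (the device path /dev/gpio-reflect is a placeholder; check the module's README for the real node and data format):

#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

// Minimal synchronous reader: read() blocks until the kernel module
// has buffered new data, so the loop wakes up exactly when it arrives.
int main() {
    int fd = open("/dev/gpio-reflect", O_RDONLY);   // placeholder path
    if (fd < 0) { perror("open"); return 1; }

    char buf[256];
    ssize_t n;
    while ((n = read(fd, buf, sizeof(buf) - 1)) > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);                         // raw period data
    }
    close(fd);
    return 0;
}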

Bit Bang with SPI (fwrite, write performance)

I have bit-bang code that allows me to send about 4 MB of data through SPI lines. It's embedded code for custom hardware running a Linux kernel.
The problem is that it takes a VERY long time (4 hours), most likely because the kernel is doing other stuff. Basically my code is something like this (approx.):
unsigned char data = 0xFF;
int i;

BB_SPI_Init();
SPI_start();    /* activates chip select (enable) */

for (i = 0; i < 8; i++) {
    /* put the current MSB on the MOSI line */
    if (data & 0x80) {
        gpio_set_value(SPI_MOSI, 1);
    } else {
        gpio_set_value(SPI_MOSI, 0);
    }
    /* send clock pulse */
    gpio_set_value(SPI_CLK, 0);
    gpio_set_value(SPI_CLK, 1);
    /* move the next bit into the MSB position */
    data <<= 1;
}

SPI_stop();     /* deactivates chip select (disable) */
So it's a very simple bit bang, but I noticed that if I use write() to send data to the Linux GPIO handler /sys/class/gpio/gpioXX/value (where XX is any GPIO number), it takes 4 hours.
But if I use fwrite() to send to the same device, it takes 3 hours.
BUT, if I use write() only for the enables (SPI_stop() and SPI_start()) and fwrite() for sending to MOSI and CLK, it only takes 1 hour and 30 minutes.
So, with that as a base, could someone explain to me how that is happening? My guess is that it is the way the threads are handled, and that every software cycle resolves two threads (fwrite() and write()) instead of only one of the functions being used, but I'm still investigating. Can someone give me any information? Is there a better way to handle this?
FYI: I can't use the kernel SPI driver because the hardware is connected to GPIOs and it is a mandatory requirement to use bit bang, but I accept any suggestion.
Thanks in advance
EDIT
Hey guys, thanks for your comments. It turns out I had a (very dumb) problem: I was creating the file descriptor every time I was about to send data to /sys/class/gpio/gpioXX/value, which is why it was so slow. I also turned off some other programs, and the transfer skyrocketed to 3 minutes instead of 1 hour 30 minutes (with write()). Thanks, and sorry about it.
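In case it helps anyone, the fix boils down to opening each value file once and reusing the descriptor inside the loop. A rough sketch (the GPIO numbers are just examples):

#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

int main() {
    // Open each value file ONCE, outside the loop; re-opening it for
    // every bit was the bottleneck described above.
    int mosi = open("/sys/class/gpio/gpio23/value", O_WRONLY); // example pin
    int clk  = open("/sys/class/gpio/gpio24/value", O_WRONLY); // example pin
    if (mosi < 0 || clk < 0) { perror("open"); return 1; }

    unsigned char data = 0xFF;
    for (int i = 0; i < 8; i++) {
        write(mosi, (data & 0x80) ? "1" : "0", 1);  // descriptor reused
        write(clk, "0", 1);                         // clock pulse
        write(clk, "1", 1);
        data <<= 1;
    }

    close(mosi);
    close(clk);
    return 0;
}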
I think that the spi-bitbang driver is the best solution if you are looking for performance. Doing the bit-bang from user space is painful because you have at least 3 system calls for each bit of data, and a system call is an expensive operation.
"FYI: I can't use the kernel SPI driver because the hardware is connected to GPIOs and it is a mandatory requirement to use bit bang, but I accept any suggestion."
That's why the spi-bitbang driver exists. You can easily configure the spi-bitbang driver to work with your GPIOs.
Then, once you have a spi-bitbang driver, you can write a char device that accepts your entire block of data as input and transfers it in kernel space. With this solution you will get the maximum performance for a bit-banged interface.
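To sketch what the user-space side could look like once the bitbang master is registered and exposed through the standard spidev char device (the device node below is an assumption; it depends on your board setup), a whole block goes out in a single system call:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/spi/spidev.h>

int main() {
    // Assumed node; the actual name depends on how the master and
    // chip select are registered on your board.
    int fd = open("/dev/spidev0.0", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    static uint8_t tx[4096];            // one block, one system call
    memset(tx, 0xFF, sizeof(tx));

    struct spi_ioc_transfer tr;
    memset(&tr, 0, sizeof(tr));
    tr.tx_buf = (unsigned long)tx;
    tr.len = sizeof(tx);
    tr.speed_hz = 500000;               // example clock request
    tr.bits_per_word = 8;

    // The kernel clocks out the whole buffer; no per-bit syscalls.
    if (ioctl(fd, SPI_IOC_MESSAGE(1), &tr) < 0)
        perror("SPI_IOC_MESSAGE");

    close(fd);
    return 0;
}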

Arduino Random Characters sent to Bluetooth TX during Code Upload

When I upload code to my Arduino while the TX and RX pins are connected to my HC-05 module, a bunch of random characters are sent to the TX buffer, and when I connect to a device, those characters are sent and mess up communication. Is there a way that I can clear this buffer after uploading the code? I've just been disconnecting the wires whenever I upload, but I'd like to find an easier way. Thanks!
Well, if you use a serial port both to send data and to program the board, of course you will see the programming traffic on the other side of the BT link... Possible solutions:
disconnect the BT module every time you want to program the Arduino
shut down the other BT device (or just disconnect it) when you have to program the Arduino
shut down the HC-05 (or keep it in reset state) until the Arduino says that it is communicating (so use a GPIO to control the reset pin, or a transistor to power the BT module on at the beginning of the program)
use a 3-state driver between the HC-05 and the Arduino serial ports (one driver for TX and one for RX) and activate its outputs at the beginning of the Arduino program.
I don't like djUniversal's solution, because you cannot control what the PC transmits; if, for instance, you decide to use the byte 0xAA to signal the start of the transmission, then whenever the PC sends 0xAA the other device thinks that the Arduino is transmitting. Choosing a longer byte sequence helps, because the sequence becomes less probable, but.....
Moreover, you would have to send it with EVERY command, not just at the beginning, because you have to reset the Arduino to program it (and so the other device is not aware of WHEN to stop considering the data).
The only other way around it is to send a header of maybe a couple of bytes each time you send a message. The other program can wait for these characters before it starts to take commands. Until those characters are read from the buffer, you would just do a Serial.read() loop to get rid of the garbage.
Also, if garbage characters would screw up your program really badly, you might want to think about adding some kind of crude checksum as well, to confirm correct transmission.
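A rough sketch of that idea (the header bytes are arbitrary examples, and handleCommand() is a placeholder for the real parser):

// Discard the boot-time garbage until a two-byte sync header arrives,
// then treat everything after it as real commands.
const uint8_t SYNC0 = 0xA5;   // example header bytes
const uint8_t SYNC1 = 0x5A;

bool synced = false;
uint8_t prev = 0;

void handleCommand(uint8_t cmd) {
  // placeholder for the application's real command handling
}

void setup() {
  Serial.begin(9600);
}

void loop() {
  while (Serial.available() > 0) {
    uint8_t b = Serial.read();
    if (!synced) {
      // flush garbage until the header pair shows up back to back
      if (prev == SYNC0 && b == SYNC1) synced = true;
      prev = b;
    } else {
      handleCommand(b);
    }
  }
}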
Need help coding? Let me know.

Serial port programming - Recognize end of received data

I am writing a serial port application in VC++, in which I can open a port on a switch device, send some commands and display their output. I run a thread which continuously reads the open port for the output of the given command. My main thread waits until the read completes, but the problem is: how do I recognize that the command output has ended, so that I can signal the main thread?
Almost any serial port communication requires a protocol: some way for the receiver to discover that a response has been received in full. A very simple one uses a unique byte or character that can never appear in the rest of the data. A linefeed is the standard choice, used by any modem for example.
This needs to get more elaborate when you need to transfer arbitrary binary data. A common solution is to send the length of the response first; the receiver can then count down the received bytes to know when it is complete. This is often embellished with a specific start byte value so that the receiver has a chance to re-synchronize with the transmitter, and often includes a checksum or CRC so that the receiver can detect transmission errors. A further embellishment is to make errors recoverable with ACK/NAK responses from the receiver. You'd then be well on your way to re-inventing TCP. The RATP protocol in RFC 916 is a good example, albeit widely ignored.
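As an illustration of the length-prefix approach, here is a minimal framing sketch; the start byte, length field, and XOR checksum are example choices, not a standard:

#include <cstdint>
#include <cstddef>
#include <vector>

// Example frame: [0x7E][len][payload bytes...][xor checksum].
// Feed received bytes one at a time; returns true when a full,
// checksum-valid frame has been assembled into 'payload'.
class FrameParser {
public:
    bool feed(uint8_t b, std::vector<uint8_t>& payload) {
        switch (state_) {
        case WAIT_START:
            if (b == 0x7E) state_ = WAIT_LEN;      // re-sync on start byte
            break;
        case WAIT_LEN:
            len_ = b;
            buf_.clear();
            sum_ = 0;
            state_ = (len_ == 0) ? WAIT_SUM : WAIT_DATA;
            break;
        case WAIT_DATA:
            buf_.push_back(b);
            sum_ ^= b;
            if (buf_.size() == len_) state_ = WAIT_SUM;
            break;
        case WAIT_SUM:
            state_ = WAIT_START;                   // frame done either way
            if (b == sum_) { payload = buf_; return true; }
            break;                                 // bad checksum: drop it
        }
        return false;
    }
private:
    enum { WAIT_START, WAIT_LEN, WAIT_DATA, WAIT_SUM } state_ = WAIT_START;
    size_t len_ = 0;
    uint8_t sum_ = 0;
    std::vector<uint8_t> buf_;
};

The read thread can push every received byte into feed() and signal the main thread whenever it returns true.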
