It's a weird scenario. We use an x86 SoC as the host and an STM8S MCU as the target. They are connected with two wires between GPIOs (SoC side) and the RST and SWIM pins (MCU side), so we can reflash the MCU firmware over the SWIM protocol.
Here is the problem. After we enter SWIM mode (the entry sequence is just toggling the GPIO high/low at low frequency), we need to send data to the MCU. But the protocol requires sending data at high frequency (a 16 MHz clock). I have tried a simple GPIO loop with no delay:
int i;

for (i = 0; i < 10; i++) {
    gpio_set_value(SWIM, 0);   /* drive the SWIM line low */
    gpio_set_value(SWIM, 1);   /* drive the SWIM line high */
}
But on the oscilloscope each edge is delayed by about 2 µs, both rising and falling. I think it may be an I/O speed limit or some other system overhead. According to the SWIM protocol, the minimum pulse width for sending data is about 0.25 µs.
Is there any way to speed up the GPIO operations, or is there some other method to meet this timing requirement?
BTW: we are running Linux on the SoC.
I'm developing an RF modem based on a new protocol, which streams 96-byte frames back to back until the communication ends. I plan to use two 96-byte buffers in the STM32; the next paragraphs explain why.
I want to send the first 96-byte frame over USB-CDC to the STM32. The external modem chip will then generate a 9600 bps clock, and the STM32 has to write the payload bit by bit on a specified output pin (at the trailing edge of each clock pulse).
When the STM32 notices it has sent half of a 96-byte frame, it sends the PC a notification to send more data, and the PC immediately refills the second 96-byte buffer over USB-CDC. When the STM32 finishes sending the first buffer, it immediately starts sending the second buffer's contents. When it has sent half of the second buffer, it asks the PC for another 96-byte frame, as before.
This continues until the PC sends a command to stop transmission.
This transfer mode is serial, driven by an external "trigger clock".
Is this possible using DMA, and how could I set it up?
I want to use DMA so that I can keep using USB while already streaming data to the radio modem chip. Is this the right approach?
I'm working on an open-source radio communication system project with both packet and stream capabilities and digital voice, and I'm designing the electronics for the PC radio modem. The project is called M17 and is maintained by Wojtek SP5WWP.
Re. general architecture: the serial communication over USB ACM does not have to use buffers of the same size, nor be synchronized with the downstream communication over SPI. You could use buffers as big as practically possible so the PC can send data in advance. This reduces the chance of buffer underflow if the PC does not provide data fast enough. Use a circular buffer and fill it when a packet arrives from USB.
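As a minimal sketch (the size and names are illustrative, not from the question), such a circular buffer on the STM32 side could look like this:

#include <stdint.h>

#define RB_SIZE 1024            /* "as big as practically possible" */

static uint8_t  rb[RB_SIZE];
static volatile uint16_t head;  /* advanced by the USB receive path */
static volatile uint16_t tail;  /* advanced by the SPI/DMA refill code */

static int rb_put(uint8_t b)    /* called per byte arriving over USB-CDC */
{
    uint16_t next = (head + 1) % RB_SIZE;
    if (next == tail)
        return -1;              /* full: the PC is too far ahead */
    rb[head] = b;
    head = next;
    return 0;
}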
DMA is the right approach. Although people often say that DMA is only necessary for high-bandwidth operations, it can actually be easier to work with DMA than to handle an interrupt for every byte, even when you only handle 9600 bits per second.
The DMA controller in the STM32F3 has a Half-Transfer Complete bit (HTIF in DMA_ISR) that you can poll, or have it generate an interrupt. In conjunction with the Transfer Complete status (TCIF) and the Circular bit (CIRC in DMA_CCR), you can organize a double-buffered data pipe so that transfers overlap with whatever else the MCU is doing. The application reloads the first half of the DMA buffer on the HTIF event; when the TCIF event happens, it reloads the second half. This has to be done quickly, before the other half is also consumed. However, you only need a double-buffered pipeline when you have to stream data continuously, i.e. when the overall amount is larger than the DMA buffer.
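A minimal sketch of that reload, assuming CMSIS-style register names on an STM32F3 and, as an arbitrary illustration, SPI1_TX on DMA1 channel 3; refill() is a hypothetical helper that copies bytes out of the USB-fed circular buffer:

#include "stm32f3xx.h"

static uint8_t dma_buf[192];              /* two 96-byte halves */
extern void refill(uint8_t *dst, int n);  /* hypothetical: copy n bytes from the ring */

void DMA1_Channel3_IRQHandler(void)
{
    if (DMA1->ISR & DMA_ISR_HTIF3) {      /* first half has been sent */
        DMA1->IFCR = DMA_IFCR_CHTIF3;     /* acknowledge the event */
        refill(&dma_buf[0], 96);          /* reload it while the second half drains */
    }
    if (DMA1->ISR & DMA_ISR_TCIF3) {      /* second half has been sent */
        DMA1->IFCR = DMA_IFCR_CTCIF3;
        refill(&dma_buf[96], 96);
    }
}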
Stopping a circular DMA may be tricky. I suppose both the STM32 and the external chip know how many bytes to send; in that case, after that amount has been transferred, disable the DMA.
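With the same hypothetical channel as above, disabling it is a single register write:

DMA1_Channel3->CCR &= ~DMA_CCR_EN;   /* stop the circular transfer */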
It seems you need a slave SPI in the STM32, as the external chip generates the SPI clock.
DMA is not difficult to set up, but it needs several things configured together to work properly. I assume register-level programming; if you use some kind of framework, you'll need to find out how it implements these features:
- Enable the clocks for SPI, for the GPIO port with the SPI pins, and for the DMA controller, and configure the pins as AF.
- Find the right DMA channel for the SPI peripheral. SPI DMA usually needs two channels, TX and RX, but with a slave SPI you may get away with one.
- Configure the SPI, paying attention to clock polarity and phase, and set it to generate a DMA request for each TX and/or RX.
- Point the DMA channel's CPAR register at the SPI DR register and program all the other DMA channel registers appropriately, then enable the channel(s).
- Enable the SPI in slave mode.
When the SPI master clocks data on the MOSI/SCK pins, the DMA controller will put it in memory. When the buffer is half-full and completely full, the channel sets the HTIF and TCIF bits and generates an interrupt, if you told it to. Use these events to implement flow control.
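For the asker's direction (the STM32 streaming out to the modem), here is a compressed sketch of those steps, again assuming STM32F3 CMSIS names, SPI1 pins on PA4-PA7, the F3 mapping of SPI1_TX to DMA1 channel 3, and the 192-byte double buffer from the sketch above (pin and AF programming is elided):

RCC->AHBENR  |= RCC_AHBENR_DMA1EN | RCC_AHBENR_GPIOAEN;  /* clocks: DMA + pin port */
RCC->APB2ENR |= RCC_APB2ENR_SPI1EN;                      /* clock: SPI1 */
/* ...configure PA4-PA7 as AF5 via GPIOA->MODER / GPIOA->AFR here... */

SPI1->CR2 |= SPI_CR2_TXDMAEN;                  /* DMA request per free TX slot */

DMA1_Channel3->CPAR  = (uint32_t)&SPI1->DR;    /* peripheral side: SPI data register */
DMA1_Channel3->CMAR  = (uint32_t)dma_buf;      /* memory side: the double buffer */
DMA1_Channel3->CNDTR = sizeof dma_buf;
DMA1_Channel3->CCR   = DMA_CCR_DIR | DMA_CCR_MINC | DMA_CCR_CIRC   /* mem-to-periph, circular */
                     | DMA_CCR_HTIE | DMA_CCR_TCIE | DMA_CCR_EN;   /* half/full interrupts */
NVIC_EnableIRQ(DMA1_Channel3_IRQn);

SPI1->CR1 |= SPI_CR1_SPE;                      /* slave mode is the reset default */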
I'm implementing a protocol over serial ports on Linux. The protocol is based on a request/answer scheme, so the throughput is limited by the time it takes to send a packet to a device and get an answer. The devices are mostly ARM-based and run Linux >= 3.0. I'm having trouble reducing the round-trip time below 10 ms (115200 baud, 8 data bits, no parity, 7 bytes per message).
Which IO interfaces will give me the lowest latency: select, poll, epoll, or polling by hand with ioctl? Does blocking or non-blocking IO impact latency?
I tried setting the low_latency flag with setserial, but it seemed to have no effect.
Are there any other things I can try to reduce latency? Since I control all the devices, it would even be possible to patch the kernel, but I'd prefer not to.
---- Edit ----
The serial controller used is a 16550A.
Request/answer schemes tend to be inefficient, and it shows quickly on a serial port. If you are interested in throughput, look at windowed protocols, like the Kermit file-sending protocol.
Now, if you want to stick with your protocol and reduce latency: select, poll, and read will all give you roughly the same latency, because, as Andy Ross indicated, the real latency is in the hardware FIFO handling.
If you are lucky, you can tweak the driver behaviour without patching, but you still need to look at the driver code. However, having the ARM handle a 10 kHz interrupt rate will certainly not be good for overall system performance...
Another option is to pad your packets so that you hit the FIFO threshold every time. It will also confirm whether or not it is a FIFO threshold problem.
10 ms at 115200 baud is enough to transmit about 100 bytes (assuming 8N1), so what you are seeing is probably because the low_latency flag is not set. Try
setserial /dev/<tty_name> low_latency
It will set the low_latency flag, which is used by the kernel when moving data up in the tty layer:
void tty_flip_buffer_push(struct tty_struct *tty)
{
    unsigned long flags;
    spin_lock_irqsave(&tty->buf.lock, flags);
    if (tty->buf.tail != NULL)
        tty->buf.tail->commit = tty->buf.tail->used;
    spin_unlock_irqrestore(&tty->buf.lock, flags);

    if (tty->low_latency)
        flush_to_ldisc(&tty->buf.work);
    else
        schedule_work(&tty->buf.work);
}
The schedule_work call might be responsible for the 10 msec latency you observe.
Having talked to some more engineers about the topic, I came to the conclusion that this problem is not solvable in user space. Since we need to cross the bridge into kernel land, we plan to implement a kernel module which talks our protocol and gives us latencies < 1 ms.
--- edit ---
Turns out I was completely wrong. All that was necessary was to increase the kernel tick rate. The default 100 Hz tick added the 10 ms delay; 1000 Hz and a negative nice value for the serial process give me the timing behavior I wanted to reach.
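For reference, the corresponding mainline kernel configuration option is

CONFIG_HZ_1000=y

and a hypothetical way to run the process with a negative nice value (the program name is just an example) is

nice -n -10 ./serial_app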
Serial ports on Linux are "wrapped" into Unix-style terminal constructs, which hits you with one tick of lag, i.e. 10 ms. Try whether stty -F /dev/ttySx raw low_latency helps; no guarantees though.
On a PC, you can go hardcore and talk to standard serial ports directly: issue setserial /dev/ttySx uart none to unbind the Linux driver from the serial port hardware, and control the port via inb/outb on the port registers. I've tried that, and it works great.
The downside is that you don't get interrupts when data arrives, so you have to poll the registers. Often.
You should be able to do the same on the ARM device side; it may be much harder on exotic serial port hardware.
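On the PC side, a hypothetical sketch of that polling loop for a 16550A at the legacy COM1 base (requires root, ioperm(), and x86):

#include <stdio.h>
#include <sys/io.h>

#define BASE 0x3F8          /* COM1 I/O base */
#define RBR  (BASE + 0)     /* receive buffer register */
#define LSR  (BASE + 5)     /* line status register */

int main(void)
{
    if (ioperm(BASE, 8, 1)) { perror("ioperm"); return 1; }
    for (;;) {
        while (!(inb(LSR) & 0x01))   /* LSR bit 0: data ready */
            ;                        /* busy-poll, as noted above */
        printf("got 0x%02x\n", inb(RBR));
    }
}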
Here's what setserial does to set low latency on a file descriptor of a port:
struct serial_struct serial;         /* from <linux/serial.h> */

ioctl(fd, TIOCGSERIAL, &serial);     /* read the current settings */
serial.flags |= ASYNC_LOW_LATENCY;
ioctl(fd, TIOCSSERIAL, &serial);     /* write them back */
In short: Use a USB adapter and ASYNC_LOW_LATENCY.
I've used an FT232RL-based USB adapter on Modbus at 115.2 kbit/s.
I get about 5 transactions (to 4 devices) in about 20 ms total with ASYNC_LOW_LATENCY. This includes two transactions to a slow-poke device (4 ms response time).
Without ASYNC_LOW_LATENCY the total time is about 60 ms.
With FTDI USB adapters, ASYNC_LOW_LATENCY sets the inter-character timer on the chip itself to 1 ms (instead of the default 16 ms).
I'm currently using a home-brewed USB adapter and I can set the latency for the adapter itself to whatever value I want. Setting it to 200 µs shaves another millisecond off that 20 ms.
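For FTDI adapters driven by the mainline ftdi_sio driver, that timer is also exposed through sysfs (the ttyUSB0 name is just an example):

echo 1 > /sys/bus/usb-serial/devices/ttyUSB0/latency_timer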
None of those system calls have an effect on latency. If you want to read and write one byte as fast as possible from userspace, you really aren't going to do better than a simple read()/write() pair. Try replacing the serial stream with a socket from another userspace process and see if the latencies improve. If they don't, then your problems are CPU speed and hardware limitations.
Are you sure your hardware can do this at all? It's not uncommon to find UARTs with a buffer design that introduces many bytes worth of latency.
At those line speeds you should not be seeing latencies that large, regardless of how you check for readiness.
You need to make sure the serial port is in raw mode (so you do "noncanonical reads") and that VMIN and VTIME are set correctly. You want to make sure that VTIME is zero so that an inter-character timer never kicks in. I would probably start with setting VMIN to 1 and tune from there.
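As a sketch (assuming fd is an already-opened serial port), the corresponding termios setup would be:

#include <termios.h>

void set_raw(int fd)
{
    struct termios t;
    tcgetattr(fd, &t);
    cfmakeraw(&t);          /* raw mode: noncanonical reads, no translations */
    t.c_cc[VMIN]  = 1;      /* read() may return as soon as 1 byte arrives */
    t.c_cc[VTIME] = 0;      /* no inter-character timer */
    tcsetattr(fd, TCSANOW, &t);
}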
The syscall overhead is nothing compared to the time on the wire, so select() vs. poll(), etc. is unlikely to make a difference.
I have an application that sends 8000 audio packets per second. For initial experiments, I am preparing a buffer of 8 audio packets, then making an ioctl call and passing the buffer to my driver.
I am using a USB analyser. From it I can see that the inter-packet gap (IPG) is around 20-40 µs, which is fine. But sometimes the IPG is 200-300 µs. Is it the USB subsystem (USB core/HCD) that plays a role here, or the implementation of ioctl? Making more than 1000 ioctl calls and copy_from_user operations per second may be the culprit behind the late submission of packets. Moreover, I am using USB 3.0, which supports a 5 Gbit/s data rate.
The code flow in my driver is like:
switch (cmd)
{
case SEND_AUDIO:
    copy_from_user(..., ..., ...);
    for (i = 0; i < 8; i++)
        usb_bulk_msg(...);
    break;
}
My problem is that there was a spark when I plugged in the voltage source of my 220 V bulb. I have an Arduino Uno R3, an HC-05 Bluetooth module, a relay module, and a 220 V bulb.
I cut the wire of my 220 V bulb.
The wire end nearer to the bulb was connected to the common (COM) terminal of the relay.
The other cut end was connected to ground.
The relay module's VCC was connected to the Arduino's 5 V.
The relay's input pin was connected to the Arduino's pin 13, as well as the normally open (NO) pin of the relay. The relay's ground was also connected to the Arduino's ground.
My Bluetooth module's TX was connected to the Arduino's RX, and the Bluetooth module's RX was connected to the TX.
I also connected the Bluetooth module's 5 V to the Arduino's 5 V,
and a ground from the Bluetooth module to the Arduino's ground.
I made my own version of the schematic diagram, and this is how it works. It is not that nice, but I hope you will understand.
The small squares serve as the breadboard.
https://twitter.com/n_galia/status/419876079403147264/photo/1
Here is a simple relay driver you can use with the Arduino. The component values are not super important: R4 could be larger, R3 can be larger, and you can use just about any 5 V relay and any NPN transistor. As shown, it should work with most low/medium-sized relays. When the relay is active you can check the voltage between Q1's collector and ground; it should be less than a volt. About 4 mA is provided by the Arduino, far below its output capacity.
PLEASE BE CAREFUL!! You are working with high-current and high-voltage power. Blowing up an Arduino is minor compared to the damage you can do to yourself.
The revised schematic might not work either. If your relay is a basic relay, a driver will be required. The Arduino can only sink about 20 mA, and it's likely your relay will need more to function correctly. The relay coil might look like a short to the Arduino.
If you have a relay with a built-in logic-level driver, or a solid-state relay, or even a TRIAC part (not a relay), you might be OK.
In situations like this, it's advised to use an optocoupler between the Arduino and the relay.
The optocoupler has a transistor in its output that will drive the relay; the transistor is actually a light-sensitive transistor (a phototransistor) which is turned on via an LED built into the same package. The Arduino drives this LED (through a current-limiting resistor), which activates the transistor to drive the relay. This way the low-voltage electronics are totally protected and isolated from the high-voltage side.
Ouch!
The Arduino is not capable of dealing with your 220 VAC lines, either from the power source or to the light.
Your Arduino may not function correctly anymore.
I have attached a revision to your wiring.
I created a WP8 app. It detects the Bluetooth module and connects to it successfully. But the data is not coming from the Arduino to the phone :(
Here is the code; it always takes the error branch:
if (btSerial.available()) {
    Serial.println(distance);
    btSerial.write(distance);
}
else {
    Serial.println("error");   // always prints this
}
The error branch is always printed in the serial monitor. I have attached the Bluetooth device's pins as follows:
RXD - 11,
TXD - 10,
GND - GND,
VCC - 5V.
Please help me: why is btSerial.available() never true?
You have the logic backwards. available() tests whether the Arduino has data in its receive buffer; it does not test whether the connection is ready. So the overall pattern of a serial program is:
if (someserial.available()) {
    // loop to get the input
    while (someserial.available()) {
        Serial.print((char)someserial.read());   // print what was received
    }
}
To write, just write:
// no ifs, just go
someserial.write("my output");
You do not need to wait. With the two-wire serial connection, you have no flow control; in other words, there is no signalling between the Arduino and the Bluetooth transceiver about ready or other status. Because the baud rate of the Bluetooth link exceeds the baud rate of the Arduino's serial link, you can't really overflow the Bluetooth transmit stream.
The Bluetooth business of negotiating the connection is meant to be transparent to the Arduino; in other words, your program is the same as if you were using a hardware serial port. If for some reason you need insight into the connection, there are special byte sequences that allow communication with the Bluetooth hardware.
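Putting the pattern together, a minimal sketch matching the pins from the question (module TXD on pin 10, RXD on pin 11; the payload "42" is just a placeholder):

#include <SoftwareSerial.h>

SoftwareSerial btSerial(10, 11);   // RX, TX as seen from the Arduino

void setup() {
    Serial.begin(9600);
    btSerial.begin(9600);          // HC-05 default data-mode baud rate
}

void loop() {
    btSerial.write("42\n");        // just write; no available() check needed
    while (btSerial.available()) { // only reads need the available() test
        Serial.write(btSerial.read());
    }
    delay(1000);
}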