I'm implementing the LIN protocol on a Linux SBC that transmits over a UART. I don't have time to develop a complete LIN stack, so I'm just implementing the frame structure for messages as defined by the protocol. The problem is that the protocol requires a "Break" field which makes the slave devices on the bus listen. This field consists of zeros for 13 bit-times. Any ideas how to send zeros for 13 bit-times over UART, when serial data transmission works in complete bytes?
Per Wiki:
LIN (Local Interconnect Network) is a serial network protocol used for
communication between components in vehicles. The need for a cheap
serial network arose as the technologies and the facilities
implemented in the car grew, while the CAN bus was too expensive to
implement for every component in the car. European car manufacturers
started using different serial communication topologies, which led to
compatibility problems.
If you had paid attention in class you would have known that:
Data is transferred across the bus in fixed form messages of
selectable lengths. The master task transmits a header that consists
of a break signal followed by synchronization and identifier fields.
The slaves respond with a data frame that consists of between 2, 4 and
8 data bytes plus 3 bytes of control information.
You should just send an echo of 0x0000 followed by CR/LF.
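For reference, a common workaround for generating the break from a standard UART is to drop the baud rate temporarily so that a single 0x00 byte (start bit plus eight zero data bits) spans more than 13 bit-times at the real bus rate, then restore the rate before sending the Sync and PID bytes. A minimal sketch using Linux termios, assuming a 19200 baud LIN bus and an already-opened raw-mode UART descriptor fd (both assumptions, adjust to your setup):

#include <termios.h>
#include <unistd.h>

/* Emit a LIN break (>= 13 dominant bit-times) by temporarily halving the
 * UART baud rate and sending a single 0x00 byte. At 9600 baud, the start
 * bit plus eight zero data bits last 18 bit-times at 19200 baud. */
static int send_lin_break(int fd)
{
    struct termios tio;
    unsigned char zero = 0x00;

    if (tcgetattr(fd, &tio) != 0)
        return -1;

    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    if (tcsetattr(fd, TCSADRAIN, &tio) != 0)
        return -1;

    if (write(fd, &zero, 1) != 1)
        return -1;
    tcdrain(fd);                      /* wait until the break byte is on the wire */

    /* back to the normal bus rate before Sync (0x55) and the PID byte */
    cfsetispeed(&tio, B19200);
    cfsetospeed(&tio, B19200);
    return tcsetattr(fd, TCSADRAIN, &tio);
}

tcsendbreak(fd, 0) also drives the line low, but on Linux the break it produces lasts on the order of a quarter of a second, far longer than the 13 bit-times the LIN specification intends.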
Related
I want to implement a P2P protocol in C for personal education purposes.
What would be the protocol with the shortest specification that is still used today?
I have already implemented a web and IRC client and server.
I agree with Mark that point-to-point over a serial link would be a good exercise.
In particular, I would recommend the following programme of stuff...
Implement basic transmission over a "Serial Port" (using RS-232 if you have some Arduinos/embedded processors lying around, or a null modem emulator if you don't; see com0com on Windows, or an equivalent on Linux/Mac).
I.e. send lower case letters from A->B, and echo them back as upper case from B->A
Implement SLIP as a way to reliably frame messages (a minimal encoder sketch appears after this list)
i.e. you can send any string (e.g. "hello") and it is returned in upper case with "WORLD" appended ("HELLOWORLD").
Implement the "Read Multiple Holding Registers" and "Write Multiple Holding Registers" part of the Modbus protocol, using SLIP to frame the messages.
I.e. you have one follower (slave) device, and one leader (master) device. The follower has 10 bytes of memory that are exposed over modbus with the initial value "helloworld".
Just hard-code the follower / leader device Ids for now.
The leader reads the value, and then sets it to be "worldhello".
At the end of this you would start to have an understanding of the roles of network layers, i.e.:
The physical layer - Serial/RS-232
A "link layer" of sorts - SLIP
An "application" layer - Modbus
Serial. The answer is serial. You're not going to get any leaner than simple RX/TX communication, but you'll lack a lot of convenience methods. If you want to explore more than simple bidirectional comms, I2C or Modbus open up a lot of options.
[Screen grab from Wireshark showing the traffic when the problem occurs]
Short question: referring to the Wireshark capture, what is causing the Master to send LL_CHANNEL_MAP_IND, and why is it taking so long?
We are working on a hardware/software project that is utilizing the TI WL18xx as a Bluetooth controller. One of the main functions of this product is to communicate with our sensor hardware over a Bluetooth Low Energy connection. We have encountered an issue that we have had difficulty pinpointing, but feel may reside in the TI WL18xx hardware/firmware. Intermittently, after a second Bluetooth Low Energy device is connected, the response times for writes and notifications on one of the characteristics of one of the connected devices become very long.
Details
The host product device is running our own embedded Linux image on a TI AM4376x processor. The kernel is 4.14.79 and our communication stack sits on top of BlueZ 5. The Wi-Fi/Bluetooth chip is the Jorjin WG7831-BO, running TIInit_11.8.32.bts firmware version 4.5. It is based on the TI WL1831. The sensor devices that we connect to are our own, and we use a custom command protocol which uses two characteristics to perform command handshakes. These devices work very well on a number of other platforms, including Mac, Windows, Linux, Chrome, etc.
The workflow that is causing problems is this:
A user space application allows the user to discover and connect to our sensor devices over BLE, one device at a time.
The initial connection requires a flurry of command/response type communication over the aforementioned BLE characteristics.
Once connected, the traffic is reduced significantly to occasional notifications of new measurements, and occasional command/response exchanges triggered by the user.
A single device always seems stable and performant.
When the user connects to a second device, the initial connection proceeds as expected.
However, once the second device's connection process completes, we start to see that command/response round-trip times become hundreds of times longer on the initially connected device.
The second device communication continues at expected speeds.
This problem only occurs with the first device about 30% of the time we follow this workflow.
Traces
Here is a short snippet illustrating the problem, taken from a trace log that mixes our library's debug output with btmon traces.
Everything seems fine until line 4102, at which point we see the following:
ACL Data TX: Handle 1025 flags 0x00 dlen 22 #1081 [hci0] 00:12:48.654867
ATT: Write Command (0x52) len 17
Handle: 0x0014
Data: 580fd8c71bff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1532) : Blob cmd sent: 1bh to GDX-FOR 07100117; length = 15; rolling counter = 216; timestamp = 258104ms .
HCI Event: Number of Completed Packets (0x13) plen 5 #1082 [hci0] 00:12:49.387892
Num handles: 1
Handle: 1025
Count: 1
ACL Data RX: Handle 1025 flags 0x02 dlen 23 #1083 [hci0] 00:12:51.801225
ATT: Handle Value Notification (0x1b) len 18
Handle: 0x0016
Data: 9810272f1bd8ff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1745) : GetNextResponse(GDX-FOR 07100117) returns 1bh cmd blob after 3139=(261263-258124) milliseconds.
The elapsed time reported by GetNextResponse() for most cmds should be < 30 milliseconds. Response times were short when we opened and sent a bunch of cmds to device A.
The response times remained short when we opened and sent a bunch of cmds to device B. But on the first subsequent cmd sent to device A, the response time is more than 3 seconds!
Note that a Bluetooth radio can only do one thing at a time. Receive or transmit. On one single frequency. If you have two connections and two connection events happen at the same time, the firmware must decide which one to prioritize, and which one to skip. Maybe the firmware isn't smart enough to handle your specific case. Try with other connection parameters to see if something gets better. You can also try another Bluetooth dongle from a different manufacturer.
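If you want to experiment with connection parameters on a BlueZ/Linux host, one knob (an assumption about your setup: mainline kernels expose these debugfs entries, they need root, and they apply to connections created afterwards) is the hci0 debugfs interval settings, expressed in units of 1.25 ms:

#include <stdio.h>

static int write_debugfs(const char *path, unsigned value)
{
    FILE *f = fopen(path, "w");
    if (!f)
        return -1;
    int rc = (fprintf(f, "%u\n", value) < 0) ? -1 : 0;
    fclose(f);
    return rc;
}

int main(void)
{
    /* e.g. 30 ms .. 50 ms connection interval (24 and 40 steps of 1.25 ms) */
    if (write_debugfs("/sys/kernel/debug/bluetooth/hci0/conn_min_interval", 24) ||
        write_debugfs("/sys/kernel/debug/bluetooth/hci0/conn_max_interval", 40)) {
        perror("debugfs write");
        return 1;
    }
    return 0;
}

Widening the allowed interval range changes how often the two connections' events collide, which is exactly the scheduling the controller has to arbitrate.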
I'm developing an RF modem based on a new protocol which streams 96-byte frames; they are sent back to back until the communication ends. I plan to use two 96-byte buffers in the STM32; the next lines explain why.
I want to send the first 96-byte frame over USB-CDC to the STM32. The external modem chip will then generate a 9600 bps clock, and the STM32 has to write the payload bit by bit on a specified output pin (at the trailing edge of each clock pulse).
When the STM32 notices that it has sent half of a 96-byte frame, it sends a notification to the PC to send more data, and the PC immediately refills the second 96-byte buffer over USB-CDC. When the STM32 finishes sending the first buffer, it immediately starts sending the contents of the second buffer. When it has sent half of the second buffer, it again asks the PC for another 96-byte frame.
This continues until the PC sends a command to stop transmission.
The transfer is serial, driven by an external "trigger" clock.
Is this possible using DMA, and how could I set it up?
I want to use DMA so that I can keep servicing USB while already streaming data to the radio modem chip. Is this the right approach?
I'm working on an open-source radio communication project with both packet and stream capabilities and digital voice. I'm designing the electronics for the PC radio modem. The project is called M17 and is maintained by Wojtek SP5WWP.
Re. general architecture: serial communication over USB ACM does not have to use buffers of the same size as, or be synchronized with, the downstream communication over SPI. You could use buffers as big as practically possible, so the PC can send data in advance. This reduces the chance of buffer underflow if the PC does not provide data fast enough. Use a circular buffer and fill it when a packet arrives from USB.
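For example, a simple single-producer/single-consumer byte ring fed by the USB side and drained from the DMA interrupt could look like the sketch below; on_usb_cdc_rx() is a hypothetical hook that you would call from whatever receive callback your USB CDC stack provides:

#include <stddef.h>
#include <stdint.h>

#define RING_SIZE 1024u                 /* power of two, several 96-byte frames deep */

static uint8_t  ring[RING_SIZE];
static volatile uint32_t head;          /* producer index (USB interrupt) */
static volatile uint32_t tail;          /* consumer index (DMA refill)   */

static uint32_t ring_used(void) { return head - tail; }
static uint32_t ring_free(void) { return RING_SIZE - ring_used(); }

/* Called when a USB-CDC packet arrives; returns the number of bytes accepted. */
size_t on_usb_cdc_rx(const uint8_t *data, size_t len)
{
    if (len > ring_free())
        len = ring_free();              /* or hold off the endpoint until space frees up */
    for (size_t i = 0; i < len; i++)
        ring[(head + i) & (RING_SIZE - 1u)] = data[i];
    head += len;
    return len;
}

/* Called from the DMA half/complete interrupt to refill one half of the DMA buffer. */
size_t ring_pop(uint8_t *dst, size_t want)
{
    size_t n = (ring_used() < want) ? ring_used() : want;
    for (size_t i = 0; i < n; i++)
        dst[i] = ring[(tail + i) & (RING_SIZE - 1u)];
    tail += n;
    return n;
}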
DMA is the right approach. Although people often say that DMA is only necessary for high-bandwidth operations, it can actually be easier to work with DMA than to handle an interrupt for every byte, even when you only handle 9600 bits per second.
The DMA controller in the STM32F3 has a Half-Transfer Complete bit (HTIF in DMA_ISR) that you can poll or have generate an interrupt. In conjunction with the Transfer Complete status (TCIF) and the Circular bit (CIRC in DMA_CCR) you can organize a double-buffered data pipe so that transfers can overlap with whatever else the MCU is doing. The application reloads the first half of the DMA buffer on the HTIF event. When the TCIF event happens, it reloads the second half. It has to be done quickly, before the other half is also completed. However, you need a double-buffered pipeline only when you need to stream data constantly, i.e. when the overall amount of data is larger than the size of the DMA buffer.
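As a rough sketch of that event handling, assuming an STM32F3 with the usual DMA1 channel 3 mapping for SPI1_TX (check the DMA request table of your exact part) and the ring buffer above, the interrupt handler refills whichever half the DMA has just finished sending:

#include <stddef.h>
#include <stdint.h>
#include "stm32f3xx.h"                    /* CMSIS device header; adjust to your part */

#define FRAME_LEN 96u
extern size_t ring_pop(uint8_t *dst, size_t want);

uint8_t dma_buf[2u * FRAME_LEN];          /* two 96-byte halves sent back to back */

void DMA1_Channel3_IRQHandler(void)
{
    if (DMA1->ISR & DMA_ISR_HTIF3) {      /* first half has been sent */
        DMA1->IFCR = DMA_IFCR_CHTIF3;
        ring_pop(&dma_buf[0], FRAME_LEN); /* reload it while the second half goes out */
    }
    if (DMA1->ISR & DMA_ISR_TCIF3) {      /* second half sent; CIRC wraps to the start */
        DMA1->IFCR = DMA_IFCR_CTCIF3;
        ring_pop(&dma_buf[FRAME_LEN], FRAME_LEN);
    }
}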
Stopping a circular DMA may be tricky. I suppose both the STM32 and the external chip know how many bytes to send. In that case, after this amount has been transferred, disable the DMA.
It seems you need a slave SPI in the STM32, as the external chip generates the SPI clock.
DMA is not difficult to set up; however, it needs multiple things to work properly. I assume register-level programming; if you use some kind of framework, you'll need to find out how it implements these features. Enable clocks for the SPI, the GPIO port with the SPI pins, and the DMA, and configure the pins as AF. Find the right DMA channel for the SPI peripheral. In the case of SPI DMA you usually need two channels, TX and RX, but with a slave SPI you may get away with one. Configure the SPI, pay attention to clock polarity and phase, and set it to generate a DMA request for each TX and/or RX. Set the channel's DMA CPAR register to point to the SPI DR register and program all the other DMA channel registers appropriately. Enable the DMA channel(s). Enable the SPI in slave mode. When the SPI master clocks data on the MOSI/SCK pins, the DMA controller will put them in memory. When the buffer is half full and completely full, the channel sets the HTIF and TCIF flags and generates an interrupt, if you have enabled it. Use these events to implement flow control.
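A rough register-level sketch of those steps, assuming an STM32F303 with SPI1 as slave (PA5 = SCK, PA6 = MISO, alternate function 5) and DMA1 channel 3 carrying the SPI1_TX request; the pins, the DMA request mapping and CPOL/CPHA are assumptions you must verify against the reference manual and the modem's clocking:

#include <stdint.h>
#include "stm32f3xx.h"

#define FRAME_LEN 96u
extern uint8_t dma_buf[2u * FRAME_LEN];   /* defined next to the interrupt handler above */

void spi1_slave_tx_dma_init(void)
{
    /* 1. Clocks for GPIOA, DMA1 and SPI1 */
    RCC->AHBENR  |= RCC_AHBENR_GPIOAEN | RCC_AHBENR_DMA1EN;
    RCC->APB2ENR |= RCC_APB2ENR_SPI1EN;

    /* 2. PA5 (SCK) and PA6 (MISO) as alternate function 5 */
    GPIOA->MODER  = (GPIOA->MODER & ~(GPIO_MODER_MODER5 | GPIO_MODER_MODER6))
                  | GPIO_MODER_MODER5_1 | GPIO_MODER_MODER6_1;
    GPIOA->AFR[0] = (GPIOA->AFR[0] & ~((0xFu << 20) | (0xFu << 24)))
                  | (5u << 20) | (5u << 24);

    /* 3. DMA1 channel 3: memory -> SPI1->DR, 8-bit, circular, half/complete interrupts */
    DMA1_Channel3->CPAR  = (uint32_t)&SPI1->DR;
    DMA1_Channel3->CMAR  = (uint32_t)dma_buf;
    DMA1_Channel3->CNDTR = 2u * FRAME_LEN;
    DMA1_Channel3->CCR   = DMA_CCR_DIR      /* read from memory */
                         | DMA_CCR_MINC     /* increment the memory address */
                         | DMA_CCR_CIRC     /* wrap around at the end of the buffer */
                         | DMA_CCR_HTIE | DMA_CCR_TCIE;
    DMA1_Channel3->CCR  |= DMA_CCR_EN;
    NVIC_EnableIRQ(DMA1_Channel3_IRQn);

    /* 4. SPI1 as slave (MSTR stays 0), 8-bit frames (reset DS = 0111),
     *    TX DMA requests enabled, software NSS held active (SSM = 1, SSI = 0). */
    SPI1->CR2 |= SPI_CR2_TXDMAEN;
    SPI1->CR1 |= SPI_CR1_SSM;
    /* set SPI_CR1_CPOL / SPI_CR1_CPHA here to match the modem's clock edges */
    SPI1->CR1 |= SPI_CR1_SPE;    /* from now on the master's SCK shifts dma_buf out on MISO */
}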
I'm using Bluetooth Low Energy to connect a device in Central mode to several devices in Peripheral mode. The Peripheral device would need to send 4 strings (all fewer than 20 characters) to the Central device.
Is it better to create 4 characteristics and have the Peripheral make 4 write requests to the Central? Or is it better to have 1 characteristic and combine all 4 strings into a JSON object so as to make only 1 write request?
Simply put, is it better in this instance to send small chunks of data multiple times or to send a larger chunk of data once?
Which approach would be better for allowing as many Peripherals to connect to a Central as possible? Does it matter?
Thanks.
Since you have four separate strings I would recommend that you make four characteristics as well. The data payload of a BLE packet is usually about 20 bytes (a 23-byte ATT MTU minus some ATT overhead), so each of your strings fits in a single packet. And when you do a read long (on a single long attribute) you're going to get a single read response packet back before you ask for the next chunk of data. You would probably end up with the same number of packets going back and forth to read the data. Obviously, discovering four characteristics instead of one takes a few extra packets, but that is nothing to worry about.
Simply put, is it better in this instance to send small chunks of data multiple times or to send a larger chunk of data once?
Only if your data strings are considerably smaller than 20 bytes.
You talk about having the peripheral write its data to the central (presumably right after connection). Normally this is done the other way around: the central first discovers the peripheral's database (after connection, of course) and then reads the data it's interested in.
You can also consider using indications rather than reads. This is typical for a peripheral sensor that is logging data. The central connects, discovers the database and then enables indications for a given attribute. The peripheral then sends each data sample (maybe packed in some way) with indications until all the locally logged data has been transferred. With indications, every transfer is acknowledged, so the peripheral can wait for the confirmation before removing the data locally, making sure you don't lose logging samples.
Which approach would be better for allowing as many Peripherals to connect to a Central as possible? Does it matter?
The way you organize and transfer data does not affect how many peripherals your central can connect to. But if your peripherals write into the central's database, you need to make sure they don't write to the same handle(s). It is better to do a read, or an indication/confirmation exchange, initiated by the central.
We have developed an Application Specific Integrated Circuit for power line communications. The chip has an Ethernet interface. If the ASIC receives an Ethernet frame containing a TCP/IP or ARP payload (ethertypes 0x0800 IPv4, 0x0806 ARP and 0x86DD IPv6), it simply forwards the frame onto the power line, and does the same in the other direction. We call such frames data frames.
If the ASIC receives an Ethernet frame of a specific ethertype (we use 0x88b5, which is allocated for experimental/public use on local networks), it consumes the frame itself. These frames contain configuration settings for the ASIC. We call these configuration frames.
The chip is connected to an Ethernet LAN on one side and to the power line on the other, so it basically bridges the two networks. The ASIC requires throttling of the data frames passing through it. This is because the speeds over the power line are about 100 times lower than 100 Mbps Ethernet, and also because the number of data frames that the ASIC can handle per second is limited.
We use raw sockets to form the configuration frames and send them via Ethernet to the ASIC. Is there a way to ensure that whenever a configuration frame (0x88b5) is sent, it is queued in front of all the pending data frames (ethertypes 0x0800, 0x0806, 0x86dd) in the netdev_queue?
Can this be done via some supporting functionality implemented using hacks & hooks in a kernel module?
We came across a similar question (although improperly tagged) here: Setting up priority of packets that are being transmitted over the network
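One user-space lever worth mentioning (a sketch of a possible direction, not a complete solution, and it only reorders frames inside the host's egress queue, not inside the ASIC): mark the raw socket used for the 0x88b5 configuration frames with SO_PRIORITY, so a multi-band qdisc, such as the classic pfifo_fast or a prio qdisc you configure, dequeues them ahead of the bulk 0x0800/0x0806/0x86dd traffic:

#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>

#define ETH_P_CONFIG 0x88b5     /* the experimental ethertype used for configuration frames */

int open_config_socket(void)
{
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_CONFIG));
    if (fd < 0) {
        perror("socket");
        return -1;
    }

    /* skb->priority 6 (TC_PRIO_INTERACTIVE) maps to band 0 of pfifo_fast,
     * which is always dequeued before the lower bands. */
    int prio = 6;
    if (setsockopt(fd, SOL_SOCKET, SO_PRIORITY, &prio, sizeof(prio)) < 0)
        perror("setsockopt(SO_PRIORITY)");

    return fd;
}

If the interface's root qdisc is something other than pfifo_fast (many distributions default to fq_codel), a prio qdisc with a filter or priomap keyed on this priority value achieves the same ordering.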