I am running the following Modbus slave simulator http://www.modbusdriver.com/diagslave.html as well as the following Modbus poller http://www.modbusdriver.com/modpoll.html. The odd thing is, I cannot get them to talk to each other. Here is the output from the slave simulator:
Protocol configuration: Modbus RTU
Slave configuration: address = -1, master activity t/o = 3.00
Serial port configuration: /dev/ttyS0, 19200, 8, 1, even
Server started up successfully.
Listening to network (Ctrl-C to stop)
....................
And the following is the output from the Modbus poller:
Protocol configuration: Modbus RTU
Slave configuration...: address = 1, start reference = 1, count = 1
Communication.........: /dev/ttyS0, 19200, 8, 1, even, t/o 1.00 s, poll rate 1000 ms
Data type.............: 16-bit register, output (holding) register table
-- Polling slave... (Ctrl-C to stop)
Reply time-out!
-- Polling slave... (Ctrl-C to stop)
As you can see, the Modbus slave simulator and the Modbus poller are using the same settings, so they should be able to find each other. However they cannot, which I find odd. Does anyone have any suggestions on what may be causing this?
The time-out can happen for various reasons, such as:
- The register address you are polling does not exist in the slave device (also check that the type of register you are polling is right).
- The connection has not been established between the slave and the master (this can be due to configuring the slave and the master serial port interfaces incorrectly; see the sketch after this list).
- The serial link has not been physically wired to the correct pins on both ends (slave and master).
- Check the timeout period set at the master. You may have set it to a very low value, so the responses from the slaves are being missed.
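As a minimal sketch of what "configured identically" means on the serial side, assuming a Linux host and the 19200, 8, 1, even settings from the logs (this only illustrates the termios settings both ends must agree on; it is not modpoll or diagslave code):

#include <fcntl.h>
#include <stdio.h>
#include <termios.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   /* port taken from the question */
    if (fd < 0) { perror("open"); return 1; }

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 /* raw 8-bit path, no line-discipline mangling */
    cfsetispeed(&tio, B19200);       /* 19200 baud, both directions */
    cfsetospeed(&tio, B19200);
    tio.c_cflag |= PARENB;           /* even parity ("even" in the logs) */
    tio.c_cflag &= ~PARODD;
    tio.c_cflag &= ~CSTOPB;          /* 1 stop bit */
    tio.c_cflag &= ~CSIZE;
    tio.c_cflag |= CS8;              /* 8 data bits */
    tio.c_cflag |= CLOCAL | CREAD;
    tcsetattr(fd, TCSANOW, &tio);

    close(fd);
    return 0;
}

If the master and the slave differ in any one of these (baud rate, parity, stop bits), the poll will simply time out, exactly as in the log above.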
Hope this helps.
I recently worked on a Modbus case, and there are two examples at hand you can refer to.
Using sockets:
https://github.com/asgardpz/Modbus_Server
https://github.com/asgardpz/Modbus_Client
Using the HslCommunication component:
https://github.com/asgardpz/Hsl
The Readme.md has screenshots of the program, so you can see whether it is the kind of sample you want.
To give you some context, I am trying to configure some network redundancy on Ubuntu 20.04.
I decided to use NIC bonding with:
2 Gigabit interfaces,
active-backup mode,
link state monitoring with MII (polling every 10ms).
It works as expected: when the active slave fails (wire unplugged), the bond switches to the other slave. However, this transition is too slow in my opinion, as it takes up to 400 ms.
In order to investigate, I had a look at the MII registers using this command: mii-tool -vv ens33 (-vv prints the raw MII register contents).
Using a basic bash script, I print the current time plus the value of the second MII register, which contains the link state. As this is a "while true" loop with no pause, I get a status every 20 ms, which gives me enough reactivity for monitoring.
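For reference, the bash loop amounts to something like the following C sketch, which polls the same status register through the SIOCGMIIREG ioctl that mii-tool uses. The interface name and the ~20 ms period are the ones from my setup; everything else is just an illustration (it may need to be run as root, depending on the driver):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <net/if.h>
#include <linux/mii.h>
#include <linux/sockios.h>

int main(void)
{
    const char *ifname = "ens33";   /* interface name from my setup */
    struct ifreq ifr;
    struct mii_ioctl_data *mii = (struct mii_ioctl_data *)&ifr.ifr_data;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

    /* ask the driver which PHY address to talk to */
    if (ioctl(fd, SIOCGMIIPHY, &ifr) < 0) { perror("SIOCGMIIPHY"); return 1; }

    for (;;) {
        mii->reg_num = MII_BMSR;    /* register 1: basic mode status, link bit = BMSR_LSTATUS */
        if (ioctl(fd, SIOCGMIIREG, &ifr) < 0) { perror("SIOCGMIIREG"); return 1; }

        struct timeval tv;
        gettimeofday(&tv, NULL);
        printf("%ld.%06ld BMSR=0x%04x link=%d\n",
               (long)tv.tv_sec, (long)tv.tv_usec,
               mii->val_out, (mii->val_out & BMSR_LSTATUS) ? 1 : 0);

        usleep(20000);              /* ~20 ms, same period as the bash loop */
    }
}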
Observed behavior:
when the interfaces are configured at 100 Mbit/s, Full Duplex => the MII register is updated almost immediately (no "visible" delay between unplugging the wire and the register value changing on screen).
however, when configured at 1 Gbit/s, Full Duplex (via auto-negotiation) => the MII register update takes much longer (it is "visible" with the bash script). By sending pings every 10 ms over the bond interface and logging packets with Wireshark on the receiver side, the observed delay is around 370 ms.
For information, the Ethernet controller is an Intel I210, and I have not found anything relevant in its datasheet.
My questions:
Do you see any reason why detecting link down takes so much longer when the interface runs at 1 Gbit/s rather than 100 Mbit/s (370 ms vs 50 ms)?
Any advice to improve the reactivity?
Thanks for your help :)
Every generation of Ethernet speed has its own link-detection method and requirements.
10BaseT: "link pulses" are used every 16 ±8 ms.
100BaseT: "link pulses" are not used; instead, Idle ("/I/") 4B/5B code words are monitored.
1000BaseT: link failure is determined by the maxwait_timer, which is defined in IEEE 802.3 to be 750 ±10 ms for Master ports and 350 ±10 ms for Slave ports.
As you can see, your measured 370ms are exactly what you can expect from a slave port.
Sources:
https://www.microchip.com/forums/FindPost/417307
https://ww1.microchip.com/downloads/en/Appnotes/VPPD-03240.pdf
[Screen grab from Wireshark showing the traffic when the problem occurs]
Short question: referring to the Wireshark image, what is causing the Master to send LL_CHANNEL_MAP_IND, and why is it taking so long?
We are working on a hardware/software project that is using the TI WL18xx as a Bluetooth controller. One of the main functions of this product is to communicate with our sensor hardware over a Bluetooth Low Energy connection. We have encountered an issue that we have had difficulty pinpointing, but which we feel may reside in the TI WL18xx hardware/firmware. Intermittently, after a second Bluetooth Low Energy device is connected, the response times for writes to, and notifications from, one of the characteristics on one of the connected devices become very long.
Details
The host product device is running our own embedded Linux image on a TI AM4376x processor. The kernel is 4.14.79 and our communication stack sits on top of Bluez5. The wifi/bluetooth chip is the Jorjin WG7831-BO, running TIInit_11.8.32.bts firmware version 4.5. It is based on the TI WL1831. The sensor devices that we connect to are our own and we use a custom command protocol which uses two characteristics to perform command handshakes. These devices work very well on a number of other platforms, including Mac, Windows, Linux, Chrome, etc.
The workflow that is causing problems is this:
A user space application allows the user to discover and connect to our sensor devices over BLE, one device at a time.
The initial connection requires a flurry of command/response type communication over the aforementioned BLE characteristics.
Once connected, the traffic is reduced significantly to occasional notifications of new measurements, and occasional command/response exchanges triggered by the user.
A single device always seems stable and performant.
When the user connects to a second device, the initial connection proceeds as expected.
However, once the second device's connection process completes, we start to see the command/response times become hundreds of times longer on the initially connected device.
The second device communication continues at expected speeds.
This problem only occurs with the first device about 30% of the time we follow this workflow.
Traces
Here is a short snippet of the problem that is formed from a trace log that is a mix of our library debug and btmon traces.
Everything seems fine until line 4102 of the trace, at which point we see the following:
ACL Data TX: Handle 1025 flags 0x00 dlen 22 #1081 [hci0] 00:12:48.654867
ATT: Write Command (0x52) len 17
Handle: 0x0014
Data: 580fd8c71bff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1532) : Blob cmd sent: 1bh to GDX-FOR 07100117; length = 15; rolling counter = 216; timestamp = 258104ms .
HCI Event: Number of Completed Packets (0x13) plen 5 #1082 [hci0] 00:12:49.387892
Num handles: 1
Handle: 1025
Count: 1
ACL Data RX: Handle 1025 flags 0x02 dlen 23 #1083 [hci0] 00:12:51.801225
ATT: Handle Value Notification (0x1b) len 18
Handle: 0x0016
Data: 9810272f1bd8ff00204e000000000000
D2PIO_SDK: GMBLNGIBlobSource.cpp(1745) : GetNextResponse(GDX-FOR 07100117) returns 1bh cmd blob after 3139=(261263-258124) milliseconds.
The elapsed time reported by GetNextResponse() for most cmds should be < 30 milliseconds. Response times were short when we opened and sent a bunch of cmds to device A.
The response times remained short when we opened and sent a bunch of cmds to device B. But on the first subsequent cmd sent to device A, the response time is more than 3 seconds!
Note that a Bluetooth radio can only do one thing at a time: receive or transmit, on one single frequency. If you have two connections and two connection events happen at the same time, the firmware must decide which one to prioritize and which one to skip. Maybe the firmware isn't smart enough to handle your specific case. Try other connection parameters to see if something gets better. You can also try another Bluetooth dongle from a different manufacturer.
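If you want to experiment with the default LE connection parameters the Linux host requests when it creates connections, one place to look on a BlueZ/Linux setup is the Bluetooth debugfs knobs. This is only a hedged sketch: it assumes debugfs is mounted and that your kernel exposes these files (mainline 4.14 has them), and the values, in units of 1.25 ms, are just a starting point to experiment with:

#include <stdio.h>

static int write_knob(const char *path, int value)
{
    FILE *f = fopen(path, "w");      /* needs root; debugfs must be mounted */
    if (!f) { perror(path); return -1; }
    fprintf(f, "%d\n", value);
    fclose(f);
    return 0;
}

int main(void)
{
    /* defaults the kernel uses for new LE connections it initiates (units of 1.25 ms) */
    write_knob("/sys/kernel/debug/bluetooth/hci0/conn_min_interval", 24);   /* 30 ms */
    write_knob("/sys/kernel/debug/bluetooth/hci0/conn_max_interval", 40);   /* 50 ms */
    return 0;
}

Roughly speaking, a larger interval gives the controller more slack to schedule both connections, while a smaller one reduces per-device latency but makes the two connection events compete more often.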
I'm using a PIR sensor for motion detection and XBee S2C modules for transmission. The remote (transmitting) XBee, connected to the PIR, is configured as below:
CE=0
DH=0
DL=0
D4=3
IR=3E8 (1000 ms)
IC=FF (Change Detection on all pins)
The base (receiving) XBee is configured as below:
CE=1
DH=0
DL=FFFF
D4=5
At the base, GPIO4 is connected to an LED. I have performed a simple test, as mentioned here, to check whether the GPIO is working or not; it works as described with the DH & DL values given above. As D4 is configured to 5, the LED glows all the time. Theoretically, whenever the PIR sends high the LED should be off, and vice versa. But I am having two problems:
The console of the remote XBee is not showing any frames being sent, but the console of the base XBee is showing the received frames, and the PIR data in them is correct.
Pin D4 of the base is not following the data being received, i.e. it stays high all the time.
I have observed the frames being received, and they show the PIR response as intended. Why is pin D4 not following the frames being received? I have followed this tutorial for DIO line passing on the XBee.
Make sure you're running the 802.15.4 (ATVR=0x20XX) or DigiMesh firmware (0x90XX) and not the ZigBee firmware (0x40XX). Looking at the options in XCTU, I don't think ZigBee firmware supports I/O line passing.
And looking at that knowledge base article, you might need to set ATIT on the remote and ATT4 and ATIA on the base. If those registers aren't available, then you're probably running a firmware version that doesn't support I/O line passing.
Is there a limit on the maximum number of packets (LE_DATA) that can be sent by either the slave or the master during one connection interval?
If this limit exists, are there any specific conditions for this limit (e.g. only x number of ATT data packets)?
Are master/slave required or allowed to impose such a limit by specification?
(I hope I'm not reviving a dead post, but I think section 4.5.1 is better suited to answer this than 4.5.6.)
The spec doesn't define a limit of packets. It just states the following:
4.5.1 Connection Events - BLUETOOTH SPECIFICATION Version 4.2 [Vol 6, Part B]
(...)
The start of a connection event is called an anchor point. At the anchor point, a master shall start to transmit a Data Channel PDU to the slave. The start of connection events are spaced regularly with an interval of connInterval and shall not overlap. The master shall ensure that a connection event closes at least T_IFS before the anchor point of the next connection event. The slave listens for the packet sent by its master at the anchor point.
T_IFS is the "Inter Frame Space" time and shall be 150 μs. Simply put it's the job of the master to solve this problem. As far as I know iOS limits the packet number to 4 per connection event for instance. Android may have other hard coded limits depending on the OS version.
There is a maximum data rate that can be achieved on both BT and BLE. You can tweak this data rate by changing the MTU (maximum transmission unit, i.e. packet size) up to the maximum MTU both ends of the transmission can handle. But AFAIK there is no strict constraint on the number of packets, besides the physical ones imposed by the data rate.
You can find more in the spec.
I could find the following in Bluetooth Spec v4.2:
4.5.6 Closing Connection Events
The MD bit of the Header of the Data Channel PDU is used to indicate
that the device has more data to send. If neither device has set the
MD bit in their packets, the packet from the slave closes the
connection event. If either or both of the devices have set the MD
bit, the master may continue the connection event by sending another
packet, and the slave should listen after sending its packet. If a
packet is not received from the slave by the master, the master will
close the connection event. If a packet is not received from the
master by the slave, the slave will close the connection event.
Two consecutive packets received with an invalid CRC match within a
connection event shall close the event.
This means both the slave and the master have a self-imposed limit on the number of packets they want to transmit during a connection interval (CI). When either party doesn't wish to send more data, it simply sets this bit to 0 and the other one understands. This should usually be driven by the number of pending packets on either side.
Since I was looking for logical limits due to spec or protocol, this probably answers my question.
Physical limits on the number of packets per CI would be governed by the data rate and, as #morynicz mentioned, by the MTU, etc.
From my understanding, the limit is: min{max master event length, max slave event length, connection interval}.
To clarify, both the master and slave devices (specifically, the BLE stack thereof) typically have event length or "GAP event length" times. This time limit may be used to allow a central and/or advertiser and/or broadcaster to schedule the "phase offset" of more than one BLE radio activity, and/or limit the CPU usage of the BLE stack for application processing needs. E.g. a Nordic SoftDevice stack may have a default event length of 3.75ms that is indefinitely extendable (up to the connection interval) based on other demands on the SoftDevice's scheduler. In Android and iOS BLE implementations, this value may be opaque or not specified (e.g. Android may set this value to "0", which leaves the decision up to the controller implementation associated with the BLE chip of that device).
Note also that either the master or the slave may effectively "drop out" of a connection event earlier than these times if their TX/RX buffers are filled (e.g. Nordic SoftDevice stack may have a buffer size of 6 packets/frames). This may be implemented by not setting the MD bit (if TX buffer is exhausted) and/or "nacking" with the NESN bit (if RX buffer is full). However, while the master device can actually "drop out" by ending the connection event (not sending any more packets), the slave device must listen for master packets as long as at least one of master and slave have the MD bit set and the master continues to transmit packets (e.g. the slave could keep telling the master that it has no more data and also keep NACKing master packets because it has no more buffer space for the current connection event, but the master may keep trying to send for as long as it wants during the connection interval; not sure how/if the controller stack implements any "smarts" regarding this).
If there are no limits from either device in terms of stack-specified event length or buffer size, then presumably packets could be transmitted back and forth the entire connection interval (assuming at least one side had data to send and therefore set the MD bit). Just note for throughput calculation purposes that there is a T_IFS spacing (currently specified at 150us) between each packet and before the end of the connection interval.
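As a rough illustration of that last point, here is a back-of-the-envelope throughput calculation. It assumes a 4.0/4.1-style connection (27-byte LL payloads on the LE 1M PHY, no data length extension, no encryption), so treat the numbers as estimates rather than values taken from the spec:

#include <stdio.h>

int main(void)
{
    /* On-air size of one data PDU at LE 1M: preamble(1) + access address(4) +
     * header(2) + payload + CRC(3) bytes, at 8 us per byte (1 Mbit/s). */
    const double us_per_byte = 8.0;
    const double t_ifs_us    = 150.0;                            /* inter frame space */
    double data_pdu_us  = (1 + 4 + 2 + 27 + 3) * us_per_byte;    /* 296 us, full 27-byte payload */
    double empty_pdu_us = (1 + 4 + 2 + 0  + 3) * us_per_byte;    /*  80 us, empty ACK */

    /* One notification exchange: data PDU + T_IFS + empty ACK + T_IFS (~676 us). */
    double exchange_us = data_pdu_us + t_ifs_us + empty_pdu_us + t_ifs_us;

    /* 27-byte LL payload minus L2CAP header (4) and ATT notification opcode+handle (3)
     * leaves 20 bytes of application data per exchange. */
    double app_bytes = 20.0;

    double conn_interval_us = 30000.0;                           /* example: 30 ms interval */
    int    exchanges = (int)(conn_interval_us / exchange_us);

    printf("~%d exchanges fit in a 30 ms event, ~%.0f kbit/s if the whole event is used\n",
           exchanges, exchanges * app_bytes * 8.0 / (conn_interval_us / 1000.0));
    printf("a stack capping events at 4 packets manages ~%.0f kbit/s at the same interval\n",
           4 * app_bytes * 8.0 / (conn_interval_us / 1000.0));
    return 0;
}

This lands around 44 exchanges and roughly 235 kbit/s of application data when the whole interval is used, versus about 21 kbit/s when the stack stops after 4 packets per event, which is why the stack-imposed event length usually matters more than the raw data rate.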
Can I use a UART on the MSP-EXP430F5529LP from Energia in order to communicate on pins P3.3 and P3.4 (RX and TX respectively)?
I already use one UART to communicate with my PC via USB. To do so, I use Serial.println() and such. Now that one UART is taken, how do I configure and use a second UART that goes to these pins? Or would it be better to rewire my Bluetooth chip (BlueGiga WT32) to some other pins?
Configuration aside, Serial does not seem to allow for multiple UARTs. How does it know which UART to print to?
For some reason, I could not find any manual on interacting with wt32 from Energia, or on interacting with multiple UARTs on Energia.
Edit: found this link: http://forum.43oh.com/topic/3942-stellaris-lm4f120-multiple-uart-problem/
only, incidentally, they say it does not work. Still, it's a lead... but it is my understanding that I still have to configure the UART on those two pins, if that is possible at all.
Edit 2: Found the MultiSerial example in Energia:
/*
Multiple serial test
Receives from the main serial port, sends to the others.
Receives from serial port 1, sends to the main serial (Serial 0).
The circuit:
* Any serial device attached to Serial port 1
* Serial monitor open on Serial port 0:
created 30 Dec. 2008
by Tom Igoe
This example code is in the public domain.
*/
void setup() {
// initialize both serial ports:
Serial.begin(9600);
Serial1.begin(9600);
}
void loop() {
// read from port 1, send to port 0:
if (Serial1.available()) {
int inByte = Serial1.read();
Serial.write(inByte);
}
}
Now, this may sound like a really noob question, but where is my Serial1? Is it somewhere on my pinout? The corresponding page in the Energia manual is just a stub.
Now, this link: http://energia.nu/Tutorial_SerialCallResponse.html has a pinout according to which P1.1 is TXD and P1.2 is RXD, which is something that I have not seen documented elsewhere. I suspect that it is assigned in this particular example, only I don't see the assignment in the code; also, I suspect that this is the backchannel, unless the switch is turned. Confused!
Edit 3: found the SoftwareSerial example, which turns pins of your choice into RX and TX. So at least I probably have a software solution; of course, I'd prefer hardware. The manual for the LaunchPad says the hardware supports up to 4 serial ports, but how? Where are the pins?
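For reference, a SoftwareSerial sketch would look roughly like this; the pin numbers are placeholders I have not checked against the F5529 LaunchPad pin map, and it assumes the Energia port ships the SoftwareSerial library:

#include <SoftwareSerial.h>

SoftwareSerial btSerial(5, 6);   // RX, TX -- placeholder pin numbers, check the Energia pin map

void setup() {
  Serial.begin(9600);            // hardware backchannel UART to the PC
  btSerial.begin(9600);          // software UART on the chosen pins
}

void loop() {
  // forward bytes between the PC and the Bluetooth module
  if (btSerial.available()) Serial.write(btSerial.read());
  if (Serial.available())   btSerial.write(Serial.read());
}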
Sorry I keep adding to this. I'll tidy it up when there is a solution.
If you don't need those UART pins to talk to the host, you can remove the jumpers on the board that connect RXD and TXD, and then connect those pins to your Bluetooth module.
Serial.begin() works for the backchannel UART pins (TXD and RXD between the target IC and the debugger IC, marked in white squares in the image below):
http://energia.nu/wordpress/wp-content/uploads/2015/03/2016-06-09-LaunchPads-MSP432-2.0-%E2%80%94-Pins-Maps.jpg
Serial1.begin() works for P3_2 & P3_3.