[Edit: I found the reason, see below]
The problem:
I created a "driver" for a device in Windows using Python (PyUSB and libusb-win32). While this software works seamlessly on multiple PCs under Windows, on my Linux (Kubuntu 18.10) test laptop a sequence of 512-byte bulk writes times out after the second 512-byte bulk transfer.
Interesting: I also tried the same thing using VirtualBox, and it turns out that with a Windows guest in VirtualBox on the same Linux host, the same error still occurs. So the cause does not seem to be the operating system the driver runs on, but something on the Linux host side.
The question:
What can happen under Linux that does not happen under Windows and causes a timeout ([Errno 110])?
More information, in case it helps:
Under Windows, Wireshark shows a delta of 6 ms between the first two bulk writes and about 5 ms between each following pair, while under Linux the delta is only roughly 3 ms, which mostly results from a sleep operation (the relevant Python source code is shown below). Doubling that sleep time changes nothing.
dmesg shows messages like 'bulk endpoint ## has invalid maxpacket 64', where ## is 0x01, 0x08 and 0x81.
The device only has one configuration.
The test laptop has only USB 3.0 connectors, whereas the Windows PCs have both USB 3.0 and 2.0 ports; I tested all of them.
Wireshark shows the device answering with another (empty) bulk transfer on every bulk write under Linux, while it does not show that under Windows. As far as I understand, that is because USBPcap cannot capture handshakes under Windows. But I am not sure about that, because I do not know whether this type of response would really be classified as "URB_BULK out".
I tried libusb0, libusb1 and OpenUSB as backends under Linux, without success.
The bulk transfer in question is the transfer of FPGA firmware to the device.
I am able to communicate with the device before the multiple-512-byte-chunk bulk operation on the same endpoints using only a few bytes. The code that then causes the timeout is the following; the timeout occurs in the second iteration of this for loop:
for chunk in chunks:  # chunks: array of bytearrays with 512 bytes each
    self.write(0x01, chunk)
    time.sleep(0.003)
[Edit] The reason: I found out that this only occurs on my test laptop, which uses xhci, not on a second Linux test machine using ehci. So this might be caused by xhci. I do not have a workaround yet, but this at least gives an explanation.
It turns out that the device requested fewer bytes per packet; the requested packet size (64 bytes) could be found in dmesg, as already mentioned in the question. Since xhci does not officially support that, Linux decided to ignore the request. Windows apparently went along with it and split larger transfers into the requested packet size. So the solution was to manually split the data into 64-byte packets before transferring it.
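A minimal sketch of that workaround with plain PyUSB; the vendor/product IDs, the firmware file name and the helper name are illustrative, not taken from the original driver:

import usb.core

PACKET_SIZE = 64  # packet size the device actually requested (see dmesg)

def write_in_packets(dev, endpoint, data, packet_size=PACKET_SIZE):
    # Send the data in packet_size chunks so the host controller never
    # builds packets larger than the device asked for.
    for offset in range(0, len(data), packet_size):
        dev.write(endpoint, data[offset:offset + packet_size])

dev = usb.core.find(idVendor=0x1234, idProduct=0x5678)  # hypothetical IDs
dev.set_configuration()
fw_data = open("firmware.bin", "rb").read()  # hypothetical firmware image
write_in_packets(dev, 0x01, fw_data)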
I'm working on a project that uses the SPIFFS, Bluetooth and WiFi libraries. The program is already set up so the libraries don't interfere with each other's communication, since Bluetooth can't work while WiFi is active. But I'm getting the following problem when I attempt to add a call to the library https://github.com/mobizt/Firebase-ESP32, which is responsible for making the connection to the Firestore database:
text section exceeds available space in board
Sketch uses 1517102 bytes (115%) of program storage space. Maximum is 1310720 bytes.
Global variables use 63300 bytes (19%) of dynamic memory, leaving 264380 bytes for local variables. Maximum is 327680 bytes.
Sketch too big; see http://www.arduino.cc/en/Guide/Troubleshooting#size for tips on reducing it.
Error compiling for board DOIT ESP32 DEVKIT V1.
I only get this error when adding this piece of code:
Firebase.begin(&config, &auth);
Firebase.reconnectWiFi(true);
I'm using the Arduino IDE to work with the ESP32, but I have ESP-IDF installed as well, in case it helps solve the issue.
It looks like you've reached the limit of the partition size for your application. I don't know exactly how Arduino configures the ESP-IDF partitions, but you should be able to change them however you want; in the Arduino IDE the ESP32 core usually exposes this under Tools > Partition Scheme, where a scheme without OTA (e.g. "Huge APP") gives the application more flash space. See the documentation on partitions.
I have a problem with the I²C/SMBus on a Linux system with an Intel Apollo Lake processor. I am trying to read from and write to an EEPROM, but I face some issues. My EEPROM is located at address 0x56, and I am able to watch the bus with my logic analyzer.
When I try to read from the device via i2c-tools (i2cget), the system behaves as expected. My issue occurs when I try to perform a write command, for example via i2cset, which ends with an error (Write failed). Because I can watch the bus electrically, I can also say that all lines stay HIGH and the bus is not touched at all. I was able to activate the dev_dbg() output in the i2c_i801 driver, and when I run i2cset I find the following debug message in dmesg:
[ 765.095591] [2753] i2c_i801:i801_check_post:433: i801_smbus 0000:00:1f.1: No response
When running my minimal I²C Python code using the smbus2 lib, I get the following error message and the above mentioned debug message:
from smbus2 import SMBus
bus = SMBus(0)
b = bus.read_byte_data(86,10) #<- This is performed
b = bus.write_byte_data(86,10,12) #<- This is not performed
bus.close()
Error:
File "usr/local/lib/python3.8/dist-packagers/smbus2-0.4.0-py3.8.egg/smbus/smbus2.py", line 455, in write_byte_data
ioctl(self.fd, I2C_SMBUS, msg)
OSERROR: [Errno 6] No such device or adress
A big hint for me is that I am not able to perform a write command in the address range from 0x50 to 0x57. My guess is that some driver locks this address range to prevent write commands to that "dangerous" area.
My question is: "Does anyone know this kind of behavior, and is there a solution so that I can write to my EEPROM at address 0x56? Or: is there a lock surrounding the I²C address range from 0x50 to 0x57, and what exactly is enforcing it?"
I am kind of a newbie to the whole driver and kernel world, so please be kind; it is quite possible that I made a beginner's mistake.
I would appreciate any hints and tips related to my problem that I can follow up on.
It seems that I found the cause of my problem. In this forum post it is described that Intel changed a configuration bit in the SMBus controller.
OK, I know what's going on.
Starting with the 8-Series/C220 chipsets, Intel introduced a new
configuration bit for the SMBus controller in register HOSTC (PCI
D31:F3 Address Offset 40h):
Bit 4 SPD Write Disable - R/WO.
0 = SPD write enabled.
1 = SPD write disabled. Writes to SMBus addresses 50h - 57h are
disabled.
This badly documented change in the configuration explains the issues.
One challenge is that, to apply and enable changes to the SPD Write Disable bit, the system needs to be rebooted (the bit is R/WO, write-once). Unfortunately, while rebooting, the BIOS changes the bit back to its default. The only solution seems to be an adaptation of the BIOS.
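For anyone who wants to verify the bit on their own system, here is a minimal sketch that reads the HOSTC register (offset 0x40) from the SMBus controller's PCI configuration space and tests bit 4; it assumes the controller sits at 0000:00:1f.1 as in the dmesg output above and must be run as root:

# Read HOSTC (offset 0x40) of the SMBus controller's PCI config space and
# test bit 4 (SPD Write Disable). Adjust the PCI address if your controller
# lives at a different device/function.
PCI_CONFIG = "/sys/bus/pci/devices/0000:00:1f.1/config"

with open(PCI_CONFIG, "rb") as f:
    f.seek(0x40)
    hostc = f.read(1)[0]

if hostc & (1 << 4):
    print("SPD Write Disable is set: writes to 0x50-0x57 are blocked")
else:
    print("SPD Write Disable is clear: writes to 0x50-0x57 should work")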
For me, this issue is resolved. I just wanted to share this information in case someone faces the same issues.
Using BlueZ, which "is the official Linux Bluetooth stack", I'd like to know which of the two methods below is better suited for detecting a device's presence nearby.
To be more exact, I want to periodically scan for a Bluetooth device (not BLE => no advertisement packets are sent).
I found two ways of detecting it:
1.) Using l2ping
# l2ping BTMAC
2.) Using hcitool
# hcitool name BTMAC
Both approaches work.
I'd like to know which approach would drain more of the scanned device's battery.
Looking at solution #1 (l2ping's source):
It uses a standard socket connect call to connect to the remote device, then uses the send command to send data to it:
send(sk, send_buf, L2CAP_CMD_HDR_SIZE + size, 0)
Now, L2CAP_CMD_HDR_SIZE is 4 and the default size is 44, so altogether 48 bytes are sent as an L2CAP_ECHO_REQ and echoed back.
For hcitool I have only found the entry point:
int hci_read_remote_name(int dd, const bdaddr_t *bdaddr, int len, char *name, int to);
My questions:
which of these approaches is better (less power-consuming) for the remote device? If there is any difference at all.
shall I reduce the l2ping's size? What shall be the minimum?
is my assumption correct that hci_read_remote_name also connects to the remote device and sends some kind of request to it for getting back its name?
To answer your questions:
which of these approaches is better (less power-consuming) for the remote device? If there is any difference at all.
l2ping BTMAC is the more suitable command purely because this is what it is meant to do. While "hcitool name BTMAC" is used to get the remote device's name, "l2ping" is used to detect its presence which is what you want to achieve. The difference in power consumption is really minimal, but if there is any then l2ping should be less power consuming.
shall I reduce the l2ping's size? What shall be the minimum?
If changing the l2ping size requires modifying the source code then I recommend leaving it the same. By leaving it the same you are using the same command that has been used countless times and the same command that was used to qualify the BlueZ stack. This way there's less chance for error and any change would not result in noticeable performance or power improvements.
is my assumption correct that hci_read_remote_name also connects to the remote device and sends some kind of request to it for getting back its name?
Yes, your assumption is correct. According to the Bluetooth Specification v5.2, Vol 4, Part E, Section 7.1.19 Remote Name Request Command:
If no connection exists between the local device and the device
corresponding to the BD_ADDR, a temporary Link Layer connection will
be established to obtain the LMP features and name of the remote
device.
I hope this helps.
I have an Android app that writes several bytes to a Bluetooth device.
Looking at btsnoop_hci.log I see that, when a large number of bytes is sent to the BLE device, the app uses Prepare Write Request several times and then Execute Write Request: Immediately Write All.
Now my problem is how to perform this from my own application using an RN4870 module.
At the moment I can connect, read services and characteristics, and write using the CHW command as described in the manual when there are only a few bytes.
But I cannot write the way the remote BLE device expects when there are a lot of bytes.
Thank You for support
Marco
This is the Microchip answer:
Hello,
The core specifications are handled by the firmware.
The user doesn't have access at this level, so there is nothing you can set.
Regarding the long data question:
"Does RN4870 module support the Data Length Extension feature? "
RN4870 rev 1.28 supports DLE, but only partially. The normal packet size in BLE without DLE is 20 bytes.
With a standard DLE feature, the normal packet size should be 251 bytes.
However, in RN4870 Rev 1.28, the packet size is 151 bytes. So it is not a full implementation of the DLE.
The DLE feature (Data Length Extension) is embedded into the lower levels of the Bluetooth stack and there are no specific commands to enable or disable DLE. Essentially, if the peer device also supports DLE, then the DLE will be enabled.
So there are no specific commands that you need to issue to increase throughput through DLE.
Regards,
In other words there is nothing to do!
In an Android application you can't directly set the DLE length; instead you should set the MTU size. The Android Bluetooth stack will calculate the DLE length based on the MTU. The maximum data length supported by the BT protocol is 251 bytes, but it can be anywhere between 27 and 251 depending on the BT controller hardware capability. During connection the BT device will negotiate with the peer device (if the peer device supports DLE) to set the maximum DLE size supported by both devices.
To increase your throughput you can request the maximum supported MTU size of 512. You can also write without response and do error checking on the data using your own logic, for example a parity or CRC check, and re-transmit from the application for better throughput.
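As a language-agnostic illustration of that last idea (sketched in Python for brevity, not tied to any particular BLE API; the 16-byte payload size and the CRC-32 choice are assumptions for the example), each write-without-response packet could carry its own checksum so the receiver can detect a bad chunk and ask for a re-transmit:

import binascii

PAYLOAD_SIZE = 16  # user bytes per packet (assumption for the example)

def frame_packets(data):
    # Split data into PAYLOAD_SIZE chunks, each followed by a 4-byte CRC-32.
    packets = []
    for offset in range(0, len(data), PAYLOAD_SIZE):
        chunk = data[offset:offset + PAYLOAD_SIZE]
        packets.append(chunk + binascii.crc32(chunk).to_bytes(4, "big"))
    return packets

def check_packet(packet):
    # Return the payload if the CRC matches, otherwise None (re-transmit).
    chunk, crc = packet[:-4], packet[-4:]
    return chunk if binascii.crc32(chunk).to_bytes(4, "big") == crc else None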
The dmaengine framework in Linux significantly simplifies writing drivers for devices using DMA, especially if they support and use scatter-gather (SG) transfers.
However, there is a problem when the length of such a transfer is not known a priori. This situation is quite common; it may happen with USB transfers or with AXI Stream transfers. In the case of an AXI Stream connected device, the transfer is terminated by setting the tlast signal to '1'. The driver should prepare a buffer for the longest possible transfer, but after the transfer is complete, it should be able to find out the number of actually transferred bytes. Unfortunately, it seems that there is no documented way to read the length of the completed transfer.
A single transfer is represented by an SG table, which is later converted into a dma_async_tx_descriptor using the dmaengine_prep_slave_sg function, submitted to the DMA layer (e.g. like here), and finally scheduled for execution via dma_async_issue_pending. After that, the transfer descriptor may be accessed only via the dma_cookie returned when the descriptor is submitted with dmaengine_submit.
The problem with determining the actual length of the transfer was reported for USB transfers, and a patch adding a transferred field to the dma_async_tx_descriptor was proposed. However, that proposal was rejected, and after a discussion a solution was suggested based on calling dmaengine_tx_status and checking the residue field in the returned dma_tx_state structure; the number of actually transferred bytes would then be the submitted buffer length minus the residue.
Unfortunately, it seems that the proposed solution works neither in the 4.4 kernel (used for Xilinx SoC devices) nor in the newest 4.7 kernel.
The residue field is set to 0 in case of completed transfers regardless of the actual number of transferred bytes.
So my question is: How can I reliably determine the actual number of transferred bytes in the completed SG DMA transfer in a dmaengine compatible driver?
PS. This question, with more Xilinx-specific details related to the AXI DMA IP core, has also been asked on the Xilinx forum.