The iPad transmits a 0x71 packet immediately after connecting, over the control L2CAP channel of a Bluetooth HIDP connection.
Unless I immediately respond with a 0x00 byte over the same channel, the iPad disconnects.
What does 0x71 mean? I cannot find this byte in the HID specification nor in the Bluetooth HIDP specification, although I might just be searching poorly. I have some indication it might be a SET_PROTOCOL packet, probably selecting the Report protocol as opposed to the Boot protocol, but I can't confirm that.
I don't think it matters much that I'm connecting an OS X machine to the iPad, with the OS X machine serving as the server, but I'm noting it here in case it does.
0x71 is a SET_PROTOCOL request selecting Report Protocol mode. The upper nibble (0x7) is the SET_PROTOCOL transaction type, and the lowest bit selects the protocol: Boot mode (0x70) or Report mode (0x71). The 0x00 byte you reply with is a HANDSHAKE with result SUCCESSFUL, which is why the iPad stays connected. See section 7.4.6 of HID_SPEC_V10 from Bluetooth.org for more details.
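For illustration, a minimal sketch of decoding that control-channel byte and building the expected reply (the byte values come from the HIDP spec; everything else is illustrative, not taken from any real stack):

```c
#include <stdint.h>
#include <stdio.h>

/* HIDP transaction header (Bluetooth HID Profile v1.0, section 7):
 * upper nibble = transaction type, lower nibble = parameters. */
#define HIDP_TRANS_HANDSHAKE     0x00  /* lower nibble = result code */
#define HIDP_TRANS_SET_PROTOCOL  0x70  /* bit 0: 0 = Boot, 1 = Report */
#define HIDP_HANDSHAKE_SUCCESS   0x00

int main(void)
{
    uint8_t hdr = 0x71;  /* first byte the iPad sends on the control channel */

    if ((hdr & 0xF0) == HIDP_TRANS_SET_PROTOCOL) {
        printf("SET_PROTOCOL: %s protocol requested\n",
               (hdr & 0x01) ? "Report" : "Boot");

        /* Expected reply on the same channel: HANDSHAKE(SUCCESSFUL),
         * i.e. the single 0x00 byte the iPad insists on. */
        uint8_t reply = HIDP_TRANS_HANDSHAKE | HIDP_HANDSHAKE_SUCCESS;
        printf("reply byte: 0x%02x\n", reply);
    }
    return 0;
}
```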
I'm trying to connect to a BTLE device from C++ on Linux with BlueZ.
Connecting to most devices works fine, but there is one particular device that times out with 90% probability. From a standard Android smartphone, the connection to this device works as intended.
Following @Emil's advice in my other question (thanks!), I've set up a Link Layer sniffer tool for further investigation.
During the sniffing period I tried connecting to Device(Destination) from both Device(Good) and Device(Bad).
Device(Good) is working perfectly: it connected.
Device(Bad) is not working: it timed out.
Now I have Link Layer data for both devices' connection attempts, and there is one significant difference between them:
Device(Good)'s LL Data in its CONNECT_REQ uses 500 for the Timeout value, while Device(Bad)'s LL Data in its CONNECT_REQ uses 42. Since the supervision timeout field is in units of 10 ms, that is 5 s versus 420 ms.
I think Device(Destination)'s response normally arrives between those two values, i.e. after 420 ms but within 5 s; occasionally it arrives within 420 ms, and in those cases BlueZ does finally connect.
Is there any way to change this Timeout value for the CONNECT_REQ in BlueZ, maybe with setsockopt by any chance?
Or is this something hardcoded into the kernel, even for Bluetooth adapters attached via USB?
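The closest knob I've found so far is in debugfs, so as a sketch of what I mean (this assumes a kernel new enough to expose the LE connection parameters there, root access, and a mounted debugfs; if I read the kernel sources correctly, 42 is exactly the kernel's default le_supv_timeout, in units of 10 ms):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sketch: raise the default LE supervision timeout that the kernel puts
 * into CONNECT_REQ. Path assumes hci0; the unit is 10 ms, so 500 -> 5 s. */
int main(void)
{
    FILE *f = fopen("/sys/kernel/debug/bluetooth/hci0/supervision_timeout", "w");
    if (!f) {
        perror("open supervision_timeout");
        return EXIT_FAILURE;
    }
    fprintf(f, "%d\n", 500);  /* up from the default 42 (420 ms) */
    fclose(f);
    return EXIT_SUCCESS;
}
```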
I am using a BeagleBone Black board (kernel 4.14.108-ti-r104) to create a USB gadget using configfs/functionfs. I compose my gadget (using gadgettool), providing details about the device configuration (function, vendor ID, product ID and a ton of other params), run my userspace program that writes the descriptors and strings to ep0, and connect the device to the host. All works fine: I get the BIND (when binding the device to the UDC) and ENABLE (when the host actually connects) events, and my device can read from ep2 and write to ep1. Using Wireshark I can see that the communication looks good; device and configuration descriptors as well as strings are exchanged.
The problem starts when I connect the device to another host. Unfortunately I have almost no control over that host; in particular I cannot run Wireshark there, and I don't even know its OS. The only things I can do are plug/unplug the device, optionally see a message that the device was detected, and optionally perform a restart. What I see on the gadget side is that, following the BIND and ENABLE events, I immediately get a SUSPEND event, and reading from ep2 fails with 108 (ESHUTDOWN). Now the question is how to track the problem down.
I tried usbmon, but it seems it does not capture traffic when the device is in gadget mode. I have also seen https://github.com/torvalds/linux/blob/master/drivers/usb/gadget/udc/trace.h, which seems to define some UDC trace points, but I am not really sure how they can be used.
So the final question is simple: how do I get any information about the traffic on the USB bus when I only have access to the gadget side? I don't need a full trace, but at least some information about which packets were exchanged would be super useful. Did it fail while exchanging device descriptors, configuration/interface/endpoint descriptors, strings, or something totally different?
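For example, would something like this be the intended way to use those trace points? A sketch assuming tracefs is mounted under /sys/kernel/debug/tracing and the kernel has the "gadget" trace events compiled in:

```c
#include <stdio.h>

/* Sketch: enable the UDC trace points defined in
 * drivers/usb/gadget/udc/trace.h (TRACE_SYSTEM "gadget") and stream
 * the resulting events. */
int main(void)
{
    FILE *en = fopen("/sys/kernel/debug/tracing/events/gadget/enable", "w");
    if (!en) { perror("enable gadget events"); return 1; }
    fputs("1\n", en);
    fclose(en);

    FILE *pipe = fopen("/sys/kernel/debug/tracing/trace_pipe", "r");
    if (!pipe) { perror("open trace_pipe"); return 1; }

    char line[512];
    /* Each line shows endpoint queue/complete activity, which at least
     * tells you whether the host ever queued a transfer after ENABLE. */
    while (fgets(line, sizeof(line), pipe))
        fputs(line, stdout);
    fclose(pipe);
    return 0;
}
```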
Small update:
The whole thing is about the Android Open Accessory Protocol, and I am trying to write a gadget that would connect to this accessory.
I have changed my gadget composition somewhat, and now I know the gadget is being identified by the host (it displays the manufacturer/model), so I suppose the issue is not in the device descriptor or the strings. I have used two additional flags in the descriptors (FUNCTIONFS_ALL_CTRL_RECIP | FUNCTIONFS_CONFIG0_SETUP), and when connecting to my computer I get a setup event (request 51, as expected), but when connecting to my accessory I still get SUSPEND/ESHUTDOWN. This time, though, the time between ENABLE and SUSPEND is much greater (over 10 seconds), which looks to me as if the host sent some message that my gadget never processed, and the host then timed out and disabled the USB device. I still don't know how to find out whether the accessory sent anything to the gadget and what it was...
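For completeness, this is roughly the ep0 event loop I use to see what arrives (a simplified sketch built on <linux/usb/functionfs.h>; `ep0` is assumed to be the already-open descriptor that the descriptors and strings were written to, and error handling is omitted):

```c
#include <stdio.h>
#include <unistd.h>
#include <linux/usb/functionfs.h>

static void event_loop(int ep0)
{
    struct usb_functionfs_event ev;

    while (read(ep0, &ev, sizeof(ev)) == (ssize_t)sizeof(ev)) {
        switch (ev.type) {
        case FUNCTIONFS_BIND:    puts("BIND");    break;
        case FUNCTIONFS_ENABLE:  puts("ENABLE");  break;
        case FUNCTIONFS_SUSPEND: puts("SUSPEND"); break;
        case FUNCTIONFS_SETUP:
            /* With FUNCTIONFS_ALL_CTRL_RECIP | FUNCTIONFS_CONFIG0_SETUP,
             * class/vendor control requests land here, e.g. AOA's
             * request 51 (get protocol). A SETUP must be acknowledged
             * by reading/writing ep0, or the host sees a stall. */
            printf("SETUP bRequestType=0x%02x bRequest=%u wValue=%u wLength=%u\n",
                   ev.u.setup.bRequestType, ev.u.setup.bRequest,
                   (unsigned)ev.u.setup.wValue, (unsigned)ev.u.setup.wLength);
            break;
        default:
            printf("event %u\n", ev.type);
        }
    }
}
```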
Can anyone explain how it works when an application sends UDP unicast datagrams over an 802.11 WiFi network? Assume a non-blocking UDP socket. For concreteness, assume 802.11n or 802.11ac and a reasonably new Linux kernel (Android Lollipop or Debian stable). Specifically, if the sender's NIC does not receive any positive ACK for the sent MPDUs, will the send() call return -1, and will the socket send queue in the kernel be shown as non-empty by netstat? And will the NIC re-send the same MPDUs repeatedly?
If this is not the right place to ask, please point me to a good reference or perhaps another StackExchange site.
In my understanding, Wi-Fi (layer 2) does not care about the UDP/TCP protocol; to the Wi-Fi hardware it is just a frame.
A unicast send goes like this:
frame sent -> no ACK -> retry -> no ACK -> retry -> no ACK -> retry ...
After a few retries, the Wi-Fi hardware will drop this frame and send the next frame in the NIC buffer. The Wi-Fi driver should not keep a frame forever, because dropped or lost frames are a frequent occurrence in Wi-Fi.
As for the UDP protocol: because it is a non-blocking UDP socket, UDP does not surface any such error; it just keeps sending and sending.
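To make that concrete, here is a minimal non-blocking UDP sender (a sketch for Linux/glibc; the address is a documentation placeholder). sendto() succeeds as soon as the kernel accepts the datagram; whether the 802.11 layer ever gets an ACK is invisible up here, and -1 with EAGAIN/EWOULDBLOCK would only mean the local send buffer is momentarily full:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM | SOCK_NONBLOCK, 0);
    if (s < 0) { perror("socket"); return 1; }

    struct sockaddr_in dst = { .sin_family = AF_INET,
                               .sin_port   = htons(5000) };
    inet_pton(AF_INET, "192.0.2.10", &dst.sin_addr); /* example address */

    const char msg[] = "hello";
    ssize_t n = sendto(s, msg, sizeof(msg), 0,
                       (struct sockaddr *)&dst, sizeof(dst));

    /* Success here means "queued in the kernel", not "delivered".
     * 802.11 retries and drops happen below this API. */
    if (n < 0)
        fprintf(stderr, "sendto: %s\n", strerror(errno));
    else
        printf("queued %zd bytes\n", n);
    return 0;
}
```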
Android phones have a feature called Miracast, which also uses UDP as a video streaming protocol and uses Wi-Fi to transfer the data; maybe you can check how that feature works.
Layer-2 (Wi-Fi, in this case) knows nothing about the layer-3 protocol (IP, IPX, etc.) used, much less the layer-4 protocol (TCP, UDP, SPX, etc.) used: Wi-Fi doesn't know about IP, which doesn't know about UDP. The whole point of the network layers is that they are independent of each other. Wi-Fi can carry any layer-3 protocol, which, in turn, can carry any layer-4 protocol, which can carry any upper layer protocols.
In general, to communicate between Bluetooth devices, we first perform Bluetooth pairing between the two devices and then start further communication between them.
My problem scenario is simply to transfer a hello packet from one Bluetooth device to another.
For this I am planning to use the socket programming technique, i.e. RFCOMM sockets.
I got some help about this from http://people.csail.mit.edu/albert/bluez-intro/x502.html
So my query is: do we require Bluetooth pairing between the two devices before initiating communication over an RFCOMM socket connection?
Or is the 48-bit device address alone enough to transfer data packets from one Bluetooth device to the other, so that Bluetooth pairing can be avoided?
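For reference, the minimal RFCOMM client I have in mind, following the pattern from that tutorial (the address and channel are placeholders; build with -lbluetooth):

```c
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/socket.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/rfcomm.h>

int main(void)
{
    const char *dest = "01:23:45:67:89:AB";  /* placeholder peer address */
    struct sockaddr_rc addr = { 0 };

    int s = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM);
    if (s < 0) { perror("socket"); return 1; }

    addr.rc_family = AF_BLUETOOTH;
    addr.rc_channel = (uint8_t)1;            /* placeholder channel */
    str2ba(dest, &addr.rc_bdaddr);

    /* Whether this triggers pairing depends on the remote service's
     * security requirements; the 48-bit address is all the API needs. */
    if (connect(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("connect");
        return 1;
    }
    write(s, "hello", 5);
    close(s);
    return 0;
}
```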
No, it is not required.
A Bluetooth device can be in one of four modes:
Broadcaster
Observer
Peripheral
Central
In Broadcaster mode the device can only send advertisement messages. These include the name and hardware ID.
In Observer mode the device can only receive advertisement messages.
Peripheral = Broadcaster + can accept connection requests.
Central = Observer + can send out connection requests.
If you have an application which does not need to connect, use the first two modes above.
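As a rough sketch with BlueZ's raw HCI API (not production code; I'm assuming the libbluetooth headers, root privileges, and -lbluetooth): the advertising type is what separates a pure Broadcaster from a connectable Peripheral.

```c
#include <stdio.h>
#include <unistd.h>
#include <bluetooth/bluetooth.h>
#include <bluetooth/hci.h>
#include <bluetooth/hci_lib.h>

int main(void)
{
    int dd = hci_open_dev(hci_get_route(NULL));
    if (dd < 0) { perror("hci_open_dev"); return 1; }

    /* advtype selects the role of the advertiser:
     * 0x00 (ADV_IND)         -> connectable, i.e. Peripheral
     * 0x03 (ADV_NONCONN_IND) -> broadcast only, i.e. Broadcaster */
    le_set_advertising_parameters_cp cp = { 0 };
    cp.min_interval = htobs(0x00A0);  /* 100 ms, in 0.625 ms units */
    cp.max_interval = htobs(0x00A0);
    cp.advtype = 0x03;                /* Broadcaster: no connect requests */
    cp.chan_map = 0x07;               /* all three advertising channels */

    if (hci_send_cmd(dd, OGF_LE_CTL, OCF_LE_SET_ADVERTISING_PARAMETERS,
                     LE_SET_ADVERTISING_PARAMETERS_CP_SIZE, &cp) < 0)
        perror("set advertising parameters");

    if (hci_le_set_advertise_enable(dd, 0x01, 1000) < 0)
        perror("enable advertising");

    hci_close_dev(dd);
    return 0;
}
```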
Please let me know if this addresses your question.
I want to clear up my basics before I jump into more complicated Bluetooth matters. I have the following basic questions.
If there are two Bluetooth devices (a phone and a Bluetooth display), is it the case that the Bluetooth connection can only be initiated by the phone?
Suppose a lot of Bluetooth communication is happening between the phone and the Bluetooth display, and both devices can send messages to the other at any time. What is the usual design approach for this communication? Does the phone create a socket connection to the Bluetooth display through RFCOMM the first time, by sending a connect request, and is this connection then maintained the whole time? Or is a socket connection made for every message and then closed, to be reopened and closed again for the next message?
If the connection is kept open for as long as the devices are in range, what are the consequences?
What is the normal way of communicating in the case of a phone and a headset?
Can I get any references so that I can gain some knowledge about this?
1) In general, Bluetooth connections can be initiated by either device. For example, with a phone and a computer, you could start the connection from either side. With a phone and a display or headset, there may be no input interface on one device, so you would initiate connections from the phone. Devices can also auto-negotiate role switches so that they swap master/slave roles.
2) If you have continuous data to exchange, or require low latency, the connection would typically be left up. If you only have occasional messages to exchange, tearing down the connection saves power, because otherwise the devices keep the connection synchronized by exchanging null packets.
3) You can't maintain a connection with devices that are out of range. If they can't communicate for some timeout period (on the order of seconds), they lose sync and drop the connection.
4) Note that a phone and headset do not exchange voice over RFCOMM; they use HSP (the Headset Profile), which carries the audio over a synchronous (SCO) link. Connections for isochronous voice data are inherently different from a sporadic data connection like RFCOMM.
5) A good way to see how "real" devices communicate is to use a tool like hcidump, part of the Linux BlueZ stack. It lets you fully sniff the protocol messages exchanged as you connect devices.