Isochronous endpoints are one-way only. But a single isochronous IN transmission is described in various sources (e.g. here http://www.beyondlogic.org/usbnutshell/usb4.shtml#Isochronous) as one IN token packet (from the host to a device) followed by one DATA packet (from the device to the host). So I see communication in both directions here. Is the token packet from the host received by the same isochronous IN endpoint which then sends the data?
What is synchronization for? Here http://wiki.osdev.org/Universal_Serial_Bus#Supporting_Isochronous_Transfers we read: "Due to application-specific sampling rates, different hardware clock designs, scheduling policies in the operating system, or even physical anomalies, the host and isochronous device could fall out of synchronization." But how? I understand the sequence of events like this: the device fills its outgoing buffer with data and waits for the token (some interrupt, probably). The host sends the token packet and waits for the data packet, which (I think) should arrive instantly. The sequence is repeated every frame (full speed) and everybody is happy. Isn't the token packet synchronizing the reply from the device?
Here http://wiki.osdev.org/Universal_Serial_Bus#SYNC_Field we read: "All USB packets start with a SYNC field which serves, unsurprisingly, as a synchronization mechanism between the receiver and the sender." So once again I ask: why synchronize isochronous transfers in another manner than this?
All USB transactions are always initiated by the Host. E.g. for an isochronous IN transaction the Host will first ask the device for the next piece of data. This is of course a data flow to the device, but on a lower protocol level (Token Packets). So a kind of control data is sent TO the device, but the meaningful data (Data Packets) is only sent FROM the device (IN direction). When you develop software for a device you can often abstract away the Bus Protocol details, because they are handled in hardware (the USB device peripheral). The low-level messages do not enter an endpoint. Endpoints are on a higher layer.
Consider a USB microphone: it records audio data at a very specific sample rate which is based on the local oscillator of the device. It's only a matter of time before the clock of the Host and the clock of the microphone drift apart. After a few minutes a gap in the data would appear (or a buffer overflow would occur) because the microphone is recording data at a slightly different speed than the USB host expects (from the device's configuration descriptor). So they need some kind of synchronization.
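To put numbers on that, here is a back-of-the-envelope sketch in C; the 44.1 kHz sample rate and the 100 ppm clock error are made-up example values:

```c
#include <stdio.h>

/* Back-of-the-envelope clock drift estimate for an isochronous stream.
 * The sample rate and ppm offset are made-up example values. */
int main(void)
{
    const double sample_rate_hz  = 44100.0; /* nominal audio sample rate  */
    const double clock_error_ppm = 100.0;   /* device oscillator vs. host */

    /* Extra (or missing) samples the device produces per second. */
    double drift_samples_per_s = sample_rate_hz * clock_error_ppm / 1e6;

    /* Time until the streams disagree by a full millisecond of audio. */
    double samples_in_1ms = sample_rate_hz / 1000.0;
    double seconds_to_1ms_gap = samples_in_1ms / drift_samples_per_s;

    printf("drift: %.2f samples/s -> 1 ms of audio every %.0f s\n",
           drift_samples_per_s, seconds_to_1ms_gap);
    return 0;
}
```

At 100 ppm that is 1 ms of drift every 10 seconds, so a buffer holding a few milliseconds of audio will over- or underrun within a minute or two.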
The SYNC field is on the lowest layer. It is for bit synchronization only and should not be confused with the synchronization needed for isochronous endpoints (question 2).
You might want to take a look at the official USB 2.0 specification (usb_20.pdf) instead of all the third-party wikis, which seem to have confused you.
I'm developing a user-space application whose end goal is to send data from one Linux machine A (an embedded device) and receive it on another Linux machine B (also an embedded device) over AF_XDP. I want to use AF_XDP to achieve low latency.
I understand that for my user-space application to receive data over AF_XDP I should use XDP_REDIRECT (please correct me if I'm wrong). What I can't understand is which XDP option (XDP_REDIRECT, XDP_TX, etc.) I should use to transmit data from my user-space application over AF_XDP.
What I can't understand is which XDP option (XDP_REDIRECT, XDP_TX, etc.) I should use to transmit data from my user-space application over AF_XDP.
That is not how AF_XDP works. Once all of the setup work is done, by you or by a library you use, there will be a shared memory region called the UMEM and four ring buffers which are used in a sort of dance to receive and transmit packets. The rings are called: UMEM_Fill, UMEM_Completion, RX, and TX.
On the ingress side, your XDP program is triggered so you can decide whether or not to send traffic to your AF_XDP socket. Here you will use bpf_redirect_map to redirect the traffic into the socket/map, which will return XDP_REDIRECT if successful (it may fail if the socket isn't set up correctly or the buffers are full).
To read the redirected packet you dequeue the RX queue/ring; the frame descriptor within points to an address in the UMEM, which is where your packet data is located. As soon as you are done with this memory you send the frame descriptor back over the UMEM_Fill queue/ring so the NIC/driver can re-use it.
To write a packet you take a free frame (frames whose transmission has completed come back to you on the UMEM_Completion queue/ring), fill the region of UMEM you have temporary control over with your packet, and give the frame descriptor to the TX queue/ring. The NIC/driver will consume the TX queue/ring and send out the packet. No XDP program is triggered on egress.
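If you use the xsk helper API from libxdp (<xdp/xsk.h>, formerly <bpf/xsk.h> in libbpf), the transmit half looks roughly like the sketch below. The UMEM/ring/socket setup is omitted, and `tx`, `comp`, `umem_area`, `frame_addr` and `xsk` are placeholders for whatever your setup code produced:

```c
#include <string.h>
#include <sys/socket.h>
#include <xdp/xsk.h>

static int send_one_packet(struct xsk_ring_prod *tx,
                           struct xsk_ring_cons *comp,
                           void *umem_area, __u64 frame_addr,
                           struct xsk_socket *xsk,
                           const void *pkt, __u32 len)
{
    __u32 idx;

    /* 1. Reserve a slot on the TX ring. */
    if (xsk_ring_prod__reserve(tx, 1, &idx) != 1)
        return -1;                          /* TX ring is full */

    /* 2. Copy the packet into the UMEM frame we own and describe it. */
    memcpy(xsk_umem__get_data(umem_area, frame_addr), pkt, len);
    struct xdp_desc *desc = xsk_ring_prod__tx_desc(tx, idx);
    desc->addr = frame_addr;
    desc->len  = len;

    /* 3. Hand the descriptor to the kernel and kick the driver.
     *    (A real app checks xsk_ring_prod__needs_wakeup() first.) */
    xsk_ring_prod__submit(tx, 1);
    sendto(xsk_socket__fd(xsk), NULL, 0, MSG_DONTWAIT, NULL, 0);

    /* 4. Later: reclaim frames the NIC has finished sending from the
     *    UMEM_Completion ring so they can be reused. */
    __u32 cidx;
    __u32 done = xsk_ring_cons__peek(comp, 1, &cidx);
    if (done) {
        __u64 completed = *xsk_ring_cons__comp_addr(comp, cidx);
        (void)completed;                    /* return to your free-frame list */
        xsk_ring_cons__release(comp, done);
    }
    return 0;
}
```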
I would highly recommend you check out https://www.kernel.org/doc/html/latest/networking/af_xdp.html, which has more details on using AF_XDP.
I am writing a block device driver and have been copying parts of another one. It uses the function blk_mq_rq_to_pdu, but I don't really understand what it does. It seems to just return a void pointer. What exactly is a PDU in Linux?
PDU is not Linux-specific. It stands for Protocol Data Unit. From Wikipedia:
In telecommunications, a protocol data unit (PDU) is a single unit of information transmitted among peer entities of a computer network. A PDU is composed of protocol-specific control information and user data. In the layered architectures of communication protocol stacks, each layer implements protocols tailored to the specific type or mode of data exchange.
So in device drivers, this is a generic term for whatever units of data are managed by the specific device or protocol.
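For blk_mq_rq_to_pdu specifically: the driver tells blk-mq how many extra bytes to allocate behind every struct request via cmd_size in its tag set, and blk_mq_rq_to_pdu(rq) returns a pointer to exactly that per-request scratch area (it literally points just past the struct request). A minimal kernel-side sketch, where struct my_cmd is a made-up example type:

```c
#include <linux/blk-mq.h>

/* The "PDU" is driver-private memory that blk-mq allocates
 * immediately after each struct request. */
struct my_cmd {             /* whatever per-request state the driver needs */
	int   status;
	void *dma_buf;
};

static struct blk_mq_tag_set tag_set;

static blk_status_t my_queue_rq(struct blk_mq_hw_ctx *hctx,
				const struct blk_mq_queue_data *bd)
{
	struct request *rq = bd->rq;
	/* Points right past the struct request, into our cmd_size bytes. */
	struct my_cmd *cmd = blk_mq_rq_to_pdu(rq);

	cmd->status = 0;
	/* ... drive the hardware using cmd ... */
	blk_mq_start_request(rq);
	return BLK_STS_OK;
}

static int my_driver_init(void)
{
	/* Ask blk-mq to reserve our PDU behind every request it allocates. */
	tag_set.cmd_size = sizeof(struct my_cmd);
	/* ... fill in ops, nr_hw_queues, queue_depth, then call
	 *     blk_mq_alloc_tag_set(&tag_set); ... */
	return 0;
}
```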
I'm setting up a device to advertise as a server (peripheral) and a mobile phone to act as the client (central). My issue is: when my central 'reads' from the peripheral, how many packets can the peripheral respond with for a single request?
What I have seen so far is that the peripheral may respond with a 20-byte packet and then indicate another 20-byte packet. I don't see how this could achieve the stated data rates.
From your question I understand that what you actually want to know is how to achieve the maximum BLE data rate, right?
Have a look here:
https://stackoverflow.com/search?q=BLE+throughput
Especially here: BLE peripheral throughput limit, and here: How can I increase the throughput of my BLE application?
In general, the key is not the number of packets in the first place. First have a look at the MTU size and the connection interval. After that, yes, it is possible to send multiple packets per connection interval. (I have to guess here: usually 3 or 4, but I'm not sure.)
Moreover, in newer Bluetooth standard versions you can check whether your device supports packet length extension.
For further reading I suggest
https://punchthrough.com/pt-blog-post/maximizing-ble-throughput-on-ios-and-android/
and
https://punchthrough.com/pt-blog-post/maximizing-ble-throughput-part-3-data-length-extension-dle/
Using notifications instead of read requests will give you the best throughput.
Then a lower connection interval will also usually increase the throughput.
Using LE Data Length Extension and a large MTU will increase it even further.
The last step is to switch from the 1 Mbit/s PHY to the 2 Mbit/s PHY to get even better performance.
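To put rough numbers on these knobs, here is a back-of-the-envelope estimate. The packets-per-interval values are the guesses from above, and the MTU/interval pairs are illustrative; the only fixed number is the 3-byte ATT notification overhead (1 opcode byte + 2 handle bytes):

```c
#include <stdio.h>

/* Rough ATT notification throughput estimate. All configuration
 * values are illustrative; packets-per-interval is stack dependent. */
int main(void)
{
    struct { int mtu, pkts_per_interval; double interval_ms; } cfg[] = {
        {  23, 4, 30.0 },  /* default MTU, moderate interval       */
        { 247, 4, 15.0 },  /* DLE + large MTU, short interval      */
    };

    for (int i = 0; i < 2; i++) {
        /* ATT notification overhead: 1 opcode byte + 2 handle bytes. */
        int payload = cfg[i].mtu - 3;
        double bytes_per_s = payload * cfg[i].pkts_per_interval
                             * (1000.0 / cfg[i].interval_ms);
        printf("MTU %3d, %d pkts / %.0f ms -> %.1f kB/s\n",
               cfg[i].mtu, cfg[i].pkts_per_interval,
               cfg[i].interval_ms, bytes_per_s / 1000.0);
    }
    return 0;
}
```

With the default 23-byte MTU that works out to only about 2.7 kB/s, versus roughly 65 kB/s with DLE and a large MTU, which is why the 20-byte packets you observed look nothing like the advertised rates.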
Is it possible to send notification messages to nearby Bluetooth devices without pairing? I have found a protocol for this - OBEX Object Push - but I am not clear whether it is feasible without a pairing request. Are there any demo apps for reference?
Yes and no.
If you are actually talking about connecting but not pairing, then, yes.
If you are talking about no connection at all, then no.
When creating a Bluetooth connection between two or more devices the following steps are taken.
Inquiry – If two Bluetooth devices know absolutely nothing about each other, one must run an inquiry to try to discover the other. One device sends out the inquiry request, and any device listening for such a request will respond with its address, and possibly its name and other information. The closest device is not necessarily the fastest to respond; any device that hears the call will try to respond.
Paging – Paging is the process of forming a connection between two Bluetooth devices. Before this connection can be initiated, each device needs to know the address of the other (found in the inquiry process).
Connection – After a device has completed the paging process, it enters the connection state. While connected, a device can either be actively participating or it can be put into a low power sleep mode.
• Active Mode – This is the regular connected mode, where the device is actively transmitting or receiving data.
• Sniff Mode – This is a power-saving mode, where the device is less active. It’ll sleep and only listen for transmissions at a set interval (e.g. every 100ms).
• Hold Mode – Hold mode is a temporary power-saving mode where a device sleeps for a defined period and then returns to active mode when that interval has passed. The master can command a slave device to hold.
• Park Mode – Park is the deepest of sleep modes. A master can command a slave to “park”, and that slave will become inactive until the master tells it to wake back up.
Two devices can be bonded together through a one-time process called pairing. When two devices are paired, they store each other’s addresses, names and profiles in memory, allowing them to automatically establish a connection as soon as they are in range of each other.
It is not possible to send OPP (or other) communication between two devices before connecting.
It is possible to send communication between two devices after connection but before pairing.
I'm working on a server architecture for sending/receiving messages from remote embedded devices, which will be hosted on Windows Azure. The front-facing servers are going to be maintaining persistent TCP connections with these devices, and I need a way to communicate with them on the backend.
Problem facts:
Devices: ~10,000
Frequency of messages each device sends up to the servers: 1/min
Frequency of messages originating server side (e.g. from user actions, scheduled triggers, etc.): 100/day
Average size of message payload: 64 bytes
Upward communication
The devices send up messages very frequently (sensor readings). The constraints on that data are not very strong, because we can aggregate/insert those sensor readings in a batched manner and they don't require in-order guarantees. I think the best way of handling them is to put them in a Storage Queue and have a worker process poll the queue at intervals and dump the data. Of course, I'll have to be careful to make sure the worker process does this frequently enough that the queue doesn't back up indefinitely. The max batch size of Azure Storage Queues is 32, but I'm thinking of potentially pulling in more than that: something like publishing to the data store every 1,000 readings or every 30 seconds, whichever comes first.
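Roughly the flush logic I have in mind, sketched independently of any Azure API; store_batch() is a placeholder for the actual bulk insert:

```c
#include <stdio.h>
#include <time.h>

/* "Publish every 1,000 readings or every 30 seconds, whichever comes
 * first." store_batch() stands in for the real bulk insert. */
#define MAX_BATCH   1000
#define MAX_AGE_SEC 30

struct reading { char payload[64]; };       /* ~64-byte sensor message */

static struct reading batch[MAX_BATCH];
static int    batch_len;
static time_t batch_started;

static void store_batch(const struct reading *r, int n)
{
    (void)r;
    printf("flushing %d readings\n", n);    /* bulk insert goes here */
}

static void maybe_flush(void)
{
    if (batch_len == 0)
        return;
    if (batch_len >= MAX_BATCH ||
        time(NULL) - batch_started >= MAX_AGE_SEC) {
        store_batch(batch, batch_len);
        batch_len = 0;
    }
}

static void on_reading(struct reading r)
{
    if (batch_len == 0)
        batch_started = time(NULL);         /* age from first queued item */
    batch[batch_len++] = r;
    maybe_flush();    /* a periodic timer should also call maybe_flush() */
}

int main(void)
{
    struct reading r = { "sensor-data" };
    for (int i = 0; i < 2500; i++)          /* simulate incoming messages */
        on_reading(r);
    return 0;
}
```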
Downward communication
The server sends down updates and notifications much less frequently. This is a slightly harder problem, as I can see two viable paradigms here (with some blending in between). I could either:
Create a Service Bus queue for each device (or one queue with thousands of subscriptions - the limit on the number of queues is 10,000)
Have a state table housed in a DB that contains the latest "state" of a specific message type to be sent to the devices
With option 1, the application server simply enqueues a message in a fire-and-forget manner. On the front-end servers, however, quite a few things have to happen. Concerns I can see include:
Monitoring 10k queues (or many subscriptions off of a queue - the Azure SDK apparently reuses connections for subscriptions to the same queue)
Connection Management
Should no longer monitor a queue if device disconnects.
Need to expire messages if a device is disconnected for an extended period of time (so that the queue isn't backed up)
Need to enable some type of "refresh" mechanism to update a device's complete state when it comes back online
The good news is that Service Bus queues are durable, and with sessions messages can be made to arrive in FIFO order.
With option 2, the DB would house a table maintaining the state of all the devices. This table would be checked periodically by the front-facing servers (every few seconds or so) for state changes written to it by the application server. The front-facing servers would then dispatch to the devices. This removes the requirement for FIFO queueing, the reasoning being that each message contains the latest state and doesn't have to compete with other messages destined for the same device. The message is ephemeral: if it fails, it will be resent when the device reconnects and requests a refresh, or at the next check interval of the front-facing server.
In this scenario, the need for queues seems to be removed, but the DB becomes the bottleneck here, and I fear it's not as scalable.
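Sketched out, the dispatch loop on a front-facing server might look like this; fetch_state() and send_to_device() are stand-ins for the DB query and the socket write:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Option-2 dispatch loop: resend only the latest state, tracked by a
 * version number the application server bumps on every write. */
struct device_state {
    uint64_t device_id;
    uint64_t version;       /* bumped by the application server on write */
    char     payload[64];   /* latest full state, not a message backlog  */
};

static int fetch_state(uint64_t device_id, struct device_state *out)
{
    /* stand-in for the periodic state-table query */
    out->device_id = device_id;
    out->version   = 1;
    strcpy(out->payload, "example-state");
    return 0;
}

static int send_to_device(uint64_t device_id, const char *payload)
{
    /* stand-in for writing to the device's persistent TCP socket */
    printf("-> device %llu: %s\n", (unsigned long long)device_id, payload);
    return 0;
}

static void poll_once(const uint64_t *ids, uint64_t *last_sent, int n)
{
    for (int i = 0; i < n; i++) {            /* connected devices only */
        struct device_state s;
        if (fetch_state(ids[i], &s) != 0)
            continue;
        if (s.version == last_sent[i])
            continue;                        /* nothing new, skip */
        if (send_to_device(s.device_id, s.payload) == 0)
            last_sent[i] = s.version;        /* on failure, retry next poll */
    }
}

int main(void)
{
    uint64_t ids[] = { 42 }, last_sent[] = { 0 };
    poll_once(ids, last_sent, 1);            /* run every few seconds */
    return 0;
}
```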
These are both viable approaches, and I feel this question is already becoming too large (although I can provide more details if necessary). I just wanted to get a feel for what's possible, what's usually done, whether there's something fundamental I'm missing, and what I can take advantage of in the cloud so as not to reinvent the wheel.
If you can identify the device (maybe by device ID/IMEI/MAC address) from the message it sends, then you can reduce the number of queues from 10,000 to 1 and not have 10,000 subscriptions either. This could also help you with the downward communication, as you will be able to identify the device and send the message to the appropriate socket.
Since, as you mentioned, the connections are long-lived, you could deliver commands to the devices that are connected and decide what to do with commands for the devices that are not connected.
Hope it helps