Is it possible to send and receive data with the same URB in Linux USB?

I am developing a USB driver in Linux kernel space where my USB interface has two bulk endpoints (IN and OUT). I am using ONE URB to send and receive data. Can I use the same URB allocated with usb_alloc_urb() for both sending and receiving data?
I am using the steps below to send and receive data with the URB:
usb_alloc_urb() ---> called only once
usb_fill_bulk_urb() ---> using usb_sndbulkpipe
usb_submit_urb() ----> submitted successfully
usb_fill_bulk_urb() ---> using usb_rcvbulkpipe
usb_submit_urb() -----> at this point I am getting error -16.
Are the above steps correct/possible?
Thank you

You cannot use the same URB for two transfers at the same time; error -16 is -EBUSY, which usb_submit_urb() returns when the URB is still active.
To be able to reuse a URB, you must wait until it has completed (successfully or with an error).
To do full-duplex transfers, you need two URBs.
To get high transfer rates, you must pipeline URBs, i.e., you need even more of them.
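A minimal sketch of the full-duplex case with two independent URBs (the names start_io, out_ep, in_ep and the buffers are illustrative, and error cleanup with usb_free_urb() is omitted for brevity):

#include <linux/usb.h>

/* Each URB gets its own completion handler; a URB may only be reused
 * after its handler has run. */
static void out_complete(struct urb *urb)
{
        /* urb->status == 0 on success */
}

static void in_complete(struct urb *urb)
{
        /* urb->actual_length bytes were received into urb->transfer_buffer */
}

static int start_io(struct usb_device *dev, u8 out_ep, u8 in_ep,
                    void *out_buf, int out_len, void *in_buf, int in_len)
{
        struct urb *out_urb = usb_alloc_urb(0, GFP_KERNEL);
        struct urb *in_urb  = usb_alloc_urb(0, GFP_KERNEL);
        int ret;

        if (!out_urb || !in_urb)
                return -ENOMEM;

        usb_fill_bulk_urb(out_urb, dev, usb_sndbulkpipe(dev, out_ep),
                          out_buf, out_len, out_complete, NULL);
        usb_fill_bulk_urb(in_urb, dev, usb_rcvbulkpipe(dev, in_ep),
                          in_buf, in_len, in_complete, NULL);

        ret = usb_submit_urb(out_urb, GFP_KERNEL);
        if (ret)
                return ret;

        /* The IN URB is a separate object, so it can be in flight at the
         * same time as the OUT URB. */
        return usb_submit_urb(in_urb, GFP_KERNEL);
}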

Related

Query related to AF_XDP for transmission of data

I'm developing a user-space application where my end goal is to send data from a Linux machine A (an embedded device) and receive it on another Linux machine B (also an embedded device) over AF_XDP. I want to use AF_XDP to achieve low latency.
I understand that for my user-space application to receive data over AF_XDP I should use XDP_REDIRECT (please correct me if I'm wrong); what I can't understand is which XDP option (XDP_REDIRECT, XDP_TX, etc.) I should use to transmit data from my user-space application over AF_XDP.
That is not how AF_XDP works. Once all of the setup work is done, by you or by a library you use, there will be a shared memory region called UMEM and four ring buffers which are used in a sort of dance to receive and transmit packets. The rings are called UMEM_Fill, UMEM_Completion, RX, and TX.
On the ingress side, your XDP program is triggered so you can decide whether to send traffic to your AF_XDP socket or not. Here you use bpf_redirect_map to redirect the traffic into the socket/map, which returns XDP_REDIRECT if successful (it may fail if the socket isn't set up correctly or the buffers are full).
To read the redirected packet you dequeue the RX queue/ring; the frame descriptor within points to an address in UMEM, which is where your packet data is located. As soon as you are done with this memory you send the frame descriptor back over the UMEM_Fill queue/ring so the NIC/driver can reuse it.
To write a packet you dequeue a frame descriptor from the UMEM_Completion queue/ring, fill the region of UMEM you have temporary control over with your packet, and give the frame descriptor to the TX queue/ring. The NIC/driver will consume the TX queue/ring and send out the packet. No XDP program is triggered on egress.
I would highly recommend you check out https://www.kernel.org/doc/html/latest/networking/af_xdp.html, which has more details on using AF_XDP.
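A rough sketch of that transmit path using the xsk helpers from libbpf/libxdp (the send_frame() wrapper, the free_frame_addr bookkeeping, and the completion batch size of 64 are illustrative; the usual socket and UMEM setup is assumed to have been done already):

#include <string.h>
#include <sys/socket.h>
#include <xdp/xsk.h>          /* <bpf/xsk.h> on older libbpf versions */

static int send_frame(struct xsk_ring_prod *tx, struct xsk_ring_cons *comp,
                      void *umem_area, int xsk_fd,
                      const void *pkt, __u32 len, __u64 free_frame_addr)
{
        __u32 idx;

        /* Reserve one slot on the TX ring */
        if (xsk_ring_prod__reserve(tx, 1, &idx) != 1)
                return -1;                      /* TX ring full, try later */

        /* Copy the packet into the UMEM frame we currently own */
        memcpy(xsk_umem__get_data(umem_area, free_frame_addr), pkt, len);

        struct xdp_desc *desc = xsk_ring_prod__tx_desc(tx, idx);
        desc->addr = free_frame_addr;
        desc->len  = len;
        xsk_ring_prod__submit(tx, 1);

        /* Kick the kernel so the driver drains the TX ring */
        sendto(xsk_fd, NULL, 0, MSG_DONTWAIT, NULL, 0);

        /* Reclaim frames whose transmission has finished from the
         * UMEM_Completion ring so they can be reused */
        __u32 cidx;
        __u32 done = xsk_ring_cons__peek(comp, 64, &cidx);
        for (__u32 i = 0; i < done; i++) {
                __u64 addr = *xsk_ring_cons__comp_addr(comp, cidx + i);
                (void)addr;   /* put addr back on your free-frame list */
        }
        if (done)
                xsk_ring_cons__release(comp, done);

        return 0;
}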

Production-ready WebSocket IoT System - Software Design Advice

I'm developing a system to control a range of IoT devices. Each set of devices is grouped into a "system" that monitors/controls a real-world process. For example system A may be managing process A and have:
3 cameras
1 accelerometer
1 magnetometer
5 thermocouples
The webserver maintains socket connections to each device. Users can connect (via a UI - again with WebSockets) to the webserver and receive updates about systems to which they are subscribed.
When a user wants to begin process A, they should press a 'start' button on the interface. This will start up the cameras, accelerometer, magnetometer, and thermocouples. These will begin sending data to the server. It also triggers the server to set the recording mode to true for each device, which means the server will write output to a database. My question:
Should I send a single 'start' request from JavaScript code in my UI to the server, and let the server start each device individually (how do I then handle an error, for example, if a single sensor isn't working - and what if two sensors don't work?)? Or do I send individual requests from the UI to the server for each device, i.e. start camera 1, start camera 2, start accelerometer, start recording camera 1, etc., and handle each success/error state individually?
My preference throughout the system so far has been the latter approach: one request, one response, with an HTTP error code. However, programming becomes more complex when there are many devices to control; for example, System B has 12 thermocouples.
Some components of the system are not vital - e.g. if one camera fails we can continue; however, if the accelerometer fails the whole system cannot run, and human monitoring is required. If the server starts the devices individually from a single 'start' message, should I return an array of errors, or should the server know which components are vital and return a single error if a vital component fails? And in a failure state, should the server then handle stopping each sensor and returning to the original state - and what if that then fails? I foresee this code becoming quite complex with this approach.
I've been going back and forth over the best way to approach this for months, but I can't find much advice online around building complex, production-ready IoT systems for the real world. If anybody has any advice or could point me towards any papers/books/etc. I would really appreciate it.
Thanks in advance,
Tom

Linux CAN bus excessive retransmit

I'm working on a project involving an embedded Linux device with CAN bus support.
I've noticed that if I try to send a CAN packet without having anything attached to the CAN bus, the transmit is automatically reattempted by the kernel an unlimited number of times. I can verify this using a scope: the same message is automatically transmitted over and over. This retransmission persists even if I shut down the process which created the message, and even if this process only ever attempts to transmit one single message.
My question is: is this normal behaviour for the Linux CAN bus stack? My worry is that if there is ever something wrong in the device, and it erroneously concludes that it is alone on the bus, the device might swamp the bus and make it unusable for other bus participants. I would have expected there to be some sort of retry limit.
The device is using Linux 4.14.48, and the CAN chip is a Philips SJA1000.
What you are seeing is likely error frames. Compliant behavior is this:
The node is active. It attempts to send a data frame but gets no ACK bit set, since nobody is listening to it.
It will send out an error frame, which pretty much only consists of 6 dominant bits to purposely break bit stuffing.
The controller will re-attempt to send the message. If a new attempt to send is made without receiving an ACK, another error frame will be sent. This will keep repeating automatically.
After 128 errors, the node will go error passive, where it will still send error frames, but now with recessive level where it doesn't disrupt other traffic.
After a total of 256 errors, the node will go bus off and shut up completely.
This should all be handled by the CAN controller hardware, not by the OS. You might need to reset or power cycle the SJA1000 once it goes bus off. If it never goes bus off, then something in the driver code might be continuously resetting the CAN controller after a certain amount of errors.
Mind that microcontroller implementations might act the same and reset upon errors too, since that's typically the only way to re-establish communication after a bus off. This depends on the nature of the CAN application.
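If you want to observe these controller state changes (error warning/passive, bus off) from user space, SocketCAN can deliver them as error frames. A minimal sketch, assuming the interface is called "can0" and omitting error checking of the socket calls:

#include <linux/can.h>
#include <linux/can/error.h>
#include <linux/can/raw.h>
#include <net/if.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        can_err_mask_t err_mask = CAN_ERR_ACK | CAN_ERR_CRTL | CAN_ERR_BUSOFF;
        struct ifreq ifr;
        struct sockaddr_can addr = { .can_family = AF_CAN };
        struct can_frame frame;

        /* Ask the stack to pass error frames for these error classes */
        setsockopt(s, SOL_CAN_RAW, CAN_RAW_ERR_FILTER,
                   &err_mask, sizeof(err_mask));

        strcpy(ifr.ifr_name, "can0");
        ioctl(s, SIOCGIFINDEX, &ifr);
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        while (read(s, &frame, sizeof(frame)) == sizeof(frame)) {
                if (!(frame.can_id & CAN_ERR_FLAG))
                        continue;       /* ordinary data frame, skip */
                if (frame.can_id & CAN_ERR_BUSOFF)
                        printf("controller went bus off\n");
                else if (frame.can_id & CAN_ERR_ACK)
                        printf("no ACK received on transmission\n");
                else if (frame.can_id & CAN_ERR_CRTL)
                        printf("controller state change (warning/passive)\n");
        }
        return 0;
}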
Short answer is yes: if ACK errors are the only errors, the transmit error counter will stop at 128 and the node will not go into bus off, so it will retry forever. This happened to me as well, and I just turned off the re-transmit function from the processor side. Not sure whether that is a standard CAN feature or not.

Is there any alternative to ioctl() in Linux to interact with NVMe drives?

I am working on a testing tool for nvme-cli (written in C, runs on Linux).
For SSD validation purposes, we are looking for a way to send I/O commands to a particular submission queue (I/O queue pair). We need this because we want threading, but for threading to help we need to send I/O requests to different queues, otherwise the requests would be processed serially.
So is there any way in ioctl() where we can specify the Submission queue IDs?
OR
Is there other thing similar to ioctl() where we can specify the Submission queue IDs?
Since I am new to NVMe and ioctl, please correct me if I am wrong.
You can try SPDK (https://github.com/spdk/spdk), which contains a user-space NVMe driver. It is written in C. You can find its NVMe driver APIs in spdk/include/spdk/nvme.h. For example, spdk_nvme_ctrlr_cmd_io_raw() is used to send any kind of I/O command to the device, with any created qpair.
You can also try Pynvme (https://github.com/cranechu/pynvme), a Python extension of SPDK. Its IOWorker sends requests from a separate process with its own qpair.
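For illustration, a rough sketch of sending a raw read command on a dedicated qpair with SPDK (the read_on_own_queue() wrapper and the synchronous polling loop are just for demonstration; ctrlr is assumed to come from the usual spdk_nvme_probe()/attach callback, buf should be DMA-safe memory from spdk_dma_malloc(), and real code would keep the qpair alive and poll it from its own thread):

#include "spdk/nvme.h"

static void io_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
        /* Check spdk_nvme_cpl_is_error(cpl) in real code */
        *(bool *)arg = true;
}

static int read_on_own_queue(struct spdk_nvme_ctrlr *ctrlr, uint32_t nsid,
                             uint64_t lba, void *buf, uint32_t len)
{
        /* Each allocated qpair maps to its own submission/completion queue,
         * so threads using different qpairs are not serialized against
         * each other. */
        struct spdk_nvme_qpair *qpair =
                spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        struct spdk_nvme_cmd cmd = {};
        bool done = false;

        if (!qpair)
                return -1;

        cmd.opc = SPDK_NVME_OPC_READ;
        cmd.nsid = nsid;
        cmd.cdw10 = (uint32_t)lba;          /* starting LBA, lower 32 bits */
        cmd.cdw11 = (uint32_t)(lba >> 32);  /* starting LBA, upper 32 bits */
        cmd.cdw12 = 0;                      /* number of LBAs minus 1 */

        if (spdk_nvme_ctrlr_cmd_io_raw(ctrlr, qpair, &cmd, buf, len,
                                       io_done, &done) != 0) {
                spdk_nvme_ctrlr_free_io_qpair(qpair);
                return -1;
        }

        /* Poll this qpair's completion queue until the command finishes */
        while (!done)
                spdk_nvme_qpair_process_completions(qpair, 0);

        spdk_nvme_ctrlr_free_io_qpair(qpair);
        return 0;
}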

Isochronous USB transfers confusion

Isochronous endpoints are one way only. But a single isochronous IN transaction is described in various sources (e.g. here http://www.beyondlogic.org/usbnutshell/usb4.shtml#Isochronous) as one IN token packet (from the host to the device) followed by one DATA packet (from the device to the host). So I see communication in both directions here. Is the token packet from the host received by the same IN isochronous endpoint which then sends the data?
What is synchronization for? Here http://wiki.osdev.org/Universal_Serial_Bus#Supporting_Isochronous_Transfers we read: "Due to application-specific sampling rates, different hardware clock designs, scheduling policies in the operating system, or even physical anomalies, the host and isochronous device could fall out of synchronization." But how? I understand the sequence of events like this: the device fills its outgoing buffer with data and waits for the token (some interrupt probably). The host sends the token packet and waits for the data packet, which (I think) should arrive instantly. The sequence is repeated every frame (at full speed) and everybody is happy. Isn't the token packet synchronizing the reply from the device?
Here http://wiki.osdev.org/Universal_Serial_Bus#SYNC_Field we read: "All USB packets start with a SYNC field which serves, unsurprisingly, as a synchronization mechanism between the receiver and the sender." So once again I ask: why do isochronous transfers need to be synchronized in another manner than this?
All USB transactions are always initiated by the Host. E.g. for an isochronous IN transaction the Host will first ask the device for the next piece of data. This is of course a data flow to the device, but on a lower protocol level (Token Packets). So a kind of control data is sent TO the device, but the meaningful data (Data Packets) is only sent FROM the device (IN direction). When you develop software for a device you can often abstract away the bus protocol details, because they are handled in hardware (the USB device peripheral). The low-level messages do not enter an endpoint; endpoints are on a higher layer.
Consider a USB microphone: it records audio data at a very specific sample rate which is based on the local oscillator of the device. It is only a matter of time before the clocks of the Host and the microphone drift apart. After a few minutes a gap in the data would appear (or a buffer overflow would occur) because the microphone is recording data at a slightly different speed than USB is expecting it (from the device's configuration descriptor). So they need some kind of synchronization.
The SYNC field is on the lowest layer. It is for bit synchronization only and should not be confused with the synchronization for isochronous endpoints (point 2).
You might want to take a look at the official USB 2.0 Specification (usb_20.pdf) instead of all the third party wikis which kind of confused you.
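As a side note, the synchronization type an isochronous endpoint uses is advertised in bits 3:2 of the bmAttributes field of its endpoint descriptor (USB 2.0 spec, section 9.6.6). A small sketch using the Linux <linux/usb/ch9.h> definitions, just to show where this information lives:

#include <linux/usb/ch9.h>

static const char *iso_sync_type(const struct usb_endpoint_descriptor *ep)
{
        /* bmAttributes bits 3:2 encode the synchronization type */
        switch (ep->bmAttributes & USB_ENDPOINT_SYNCTYPE) {
        case USB_ENDPOINT_SYNC_NONE:     return "none";
        case USB_ENDPOINT_SYNC_ASYNC:    return "asynchronous (feedback endpoint)";
        case USB_ENDPOINT_SYNC_ADAPTIVE: return "adaptive (device adapts its rate)";
        default:                         return "synchronous (locked to SOF/frame)";
        }
}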
