I am working on an embedded Linux project with a BeagleBone, transferring 300 bytes of data as one block in a single write cycle to a slave (an Atmel uC). After reading the SPI documentation (Documentation/spi), I found that DMA is enabled when the transfer size exceeds 160 bytes, as set in drivers/spi/omap2_mcspi.c.
I would like to implement flow control based on an exchange of constant 4-byte values between my BeagleBone and the Atmel uC. Once I have sent a command, say CMD_DATA, the slave responds with RC_RDY. I would like to write a kernel module that services interrupts and calls an interrupt handler every time data is received from the slave, so that I can check for this ack.
How do I enable interrupts and register an interrupt handler for SPI? Any sample code or tutorials would be helpful. I have looked extensively online, and all I found was setting up interrupts for GPIOs.
Thanks!
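For reference, the pattern I'm leaning towards is a threaded IRQ on a "data ready" line from the slave, with the SPI read done in thread context (spi_read() can sleep there). A rough sketch, assuming the Atmel side can raise such a line; all names below are placeholders:

#include <linux/interrupt.h>
#include <linux/spi/spi.h>

#define RC_RDY 0xA5A5A5A5u	/* placeholder for the real 4-byte ack value */

static irqreturn_t slave_ready_thread(int irq, void *data)
{
	struct spi_device *spi = data;
	__be32 ack;

	/* Read the slave's 4-byte response; sleeping is allowed here. */
	if (!spi_read(spi, &ack, sizeof(ack)) && be32_to_cpu(ack) == RC_RDY)
		; /* slave is ready: kick off the 300-byte block transfer */

	return IRQ_HANDLED;
}

/* In probe(), with ready_irq mapped from the slave's GPIO line:
 *	devm_request_threaded_irq(&spi->dev, ready_irq, NULL,
 *				  slave_ready_thread,
 *				  IRQF_ONESHOT | IRQF_TRIGGER_RISING,
 *				  "slave-ready", spi);
 */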
I have an application that uses an ARM device running Debian Stretch (soon to be Bullseye) that uses SPI for high-speed point-to-point communication with another device. I am using the spidev driver. I am seeing a lot of latency in each transaction: up to 700 microseconds for a burst of 256 bytes at 10 MHz. I have to transfer this burst every 1 millisecond.
I want to test reducing the latency in the Linux SPI driver. I see that struct spi_device has some fields to control that:
From spi.h:
@rt: Make the pump thread real time priority
@cs_setup: delay to be introduced by the controller after CS is asserted
@cs_hold: delay to be introduced by the controller before CS is deasserted
How do I set these?
I've done a lot of searching about this, but I can't find the connection between setting struct spi_device fields and any external mechanism.
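For what it's worth, a minimal sketch of how these could be set from a kernel-side SPI protocol driver, assuming a kernel recent enough (roughly v5.5+) that struct spi_device carries struct spi_delay cs_setup/cs_hold fields; as far as I can tell, spidev itself does not expose them:

#include <linux/spi/spi.h>

static int my_probe(struct spi_device *spi)
{
	/* 5 us after CS assert before clocking, 5 us hold before deassert */
	spi->cs_setup.value = 5;
	spi->cs_setup.unit  = SPI_DELAY_UNIT_USECS;
	spi->cs_hold.value  = 5;
	spi->cs_hold.unit   = SPI_DELAY_UNIT_USECS;

	return spi_setup(spi);	/* push the settings to the controller */
}

Note that rt appears to be documented on the controller side (the pump thread belongs to struct spi_controller), so it would be set by the controller driver rather than per device.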
I'm currently in a situation where my SoC will be connected via its I2C bus, through a MAX3107 I2C-to-UART converter, to the UART port of a microprocessor.
Although the communication between the two shouldn't be an issue, the part where the SoC updates the firmware of the microprocessor has to be done with the Y-Modem file transfer protocol.
Although a question is pending with the manufacturer, I still wanted to check here:
Would this even be possible?
The SoC runs Linux; does this depend on the MAX3107 driver?
Does this concern the I2C bus, or is only the UART driver and bus relevant?
https://datasheets.maximintegrated.com/en/ds/MAX3107.pdf
I used the SC16IS750 instead with the Linux kernel driver.
Sending a file via Y-Modem doesn't seem to be a problem. I tried both Minicom and TeraTerm to send a file, and it works. The receiver's response is just one character each time before the next part of the file is sent. If the response had been more than 64 bytes at a time (instead of the single character), this would have been a problem, because the FIFO first needs to be read and cleared before another string can be received.
I'm trying to understand the NAPI implementation in the Linux kernel. These are my basic questions.
1) NAPI disables further interrupts and handles the skbs using polling.
Who disables them?
Should the interrupt handler disable them?
If yes, isn't the time gap between disabling the interrupt and handling the softirq (net_rx_action, where the polling is actually done) way too long?
2) Do all NAPI-enabled drivers, by default, disable interrupts on receiving a single frame and handle the remaining frames by polling in the bottom half?
Or is there logic whereby the switch to poll mode happens only if more than 32 frames are received continuously in the IRQ handler?
3) Now coming to shared IRQs:
what happens to the other devices' interrupts? The other device's bottom half might not run, since those devices are not on the poll_list.
I wrote a comprehensive guide to understanding, tuning, and optimizing the Linux network stack which explains everything about network drivers, NAPI, and more, so check it out.
As for your questions:
Device IRQs are supposed to be disabled by the driver's IRQ handler after NAPI is enabled. Yes, there is a time gap, but it should be quite small. That is part of the tradeoff decision you must make: do you care more about throughput or latency? Depending on which, you can optimize your network stack appropriately. In any case, most NICs allow the user to increase (or decrease) the size of the ring buffer that tracks incoming network data. So, a pause is fine because packets will just be queued for processing later.
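As a sketch of that pattern (the mydev_* helpers stand in for device-specific register writes; this is not any particular driver):

static irqreturn_t mydev_irq(int irq, void *data)
{
	struct mydev *priv = data;

	mydev_mask_rx_irqs(priv);	/* tell the NIC to stop raising RX IRQs */
	napi_schedule(&priv->napi);	/* arrange for the poll loop to run */

	return IRQ_HANDLED;
}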
It depends on the driver, but in general most drivers will enable NAPI poll mode in the IRQ handler, as soon as it is fired (usually) with a call to napi_schedule. You can find a walkthrough of how NAPI is enabled for the Intel igb driver here. Note that IRQ handlers are not necessarily fired for every single packet. You can adjust the rate at which IRQ handlers fire on most cards by using a feature called interrupt coalescing. Some NICs may not support this option.
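And a matching poll function, again as a sketch with placeholder helpers; the driver re-enables its IRQs only once it has drained the queue (processed fewer packets than its budget):

static int mydev_poll(struct napi_struct *napi, int budget)
{
	struct mydev *priv = container_of(napi, struct mydev, napi);
	int done;

	/* Placeholder: pull up to 'budget' frames, feed them to the stack
	 * via napi_gro_receive()/netif_receive_skb(), return the count. */
	done = mydev_rx(priv, budget);

	if (done < budget && napi_complete_done(napi, done))
		mydev_unmask_rx_irqs(priv);	/* queue drained: IRQs back on */

	return done;
}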
The IRQ handlers for other devices will be executed when the IRQ is fired, because IRQ handlers have very high priority on the CPU. The NAPI poll loop (which runs in a softirq) will run on whichever CPU handled the device IRQ. Thus, if you have multiple NICs and multiple CPUs, you can tune the IRQ affinity of each NIC's IRQs to prevent starving a particular NIC.
As for the example you asked about in the comments:
say NIC 1 and NIC 2 share an IRQ line; let's assume NIC 1 is under low load and NIC 2 under high load, and NIC 1 receives an interrupt. The driver of NIC 1 would disable the interrupt until its softirq is handled; call that time gap t1. So for time t1, NIC 2's interrupts are disabled too, right?
This depends on the driver, but in the normal case, NIC 1 only disables interrupts while the IRQ handler is being executed. The call to napi_schedule tells the softirq code that it should start running if it hasn't started yet. The softirq code runs asynchronously, so no, NIC 1 does not wait for the softirq to be handled.
Now, as far as shared IRQs go: again it depends on the device and the driver. The driver should be written in such a way that it can handle shared IRQs. If the driver disables an IRQ that is being shared, all devices sharing that IRQ will not receive interrupts. This would be bad. One way that some devices solve this is by allowing a driver to read/write to a specific register causing that specific device to stop generating interrupts. This is a preferred solution as it does not block other devices generating the same IRQ.
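A sketch of what handling a shared IRQ looks like in practice (placeholder names, not any specific driver):

static irqreturn_t mydev_shared_irq(int irq, void *data)
{
	struct mydev *priv = data;

	/* Placeholder status-register read: did *our* device interrupt? */
	if (!mydev_irq_pending(priv))
		return IRQ_NONE;	/* not ours; let other handlers run */

	mydev_mask_rx_irqs(priv);	/* quiesce only this device */
	napi_schedule(&priv->napi);

	return IRQ_HANDLED;
}

/* Registered with:
 *	request_irq(irq, mydev_shared_irq, IRQF_SHARED, "mydev", priv);
 */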
When IRQs are disabled for NAPI, what is meant is that the driver asks the NIC hardware to stop sending IRQs. Thus, other IRQs on the same line (for other devices) will still continue to be processed. Here's an example of how the Intel igb driver turns off IRQs for that device by writing to registers.
I'm trying to connect a mass flow sensor, the SFM-3000 by Sensirion, to LabVIEW on a PC using a USB device, the NI-8452, which provides an I2C interface.
I followed the sensor's user manual and used the I2C example from LabVIEW, but I cannot establish communication between them.
I get the error message:
Error -301744 occurred at NI-845x I2C Run Script.vi:6110001,
Possible reason(s):
NI-845x: The I2C master lost arbitration and failed to seize the bus during transmission of an address+direction byte.
I'm using the NI-8452, which includes pull-up resistors, and I made sure to enable them via the 'Use Internal I2C Pullup Resistor' field in the 'NI-845x Device' property node.
I set the I/O voltage level to 3.3 V.
I double-checked the address: I have the 7-bit address defined in the sensor's user manual, 64 decimal or 1000000 binary.
As specified in the sensor's user manual, I supply it with a Vdd of 5 V from NI-8452 pin 40, and GND on pin 7.
Of course, SDA is on pin 5 and SCL on pin 9.
I think I might have a problem with the pull-up reference voltage, because the sensor specifies it needs to be 5 V, while the NI-8452 only goes up to 3.3 V.
But the lower limit for a high signal is 2.5 V, so it should work.
My diagram:
Another option I tried is using I2C script blocks.
I also tried a similar solution for a pressure sensor, the hdi0611arz8p5 by First-Sensor, but got the same error.
After re-wiring, it started working; maybe there was some bad connectivity between a wire and a port. I hope this thread helps people who want to connect the SFM3000 using LabVIEW.
Client/server communication: the client is the sender and the server is the receiver.
When the server receives data on the Ethernet interface (UDP), an interrupt is raised in the server's kernel. I am using real-time Linux on the server side. The server (an embedded PC target running RT Linux) handles this interrupt to gain attention and process the newly arrived data.
How can I measure, in the kernel, the time from the moment the interrupt occurs until the response is sent back to the client?
1) If you are using an embedded Linux platform, you can refer to the CPU datasheet: maybe it has a set of high-speed timers. For instance, I'm using a SoC based on an ARM Cortex-A8; it has GP timers that can be clocked at up to 38.4 MHz, so I can measure execution time with ~27 ns precision. Very likely your OS will not provide such an API, so you're welcome to read and write the CPU registers directly from a kernel driver.
2) If you just want to estimate execution time, and nothing more, you can use one of the GPIO pins of your board. Set the pin high at the "start", set it low at the "end", then watch the pin with an oscilloscope, if you have one (see the sketch after this list).
3) If I misunderstood you, and all you need is a timestamp in real time (like HH:mm:ss), you can refer to the RTC chip of your board. Using the real-time clock chip's driver, you can read the time from your kernel module. Unfortunately, you might not be able to do it from an interrupt service routine.
Or just call do_gettimeofday and convert the timeval to something human-readable via time_to_tm, if needed :)
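A sketch combining options 2 and 3 in a kernel module (placeholder names; note that do_gettimeofday is gone from kernels v5.0 onward, while ktime_get() is safe to call from an ISR):

#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/ktime.h>

#define DEBUG_GPIO 60	/* placeholder pin number */

static ktime_t rx_stamp;

static irqreturn_t rx_irq(int irq, void *data)
{
	gpio_set_value(DEBUG_GPIO, 1);	/* option 2: start marker for the scope */
	rx_stamp = ktime_get();		/* option 3: monotonic timestamp */
	/* ... hand off to whatever builds the reply ... */
	return IRQ_HANDLED;
}

/* Call this once the reply has been handed to the network stack. */
static void on_reply_sent(void)
{
	s64 ns = ktime_to_ns(ktime_sub(ktime_get(), rx_stamp));

	gpio_set_value(DEBUG_GPIO, 0);	/* option 2: end marker */
	pr_info("rx-to-tx latency: %lld ns\n", ns);
}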