What are SPI sequences, jobs and channels? (Examples) - AUTOSAR

All I know is:
1 sequence can have 1 or more jobs;
1 job can have 1 or more channels;
1 channel has 1 Tx memory buffer and 1 Rx memory buffer;

A Channel is a software exchange medium for data defined with the same criteria: configuration parameters, number of data elements of the same size, and data pointers (source and destination) or location.
A Job is composed of one or several Channels. It is considered atomic and therefore cannot be interrupted by another Job. It is a basic SPI command.
A Sequence is a general routine such as read, erase, or write. It contains a set of Jobs that are executed sequentially. A Sequence transmission is interruptible (by another Sequence).
For example, a write Sequence could consist of:
a command Job: to send the command and set the address;
a data Job: to write the data.


Reusing the same host-visible buffer on different queue families

Considering host-visible buffers (mostly related to streaming buffers, i.e. buffers backed by VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT | VK_MEMORY_PROPERTY_HOST_VISIBLE_BIT memory), let's imagine the following usage pattern:
Write new data to the mapped address on the host (after the proper synchronization).
Read the buffer with contents written in step 1 on queue family A.
Write new data to the mapped address on the host (after the proper synchronization).
Read the buffer with contents written in step 3 on queue family B.
Now, if I omit the queue family ownership transfer (QFOT), is the data written in step 3 inaccessible to queue family B in step 4?
The data written on the host becomes visible to the device when I submit the command(s) of step 4 using vkQueueSubmit, due to the implicit memory dependency of the host write ordering guarantee.
How does it play with different queue families?
OK, so we have a CPU-modifiable buffer. And for some reason, this buffer was created in exclusive sharing mode. And you want to do the following:
Write data to the buffer.
Copy the data using queue family A.
Write data to the buffer.
Copy the data using queue family B.
In order for step 4 to work, you are required to do an ownership transfer. The standard spells this out right before what you quoted:
If memory dependencies are correctly expressed between uses of such a resource between two queues in different families, but no ownership transfer is defined, the contents of that resource are undefined for any read accesses performed by the second queue family.
You do have dependencies correctly expressed (I assume). But copying data is a "read access". And it is being performed by queue family B, which is different from queue family A. Therefore, step 4 (a "read access") triggers this clause: "the contents of that resource are undefined".
"Contents" means all of the contents. The ones you wrote in step 1 and step 3. All of them are undefined for step 4, unless you do a queue family ownership transfer.

what is "wxd" in rocketcore?

In the rocket core bypass logic
val bypass_sources = IndexedSeq(
(Bool(true), UInt(0), UInt(0)), // treat reading x0 as a bypass
(ex_reg_valid && ex_ctrl.wxd, ex_waddr, mem_reg_wdata),
(mem_reg_valid && mem_ctrl.wxd && !mem_ctrl.mem, mem_waddr, wb_reg_wdata),
(mem_reg_valid && mem_ctrl.wxd, mem_waddr, dcache_bypass_data))
What do ex_ctrl.wxd and mem_ctrl.wxd stand for?
As I understand it, wxd is set for an instruction that writes a value to a register, i.e. has a result value, so writes to the register file (the name presumably abbreviates "write x-register destination"). Some reasonably simple decode logic (e.g. a test for R-type instructions) identifies whether each instruction is such a writer or not.
Also as I understand it, ex_ctrl and mem_ctrl refer to instructions in their pipeline stages, ex, and mem, respectively — so ex_ctrl.wxd is set when the instruction in the ex stage is one that writes to a register (even though it won't do the write until the wb stage).
Background
The Rocket microarchitecture suspends reading coprocessor results — reading a coprocessor result means writing a processor register, and therefore a write to the processor's register file — while wxd is asserted for the instruction in the wb pipeline stage, giving processor instructions priority over coprocessor instructions. A coprocessor result value is only transferred into the processor register file when wxd is false (meaning the processor instruction won't write).
This mechanism limits the number of ports needed to write the register file.

spi_write_then_read with variant register size

As I understand it, the term "word length" (spi_bits_per_word) in SPI defines the CS (chip select) active time.
It therefore seems that the Linux driver will function correctly when dealing with simple SPI protocols that keep the word size constant.
But how can we deal with SPI protocols that use different word sizes as part of the protocol?
For example, CS needs to be active while sending an SPI word of 9 bits, and then reading 8 bits or 24 bits (the length of the register read differs each time, depending on the register).
How can we implement that using spi_write_then_read?
Do we need to set one bits_per_word size for sending and then another bits_per_word for receiving?
"word length" means number of bits you can send in one transaction. It doesn't defines the CS (chip select) active time. You can keep it active for whatever time you want(least is for word-length).
SPI has got some format. You cannot randomly read-write whatever number of bits you want.Most of SPI supports 4-bit, 8-bit, 16-bit and 32-bit mode. If the given mode doesn't satisfy your requirement then you need to break your requirement. For eg:- To read 24-bit data, we need to use 8-bit word-length transfer for 3 times.
Generally SPI is fullduplex means it will read at same time it will write.

Using arrays of Linux kfifo

In can4linux, a Linux CAN device character driver, a proprietary FIFO
implementation is currently used. The driver supports more than one CAN channel
(MAX_CHANNELS), and each channel can be opened by more than one process
(CAN_MAX_OPEN). If a CAN message is received, the message is copied into all the
receive FIFOs for that channel.
Currently it looks like:
msg_fifo_t rx_buf[MAX_CHANNELS][CAN_MAX_OPEN];
The fifo size and pointers are defined in the msg_fifo_t. rx_buf is therefore a
big two dimensional array of these msg_fifo_t structures.
How can I solve this with using kfifo?
If a user process wants to read from a specific
CAN controller and it is the nth process opening it for read, I want to get exactly the right fifo (or fifo pointer):
ptr = &rx_buf[channel][n];
Any links for examples or hints are welcome.

Difference between skbuff frags and frag_list

An sk_buff has two places where it can store fragment data:
skb_shinfo(head)->frag_list
skb_shinfo(head)->frags[]
What are the differences between these two ways to handle fragmentation?
Both are used for different cases.
frags[]
When your device supports scatter-gather I/O, and you want it to do the combining of data, etc., you can populate the frags[] structure starting with the second fragment till the nth fragment. The first fragment is always specified by the data and tail pointers. The rest of the fragments are filled in the frags[] structure. If you don't use scatter gather, this variable is empty.
frag_list
This is the list of IP fragments. This will be filled during ip_push_pending_frames.
Say your sk_buffs are in this arrangement,
sk_buff0->next = sk_buff1
sk_buff1->next = sk_buff2
...
sk_buffn-1->next = sk_buffn
After ip_push_pending_frames is called
sk_buff0->frag_list = sk_buff1
sk_buff1->next = sk_buff2
...
sk_buffn-1->next = sk_buffn
Simply put
frags[] are for scatter-gather I/O buffers
frag_list is for IP fragments
skb_shinfo(head)->frags[]
If the NIC supports SG I/O, __ip_append_data will copy user space data to skb_shinfo(head)->frags. The NIC driver (e.g., ixgbe_add_rx_frag) can also use these frags[] to carry the received network traffic; please note that every content in frags[] is a part of a complete packet. A complete packet consists of all frags[] + (skb->data ~ skb->tail).
skb_shinfo(head)->frag_list
This member is not used by IP fragmentation directly.
In __ip_make_skb(), the frag_list is used to collect all skbs from sk->sk_write_queue; some NIC drivers also use this frag_list to carry a packet to the upper network stack. Every skb in frag_list is likewise not a complete packet. See the call chain: tcp_v4_send_ack -> ip_send_unicast_reply -> ip_push_pending_frames -> ip_finish_skb -> __ip_make_skb.
