AXI Burst calculations - verilog

In an AXI channel, the individual data transfers within a single burst are called beats.
size[2:0] - encodes the number of bytes transferred in one beat,
where the actual number of bytes per beat = 2 ^ size.
e.g. if size = 100 (binary, i.e. 4),
bytes per beat = 2 ^ 4 = 16.
I also have byte_count[15:0] - the total number of bytes to be moved in the entire transfer.
Now my issue is how to calculate the burst length and the number of bursts to be issued.
Burst length is the number of beats transferred in one burst.
The formulas are:
number of beats = byte_count / (2 ^ size), i.e. byte_count >> size
number of bursts to be issued = ceil(number of beats / 16)
16 - because one AXI burst can contain at most 16 beats.
I am doing the coding in verilog.
This is for an AXI master, and unaligned transfers are not supported.
Any hardware design or a formula is acceptable.
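To make the arithmetic concrete, here is a small C model of the two formulas (not RTL; the function and variable names are purely illustrative, and it assumes byte_count is an exact multiple of the beat size):

/* C sketch of the arithmetic only (not RTL), assuming aligned transfers
 * and a byte_count that is a whole number of beats. In Verilog the same
 * thing is a right shift and an add, so it synthesizes cheaply. */
#include <stdint.h>

void burst_calc(uint16_t byte_count, uint8_t size,
                uint32_t *num_beats, uint32_t *num_bursts)
{
    /* bytes per beat = 2^size, so beats = byte_count >> size */
    *num_beats  = byte_count >> size;

    /* at most 16 beats per AXI burst, so round up: ceil(beats / 16) */
    *num_bursts = (*num_beats + 15) >> 4;
}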

A few comments on what you have, assuming this is a master:
You're calculating the number of bursts when you probably don't need to. Usually you would keep track of only the bytes/words remaining to transfer and decrement as transactions complete. Keep in mind you can't burst across certain address boundaries (4 KB in AXI), which may require you to split transfers.
You're basing calculations on using 16-beat bursts (the maximum length), but you may not be able to assume this. While AXI requires that slaves respond to all transfers, it does not require that they accept all of them. For instance, an AXI slave for a FIFO interface may reject unaligned transfers. If the target system may include slaves or bus fabrics whose capabilities you don't know, you'll need to make some of these values programmable.
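As a sketch of the "track what's remaining" approach described above, here is a behavioral C model (not RTL; next_burst_len, MAX_BEATS and the boundary constant are illustrative names only):

/* Behavioral C sketch of splitting a transfer into legal bursts: cap each
 * burst at the maximum beat count and at the next 4 KB address boundary. */
#include <stdint.h>

#define MAX_BEATS  16u     /* would be programmable in a real design */
#define BOUNDARY   4096u   /* AXI bursts must not cross a 4 KB boundary */

/* Returns the number of beats for the next burst, given the current
 * address, the bytes still to move, and the bytes per beat (2^size). */
uint32_t next_burst_len(uint32_t addr, uint32_t bytes_left,
                        uint32_t bytes_per_beat)
{
    uint32_t beats_left  = bytes_left / bytes_per_beat;
    uint32_t to_boundary = (BOUNDARY - (addr % BOUNDARY)) / bytes_per_beat;
    uint32_t beats       = beats_left;

    if (beats > MAX_BEATS)   beats = MAX_BEATS;
    if (beats > to_boundary) beats = to_boundary;
    return beats;            /* AWLEN/ARLEN would then be beats - 1 */
}

After each burst completes, subtract beats * bytes_per_beat from the remaining byte count and advance the address, then call it again until nothing is left.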

The number of beats equals the number of read or write transfers, i.e.
if AWLEN or ARLEN is 3, then the burst length is AWLEN (or ARLEN) + 1.
Therefore, AWLEN + 1 => 3 + 1 => 4 transfers, or 4 beats.
The maximum number of beats in an AXI burst is 16: the burst length field is 4 bits wide, so at most 16 beats are possible.
Hope that clears up how the number of beats is calculated.
Thank you.

Related

spi_write_then_read with variant register size

As I understand it, the term "word length" (spi_bits_per_word) in SPI defines the CS (chip select) active time.
It therefore seems that the Linux driver will function correctly when dealing with simple SPI protocols that keep the word size constant.
But how can we deal with SPI protocols that use different word sizes as part of the protocol?
For example, CS needs to be active while sending an SPI word of 9 bits, and then reading 8 bits or 24 bits (the length of the register being read differs each time, depending on the register).
How can we implement that using spi_write_then_read?
Do we need to set one bits_per_word size for sending and then another bits_per_word for receiving?
Regards,
Ran
"word length" means number of bits you can send in one transaction. It doesn't defines the CS (chip select) active time. You can keep it active for whatever time you want(least is for word-length).
SPI has got some format. You cannot randomly read-write whatever number of bits you want.Most of SPI supports 4-bit, 8-bit, 16-bit and 32-bit mode. If the given mode doesn't satisfy your requirement then you need to break your requirement. For eg:- To read 24-bit data, we need to use 8-bit word-length transfer for 3 times.
Generally SPI is fullduplex means it will read at same time it will write.
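For the 24-bit example, here is a minimal sketch using the Linux kernel's spi_write_then_read (assuming a kernel-driver context; the register address, field layout and helper name are purely illustrative). The 9-bit command case from the question would instead need spi->bits_per_word set to 9 on a controller that supports it.

/* Sketch: read a 24-bit register as three 8-bit words, as described above.
 * The command byte 0x12 is a made-up register address for illustration. */
#include <linux/spi/spi.h>

static int read_reg24(struct spi_device *spi, u32 *val)
{
    u8 cmd = 0x12;          /* hypothetical register address / read command */
    u8 rx[3];
    int ret;

    /* one 8-bit command out, then 3 x 8-bit words back = 24 bits */
    ret = spi_write_then_read(spi, &cmd, 1, rx, sizeof(rx));
    if (ret)
        return ret;

    *val = (rx[0] << 16) | (rx[1] << 8) | rx[2];
    return 0;
}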

Why does TCP/IP speed depend on the size of the data being sent?

When I sent small data (16 bytes and 128 bytes) continuously (in a 100-iteration loop without any inserted delay), the throughput with TCP_NODELAY set seemed no better than with the default setting. Additionally, TCP slow start appeared to affect the transmission at the beginning.
The reason I care is that I want to control a device from a PC via Ethernet. The processing time of the device is around several microseconds, but the large latency of sending a command affects the entire system. Could you suggest some ways to solve this problem? Thanks in advance.
Last time, I measured the transfer performance between a Windows PC and a Linux embedded board. To verify TCP_NODELAY, I set up a system with two Linux PCs connected directly to each other, i.e. Linux PC <--> Router <--> Linux PC. The router was used only for the two PCs.
The performance without TCP_NODELAY is shown as follows. It is easy to see that the throughput increased significantly once the data size reached 64 KB. Additionally, when the data size was 16 B, the measured receive time sometimes dropped to 4.2 us. Do you have any idea what causes this?
The performance with TCP_NODELAY seems unchanged, as shown below.
The full code can be found in https://www.dropbox.com/s/bupcd9yws5m5hfs/tcpip_code.zip?dl=0
Please share your thoughts. Thanks in advance.
I am doing socket programming to transfer a binary file between a Windows 10 PC and a Linux embedded board. The socket libraries are winsock2.h and sys/socket.h for Windows and Linux, respectively. The binary file is copied to an array on Windows before sending, and the received data are stored in an array on Linux.
Windows: socket_send(sockfd, &SOPF->array[0], n);
Linux: socket_recv(&SOPF->array[0], connfd);
I can receive all the data properly. However, it seems that the transfer time depends on the size of the data being sent. When the data size is small, the received throughput is quite low, as shown below.
Could you please point me to some documents explaining this behaviour? Thank you in advance.
To establish a TCP connection, you need a 3-way handshake: SYN, SYN-ACK, ACK. Then the sender starts to send some data. How much depends on the initial congestion window (configurable on Linux; I don't know about Windows). As long as the sender receives timely ACKs, it will continue to send, as long as the receiver's advertised window has the space (use the socket option SO_RCVBUF to set it). Finally, closing the connection also requires a FIN, FIN-ACK, ACK.
So my best guess, without more information, is that the overhead of setting up and tearing down the TCP connection has a huge effect on the overhead of sending a small number of bytes. Nagle's algorithm (disabled with TCP_NODELAY) shouldn't have much effect as long as the writer is writing quickly. It only prevents sending segments smaller than a full MSS, which should increase transfer efficiency in this case, where the sender is simply sending data as fast as possible. The only effect I can see is that the final less-than-full-MSS segment might need to wait for an ACK, which again would have more impact on short transfers than on longer ones.
To illustrate this, I sent one byte using netcat (nc) on my loopback interface (which isn't a physical interface, and hence the bandwidth is "infinite"):
$ nc -l 127.0.0.1 8888 >/dev/null &
[1] 13286
$ head -c 1 /dev/zero | nc 127.0.0.1 8888 >/dev/null
Looking at the resulting network capture in Wireshark: it took a total of 237 microseconds to send one byte, which is a measly 4.2 KB/second. I think you can guess that if I sent 2 bytes, it would take essentially the same amount of time, for an effective rate of 8.4 KB/second, a 100% improvement!
The best way to diagnose performance problems in networks is to get a network capture and analyze it.
When you run your test with a significant amount of data, for example your biggest test (512 MiB, about 536 million bytes), the following happens.
The data is sent by the TCP layer, which breaks it into segments of a certain length. Let's assume segments of 1460 bytes, so there will be about 367,000 segments.
For every segment transmitted there is overhead (control and management data added to ensure good transmission): in your setup, 20 bytes for TCP, 20 for IP, and 16 for Ethernet, for a total of 56 bytes per segment. Note that this number is a minimum, not accounting for the Ethernet preamble, for example; moreover, IP and TCP overhead can sometimes be larger because of optional fields.
Well, 56 bytes for every segment (367,000 segments!) means that when you transmit 512 MiB, you also transmit 56 * 367,000 = 20 million extra bytes on the line. The total number of bytes becomes 536 + 20 = 556 million bytes, or 4,448 million bits. If you divide this number of bits by the elapsed time, 4.6 seconds, you get a bitrate of 966 megabits per second, which is higher than what you calculated without taking the overhead into account.
From the above calculation, it seems that your Ethernet link is gigabit. Its maximum transfer rate should be 1,000 megabits per second, and you are getting really close to it. The rest of the time is due to more overhead we didn't account for, and to latencies that are always present and tend to be amortized as more data is transferred (but they will never be eliminated completely).
I would say that your setup is OK. But this is for big data transfers. As the size of the transfer decreases, the overhead in the data, the latencies of the protocol and other such things become more and more important. For example, if you transmit 16 bytes in 165 microseconds (the first of your tests), the result is 0.78 Mbps; if it took 4.2 us, about 40 times less, the bitrate would be about 31 Mbps (40 times higher). These numbers are lower than expected.
In reality, you don't transmit 16 bytes, you transmit at least 16 + 56 = 72 bytes, which is 4.5 times more, so the real transfer rate of the link is correspondingly higher. But, you see, transmitting 16 bytes over a TCP/IP link is like measuring the flow rate of an empty aqueduct by dropping a few drops of water into it: the drops get lost before they reach the other end. This is because TCP/IP and Ethernet are designed to carry much more data, with reliability.
Comments and answers on this page point out many of the mechanisms that trade bitrate and reactivity for reliability: the 3-way TCP handshake, Nagle's algorithm, checksums and other overhead, and so on.
Given the design of TCP/IP and Ethernet, it is entirely normal that performance is not optimal for small amounts of data. From your tests you can see that the transfer rate climbs steeply once the data size reaches 64 KB. This is not a coincidence.
From a comment you left above, it seems that you are looking for low-latency communication rather than high bandwidth. It is a common mistake to confuse these two kinds of performance. Moreover, in this respect, I must say that TCP/IP and Ethernet are completely non-deterministic. They are quick, of course, but nobody can say exactly how quick because there are too many layers in between. Even in your simple setup, if a single packet gets lost or corrupted, you can expect delays of seconds, not microseconds.
If you really want low latency, you should use something else, for example CAN. Its design is exactly what you want: it transmits small amounts of data with high speed, low latency and deterministic timing (just microseconds after you transmit a frame, you know whether it has been received or not; to be more precise, exactly at the end of the transmission of a frame you know whether it reached its destination).
TCP sockets typically have an internal buffer. In many implementations, the stack will wait a little while before sending a packet, to see if it can fill up the remaining space in the buffer first. This is called Nagle's algorithm. I suspect that the times you report above are not due to overhead in the TCP packet, but to the fact that TCP waits for you to queue up more data before actually sending.
Most socket implementations therefore have a parameter or option called something like TCP_NODELAY, which can be false (the default) or true. I would try toggling that and seeing whether it affects your throughput. Essentially this flag enables/disables Nagle's algorithm.
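For reference, a minimal sketch of setting that option on a BSD-style socket (on Windows/winsock the call is the same apart from a (const char *) cast on the option value):

#include <netinet/in.h>     /* IPPROTO_TCP */
#include <netinet/tcp.h>    /* TCP_NODELAY */
#include <sys/socket.h>

/* Disable Nagle's algorithm on an already-connected socket so that small
 * writes go out immediately instead of waiting to be coalesced. */
int set_tcp_nodelay(int fd)
{
    int one = 1;
    return setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
}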

High Speed Serial

I have a system which uses a UART clocked at 26 MHz. This is a 16850 UART on an x86 architecture. I have no problems accessing the port. The largest incoming message is about 56 bytes, the largest outgoing about 100. The baud rate divisor needs to be 1, so setserial /dev/ttyS4 baud_base 115200 is OK, and I open at 115200. There is no flow control. Specifying the part as a 16850 does NOT set the FIFOs to their full depth, and I was losing bytes. All the data is bytes, unsigned char.
I wrote a routine that uses ioperm to set the deep FIFOs to 64 bytes, and now a read/write works, meaning that the deep FIFOs are NOT being enabled by serial_core.c or 8250.c, at least not to their full depth.
With the deep FIFO set by brute force after open(fd, "/dev/ttyS4", NO_BLOCKING, etc.), I reliably get the correct number of bytes, but I tend to get the same word missing a bit. Not a byte, a bit.
All of this same code runs fine under DOS, so it is not a hardware issue.
I have opened the port raw, no delays, no parity, 8 bits, 2 stop bits.
Has anyone seen issues reading serial ports at relatively high speeds with short bursts of data?
Yes, I have tried custom baud rates, etc. The FIFO levels made the biggest improvement. This is an ISA bus card using IRQ7.
It appears the serial driver for Linux sucks: it has way too much latency and far too many features for really basic raw operation.
Has anyone else tried very high speed data without flow control, or had similar issues? As I stated, I get the correct number of bytes and all the data is correct except 1 bit in byte 4.
I am pretty stumped.
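For reference, here is a minimal termios sketch of the "raw, no parity, 8 bits, 2 stop bits, no flow control" setup described above (the device name and baud rate are taken from the question; error handling is trimmed):

/* Open a serial port in raw mode: 115200 baud, 8 data bits, no parity,
 * 2 stop bits, no hardware flow control. */
#include <fcntl.h>
#include <termios.h>

int open_raw_8n2(const char *dev)            /* e.g. "/dev/ttyS4" */
{
    int fd = open(dev, O_RDWR | O_NOCTTY | O_NONBLOCK);
    if (fd < 0)
        return -1;

    struct termios tio;
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                          /* raw: no echo, no line editing */
    tio.c_cflag &= ~(PARENB | CRTSCTS);       /* no parity, no RTS/CTS */
    tio.c_cflag |= CS8 | CSTOPB | CLOCAL;     /* 8 data bits, 2 stop, ignore modem lines */
    cfsetispeed(&tio, B115200);
    cfsetospeed(&tio, B115200);
    tcsetattr(fd, TCSANOW, &tio);
    return fd;
}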

What is the minimum packet latency that can be achieved with classic bluetooth (2.1) devices?

I'm using an RN42 (http://www.microchip.com/wwwproducts/en/RN42) Bluetooth module at 115200 baud (UART SSP mode) to send very small (1 - 20 byte) serial data messages between a computer and an ATmega328 MCU. I find that the latency for one message is somewhere around 60 - 100 ms. I would need 10 ms or less in my application. I'm wondering if this is even possible with Bluetooth 2.1 devices.
I know that theoretically Bluetooth packets can be sent every 2 * 625 us from one end, since 625 us is the frequency-hopping interval. Also, one packet always carries a minimum of 126 bits plus the payload. If we are sending 10 bytes (80 bits), the minimum latency based on the 115200 baud rate should be (126 + 80) / 115200 * 1000 + 2 * 0.625 = 3 ms. However, when I measure the latency with test code, the minimum latency never goes below 60 ms. This suggests that the baud rate and the frequency hopping are not the dominant causes of latency, and that for some reason the packets are not sent at the maximum rate.
Does someone know if < 60 ms latency is technically possible with this setup and if so, how to achieve it?

Less throughput with small packets

Why is the throughput of my machine so bad with small packets (e.g. 64 bytes) compared with 1500-byte packets?
I have a gigabit NIC and am able to transmit at 80 MB/s with 1500-byte packets, but with 64-byte packets I can hardly manage around 25 MB/s.
I know that with 1500-byte packets I need to send around 80k PPS to reach line rate, and with 64-byte packets it's around 1.4 million PPS.
But why is there such a huge variation in throughput for small packets?
EDIT: I am using memory mapping to move the packets from user space to kernel space in Linux and then writing directly into the network driver to transmit. And I see that my CPU utilization is quite low, and about the same for 64-byte and 1500-byte packets.
"But why is there such a huge variation in throughput for small packets?"
CPU strain. Independent of its size, each packet that goes out passes through a lot of processing before reaching the interface. Put another way, the "cost" of transmitting a small packet and a large packet is comparable.
If you're interested in this, you might want to look into GSO (generic segmentation offload) and UFO (UDP fragmentation offload) in the Linux kernel - they were developed specifically for this.
It takes time to send packet headers. It takes time to set up DMA buffers, process packet headers, etc. All that extra work reduces the amount of actual payload that can be sent.
Think about it this way: each packet has a header containing the payload size and some general data. Let's say the header is 16 bytes.
If you send 1000 packets of 64 bytes, you send 1000 * (64 + 16) = 64000 + 16000 bytes.
If you could send the same 64000 bytes in one shot, it would be only 64000 + 16 bytes.
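To put actual gigabit Ethernet numbers on the same idea, here is a small C check of the packets-per-second figures quoted in the question (assuming standard Ethernet framing: the 14-byte header and 4-byte FCS are counted inside the frame sizes, plus 20 more bytes of preamble, SFD and inter-frame gap on the wire):

/* Rough check of packets per second at 1 Gbit/s line rate: a minimum
 * 64-byte frame and a 1518-byte frame carrying a full 1500-byte payload. */
#include <stdio.h>

int main(void)
{
    const double frames[] = { 1518.0, 64.0 };   /* bytes incl. header + FCS */
    const double gap      = 20.0;               /* preamble + SFD + inter-frame gap */
    const double rate     = 1e9;                /* 1 Gbit/s */

    for (int i = 0; i < 2; i++) {
        double pps = rate / ((frames[i] + gap) * 8.0);
        printf("%6.0f-byte frame: about %.0f packets/s at line rate\n",
               frames[i], pps);
    }
    return 0;   /* prints roughly 81k and 1.49M packets per second */
}

Those values match the "around 80k PPS" and "around 1.4 million PPS" figures in the question, and sending twenty times as many packets per second is where the extra per-packet cost comes from.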
