Reading LIN frames using the Linux standard serial API

I have a development board that runs a Linux distribution. The board has several UART peripherals that are exposed to the system as tty-like files.
On one specific UART port I have connected a LIN* transceiver, which in turn is connected to a LIN bus.
The LIN transceiver delivers frames of two types: one type is 3 bytes long, and the other is between 6 and 12 bytes long. Consecutive frames are separated by a gap of at least ~20 ms.
Now I want to write an application that reads these individual frames as complete data buffers (not byte by byte or in arbitrary chunks).
To set the communication parameters (baud rate, parity, start/stop bits, etc.) I'm using the stty** utility. I have also played a bit with the min and time [***] special settings, but I couldn't obtain the right behavior: the longer frames are always split into at least three chunks.
Is there any way to achieve this?
[*] LIN: https://en.wikipedia.org/wiki/Local_Interconnect_Network
[**] stty: http://linux.die.net/man/1/stty
[***] I have used the following modes:
MIN == 0, TIME > 0 (read with timeout)
This doesn't work because I always receive the first byte individually (and then the rest of the frame as one buffer).
MIN > 0, TIME > 0 (read with interbyte timeout)
In this mode, setting MIN to 3 (the smallest frame has 3 bytes) and TIME to some higher value like 90 doesn't do the trick either: the short frames are received correctly (in one read), but the longer frames are split into 3 parts (the first with 3 bytes, the second with 3 bytes, and the last with 5 bytes). One possible workaround is sketched below.
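One possible workaround (a sketch only, not a verified solution): configure the port into raw mode with termios directly from the application, read whatever bytes the driver hands over, and treat the ~20 ms silent gap between frames as the frame delimiter. The device path /dev/ttyS1, the 19200 baud rate and the 15 ms gap threshold below are assumptions.

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <poll.h>
    #include <termios.h>

    int main(void)
    {
        int fd = open("/dev/ttyS1", O_RDONLY | O_NOCTTY);   /* device path assumed */
        if (fd < 0) { perror("open"); return 1; }

        struct termios tio;
        tcgetattr(fd, &tio);
        cfmakeraw(&tio);              /* raw mode: no line-discipline processing */
        cfsetispeed(&tio, B19200);    /* typical LIN bit rate, adjust as needed */
        tio.c_cc[VMIN]  = 1;          /* read() returns as soon as 1 byte is in */
        tio.c_cc[VTIME] = 0;
        tcsetattr(fd, TCSANOW, &tio);

        unsigned char frame[16];      /* longest expected frame is 12 bytes */
        size_t len = 0;

        for (;;) {
            struct pollfd pfd = { .fd = fd, .events = POLLIN };
            /* Wait up to 15 ms for more data once a frame has started:
             * shorter than the ~20 ms inter-frame gap, longer than the
             * spacing between bytes of one frame. */
            int ready = poll(&pfd, 1, len ? 15 : -1);
            if (ready < 0) { perror("poll"); break; }

            if (ready == 0) {         /* silence: buffered bytes form one frame */
                printf("frame of %zu bytes received\n", len);
                len = 0;
                continue;
            }
            ssize_t n = read(fd, frame + len, sizeof(frame) - len);
            if (n <= 0) break;
            len += (size_t)n;
        }
        close(fd);
        return 0;
    }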

Related

spi_write_then_read with variant register size

As I understand it, the term "word length" (spi_bits_per_word) in SPI defines the CS (chip select) active time.
It therefore seems that the Linux driver will function correctly with simple SPI protocols that keep the word size constant.
But how can we deal with SPI protocols that use different word sizes within the protocol?
For example, CS needs to stay active while sending a 9-bit SPI word and then reading 8 or 24 bits (the length of the register read differs each time, depending on the register).
How can we implement that using spi_write_then_read?
Do we need to set one bits_per_word for sending and then another bits_per_word for receiving?
Regards,
Ran
"Word length" means the number of bits you can send in one transaction. It does not define the CS (chip select) active time; you can keep CS active for as long as you want (the minimum being one word length).
SPI has a fixed format; you cannot read or write an arbitrary number of bits at will. Most SPI controllers support 4-bit, 8-bit, 16-bit and 32-bit modes. If the available modes don't satisfy your requirement, you need to break the transfer up. For example, to read 24 bits of data, you perform an 8-bit word-length transfer three times.
SPI is generally full-duplex, meaning it reads and writes at the same time.
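Because spi_write_then_read() issues one fixed-format transaction, a common alternative is to build a spi_message from several spi_transfer entries, each with its own bits_per_word, so that chip select stays asserted across the whole exchange. A sketch against the kernel SPI API follows; the function name my_read_reg24 and the 1-byte-command / 3-byte-read split are assumptions for illustration.

    #include <linux/spi/spi.h>

    static int my_read_reg24(struct spi_device *spi, u8 reg, u8 *val /* 3 bytes */)
    {
        struct spi_transfer xfers[2] = { };
        struct spi_message msg;
        u8 cmd = reg;

        xfers[0].tx_buf        = &cmd;
        xfers[0].len           = 1;
        xfers[0].bits_per_word = 8;   /* word length can be set per transfer */

        xfers[1].rx_buf        = val;
        xfers[1].len           = 3;   /* 24 bits read as three 8-bit words */
        xfers[1].bits_per_word = 8;

        spi_message_init(&msg);
        spi_message_add_tail(&xfers[0], &msg);
        spi_message_add_tail(&xfers[1], &msg);

        return spi_sync(spi, &msg);   /* CS stays active for the whole message */
    }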

low level requirements for ethernet frame in linux

I'm developing a high-speed, high-resolution video camera for robotics applications. For various reasons I need to adopt gigabit ethernet (1Ge) or 10Ge to interface my cameras to PCs. Either that or I'll need to develop my own PCIe card which I prefer not to do (more work, plus then I'd have to create drivers).
I have two questions that I am not certain about after reading linux documentation.
#1: My desired ethernet frame is:
8-byte interpacket pad + sync byte
6-byte MAC address (destination)
6-byte MAC address (source)
2-byte packet length (varies 6KB to 9KB depending on lossless compression)
n-byte image data (number of bytes specified in previous 2-byte field)
4-byte CRC32
The question is, will Linux accept this packet if the application tells Linux to expect AF_PACKET frames (assuming applications CAN tell Linux this)? It is acceptable if the application that controls the camera (sends packets to it) and receives the image data packets must run with root privilege.
#2: Which will be faster:
A: linux sockets with AF_PACKET protocol
B: libpcap application
Speed is crucial, because packets will arrive with little space between them, since each packet contains one horizontal row of pixels in my own lossless compression format (unless I can find a better algorithm that can also be implemented in the FPGA at real time speeds). There will be a pause between frames, but that is after 1200 or more horizontal rows (ethernet frame packets).
Because the application is robotics, each horizontal row will be immediately decompressed and stored in a simple packed array of RGBA pixels just like OpenGL accepts as textures. So robotics software can immediately inspect each image as the image arrives row by row and possibly react as quickly as inhumanly possible.
The data for the first RGBA pixel in each row immediately follows the last RGBA pixel in the previous row, so at the end of the last horizontal row of pixels the image is complete and ready to transfer to GPUs and/or save to disk. Each horizontal row will be a multiple of 16 pixels, so no "padding" is required.
NOTE: The camera must be directly plugged into the RJ45 jack without routers or other devices between camera and PC.
I think you will have to change your Ethernet frame format to use the first two bytes after the source and dest MACs as the type, not the length. Old-style lengths must be less than 1536, anything greater is treated as an IEEE type field instead. As you want 6K or more, there's a chance the receiving Ethernet chip / Linux packet handler will discard your frames because they're badly formatted.
As for performance, the Golden Rule is measure, don't guess. Pick the one that is simplest to program and try.
Hope this helps.
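For reference, a minimal sketch of the receive side using an AF_PACKET socket, assuming the frame format is changed as suggested above to carry a proper EtherType. The EtherType value 0x88B5 (an IEEE "local experimental" type), the interface name eth0 and the buffer size are assumptions; the program needs root or CAP_NET_RAW.

    #include <stdio.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>
    #include <linux/if_packet.h>
    #include <linux/if_ether.h>
    #include <net/if.h>

    #define ETH_P_CAMERA 0x88B5            /* experimental EtherType, > 1536 */

    int main(void)
    {
        int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_CAMERA));
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_ll sll = { 0 };
        sll.sll_family   = AF_PACKET;
        sll.sll_protocol = htons(ETH_P_CAMERA);
        sll.sll_ifindex  = if_nametoindex("eth0");   /* interface name assumed */
        if (bind(fd, (struct sockaddr *)&sll, sizeof(sll)) < 0) {
            perror("bind"); return 1;
        }

        unsigned char frame[ETH_FRAME_LEN + 8192];   /* room for jumbo frames */
        for (;;) {
            ssize_t n = recv(fd, frame, sizeof(frame), 0);
            if (n < 0) { perror("recv"); break; }
            /* frame[0..5] = dst MAC, frame[6..11] = src MAC,
             * frame[12..13] = EtherType, payload (one image row) follows;
             * the CRC has already been checked and stripped by the NIC. */
            printf("received %zd bytes\n", n);
        }
        close(fd);
        return 0;
    }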

How Do sequence numbers affect sliding window protocol with fixed window size?

I've been trying to learn this protocol from a book, but at this point the book seems to shy away from the details: it says the sequence bits determine the number of frames one can send and receive, but apart from that it doesn't go into it any further.
I want to know how it affects the protocol with a fixed window size.
Does a sequence bit of 3 with a window size of 4 mean that the sender cannot send more than 3 frames at any one time?
Or does it mean that the frames are numbered in the sequence of: 0, 1, 2, 0, 1, 2
As you can see, I'm quite confused, but thanks for any help! It's much appreciated :)
Try Wikipedia: http://en.wikipedia.org/wiki/Sliding_window_protocol
"Sliding window protocols are used where reliable in-order delivery of packets is required." The ordering of the packets is defined using the "sequence numbers" attached to every packet. In two-way communication, both sides agree on a window size before transmitting any packets containing actual data. That window size can be fixed or changed dynamically.
So for client-to-client communication, for example, let's say the window size is 10 packets. In terms of sequence numbers this means the window initially covers the packets with sequence numbers 1 to 10.
After the agreement takes place and data transmission begins, client A starts sending the first packets, with sequence numbers 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
Client A stops sending packets once the window size (10) has been reached according to the sequence numbers.
Client B replies with an acknowledgement (ACK) saying that it has received packets 1, 2, 3, 4.
That means the window slides from 1-10 to 5-14. The width of 10 remains the same in sliding window protocols with a fixed window size.
Therefore client A is now able to send the next 4 packets, which are 11, 12, 13, 14.
In general, as long as client A has data to send, it keeps sending until the window size is reached. Then it waits for ACKs from the other side before it can continue sending again.
The sequence number indicates how the frames that are being sent are numbered.
For example, if the frames are numbered from 0-7, then it is a 3-bit sequence number.
If the frames are numbered from 0-15 then it is a 4-bit sequence number.
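A tiny simulation may make the numbering clearer. This sketch assumes a 3-bit sequence number and a fixed window of 4 frames (the values from the question); the usual go-back-N constraint that the window be at most 2^n - 1 frames is stated here as background, not taken from either answer.

    #include <stdio.h>

    #define SEQ_BITS 3
    #define SEQ_MOD  (1 << SEQ_BITS)    /* frames are numbered 0..7, then wrap */
    #define WINDOW   4

    int main(void)
    {
        unsigned base = 0;              /* oldest unacknowledged frame */
        unsigned next = 0;              /* next frame to send */

        /* Send until the window is full. */
        while (next - base < WINDOW) {
            printf("send frame, seq=%u\n", next % SEQ_MOD);
            next++;
        }

        /* Pretend the receiver acknowledged the first two frames:
         * the window slides forward and two more frames may be sent. */
        base += 2;
        while (next - base < WINDOW) {
            printf("send frame, seq=%u\n", next % SEQ_MOD);
            next++;
        }
        return 0;
    }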

What do the bytes in a .wav file represent?

When I store the data in a .wav file into a byte array, what do these values mean?
I've read that they are in two-byte representations, but what exactly is contained in these two-byte values?
You will have heard that audio signals are represented by some kind of wave. If you have ever seen those wave diagrams with a line going up and down -- that's basically what is inside those files. Take a look at the example picture at http://en.wikipedia.org/wiki/Sampling_rate
You see your audio wave (the gray line). The current value of that wave is measured repeatedly and stored as a number; those numbers are what is in the bytes. There are two things that can be adjusted here: the number of measurements you take per second (that's the sampling rate, given in Hz -- how many samples you grab per second), and how precisely you measure. In the 2-byte case you take two bytes for each measurement (normally values from -32768 to 32767). With those numbers you can recreate the original wave (up to a limited quality, of course, but that's always the case when storing things digitally). And recreating the original wave is what your speaker tries to do on playback.
There are some more things you need to know. First, since a sample is two bytes, you need to know the byte order (big-endian or little-endian) to reconstruct the numbers correctly. Second, you need to know how many channels you have and how they are stored. Typically you have mono (one channel) or stereo (two), but more are possible. If you have more than one channel, you need to know how they are stored. Often they are interleaved: you get one value for each channel for every point in time, and after that all the values for the next point in time.
To illustrate: if you have 8 bytes of data for two channels with 16-bit samples:
abcdefgh
Here a and b would make up the first 16-bit number, which is the first value for channel 1; c and d would be the first value for channel 2. e and f are the second value for channel 1, and g and h the second value for channel 2. You wouldn't hear much from this, because it is nowhere near a second of data...
If you put all of that information together, you can calculate the bit rate: how many bits of information are generated by the recorder per second. In our example, you generate 2 bytes per channel for every sample. With two channels, that is 4 bytes. You need about 44000 samples per second to represent the sounds a human being can normally hear. So you end up with 176000 bytes per second, which is 1408000 bits per second.
And of course, these are not 2-bit values but 2-byte values; otherwise the quality would be really bad.
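A small sketch of the decoding just described, assuming little-endian 16-bit stereo data; the byte values are made up purely for illustration.

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* 8 bytes = 2 samples per channel, 2 channels, 16 bits per sample */
        uint8_t raw[8] = { 0x10, 0x00, 0x20, 0x00, 0x30, 0x00, 0x40, 0x00 };

        for (int i = 0; i < 8; i += 4) {
            /* little endian: low byte first, then high byte */
            int16_t left  = (int16_t)(raw[i]     | (raw[i + 1] << 8));
            int16_t right = (int16_t)(raw[i + 2] | (raw[i + 3] << 8));
            printf("sample %d: left=%d right=%d\n", i / 4, left, right);
        }
        return 0;
    }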
The first 44 bytes are commonly a standard RIFF header, as described here:
http://tiny.systems/software/soundProgrammer/WavFormatDocs.pdf
and here: http://www.topherlee.com/software/pcm-tut-wavformat.html
Apple/OSX/macOS/iOS-created .wav files might add an 'FLLR' padding chunk to the header and thus increase the size of the initial RIFF header from 44 bytes to 4k bytes (perhaps for better disk or storage block alignment of the raw sample data).
The rest is very often 16-bit linear PCM in signed 2's-complement little-endian format, representing arbitrarily scaled samples at a rate of 44100 Hz.
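Because of extra chunks like that FLLR padding, it is safer not to assume a fixed 44-byte header. A rough sketch (assuming a little-endian machine, a well-formed RIFF layout and a hypothetical file name test.wav) that walks the chunk headers until it finds the data chunk:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>

    int main(void)
    {
        FILE *f = fopen("test.wav", "rb");      /* file name is an assumption */
        if (!f) { perror("fopen"); return 1; }

        char id[4];
        uint32_t size;
        fread(id, 1, 4, f);                     /* "RIFF"               */
        fread(&size, 4, 1, f);                  /* overall size minus 8 */
        fread(id, 1, 4, f);                     /* "WAVE"               */

        while (fread(id, 1, 4, f) == 4 && fread(&size, 4, 1, f) == 1) {
            printf("chunk '%.4s', %u bytes\n", id, (unsigned)size);
            if (memcmp(id, "data", 4) == 0)
                break;                          /* the raw samples start here */
            fseek(f, (long)((size + 1) & ~1u), SEEK_CUR);  /* chunks are word aligned */
        }
        fclose(f);
        return 0;
    }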
A WAVE (.wav) file contains a header that holds the formatting information for the audio data. Following the header is the actual raw audio data. You can check the exact meaning of each field below.
Positions   Typical value   Description
1-4         "RIFF"          Marks the file as a RIFF multimedia file.
                            Characters are each 1 byte long.
5-8         (integer)       The overall file size in bytes (32-bit integer)
                            minus 8 bytes. Typically you'd fill this in after
                            file creation is complete.
9-12        "WAVE"          RIFF file format header. For our purposes it
                            always equals "WAVE".
13-16       "fmt "          Format sub-chunk marker. Includes the trailing
                            space.
17-20       16              Length of the rest of the format sub-chunk below.
21-22       1               Audio format code, a 2-byte (16-bit) integer.
                            1 = PCM (pulse code modulation).
23-24       2               Number of channels as a 2-byte (16-bit) integer.
                            1 = mono, 2 = stereo, etc.
25-28       44100           Sample rate as a 4-byte (32-bit) integer. Common
                            values are 44100 (CD) and 48000 (DAT). Sample rate
                            = number of samples per second, or Hertz.
29-32       176400          (SampleRate * BitsPerSample * Channels) / 8.
                            This is the byte rate.
33-34       4               (BitsPerSample * Channels) / 8.
                            1 = 8-bit mono, 2 = 8-bit stereo or 16-bit mono,
                            4 = 16-bit stereo.
35-36       16              Bits per sample.
37-40       "data"          Data sub-chunk header. Marks the beginning of the
                            raw data section.
41-44       (integer)       The number of bytes of the data section below this
                            point. Also equal to
                            (#ofSamples * #ofChannels * BitsPerSample) / 8.
45+                         The raw audio data.
I copied all of this from http://www.topherlee.com/software/pcm-tut-wavformat.html
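As a rough sketch of reading those fields (assuming the canonical 44-byte layout with no extra chunks, a little-endian machine and a hypothetical file name test.wav):

    #include <stdio.h>
    #include <stdint.h>

    #pragma pack(push, 1)
    struct wav_header {
        char     riff[4];         /* "RIFF", bytes 1-4   */
        uint32_t file_size;       /* bytes 5-8           */
        char     wave[4];         /* "WAVE", bytes 9-12  */
        char     fmt[4];          /* "fmt ", bytes 13-16 */
        uint32_t fmt_size;        /* bytes 17-20         */
        uint16_t audio_format;    /* bytes 21-22, 1=PCM  */
        uint16_t channels;        /* bytes 23-24         */
        uint32_t sample_rate;     /* bytes 25-28         */
        uint32_t byte_rate;       /* bytes 29-32         */
        uint16_t block_align;     /* bytes 33-34         */
        uint16_t bits_per_sample; /* bytes 35-36         */
        char     data[4];         /* "data", bytes 37-40 */
        uint32_t data_size;       /* bytes 41-44         */
    };
    #pragma pack(pop)

    int main(void)
    {
        FILE *f = fopen("test.wav", "rb");      /* file name is an assumption */
        if (!f) { perror("fopen"); return 1; }

        struct wav_header h;
        if (fread(&h, sizeof(h), 1, f) == 1)
            printf("%u Hz, %u channel(s), %u bits/sample, %u data bytes\n",
                   (unsigned)h.sample_rate, (unsigned)h.channels,
                   (unsigned)h.bits_per_sample, (unsigned)h.data_size);
        fclose(f);
        return 0;
    }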
As others have pointed out, there's metadata in the wav file, but I think your question may be, specifically, what do the bytes (of data, not metadata) mean? If that's true, the bytes represent the value of the signal that was recorded.
What does that mean? Well, if you extract the two bytes (say) that represent each sample (assume a mono recording, meaning only one channel of sound was recorded), then you've got a 16-bit value. In WAV, 16-bit is (always?) signed and little-endian (AIFF, Mac OS's answer to WAV, is big-endian, by the way). So if you take the value of that 16-bit sample and divide it by 2^16 (or 2^15, I guess, if it's signed data), you'll end up with a sample that is normalized to be within the range -1 to 1. Do this for all samples and plot them versus time (and time is determined by how many samples/second is in the recording; e.g. 44.1KHz means 44.1 samples/millisecond, so the first sample value will be plotted at t=0, the 44th at t=1ms, etc) and you've got a signal that roughly represents what was originally recorded.
I suppose your question is "What do the bytes in the data block of a .wav file represent?" Let us go through everything systematically.
Prelude:
Let us say we play a 5KHz sine wave using some device and record it in a file called 'sine.wav', and recording is done on a single channel (mono). Now you already know what the header in that file represents.
Let us go through some important definitions:
Sample: A sample of a signal is the amplitude of that signal at the point in time where the sample is taken.
Sampling rate: Many such samples can be taken within a given interval of time. Suppose we take 10 samples of our sine wave within 1 second; each sample is then spaced by 0.1 second. So we have 10 samples per second, and thus the sampling rate is 10 Hz. Bytes 25 to 28 in the header hold the sampling rate.
Now coming to the answer of your question:
It is not practically possible to write the whole sine wave to the file, because a sine wave has infinitely many points. Instead, we fix a sampling rate, sample the wave at those intervals, and record the amplitudes. (The sampling rate is chosen such that the signal can be reconstructed with minimal distortion from the samples we are going to take. The distortion in the reconstructed signal caused by an insufficient number of samples is called 'aliasing'.)
To avoid aliasing, the sampling rate is chosen to be more than twice the frequency of our sine wave (5 kHz). (This is the 'sampling theorem', and twice the signal frequency is called the 'Nyquist rate'.) Thus we decide to go with a sampling rate of 12 kHz, which means we will sample our sine wave 12000 times in one second.
Once we start recording, if we record the signal for, say, 5 seconds, we will have 12000 * 5 = 60000 samples (values). We take these 60000 values and put them in an array. Then we create the proper header to reflect our metadata, and convert these samples, which we have noted in decimal, to their hexadecimal equivalents. These values are then written into the data bytes of our .wav file.
(The plots for this example were made on http://fooplot.com.)
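A sketch of that recording step, assuming 5 seconds of the 5 kHz sine wave sampled at 12 kHz and stored as 16-bit values; the output file name and the full-scale amplitude are arbitrary, and writing the RIFF header itself is left out:

    #include <stdio.h>
    #include <stdint.h>
    #include <math.h>

    #define TWO_PI      6.283185307179586
    #define SAMPLE_RATE 12000               /* above the 10 kHz Nyquist rate */
    #define FREQ        5000.0              /* the sine wave being recorded  */
    #define SECONDS     5

    int main(void)
    {
        static int16_t samples[SAMPLE_RATE * SECONDS];   /* 60000 values */

        for (int n = 0; n < SAMPLE_RATE * SECONDS; n++)
            samples[n] = (int16_t)(32767.0 * sin(TWO_PI * FREQ * n / SAMPLE_RATE));

        /* These bytes are what ends up in the data chunk of the .wav file. */
        FILE *f = fopen("sine-data.raw", "wb");          /* file name assumed */
        if (!f) { perror("fopen"); return 1; }
        fwrite(samples, sizeof(samples[0]), SAMPLE_RATE * SECONDS, f);
        fclose(f);
        return 0;
    }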
Two bit audio wouldn't sound very good :) Most commonly, they represent sample values as 16-bit signed numbers that represent the audio waveform sampled at a frequency such as 44.1kHz.

1 frame consist of left and right in audio?

In anime, does 'frame' refer to the number of scenes per second? Each scene can consist of several layers: background, hero, objects, etc. I think this is the reason why I am confused.
In wave (raw audio) file,
Does one frame contain data for one side (left or right) only?
Does bit sampling precision refer to a single side/channel?
With audio, do frames represent changes in loudness?
One frame can consist of left and right?
I.e. stereo 8 bit sampling depth => 1 frame => 2 bytes?
I do not know whether a formal definition of a frame exists, but when referring to an audio frame we usually mean a single time sample across all channels. So 2 audio channels at 8 bits per channel results in 2 bytes per frame; 4 channels at 16 bits per sample is 8 bytes.
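A trivial sketch of that arithmetic, using the stereo 8-bit case from the question:

    #include <stdio.h>

    int main(void)
    {
        int channels = 2;                   /* stereo        */
        int bits_per_sample = 8;            /* 8-bit samples */
        int frame_bytes = channels * bits_per_sample / 8;

        printf("bytes per frame: %d\n", frame_bytes);   /* stereo 8-bit -> 2 */
        return 0;
    }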
