CRC-32 values not matching (Python 3)

I am using SPI to communicate between a Raspberry Pi and a microcontroller. I am sending the value 32 (a 32-bit integer, 0x00000020), for which the microcontroller calculates a CRC value of 2613451423. I am running CRC-32 on this 32-bit integer. The polynomial used on the MCU is 0x04C11DB7. Below is the snippet of code I am using on the microcontroller:
GPCRC_Init_TypeDef initcrc = GPCRC_INIT_DEFAULT;
initcrc.initValue = 0xFFFFFFFF;  // standard CRC-32 init value
initcrc.reverseBits = true;
initcrc.reverseByteOrder = true; // this line and the one above convert the data from big endian to little endian
/********* other code here **************/
for (int m = 0; m < 1; m++) {
    data_adc = ((uint8_t *)(StartPage + m)); // read data from flash memory; StartPage is the address to read from (data stored as 32-bit integers); here reading only a byte
    ecode = SPIDRV_STransmitB(SPI_HANDLE, data_adc, 4, 0); // transmit 4 bytes (the 32-bit value) over SPI to the RPi
    GPCRC_Start(GPCRC); // set CRC parameters such as the polynomial
    for (int i = 0; i < 1; i++) {
        GPCRC_InputU32(GPCRC, ((uint32_t *)(StartPage + i))); // generate the CRC for the same 32-bit value
    }
    // I also tried the following:
    /*
    for (int i = 0; i < 4; i++) {
        GPCRC_InputU8(GPCRC, ((uint8_t *)(StartPage + i))); // generate the CRC for the same 32-bit value
    }
    */
    checksum[0] = ~GPCRC_DataRead(GPCRC); // CRC value inverted and stored in an array
    ecode = SPIDRV_STransmitB(SPI_HANDLE, checksum, 4, 0); // send this value over SPI (in chunks of 4 bytes)
}
On the RPi I collect this value (i.e. 32; the value is received correctly), but the CRC calculated by the RPi is 2172022818. I am using zlib to calculate the CRC-32. The code snippet is below:
import datetime
import os
import struct
import time

import pigpio
import spidev
import zlib

bus = 0
device = 0
spi = spidev.SpiDev()
spi.open(bus, device)
spi.max_speed_hz = 4000000
spi.mode = 0

pi = pigpio.pi()  # connect to the pigpio daemon (needed for pi.set_mode / pi.callback below)
pi.set_mode(25, pigpio.INPUT)
rpi_crc = 0

def output_file_path():
    return os.path.join(os.path.dirname(__file__),
                        datetime.datetime.now().strftime("%dT%H.%M.%S") + ".csv")

def spi_process(gpio, level, tick):
    print("Detected")
    data = bytes([0] * 4)
    crc_data = bytes([0] * 4)
    spi.xfer2([0x02])
    with open(output_file_path(), 'w') as f:
        t1 = datetime.datetime.now()
        for x in range(1):
            recv = spi.xfer2(data)
            values = struct.unpack("<" + "I" * 1, bytes(recv))
            print(values)
            rpi_crc = zlib.crc32(bytes(recv))
            print("RPi's own CRC generated:")
            print(rpi_crc)
            f.write("\n")
            f.write("\n".join([str(x) for x in values]))
        mcu_crc_bytes = spi.xfer2(crc_data)
        mcu_crc = struct.unpack("<" + "I" * 1, bytes(mcu_crc_bytes))
        mcu_crc_int = int(''.join(map(str, mcu_crc)))
        print('MCU sent this CRC:')
        print(mcu_crc_int)
        if rpi_crc != mcu_crc_int:
            spi.xfer([0x03])
        t2 = datetime.datetime.now()
        print(t2 - t1)

input("Press Enter to start the process ")
spi.xfer2([0x01])
cb1 = pi.callback(25, pigpio.RISING_EDGE, spi_process)
while True:
    time.sleep(1)
From this forum I learned that it might be an endianness issue, so I tried changing the endianness of one value and comparing it with the other, but the values are still different.
For example, the value computed by the RPi is 2172022818 (decimal);
in hex: 0x81767022;
with the endianness swapped: 0x22707681.
The value sent by the microcontroller is 2613451423 (decimal);
in hex: 0x9BC61A9F.
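(The byte swap above can be sanity-checked in Python:)
import struct
hex(struct.unpack(">I", struct.pack("<I", 0x81767022))[0])  # '0x22707681'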
As you can see, the two values do not match. Please let me know if I am doing something wrong, or what could be going on here. Thanks!
EDIT:
Added more code to provide a better overview of certain aspects that were missing before. Datasheet for the microcontroller (CRC on p. 347): https://www.wless.ru/files/ZigBee/EFR32MG21/EFR32xG21_reference_manual.pdf

I was able to figure out the issues. I used https://crccalc.com/?crc=C5&method=crc32&datatype=hex&outtype=0 to confirm the CRC values I was getting on the microcontroller and the RPi.
The first issue was on the microcontroller, where I was not performing the CRC on the data at all; instead I was performing it on the address where that data was stored.
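In other words, the pointer had to be dereferenced so the CRC engine is fed the 32-bit value itself; a one-line sketch of the corrected call (against the emlib API used above):
GPCRC_InputU32(GPCRC, *(uint32_t *)(StartPage + i)); // feed the value, not its address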
The second issue was that the MCU was performing the CRC on the value stored in little-endian form, and on the RPi the CRC was also being performed on the little-endian value. Since the endianness was the same on both devices, I did not have to reverse the bits or bytes.
After making these changes, I got correct, matching CRC values on both the RPi and the microcontroller.
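For reference, a minimal sketch that reproduces the RPi-side CRC from the values in this question:
import struct
import zlib

data = struct.pack("<I", 32)  # b'\x20\x00\x00\x00', the little-endian bytes received over SPI
print(zlib.crc32(data))       # 2172022818 (0x81767022), matching the RPi value above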

Related

Arduino serial data manipulation - Sensors Serial Data, Read and parse to variables

I send data from 3 sensors on one Arduino (the router) to another Arduino (the coordinator) over a wireless link (XBee):
On the coordinator I receive the wireless data from these 3 sensors perfectly. The data stream looks like this (each sensor's reading on its own line):
22.5624728451
944
8523
I want to have these 3 values as 3 variables that are updated constantly, and then pass them on to the rest of the program to do something like printing on an LCD:
temperature=22.5624728451
gas=944
smoke=8523
Initially I had only 2 sensors, and I sent their data as a single line, like this:
22.5624728451944 (22.5624728451 – temperature, 944 – gas). I received both values on the same line and split them into two variables (with readString.substring()) using the code below. But now I have 3 sensors, and I receive each value on a separate line because I don't know the length of each data string, so I can't use the same technique (sending one string that contains all the sensor data on one line and then splitting it).
My old code:
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 10, 9, 8, 7);
String temperature;
String gas;
String readString;

void setup() {
  Serial.begin(9600);
  lcd.begin(16, 2);
}

void loop() {
  while (Serial.available() > 0)
  {
    char IncomingData = Serial.read();
    readString += IncomingData;
    temperature = readString.substring(0, 13); // get the first 13 characters
    gas = readString.substring(13, 16);        // get the last 3 characters
    Serial.print(IncomingData); // here I have my string: 20.1324325452924, which updates properly when the sensor values change
    // Process the message when the newline character is received
    if (IncomingData == '\n')
    {
      Serial.println(temperature);
      lcd.setCursor(0, 0);
      lcd.write("T:");
      lcd.print(temperature);
      delay(500);
      temperature = ""; // clear the received-data buffer
      Serial.println(gas);
      lcd.begin(16, 2);
      lcd.setCursor(0, 1);
      lcd.write("G:");
      lcd.print(gas);
      delay(500);
      gas = ""; // clear the received-data buffer
      readString = "";
    }
  }
}
All I want to do now is assign a variable to each sensor reading (3 lines – one variable per line), updated constantly, and then pass these values on to the rest of the program. Does anyone have any idea how to modify the code to work in this situation?
Thank you in advance!
I would recommend that you concatenate the values into the same line on the sending end and use a delimiter like a comma along with string.split() on the receiving end if you are committed to using string values. EDIT: It appears Arduino does not have the string.split() function. See this conversation for an example.
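Since Arduino's String class has no split(), here is a minimal sketch of the delimiter approach using indexOf()/substring() (variable names are illustrative):
// Sender transmits one comma-separated line per reading set, e.g. "22.5624728451,944,8523\n"
String line = Serial.readStringUntil('\n');
int c1 = line.indexOf(',');
int c2 = line.indexOf(',', c1 + 1);
String temperature = line.substring(0, c1);
String gas         = line.substring(c1 + 1, c2);
String smoke       = line.substring(c2 + 1);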
An alternative would be to set a standard byte length and send the numbers as binary instead of as ASCII-encoded strings. See this post on the Arduino forum for a little background. I am recommending sending the numbers in raw byte form rather than as ASCII characters. When you define a variable as an integer on the Arduino, it defaults to a 16-bit signed value; a float is a 32-bit floating-point number. If, for example, you send a float and two ints as binary values, the float will always be the first 4 bytes, the first int the next 2, and the last int the last 2. Just make sure both ends agree on the byte order (endianness): most significant byte first (big endian, Motorola style) or least significant byte first (little endian, Intel style). A sketch of this framing is below.
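For instance, a sketch of the fixed-length binary framing (assuming AVR-based boards on both ends, which are little endian, and XBee modules in transparent serial mode):
// Sender: one fixed 8-byte record = 4-byte float + two 2-byte ints
float temperature = 22.56;
int   gas   = 944;
int   smoke = 8523;
Serial.write((byte *)&temperature, sizeof(temperature)); // 4 bytes
Serial.write((byte *)&gas,   sizeof(gas));               // 2 bytes
Serial.write((byte *)&smoke, sizeof(smoke));             // 2 bytes

// Receiver: wait for a complete record, then copy the bytes back into the variables
if (Serial.available() >= 8) {
    byte buf[8];
    Serial.readBytes(buf, 8);
    memcpy(&temperature, buf,     4);
    memcpy(&gas,         buf + 4, 2);
    memcpy(&smoke,       buf + 6, 2);
}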

android AudioTrack playback short array (16bit)

I have an application that plays back audio. It takes encoded audio data over RTP and decodes it into a 16-bit array. The decoded 16-bit array is converted to an 8-bit array (byte array), as this is required for some other functionality.
Even though audio playback is working, it breaks up continuously and the output is very hard to recognise. If I listen carefully I can tell it is playing the correct audio.
I suspect this is because I convert the 16-bit data stream into a byte array and use write(byte[], int, int, AudioTrack.WRITE_NON_BLOCKING) of the AudioTrack class for playback.
Therefore I converted the byte array back to a short array and used write(short[], int, int, AudioTrack.WRITE_NON_BLOCKING) to see if that resolves the problem.
However, now there is no audio at all. In the debug output I can see that the short array has data.
What could be the reason?
Here is the AudioTrack initialization:
sampleRate = AudioTrack.getNativeOutputSampleRate(AudioManager.STREAM_MUSIC);
minimumBufferSize = AudioTrack.getMinBufferSize(sampleRate, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
                            AudioFormat.CHANNEL_OUT_STEREO,
                            AudioFormat.ENCODING_PCM_16BIT,
                            minimumBufferSize,
                            AudioTrack.MODE_STREAM);
Here is the code that converts the short array to a byte array:
for (int i = 0; i < internalBuffer.length; i++) {
    bufferIndex = i * 2;
    buffer[bufferIndex] = shortToByte(internalBuffer[i])[0];
    buffer[bufferIndex + 1] = shortToByte(internalBuffer[i])[1];
}
Here is the method that converts the byte array back to a short array:
public short[] getShortAudioBuffer(byte[] b) {
    short audioBuffer[] = null;
    int index = 0;
    int audioSize = 0;
    ByteBuffer byteBuffer = ByteBuffer.allocate(2);
    if ((b == null) || (b.length < 2)) { // reject null or too-short input
        return null;
    } else {
        audioSize = (b.length - (b.length % 2)); // use an even number of bytes
        audioBuffer = new short[audioSize / 2];
    }
    if ((audioSize / 2) < 2)
        return null;
    byteBuffer.order(ByteOrder.LITTLE_ENDIAN);
    for (int i = 0; i < audioSize / 2; i++) {
        index = i * 2;
        byteBuffer.put(b[index]);
        byteBuffer.put(b[index + 1]);
        audioBuffer[i] = byteBuffer.getShort(0);
        byteBuffer.clear();
        System.out.print(Integer.toHexString(audioBuffer[i]) + " ");
    }
    System.out.println();
    return audioBuffer;
}
Audio is decoded using the Opus library, configured as follows:
opus_decoder_ctl(dec,OPUS_SET_APPLICATION(OPUS_APPLICATION_AUDIO));
opus_decoder_ctl(dec,OPUS_SET_SIGNAL(OPUS_SIGNAL_MUSIC));
opus_decoder_ctl(dec,OPUS_SET_FORCE_CHANNELS(OPUS_AUTO));
opus_decoder_ctl(dec,OPUS_SET_MAX_BANDWIDTH(OPUS_BANDWIDTH_FULLBAND));
opus_decoder_ctl(dec,OPUS_SET_PACKET_LOSS_PERC(0));
opus_decoder_ctl(dec,OPUS_SET_COMPLEXITY(10)); // highest complexity
opus_decoder_ctl(dec,OPUS_SET_LSB_DEPTH(16)); // 16bit = two byte samples
opus_decoder_ctl(dec,OPUS_SET_DTX(0)); // default - not using discontinuous transmission
opus_decoder_ctl(dec,OPUS_SET_VBR(1)); // use variable bit rate
opus_decoder_ctl(dec,OPUS_SET_VBR_CONSTRAINT(0)); // unconstrained
opus_decoder_ctl(dec,OPUS_SET_INBAND_FEC(0)); // no forward error correction
Let's assume you have a short[] array which contains the 16-bit, one-channel data to be played.
Each sample is then a value between -32768 and 32767 representing the signal amplitude at that exact moment, with 0 as the middle point (no signal). This array can be passed to the audio track with the ENCODING_PCM_16BIT format encoding.
But things get weird when ENCODING_PCM_8BIT is used for playback (see AudioFormat).
In this case each sample is encoded by one byte, and each byte is unsigned: its value is between 0 and 255, with 128 representing the middle point.
Java has no unsigned byte type; byte is signed, i.e. the values -128...-1 actually represent 128...255. So you have to be careful when converting to the byte array, otherwise the result will be noise with a barely recognizable source sound.
short[] input16 = ...; // the source 16-bit audio data
byte[] output8 = new byte[input16.length];
for (int i = 0; i < input16.length; i++) {
    // To convert a 16-bit signed sample to 8-bit unsigned:
    // add 128 (for rounding), then shift right by 8 positions,
    // then add 128 to land in the range 0..255
    int sample = ((input16[i] + 128) >> 8) + 128;
    if (sample > 255) sample = 255; // clip overload
    output8[i] = (byte) sample;     // cast to the signed byte type
}
The backward conversion works the same way: each single sample is converted to exactly one sample of the output signal:
byte[] input8 = ...; // the source 8-bit unsigned audio data
short[] output16 = new short[input8.length];
for (int i = 0; i < input8.length; i++) {
    // To convert a signed byte back to its unsigned value, bitwise AND with 0xFF,
    // then subtract the 128 offset,
    // then scale up by 256 to fit the 16-bit range
    output16[i] = (short) (((input8[i] & 0xFF) - 128) * 256);
}
The issue of not being able to convert data from the byte array to the short array was resolved by using bitwise operators instead of ByteBuffer. It could be due to not setting the correct parameters on the ByteBuffer, or ByteBuffer may simply not be suitable for this kind of conversion.
Nevertheless, implementing the conversion with bitwise operators resolved the problem. Since the original question has been resolved by this approach, please consider this the final answer.
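For reference, a minimal sketch of the bitwise conversion (reusing the variable names from getShortAudioBuffer() above, and assuming the same little-endian byte pairs):
// Combine each little-endian byte pair into one 16-bit sample, without a ByteBuffer
for (int i = 0; i < audioSize / 2; i++) {
    int index = i * 2;
    audioBuffer[i] = (short) ((b[index] & 0xFF) | ((b[index + 1] & 0xFF) << 8));
}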
I will raise a separate topic for the playback issue.
Thank you for all your support.

SPI linux driver

I am trying to learn how to write a basic SPI driver, and below is the probe function I wrote.
What I am trying to do here is set up the SPI device for an FRAM (datasheet) and use spi_sync_transfer() (API description) to read the manufacturer's ID from the chip.
When I execute this code I can see the data on the SPI bus with a logic analyzer, but I am unable to read it from the rx buffer. Am I missing something here? Could someone please help me with this?
static int fram_probe(struct spi_device *spi)
{
    int err;
    unsigned char ch16[] = {0x9F, 0x00, 0x00, 0x00}; // 0x9F => 10011111
    unsigned char rx16[] = {0x00, 0x00, 0x00, 0x00};

    printk("[FRAM DRIVER] fram_probe called\n");
    spi->max_speed_hz = 1000000;
    spi->bits_per_word = 8;
    spi->mode = (3);
    err = spi_setup(spi);
    if (err < 0) {
        printk("[FRAM DRIVER::fram_probe spi_setup failed!\n");
        return err;
    }
    printk("[FRAM DRIVER] spi_setup ok, cs: %d\n", spi->chip_select);
    spi_element[0].tx_buf = ch16;
    spi_element[1].rx_buf = rx16;
    err = spi_sync_transfer(spi, spi_element, ARRAY_SIZE(spi_element)/2);
    printk("rx16=%x %x %x %x\n", rx16[0], rx16[1], rx16[2], rx16[3]);
    if (err < 0) {
        printk("[FRAM DRIVER]::fram_probe spi_sync_transfer failed!\n");
        return err;
    }
    return 0;
}
spi_element is not declared in this example. You should show that, and also how all the elements of that array are filled. But just from the code that is here, I can see a couple of mistakes.
You need to set the len parameter of each spi_transfer. You've assigned the TX or RX buffer to ch16 or rx16, but not set the length of the buffer in either case.
You should also zero out all the fields of the spi_transfer that are not used.
If you set the length to four, you would not be sending the proper command according to the datasheet. RDID expects a one-byte command, after which four bytes of output data follow. You are writing a four-byte command in your first transfer and then reading four bytes of data; the tx_buf in the first transfer should be just one byte.
And finally, the number of transfers specified as the last argument to spi_sync_transfer() is incorrect. It should be 2 in this case, because you have defined two transfers, spi_element[0] and spi_element[1]. You can use ARRAY_SIZE() if spi_element is declared as an array and you want to send all the transfers in it.
Consider the following as a better way to fill in the spi_transfers. It takes care of zeroing out unused fields, defines the transfers in an easy-to-see way, and changes to the buffer sizes or the number of transfers are automatically accounted for in the rest of the code.
const char ch16[] = { 0x9f };  /* one-byte RDID opcode */
char rx16[4];
struct spi_transfer rdid[] = {
    { .tx_buf = ch16, .len = sizeof(ch16) },
    { .rx_buf = rx16, .len = sizeof(rx16) },
};
err = spi_sync_transfer(spi, rdid, ARRAY_SIZE(rdid));
Since you have a scope, be sure to check that this operation happens under a single chip select pulse. I have found more than one Linux SPI driver to have a bug that pulses chip select when it should not. In some cases switching from TX to RX (like done above) will trigger a CS pulse. In other cases a CS pulse is generated for every word (8 bits here) of data.
Another thing you should change is to use dev_info(&spi->dev, "device version %d", id) and dev_err() to print messages. This inserts the device name in a standard way, instead of your hard-coded, non-standard, and inconsistent "[FRAM DRIVER]::" text, and it sets the message level appropriately.
Also, consider supporting device tree in your driver to read device properties. Then you can do things like change the SPI bus frequency for this device without rebuilding the kernel driver.
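For example, a minimal sketch of device-tree support (the "myvendor,fram" compatible string is illustrative, not from the datasheet); with this match table in place, the SPI core picks up properties such as spi-max-frequency from the device's node:
static const struct of_device_id fram_of_match[] = {
    { .compatible = "myvendor,fram" }, /* hypothetical compatible string */
    { }
};
MODULE_DEVICE_TABLE(of, fram_of_match);

static struct spi_driver fram_driver = {
    .driver = {
        .name = "fram",
        .of_match_table = fram_of_match,
    },
    .probe = fram_probe,
};
module_spi_driver(fram_driver);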

Bluetooth data shown as 0X80 instead of 0X00

I have been using a Bluetooth module (HC-05) with an ATmega8 (both the A and L variants) microcontroller to transmit data to my Android device. In the following code an 8-bit signed (or unsigned, it made no difference) value is sent over Bluetooth to be displayed on the device; this value starts at 0x00 and is incremented on every iteration:
#define F_CPU 1000000
#define BAUD 9600
#define MYUBRR (F_CPU/16/BAUD-1)

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    uint8_t data = 0;
    UBRRH = (MYUBRR >> 8); // set the high bits of UBRR
    UBRRL = MYUBRR;        // set the low bits of UBRR
    UCSRB = (1 << TXEN);   // transmit enable
    UCSRC = ((1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0)); // URSEL = USART register select (R/W); UCSZ2:0 = 011 selects 8-bit data size
    while (1)
    {
        UDR = data;                     // load the data into the USART data register (8-bit); it is transmitted immediately
        while (!(UCSRA & (1 << UDRE))); // wait until the data register is empty (UDRE flag set)
        _delay_ms(200);                 // after some time
        data++;                         // increment the data
    }
    return 0;
}
On the Android end, the "Bluetooth spp Pro" app displays the received data on screen.
The receive mode is configured as follows (data is displayed as hex values):
The data received should start at 0x00 and go up to 0xFF; instead it starts at 0x80 and increments up to 0xFF in a very unfamiliar manner.
Referring to the image above: the upper hex digit starts at 8 while the lower digit runs from 0 to F; in the next cycle the upper digit becomes 9 and the lower digit again runs from 0 to F; after that, instead of incrementing as expected, the upper digit goes back to 8, and in the following cycle it becomes 9 again. After these four cycles of two repeating sequences, the upper digit increments to A (lower digit running 0 to F), and then the same strange upper-digit pattern reappears for A and B, then for C and D, and later for E and F.
So my concern is: why is the device showing 0x80 for 0x00? And since the lower (units) hex digit works correctly, why does the upper (tens) digit not behave as expected?
Thanks!
Edit:
This problem is neither Android-version nor device-manufacturer specific.
The problem was with voltage levels. Operating the microcontroller circuit at 3.2 V and the Bluetooth module at 3.8 V solved the problem, and the data is transmitted as expected. However, I cannot come up with an explanation for this; please help.
It can be observed clearly when varying the potentiometer of the voltage regulator: when I keep it below 3.20 V the data is transmitted smoothly, and as the voltage level crosses 3.20 V the upper hex digit of the data starts getting corrupted, up to the point of complete corruption where the output becomes a constant 0xFE at 3.8 V.

What is the OpenCL equivalent for this CUDA "cudaMallocPitch" code?

My PC has an AMD processor with an ATI 3200 GPU, which doesn't support OpenCL; the rest of the code runs by falling back to the CPU.
I am converting one of my programs from CUDA to OpenCL, but I am stuck on one particular part for which there is no exact equivalent in OpenCL. Since I have little experience with OpenCL I can't work it out; please suggest a solution if you think one will work.
The CUDA code is:
size_t pitch = 0;
cudaError error = cudaMallocPitch((void**)&gpu_data, (size_t*)&pitch,
                                  instances->cols * sizeof(float), instances->rows);
for (int i = 0; i < instances->rows; i++) {
    error = cudaMemcpy((void*)(gpu_data + (pitch/sizeof(float))*i),
                       (void*)(instances->data + (instances->cols*i)),
                       instances->cols * sizeof(float), cudaMemcpyHostToDevice);
}
If I remove the pitch from the above, I end up with a problem where nothing is written to the device memory "gpu_data".
Could somebody please convert this code to OpenCL? I have converted it myself, but it is not working and the data is not written to "gpu_data". My converted OpenCL code is:
gpu_data = clCreateBuffer(context, CL_MEM_READ_WRITE, ((instances->cols)*(instances->rows))*sizeof(float), NULL, &ret);
for (int i = 0; i < instances->rows; i++) {
    ret = clEnqueueWriteBuffer(command_queue, gpu_data, CL_TRUE, 0, ((instances->cols)*(instances->rows))*sizeof(float), (void*)(instances->data + (instances->cols*i)), 0, NULL, NULL);
}
Sometimes it runs fine with this code, but then gets stuck in the reading part, i.e. over here:
ret = clEnqueueReadBuffer(command_queue, gpu_data, CL_TRUE, 0, sizeof(float) * instances->cols * 1, instances->data, 0, NULL, NULL);
And it gives an error like:
Unhandled exception at 0x10001098 in CL_kmeans.exe: 0xC000001D: Illegal Instruction.
When break is pressed, it gives:
No symbols are loaded for any call stack frame. The source code cannot be displayed.
while debugging. The call stack displays:
OCL8CA9.tmp.dll!10001098()
[Frames below may be incorrect and/or missing, no symbols loaded for OCL8CA9.tmp.dll]
amdocl.dll!5c39de16()
I really don't know what this means. Could someone please help me get rid of this problem?
First of all, in the CUDA code you're copying the data in a horribly inefficient way. The CUDA runtime has the function cudaMemcpy2D that does exactly what you are trying to do by looping over the rows.
What cudaMallocPitch does is compute an optimal pitch (= distance in bytes between rows in a 2D array) such that each new row begins at an address that is optimal for coalescing, and then allocate a memory area of pitch times the number of rows you specify. You can emulate the same thing in OpenCL by first computing the optimal pitch yourself and then allocating a buffer of the corresponding size.
The optimal pitch is computed by (1) getting the base address alignment preference for your card (the CL_DEVICE_MEM_BASE_ADDR_ALIGN property from clGetDeviceInfo; note that the returned value is in bits, so you have to divide by 8 to get bytes), call this base; and (2) finding the smallest multiple of base that is no less than your natural data pitch (sizeof(type) times the number of columns); this will be your pitch.
You then allocate pitch times the number of rows bytes, and pass the pitch information to your kernels.
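As a sketch (assuming a cl_device_id device and the float layout from the question; error checking omitted):
cl_uint align_bits = 0;
clGetDeviceInfo(device, CL_DEVICE_MEM_BASE_ADDR_ALIGN,
                sizeof(align_bits), &align_bits, NULL);
size_t base = align_bits / 8;                              /* value is in bits -> bytes */
size_t natural_pitch = instances->cols * sizeof(float);
size_t pitch = ((natural_pitch + base - 1) / base) * base; /* round up to a multiple of base */
cl_mem gpu_data = clCreateBuffer(context, CL_MEM_READ_WRITE,
                                 pitch * instances->rows, NULL, &ret);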
Also, when copying data from the host to the device and conversely, you want to use clEnqueueReadBufferRect and clEnqueueWriteBufferRect, which are specifically designed to copy 2D data (they are the counterparts of cudaMemcpy2D).
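For instance, the write side with the pitch computed above might look like this (a sketch; one call replaces the whole copy loop):
size_t buffer_origin[3] = {0, 0, 0};
size_t host_origin[3]   = {0, 0, 0};
size_t region[3] = { instances->cols * sizeof(float), /* row width in bytes */
                     instances->rows,                 /* number of rows */
                     1 };
ret = clEnqueueWriteBufferRect(command_queue, gpu_data, CL_TRUE,
                               buffer_origin, host_origin, region,
                               pitch, 0,                           /* buffer row/slice pitch */
                               instances->cols * sizeof(float), 0, /* host row/slice pitch */
                               instances->data, 0, NULL, NULL);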
