Specification given in the manual of a meter which uses Modbus as its communication protocol:
“dword” refers to a 32-bit unsigned integer using two data addresses and 4 bytes of memory, with the high word at the front and the low word at the end; it varies from 0 to 4294967295. rx = high word * 65536 + low word
I have this hex value: 00003ef5,
where the high word = 0000
and the low word = 3ef5,
so rx would be 3ef5, which converted to decimal gives 16117.
For now I have used this:
var rx = parseInt(hexValue.substr(0, 4), 16) * 65536 + parseInt(hexValue.substr(4, 4), 16); // parse each word before doing the arithmetic
console.log(rx);
Is there a way to convert my hex value directly to a dword using the Node.js Buffer library, or any other method better than this?
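For what it's worth, Node's Buffer can do this directly: Buffer.from(hexValue, 'hex').readUInt32BE(0) parses the hex string into 4 bytes and reads them as a big-endian (high word first) unsigned 32-bit integer. The same logic, sketched in Python for illustration:

import struct

# The dword rule from the manual: parse the 8 hex digits as one big-endian
# (high word first) 32-bit unsigned integer.
hexValue = "00003ef5"
rx = struct.unpack(">I", bytes.fromhex(hexValue))[0]
print(rx)  # 16117

# Equivalent word arithmetic from the spec: rx = high word * 65536 + low word
high = int(hexValue[:4], 16)
low = int(hexValue[4:], 16)
print(high * 65536 + low)  # 16117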
Related
I am using SPI communication between a Raspberry Pi and a microcontroller. I am sending a value of 32 (a 32-bit integer), i.e. 0x00000020, for which the CRC value calculated by the microcontroller is 2613451423. I am running CRC32 on this 32-bit integer. The polynomial used on the MCU is 0x04C11DB7. Below is a snippet of the code I am using on the microcontroller:
GPCRC_Init_TypeDef initcrc = GPCRC_INIT_DEFAULT;
initcrc.initValue = 0xFFFFFFFF;  // standard CRC-32 init value
initcrc.reverseBits = true;
initcrc.reverseByteOrder = true; // this line and the one above convert the data from big endian to little endian

/********* other code here **************/

for (int m = 0; m < 1; m++) {
    // reading data from flash memory; StartPage is the address to read from
    // (data is stored as 32-bit integers), here reading only a byte
    data_adc = ((uint8_t *)(StartPage + m));
    // transmitting 4 bytes (a 32-bit value) through SPI over to the RPi
    ecode = SPIDRV_STransmitB(SPI_HANDLE, data_adc, 4, 0);
    GPCRC_Start(GPCRC); // set CRC parameters such as the polynomial
    for (int i = 0; i < 1; i++) {
        // generating the CRC for the same 32-bit value
        // (as the edit below explains, this actually feeds the *address* into
        // the CRC, not the value; it should be *((uint32_t *)(StartPage + i)))
        GPCRC_InputU32(GPCRC, ((uint32_t *)(StartPage + i)));
    }
    // I also tried using the code below:
    /*
    for (int i = 0; i < 4; i++) {
        GPCRC_InputU8(GPCRC, ((uint8_t *)(StartPage + i))); // generating the CRC byte by byte
    }
    */
    checksum[0] = ~GPCRC_DataRead(GPCRC); // CRC value inverted and stored in an array
    // sending this value through SPI (in chunks of 4 bytes)
    ecode = SPIDRV_STransmitB(SPI_HANDLE, checksum, 4, 0);
}
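For reference, zlib.crc32 on the RPi side computes the standard reflected CRC-32: polynomial 0x04C11DB7 (0xEDB88320 in reflected form), init value 0xFFFFFFFF, bit-reflected input/output, and a final XOR with 0xFFFFFFFF. A minimal bit-by-bit sketch of the parameters the GPCRC configuration above has to match:

import zlib

# Bit-by-bit reference implementation of the reflected CRC-32 that zlib.crc32
# computes (poly 0x04C11DB7, reflected constant 0xEDB88320, init 0xFFFFFFFF,
# final XOR 0xFFFFFFFF).
def crc32_bitwise(data: bytes) -> int:
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

payload = bytes([0x20, 0x00, 0x00, 0x00])  # 32 as a little-endian 32-bit value
assert crc32_bitwise(payload) == zlib.crc32(payload)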
On the RPi, I am collecting this value (i.e. 32; I receive the value correctly), but the CRC calculated by the RPi is 2172022818. I am using zlib to calculate the CRC32. I have added a code snippet as well:
import datetime
import os
import struct
import time

import pigpio
import spidev
import zlib

bus = 0
device = 0
spi = spidev.SpiDev()
spi.open(bus, device)
spi.max_speed_hz = 4000000
spi.mode = 0
pi = pigpio.pi()  # connect to the pigpio daemon
pi.set_mode(25, pigpio.INPUT)
rpi_crc = 0

def output_file_path():
    return os.path.join(os.path.dirname(__file__),
                        datetime.datetime.now().strftime("%dT%H.%M.%S") + ".csv")

def spi_process(gpio, level, tick):
    print("Detected")
    data = bytes([0] * 4)
    crc_data = bytes([0] * 4)
    spi.xfer2([0x02])
    with open(output_file_path(), 'w') as f:
        t1 = datetime.datetime.now()
        for x in range(1):
            recv = spi.xfer2(data)
            values = struct.unpack("<" + "I" * 1, bytes(recv))
            print(values)
            rpi_crc = zlib.crc32(bytes(recv))
            print('RPis own CRC generated:')
            print(rpi_crc)
            f.write("\n")
            f.write("\n".join([str(x) for x in values]))
        mcu_crc_bytes = spi.xfer2(crc_data)
        mcu_crc = struct.unpack("<" + "I" * 1, bytes(mcu_crc_bytes))
        mcu_crc_int = mcu_crc[0]  # single "<I" value
        print('MCU sent this CRC:')
        print(mcu_crc_int)
        if rpi_crc != mcu_crc_int:
            spi.xfer([0x03])
        t2 = datetime.datetime.now()
        print(t2 - t1)

input("Press Enter to start the process ")
spi.xfer2([0x01])
cb1 = pi.callback(25, pigpio.RISING_EDGE, spi_process)
while True:
    time.sleep(1)
From this forum itself I got to know that it might be an endianness issue, so I tried swapping the endianness of one value to compare it with the other, but they still come out different.
For example, the value on the RPi side is 2172022818 (decimal).
Changing it to hex: 0x81767022
Changing the endianness: 0x22707681
The value sent by the microcontroller is 2613451423 (decimal).
Changing it to hex: 0x9BC61A9F
As you can see, the two values are not the same. Please let me know if I am doing something wrong or what could be going on here. Thanks!
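To double-check the swap itself, here is a minimal Python sketch of reversing the byte order of a 32-bit value, using the RPi value above:

import struct

# Reverse the byte order of a 32-bit value: pack big-endian, unpack little-endian.
x = 0x81767022
swapped = struct.unpack("<I", struct.pack(">I", x))[0]
print(hex(swapped))  # 0x22707681, matching the manual swap above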
EDIT:
Added more code to give a better overview of certain aspects which were missing before. Datasheet for the microcontroller (CRC on pg. 347): https://www.wless.ru/files/ZigBee/EFR32MG21/EFR32xG21_reference_manual.pdf
I was able to figure out the issues. I used https://crccalc.com/?crc=C5&method=crc32&datatype=hex&outtype=0 to confirm the CRC values that I was getting on the microcontroller and the RPi.
The first issue was on the microcontroller, where I was not even performing the CRC on the data; I was performing it on the address where that data was stored.
The second issue was that the MCU was performing the CRC on the value stored in little-endian form. On the RPi, too, the CRC was performed on the value stored in little-endian form. Since the endianness was the same on both devices, I did not have to reverse the bits or bytes.
After making these changes, I got correct and identical CRC values on both the RPi and the microcontroller.
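A minimal sketch of the RPi-side check after the fix (assuming, as in the question, a payload of 32 sent as a little-endian 32-bit value):

import struct
import zlib

# CRC-32 over the little-endian bytes of the value 32 -- exactly the bytes
# zlib.crc32 saw on the RPi. Per the question, this prints 2172022818
# (0x81767022); after the fixes above, the MCU produces the same value.
value = 32
crc = zlib.crc32(struct.pack("<I", value)) & 0xFFFFFFFF
print(crc)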
I send three sets of data from three sensors from one Arduino (router) to another Arduino (coordinator) with wireless technology (XBee).
On the coordinator I receive the wireless data from these three sensors (from the router) perfectly. The data stream is something like this (each sensor's data on its own line):
22.5624728451
944
8523
I want to have these 3 values as 3 variables that get updated constantly, and then pass these values on to the rest of the program, to do something like printing them on an LCD:
temperature=22.5624728451
gas=944
smoke=8523
Initially I had only 2 sensors and I sent their data as one line, like this: 22.5624728451944 (22.5624728451 – temperature, 944 – gas). I received both on the same line and split everything into two variables (with readString.substring()) using the code below. But now I have 3 sensors and I receive the data on separate lines, because I don't know the length of each data string … so I can't use the same technique (sending one string that contains all the sensor data on a single line and then splitting it).
My old code:
#include <LiquidCrystal.h>

LiquidCrystal lcd(12, 11, 10, 9, 8, 7);
String temperature;
String gas;
String readString;

void setup() {
  Serial.begin(9600);
  lcd.begin(16, 2);
}

void loop() {
  while (Serial.available() > 0)
  {
    char IncomingData = Serial.read();
    readString += IncomingData;
    temperature = readString.substring(0, 13); // get the first 13 characters
    gas = readString.substring(13, 16);        // get the next 3 characters
    Serial.print(IncomingData); // here I have my string: 20.1324325452924, which updates properly when the sensor values change

    // process the message when the newline character is received
    if (IncomingData == '\n')
    {
      Serial.println(temperature);
      lcd.setCursor(0, 0);
      lcd.write("T:");
      lcd.print(temperature);
      delay(500);
      temperature = ""; // clear the received-data buffer

      Serial.println(gas);
      lcd.begin(16, 2);
      lcd.setCursor(0, 1);
      lcd.write("G:");
      lcd.print(gas);
      delay(500);
      gas = ""; // clear the received-data buffer
      readString = "";
    }
  }
}
All I want to do now is assign a variable to each sensor reading (3 lines – 3 variables, one per line), updated constantly, and then pass these values on to the rest of the program. Does anyone have any idea how to modify the code to work in this situation?
Thank you in advance!
I would recommend that you concatenate the values into the same line on the sending end and use a delimiter like a comma, along with string.split() on the receiving end, if you are committed to using string values; the idea is sketched below. EDIT: It appears Arduino does not have the string.split() function. See this conversation for an example.
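To illustrate the delimiter idea (in Python here for brevity; on the Arduino you would loop with indexOf()/substring(), as the linked thread shows):

# One line carrying all three readings, separated by commas.
line = "22.5624728451,944,8523"
temperature, gas, smoke = line.split(",")
print(float(temperature), int(gas), int(smoke))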
An alternative would be to set a standard byte length and send the numbers as binary values instead of ASCII-encoded strings representing numbers. See this post on the Arduino forum for a little background. I am recommending sending the numbers in raw byte form rather than as ASCII characters. When you define a variable as an integer on the Arduino, it defaults to a 16-bit signed integer value; a float is a 32-bit floating-point number. If, for example, you send a float and two ints as binary values, the float will always be the first 4 bytes, the first int the next 2, and the last int the final 2. The byte order (endianness) must also agree on both ends: most significant byte first (big-endian, Motorola style) or least significant byte first (little-endian, Intel style).
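To make the fixed layout concrete, here is the 8-byte packet described above (one 4-byte float followed by two 2-byte ints, in the AVR's little-endian order), sketched in Python with struct, using the sample readings from the question:

import struct

# "<fhh": little-endian float (4 bytes) + two signed 16-bit ints (2 bytes each).
packet = struct.pack("<fhh", 22.5624728451, 944, 8523)
print(len(packet))  # always 8 bytes, so no delimiter is needed
temperature, gas, smoke = struct.unpack("<fhh", packet)
print(temperature, gas, smoke)  # the float comes back with 32-bit precision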
I have been using a Bluetooth module (HC-05) with an ATmega8 (both the A and L variants) microcontroller to transmit data to my Android device. In the following code, an 8-bit signed (or unsigned, it makes no difference) value is sent over Bluetooth to be displayed on the device; the value starts at 0x00 and is incremented on every iteration:
#define F_CPU 1000000
#define BAUD 9600
#define MYUBRR (F_CPU/16/BAUD-1)

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    uint8_t data = 0;
    UBRRH = (MYUBRR >> 8); // set the high bits of UBRR
    UBRRL = MYUBRR;        // set the low bits of UBRR
    UCSRB = (1 << TXEN);   // transmit enable
    UCSRC = ((1 << URSEL) | (1 << UCSZ1) | (1 << UCSZ0)); // URSEL = USART register selection (R/W); UCSZ2:0 = 011 selects 8-bit data size
    while (1)
    {
        UDR = data;                     // load data into the USART data register (8-bit); it is transmitted immediately
        while (!(UCSRA & (1 << UDRE))); // wait until the data register is empty (UDRE flag set) and ready for the next byte
        _delay_ms(200);                 // after some time
        data++;                         // increment data
    }
    return 0;
}
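(Side note: the MYUBRR macro uses the AVR datasheet formula UBRR = F_CPU/(16*BAUD) - 1 with C integer division; a quick sketch in Python of what it evaluates to at these settings:)

# MYUBRR as the C preprocessor computes it (integer division).
F_CPU = 1000000
BAUD = 9600
MYUBRR = F_CPU // 16 // BAUD - 1           # = 5
actual_baud = F_CPU / (16 * (MYUBRR + 1))  # datasheet: BAUD = F_CPU/(16*(UBRR+1))
print(MYUBRR, actual_baud)                 # 5, ~10417 -- about 8.5% off 9600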
On the Android device end, the "Bluetooth spp Pro" app displays the received data on screen, configured in receive mode with the data displayed as hex values.
The data received should start at 0x00 and go up to 0xFF; instead it starts at 0x80 and increments up to 0xFF in a very unfamiliar manner.
The pattern I observed is that the tens (high) digit starts at 8 and the units digit runs from 0 to F; in the next cycle the tens digit becomes 9 and the units digit again runs from 0 to F. After that, instead of incrementing as expected, the tens digit goes back to 8, and in the next cycle it becomes 9 again. After these four cycles of two repeating words, the tens digit increments to A while the units digit runs from 0 to F, and later the same strange tens-digit pattern reappears for A and B, then for C and D, and later for E and F.
So my concern is:
Why is the device showing 80 for 00? Since it works correctly for the units place, why is it not working for the tens place as expected?
Thanks!
Edit:
This problem is neither Android-version nor device-manufacturer specific.
The problem was with voltage levels. Operating the microcontroller circuit at 3.2 V and the Bluetooth module at 3.8 V solved the problem, and the data is transmitted as expected. However, I cannot come up with an explanation for this.
Please help.
It can be observed clearly by varying the potentiometer of the voltage regulator: when I keep it below 3.20 V the data is transmitted smoothly, and as the voltage level crosses 3.20 V the tens place of the data starts getting corrupted, up to the point of complete corruption where the output becomes a constant 0xFE at 3.8 V.
I have a serial port application written in C++/CLI.
To read data from the port's input buffer I am using:
String^ inputString = serialPort->ReadExisting(); // serialPort is a System::IO::Ports::SerialPort^
I need to convert the inputString value to an array of bytes. I have tried using:
array<Byte>^ unicodeBytes = System::Text::Encoding::Unicode->GetBytes(inputString);
This works as long as the values read into my port's input buffer are less than 0x7F (hex). Any value greater than 0x7F gets converted to 0x3F = "?".
E.g. if I send two bytes comprising {0x7F, 0xFF} to my input port, then after reading and converting them the array unicodeBytes = {0x7F, 0x00, 0x3F, 0x00} when viewed in the debugger watch window of VS2008.
According to the Unicode tables I have looked at, 0xFF is a valid Unicode value, equal to a Latin small letter 'y' with two small dots above it.
Any suggestions on how to convert 'y' with two small dots (= 0xFF in string form) to 0xFF in a byte array would be greatly appreciated.
Use the SerialPort's Read method to get the raw bytes instead of decoding text (by default SerialPort decodes incoming text as ASCII, which is why every byte above 0x7F comes back as '?' = 0x3F):
int BufferSize = <some size>;
array<Byte>^ bytes = gcnew array<Byte>(BufferSize);
int available = serialPort->BytesToRead; // note: the property is BytesToRead
int count = serialPort->Read(bytes, 0, Math::Min(available, BufferSize));
I'm trying to find out whether converting UTF-32 text to/from any code page is possible using the Windows API alone. I cannot use the CLR for this task.
The Code Page Identifiers page at Microsoft, http://msdn.microsoft.com/en-us/library/dd317756(VS.85).aspx, lists UTF-32 as being available only to managed applications.
ConvertStringTo/FromUnicode fails when UTF-32 is used.
You can use the function below. It takes the UTF-32 codepoint to be converted as its first argument and returns the equivalent UTF-16 encoding (a single unit or a surrogate pair, as the case may be), with the high and low surrogates returned by reference through the second and third arguments.
If the codepoint is below 0x10000, we simply return that codepoint in the low surrogate by reference, while the high surrogate is 0.
If the codepoint is 0x10000 or above, we calculate the high and low surrogate pair using the rules given on this Wikipedia page:
https://en.wikipedia.org/wiki/UTF-16#Example_UTF-16_encoding_procedure
Here's the code:
unsigned int convertUTF32ToUTF16(unsigned int cUTF32, unsigned int &h, unsigned int &l)
{
    if (cUTF32 < 0x10000)
    {
        h = 0;
        l = cUTF32;
        return cUTF32;
    }
    unsigned int t = cUTF32 - 0x10000;                 // 20-bit offset into the supplementary planes
    h = (((t << 12) >> 22) + 0xD800);                  // top 10 bits (t >> 10) + high-surrogate base
    l = (((t << 22) >> 22) + 0xDC00);                  // low 10 bits (t & 0x3FF) + low-surrogate base
    unsigned int ret = ((h << 16) | (l & 0x0000FFFF)); // both units packed into one 32-bit value
    return ret;
}
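A quick cross-check of the shift arithmetic above, sketched in Python with U+1F600 as a sample codepoint (for a 20-bit t, (t<<12)>>22 equals t>>10 and (t<<22)>>22 equals t & 0x3FF):

# Same surrogate math as convertUTF32ToUTF16, for one sample codepoint.
cp = 0x1F600
t = cp - 0x10000
h = 0xD800 + (t >> 10)    # high surrogate
l = 0xDC00 + (t & 0x3FF)  # low surrogate
print(hex(h), hex(l))     # 0xd83d 0xde00

# Python's own UTF-16 encoder agrees:
print('\U0001F600'.encode('utf-16-be').hex())  # d83dde00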
With a bit of knowledge of Unicode you should be able to create a UTF32 to UTF16 converter without using any APIs.
All characters in the range U+0000 to U+FFFF can simply have the upper 16 bits removed.
Values in the range U+10000 to U+10FFFF can be converted into two 16-bit words, called surrogate pairs:
http://en.wikipedia.org/wiki/UTF-16#Encoding_of_characters_outside_the_BMP
You can use the iconv library on Windows. It fully supports UTF-32 (big- and little-endian).
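For a quick sanity check of any such converter, the same round trip can be sketched with Python's built-in codecs (UTF-32 bytes to codepoints to UTF-16, including a surrogate pair):

# "\u00ff" (LATIN SMALL LETTER Y WITH DIAERESIS) and "\U0001F600" (a codepoint
# outside the BMP, so it needs a surrogate pair in UTF-16).
utf32_bytes = "\u00ff\U0001F600".encode("utf-32-le")
text = utf32_bytes.decode("utf-32-le")
print(text.encode("utf-16-le").hex())  # ff00 3dd8 00de -- note the d83d/de00 pair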