How to reverse engineer the data sent from a 433MHz temperature sensor?

First off, I apologize if my question is too specialized or does not belong here. If you believe there is a better place to ask this question, please suggest.
My main goal is to transmit the measured temperature from my microcontroller to a wireless thermometer that I bought some time ago. The temperature sensor for the thermometer is wireless and uses a certain protocol to transmit the data to the thermometer. In order to do this, I need to understand the protocol that is being used. Once I understand this, I can program the microcontroller to send my own packets. This is mostly just an exercise to practice reverse engineering and embedded programming. Here is a link to the thermometer/sensor combo.
Using my SDR, I was able to monitor the wireless sensor, record a few samples at different temperatures, and view them in Audacity. I was able to figure out a few things about the protocol by comparing the recorded samples.
The data is transmitted from the sensor every 40-50 seconds in 33-bit packets, and the packet is repeated 4 times per transmission. The data is OOK-modulated with PWM encoding: a 0 is a short pulse and a 1 is a long pulse.
Each transmission is prefixed with a 'start sequence' and each packet is separated by a 'delimiter sequence'.
The packet takes the form of xxxx xxxx xxxx x tttt tttt tttt cccc cccc, where t are the temperature bits, c is probably some type of checksum, and x is probably some type of ID/address, battery indicator, or header.
The x bits do not change unless I power down the sensor, so I'm not particularly worried about them, as I assume that I'll be able to leave them the same in my own transmissions. The 12 t bits hold the temperature in binary, with the temperature in Celsius calculated as follows:
temp_Celsius = temp/20 - 50
Example from the first sample packet below: b011000001111 = 1551
1551/20 - 50 = 27.55 ≈ 27.5 C -> 81.5 F
This works for all of my recorded samples. My main problem is trying to figure out the last 8 c bits. Since the protocol seems relatively simple, I do not think it should be an overly complicated checksum/crc, but I haven't been able to figure it out. I have tried using CRC RevEng but could not find an appropriate model.
So pretty much all I am trying to figure out at this point is how the last 8 c bits are calculated, but I would also be interested to learn what info the first 13 x bits hold. Hopefully one of you sees a pattern that I am missing.
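For reference, here is a rough sketch (C#) of how I plan to handle the temperature field once the rest of the packet is understood. It assumes the x bits can simply be replayed from a captured packet and leaves the c bits as the open question; the field widths follow the layout above.
using System;

class TempFieldSketch
{
    // Encode: t = (tempC + 50) * 20, stored as a 12-bit value.
    static int EncodeTemperature(double tempC) =>
        (int)Math.Round((tempC + 50.0) * 20.0) & 0xFFF;

    // Decode: tempC = t / 20 - 50.
    static double DecodeTemperature(int t) => t / 20.0 - 50.0;

    static void Main()
    {
        int t = 0x60F;                                  // t bits from the first sample packet below
        Console.WriteLine(DecodeTemperature(t));        // ≈27.55, i.e. the 27.5 C reading
        Console.WriteLine(
            EncodeTemperature(27.5).ToString("X3"));    // 60E (the exact inverse of 27.5)
    }
}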
Sample Packets:
binary:
? xxxxxxxxxxxx tttttttttttt cccccccc
-----------------------------------------
0 111000000010 011000001111 11000000 //27.5C, 81.5F
0 111000000010 011000100111 11111000 //28.7C, 83.6F
0 111000000010 011000100001 11110010 //28.4C, 83.1F
0 111000000010 011000000011 11010100 //28.4C, 83.1F
0 111000000010 010111101101 10111110 //25.8C, 78.4F
0 111000000010 010110101011 01111111 //22.5C, 72.5F
0 111000000010 010011101000 10111010 //12.8C, 55.0F
0 111000000010 010010000100 01010110 // 7.8C, 46.0F
0 111000000010 001111101111 10111110 // 0.3C, 32.5F
hex:
? xxx ttt cc
--------------
0 E02 60F C0 //27.5C, 81.5F
0 E02 627 F8 //28.7C, 83.6F
0 E02 621 F2 //28.4C, 83.1F
0 E02 603 D4 //28.4C, 83.1F
0 E02 5ED BE //25.8C, 78.4F
0 E02 5AB 7F //22.5C, 72.5F
0 E02 4E8 BA //12.8C, 55.0F
0 E02 484 56 // 7.8C, 46.0F
0 E02 3EF BE // 0.3C, 32.5F
Things I have tried:
Searching for a CRC model using RevEng
Online CRC calculators
Searching for my particular sensor or similar ones in rtl_433. There are other LaCrosse devices in that program, but none seem to have the same protocol as mine.
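To make the search concrete, here is a minimal brute-force sketch (C#) of the kind of CRC parameter search I mean: it tries every 8-bit polynomial/init pair over the 25 bits in front of the check byte for a few of the sample packets. The coverage, the bit order, and even the assumption that it's a CRC at all are guesses.
using System;

class CrcSearch
{
    // A few sample packets: the 25 data bits and the observed 8-bit check value.
    static readonly (string bits, int check)[] Samples =
    {
        ("0111000000010011000001111", 0xC0),
        ("0111000000010011000100111", 0xF8),
        ("0111000000010010111101101", 0xBE),
        ("0111000000010010010000100", 0x56),
    };

    // Bitwise CRC-8, MSB first, no reflection, no final XOR.
    static byte Crc8Bits(string bits, byte poly, byte init)
    {
        byte crc = init;
        foreach (char c in bits)
        {
            bool inBit = c == '1';
            bool topBit = (crc & 0x80) != 0;
            crc <<= 1;
            if (inBit ^ topBit)
                crc ^= poly;
        }
        return crc;
    }

    static void Main()
    {
        for (int poly = 0; poly < 256; poly++)
            for (int init = 0; init < 256; init++)
                if (Array.TrueForAll(Samples,
                        s => Crc8Bits(s.bits, (byte)poly, (byte)init) == s.check))
                    Console.WriteLine($"candidate: poly=0x{poly:X2} init=0x{init:X2}");
    }
}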
Any and all criticism/help is appreciated, thanks.

Related

"vmstat" and "perf stat -a" show different numbers for context-switching

I'm trying to understand the context-switching rate on my system (running on AWS EC2), and where the switches are coming from. Just getting the number is already confusing, as two tools that I know can output such a metric give me different results. Here's the output from vmstat:
$ vmstat -w 2
procs -------------------memory------------------ ---swap-- -----io---- --system-- -----cpu-------
r b swpd free buff cache si so bi bo in cs us sy id wa st
8 0 0 443888 492304 8632452 0 0 0 1 0 0 14 2 84 0 0
37 0 0 444820 492304 8632456 0 0 0 20 131602 155911 43 5 52 0 0
8 0 0 445040 492304 8632460 0 0 0 42 131117 147812 46 4 50 0 0
13 0 0 446572 492304 8632464 0 0 0 34 129154 142260 49 4 46 0 0
The number is ~140k-160k/sec.
But perf tells something else:
$ sudo perf stat -a
Performance counter stats for 'system wide':
2980794.013800 cpu-clock (msec) # 35.997 CPUs utilized
12,335,935 context-switches # 0.004 M/sec
2,086,162 cpu-migrations # 0.700 K/sec
11,617 page-faults # 0.004 K/sec
...
0.004 M/sec is apparently 4k/sec.
Why is there a disparity between the two tools? Am I misinterpreting something in either of them, or are their CS metrics somehow different?
FWIW, I've tried doing the same on a machine running a different workload, and the discrepancy there is even larger, roughly twice as big.
Environment:
AWS EC2 c5.9xlarge instance
Amazon Linux, kernel 4.14.94-73.73.amzn1.x86_64
The service runs on Docker 18.06.1-ce
Some recent versions of perf have a unit-scaling bug in the printing code. Manually compute 12.3M / wall-clock time and see if that's sane. (Spoiler alert: it is, according to the OP's comment.)
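For example, with the numbers above: wall-clock time ≈ 2,980,794 ms / 35.997 CPUs ≈ 82.8 s, and 12,335,935 context switches / 82.8 s ≈ 149k/sec, which lands right in vmstat's ~140k-160k/sec range.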
https://lore.kernel.org/patchwork/patch/1025968/
Commit 0aa802a79469 ("perf stat: Get rid of extra clock display function") introduced the bug in mainline Linux 4.19-rc1 or so.
Thus, perf_stat__update_shadow_stats() now saves scaled values of clock events
in msecs, instead of original nsecs. But while calculating values of
shadow stats we still consider clock event values in nsecs. This results
in a wrong shadow stat values.
Commit 57ddf09173c1 on Mon, 17 Dec 2018 fixed it in 5.0-rc1, eventually being released with perf upstream version 5.0.
Vendor kernel trees that cherry-pick commits for their stable kernels might have the bug or have fixed the bug earlier.

Division in C# results not exact value after decimal point

The following division shows an incorrect result. According to the Windows calculator it should have 15 digits after the decimal point, but C# shows 14 digits after the point.
I fixed it: using (Decimal)200/(Decimal)30 gives more scale digits.
If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do by hand.
If the exact value of numbers is not important, use double for speed. This includes graphics, physics or other physical sciences computations where there is already a "number of significant digits".
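As a quick illustration, here is a minimal sketch; the exact output formatting depends on the .NET version:
using System;

class DivisionDemo
{
    static void Main()
    {
        double d = 200.0 / 30.0;   // binary floating point, ~15-17 significant digits
        decimal m = 200m / 30m;    // decimal floating point, 28-29 significant digits

        Console.WriteLine(d);      // e.g. 6.66666666666667
        Console.WriteLine(m);      // e.g. 6.6666666666666666666666666667
    }
}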
Please refer to the table below for further reference.
C# Type .NET Framework (System) type Signed? Bytes Occupied Possible Values
sbyte System.SByte Yes 1 -128 to 127
short System.Int16 Yes 2 -32768 to 32767
int System.Int32 Yes 4 -2147483648 to 2147483647
long System.Int64 Yes 8 -9223372036854775808 to 9223372036854775807
byte System.Byte No 1 0 to 255
ushort System.UInt16 No 2 0 to 65535
uint System.UInt32 No 4 0 to 4294967295
ulong System.UInt64 No 8 0 to 18446744073709551615
float System.Single Yes 4 Approximately ±1.5 x 10^-45 to ±3.4 x 10^38 with 7 significant figures
double System.Double Yes 8 Approximately ±5.0 x 10^-324 to ±1.7 x 10^308 with 15 or 16 significant figures
decimal System.Decimal Yes 16 Approximately ±1.0 x 10^-28 to ±7.9 x 10^28 with 28 or 29 significant figures
char System.Char N/A 2 Any Unicode character (16 bit)
bool System.Boolean N/A 1 / 2 true or false

How do the different parts of an ICC file work together?

I took apart an ICC file (one with a lookup table) from http://www.brucelindbloom.com/index.html?MunsellCalcHelp.html using ICC Profile Inspector. The ICC file is supposed to convert Lab to Uniform LAB.
The files it outputs include headers, a matrix (3x3 identity matrix), Input and Output curves, and a lookup table. What do these files mean? And how are they related to the color transform?
The header contents are:
InputChan: 3
OutputChan: 3
Input_Entries: 258
Output_Entries: 256
Clut_Size: 51
The InputCurves file has entries like:
0 0 0 0
1 256 255 255
2 512 510 510
...
256 65535 65280 65280
257 65535 65535 65535
The OutputCurves file has entries like:
0 0 0 0
1 256 257 257
2 512 514 514
...
254 65024 65278 65278
255 65280 65535 65535
And the lookup table entries look like:
0 0 0 25968
1 0 0 26351
2 0 0 26789
...
132649 65535 65535 49667
132650 65535 65535 50603
I'd like to understand how an input LAB color maps to an output value. I'm especially confused because a & b values can be negative.
I believe I understand how this works after skimming through http://www.color.org/specification/ICC1v43_2010-12.pdf
This explanation may have some off-by-one errors, but it should be generally correct.
The input values are LAB, and the L, a, and b values are mapped using tables 39 & 40 in section 10.8 (lut16Type). Then the 258 values in the input curves are uniformly spaced across those L, a, & b ranges. The output values are 16 bit, so 0-65535.
The same goes for the CLUT. There are 51^3 entries (51 was chosen by the ICC file author). Each dimension (L, a, b) is split uniformly across this space as well, so grid index 0 corresponds to 0 and grid index 50 corresponds to 65535 from the previous section (note that 0-50 is 51 entries). The first 51 rows are for L = 0 and a = 0, incrementing b. Every 51 rows the a value increases by 1, and every 51*51 rows the L value increases by 1.
So given L, a, and b values adjusted by the input curves, figure out their index (0-50) and look those up in the CLUT (l_ind*51*51+a_ind*51+b_ind), which will give you 3 more values.
Now the output curves come in. It's another set of curves that work just like the input curves. The outputs can then get mapped back using the same values from Tables 39 & 40.
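To make the indexing concrete, here is a minimal sketch (C#) of that chain with a made-up array layout; it uses nearest-neighbour lookups everywhere a real CMM would interpolate, and the Lab scaling is only a rough stand-in for the exact encoding in tables 39 & 40.
using System;

class LabLutSketch
{
    const int GridSize = 51;

    // inputCurves[ch]: 258 entries, outputCurves[ch]: 256 entries,
    // clut: 51*51*51 rows of 3 output values each (all 16-bit).
    static ushort[] Transform(double L, double a, double b,
                              ushort[][] inputCurves,
                              ushort[][] clut,
                              ushort[][] outputCurves)
    {
        // 1. Roughly scale L (0..100) and a/b (-128..127) into 0..1.
        double[] scaled = { L / 100.0, (a + 128.0) / 255.0, (b + 128.0) / 255.0 };

        // 2. Input curves: 258 uniformly spaced entries per channel, 16-bit outputs.
        int[] gridIndex = new int[3];
        for (int ch = 0; ch < 3; ch++)
        {
            int i = (int)Math.Round(scaled[ch] * (inputCurves[ch].Length - 1));
            double curved = inputCurves[ch][i] / 65535.0;
            gridIndex[ch] = (int)Math.Round(curved * (GridSize - 1));  // 0..50
        }

        // 3. CLUT: row = l*51*51 + a*51 + b (b varies fastest), 3 values per row.
        int row = gridIndex[0] * GridSize * GridSize
                + gridIndex[1] * GridSize
                + gridIndex[2];

        // 4. Output curves: 256 uniformly spaced entries per channel.
        ushort[] result = new ushort[3];
        for (int ch = 0; ch < 3; ch++)
        {
            int i = clut[row][ch] * (outputCurves[ch].Length - 1) / 65535;
            result[ch] = outputCurves[ch][i];
        }
        return result;
    }
}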

Send DNS data: MSB or LSB first?

I'm implementing DNS (multicast DNS, in fact) in C#.
I just want to know if I must encode my uint/int/ushort/... with the LSB first or the MSB first, and more generally, how I could know this. Is one of these standard?
I didn't find anything about this in the IETF description. I found a lot of things (each header field's length and position), but not this.
Thank you!
The answer is in RFC 1035 (2.3.2. Data Transmission Order)
Here is the link: http://www.ietf.org/rfc/rfc1035.txt
And the interesting part:
2.3.2. Data Transmission Order
The order of transmission of the header and data described in this
document is resolved to the octet level. Whenever a diagram shows a
group of octets, the order of transmission of those octets is the
normal order in which they are read in English. For example, in the
following diagram, the octets are transmitted in the order they are
numbered.
0 1
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 1 | 2 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 3 | 4 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| 5 | 6 |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Whenever an octet represents a numeric quantity, the left most bit in
the diagram is the high order or most significant bit. That is, the
bit labeled 0 is the most significant bit. For example, the following
diagram represents the value 170 (decimal).
0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|1 0 1 0 1 0 1 0|
+-+-+-+-+-+-+-+-+
Similarly, whenever a multi-octet field represents a numeric quantity
the left most bit of the whole field is the most significant bit.
When a multi-octet quantity is transmitted the most significant octet
is transmitted first.
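For a C# implementation, that means every multi-octet field goes out with the most significant octet first (big-endian, i.e. "network order"). A minimal sketch: the BinaryPrimitives version assumes .NET Core 2.1+ / .NET Standard 2.1, and the manual version works anywhere.
using System;
using System.Buffers.Binary;

static class DnsWire
{
    // Write a 16-bit DNS header field in network order (MSB first).
    public static void WriteUInt16(byte[] buffer, int offset, ushort value)
    {
        BinaryPrimitives.WriteUInt16BigEndian(buffer.AsSpan(offset, 2), value);
    }

    // Manual equivalent for frameworks without BinaryPrimitives.
    public static void WriteUInt16Manual(byte[] buffer, int offset, ushort value)
    {
        buffer[offset] = (byte)(value >> 8);   // most significant octet first
        buffer[offset + 1] = (byte)value;      // then the least significant octet
    }
}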

Generating AC elements from jpeg file

I'm decoding a JPEG file. I have generated the Huffman tables and quantization tables, and I have reached the point where I have to decode the DC and AC elements. For example, let's say I have the following data:
FFDA 00 0C 03 01 00 02 11 03 11 00 3F 00 F2 A6 2A FD 54 C5 5F FFD9
If we ignore a few bytes from the SOS marker, my real data starts at the F2 byte. So let's write it in binary (starting from the F2 byte):
1111 0010 1010 0110 0010 1010 1111 1101 0101 0100 1100 0101 0101 1111
F 2 A 6 2 A F D 5 4 C 5 5 F
When decoding, the first element is the luminance DC element, so let's decode it.
[1111 0]010 1010 0110 0010 1010 1111 1101 0101 0100 1100 0101 0101 1111
F 2 A 6 2 A F D 5 4 C 5 5 F
So 11110 is the Huffman code (in my case) for element 08. This means that the next 8 bits are my DC value. When I take the next 8 bits, the value is:
1111 0[010 1010 0]110 0010 1010 1111 1101 0101 0100 1100 0101 0101 1111
F 2 A 6 2 A F D 5 4 C 5 5 F
The DC element value is -171 (the leading bit of 01010100 is 0, so the value is negative: 84 - (2^8 - 1) = -171).
Here is my problem: next comes the luminance AC part, but I don't really understand what the standard says for the case when an AC coefficient is non-zero. Thanks!
The DC values, as you've seen, are defined as the number of "extra" bits which specify the positive or negative DC value. The AC coefficients are encoded differently because most of them are 0. The Huffman table defines each entry for AC coefficients with a "skip" value and an "extra bits" length. The skip value is how many zero AC coefficients to skip before storing the value, and the extra bits are treated the same way as DC values.
When decoding AC coefficients, you decode values from 1 to 63, but the way the encoding of the MCU ends can vary. You can have an actual value stored at index 63, you can hit an end-of-block code (0x00, meaning the rest of the coefficients are zero), or, if you're at index > 48, you could get a ZRL (zero run length = 16 zeros), or any combination which takes you past the end. A simplified decode loop:
void DecodeMCU(signed short *MCU)
{
    int index;
    unsigned short code, skip, extra;

    MCU[0] = decodeDC();        /* DC coefficient comes first */
    index = 1;                  /* AC coefficients occupy 1..63 (zigzag order) */
    while (index < 64)          /* assumes the caller zeroed MCU[], so skipped entries stay 0 */
    {
        code = decodeAC();
        if (code == 0)          /* end-of-block: remaining coefficients are zero */
            break;
        skip = code >> 4;       /* upper nibble: run of zeros to skip */
        extra = code & 0xf;     /* lower nibble: number of "extra" value bits */
        index += skip;          /* ZRL (0xF0) decodes as skip=15, extra=0 */
        MCU[index++] = calcACValue(extra);
    }
}
The color components can be interleaved (typical) or stored in separate scans. The elements are encoded in zigzag order in each MCU (low frequency elements first). The number of 8x8 blocks of coefficients which define an MCU varies depending on the color subsampling. For 1:1, there will be 1 Y followed by 1 Cr and 1 Cb. For typical digital camera images, the horizontal axis is subsampled, so you will get 2 Y blocks followed by 1 Cr and 1 Cb. The quality setting of the compressed image determines the quantization table used and how many zero AC coefficients are encoded. The lower the quality, the more of each MCU will be zeros. When you do the inverse DCT on your MCU, the number of zeros will determine how much detail is preserved in your 8x8, 16x8, 8x16 or 16x16 block of pixels. Here are the basic steps:
1) Entropy decode the 8x8 coefficient blocks, each color component is stored separately
2) De-zigzag and de-quantize the coefficients
3) Perform inverse DCT on the coefficients (might be 6 8x8 blocks for 4:2:0 subsampling)
4) Convert the colorspace from YCrCb to RGB or whatever you need
