Casting to 8-byte variable takes only 4 bytes - visual-c++

I have a structure that contains two fields:
struct ggg {
    unsigned long long int a;
    unsigned int b;
};
Field a should be 8 bytes long, while b is 4 bytes long.
Trying to cast an array of bytes to it:
unsigned char c[8 + 4] = { 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
0x03, 0x00, 0x00, 0x00, };
ggg* g = (ggg*)c;
char tt[1024];
sprintf(tt, "a=%d b=%d ", g->a, g->b);
I got this result in the tt string:
a=1 b=2
It looks like, after the cast, a takes only 4 bytes instead of 8. Why?

The problem is not the cast but your sprintf format specifiers. You are using %d, which means signed int, which is typically 4 bytes.
Try changing the format string to "a=%llu b=%u" and you are more likely to get the expected output.
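A minimal sketch of the corrected call (assuming a little-endian target where a sits at offset 0 and b at offset 8; memcpy replaces the pointer cast to sidestep alignment and strict-aliasing concerns):
#include <stdio.h>
#include <string.h>

struct ggg {
    unsigned long long int a;
    unsigned int b;
};

int main(void) {
    unsigned char c[8 + 4] = { 0x01, 0x00, 0x00, 0x00, 0x02, 0x00, 0x00, 0x00,
                               0x03, 0x00, 0x00, 0x00 };
    struct ggg g;
    memcpy(&g, c, sizeof c);  /* bytes 0-7 fill a, bytes 8-11 fill b */

    char tt[1024];
    sprintf(tt, "a=%llu b=%u", g.a, g.b);
    puts(tt);  /* prints: a=8589934593 b=3 (a == 0x0000000200000001) */
    return 0;
}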

Related

BPF filter fails

Can anyone suggest why this (classic) BPF program sometimes lets non-DHCP-response packets through:
# Load the Ethertype field
BPF_LD | BPF_H | BPF_ABS 12
# And reject the packet if it's not 0x0800 (IPv4)
BPF_JMP | BPF_JEQ | BPF_K 0x0800 0 8
# Load the IP protocol field
BPF_LD | BPF_B | BPF_ABS 23
# And reject the packet if it's not 17 (UDP)
BPF_JMP | BPF_JEQ | BPF_K 17 0 6
# Check that the packet has not been fragmented
BPF_LD | BPF_H | BPF_ABS 20
BPF_JMP | BPF_JSET | BPF_K 0x1fff 4 0
# Load the IP header length field
BPF_LDX | BPF_B | BPF_MSH 14
# And load that offset + 16 to get the UDP destination port
BPF_LD | BPF_IND | BPF_H 16
# And reject the packet if the destination port is not 68
BPF_JMP | BPF_JEQ | BPF_K 68 0 1
# Accept the frame
BPF_RET | BPF_K 1500
# Reject the frame
BPF_RET | BPF_K 0
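For reference, a sketch of the same program written out as struct sock_filter records, which is what the byte array in the Python script below encodes (each record is { u16 code; u8 jt; u8 jf; u32 k }):
#include <linux/filter.h>

struct sock_filter dhcp_filter[] = {
    { 0x28, 0, 0, 0x0000000c },  /* ldh [12]           Ethertype            */
    { 0x15, 0, 8, 0x00000800 },  /* if != 0x0800 goto drop                  */
    { 0x30, 0, 0, 0x00000017 },  /* ldb [23]           IP protocol          */
    { 0x15, 0, 6, 0x00000011 },  /* if != 17 goto drop                      */
    { 0x28, 0, 0, 0x00000014 },  /* ldh [20]           flags/fragment field */
    { 0x45, 4, 0, 0x00001fff },  /* if fragmented goto drop                 */
    { 0xb1, 0, 0, 0x0000000e },  /* ldxb 4*([14]&0xf)  IP header length     */
    { 0x48, 0, 0, 0x00000010 },  /* ldh [x + 16]       UDP destination port */
    { 0x15, 0, 1, 0x00000044 },  /* if != 68 goto drop                      */
    { 0x06, 0, 0, 0x000005dc },  /* accept, up to 1500 bytes                */
    { 0x06, 0, 0, 0x00000000 },  /* drop                                    */
};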
It doesn't let every frame through, but under heavy network load it fails quite often. I'm testing it with this Python 3 program:
import ctypes
import struct
import socket
ETH_P_ALL = 0x0003
SO_ATTACH_FILTER = 26
SO_ATTACH_BPF = 50
filters = [
0x28, 0x00, 0x00, 0x00, 0x0c, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x08, 0x00, 0x08, 0x00, 0x00,
0x30, 0x00, 0x00, 0x00, 0x17, 0x00, 0x00, 0x00, 0x15, 0x00, 0x00, 0x06, 0x11, 0x00, 0x00, 0x00,
0x28, 0x00, 0x00, 0x00, 0x14, 0x00, 0x00, 0x00, 0x45, 0x00, 0x04, 0x00, 0xff, 0x1f, 0x00, 0x00,
0xb1, 0x00, 0x00, 0x00, 0x0e, 0x00, 0x00, 0x00, 0x48, 0x00, 0x00, 0x00, 0x10, 0x00, 0x00, 0x00,
0x15, 0x00, 0x00, 0x01, 0x44, 0x00, 0x00, 0x00, 0x06, 0x00, 0x00, 0x00, 0xdc, 0x05, 0x00, 0x00,
0x06, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
]
filters = bytes(filters)
b = ctypes.create_string_buffer(filters)
mem_addr_of_filters = ctypes.addressof(b)
pf = struct.pack("HL", 11, mem_addr_of_filters)
pf = bytes(pf)
def main():
    sock = socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL))
    sock.bind(("eth0", ETH_P_ALL))
    sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, pf)
    # sock.send(req)
    sock.settimeout(1)
    try:
        data = sock.recv(1500)
        if data[35] == 0x43:
            return
        print('Packet got through: 0x{:02x} 0x{:02x}, 0x{:02x}, 0x{:02x}'.format(
            data[12], data[13], data[23], data[35]))  # last two indices assumed (protocol, dst-port byte)
    except:
        print('Timeout')
        return
    sock.close()

for ii in range(1000):
    main()
If I do this while SCPing a big core file to the host running that script, it fails (a non-matching frame gets through before the one-second timeout) in the large majority of cases, though not all. Under lighter load failures are much rarer, e.g. twiddling around on an SSH link while the socket is receiving; sometimes it gets through 1000 iterations without failure.
The host in question is Linux 4.9.0. The kernel has CONFIG_BPF=y.
Edit
For a simpler version of the same question, why does this BPF program let through any packets at all:
BPF_RET | BPF_K 0
Edit 2
The tests above were on an ARM64 machine. I've retested on amd64 / Linux 5.9.0. I still see failures, though not nearly as many.
I got a response on LKML explaining this.
The problem is that the filter is applied as each frame arrives on the interface, not as it's passed to userspace with recv(). So under heavy load, frames arrive between the creation of the socket with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.htons(ETH_P_ALL)) and the attachment of the filter with sock.setsockopt(socket.SOL_SOCKET, SO_ATTACH_FILTER, pf). These frames sit in the queue; once the filter is attached, subsequently arriving packets have the filter applied to them.
So once the filter is applied, it's necessary to "drain" any queued frames from the socket before you can rely on the filter.
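A minimal C sketch of that recipe, assuming a Linux AF_PACKET socket (in the Python script above, the equivalent is a short-timeout sock.recv() loop right after setsockopt()):
#include <sys/socket.h>

/* Call immediately after attaching the filter with SO_ATTACH_FILTER:
   consume every frame that was queued before the filter took effect. */
static void drain_socket(int sock)
{
    char buf[2048];
    /* MSG_DONTWAIT makes recv() fail with EAGAIN once the queue is empty. */
    while (recv(sock, buf, sizeof buf, MSG_DONTWAIT) >= 0)
        ;
    /* From here on, everything read from the socket has passed the filter. */
}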

How to declare a 16-bit pointer to string in GCC C compiler for an ARM processor

I tried to declare an array of short pointers to strings (16 bits instead of the default 32 bits) in GNU GCC for an ARM Cortex-M0 processor, to reduce flash consumption. I have about 200 strings in two languages, so reducing the pointer size from 32 bits to 16 bits could save 800 bytes of flash. It should be possible, because the flash is smaller than 64 kB, so the high halfword (16 bits) of every pointer into flash is constant and equal to 0x0800:
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned short ptrs[] = {&str1, &str2}; // this line generates an error
but I got an error on the third line:
"error: initializer element is not computable at load time"
Then I tried:
const unsigned short ptr1 = (&str1 & 0xFFFF);
and I got:
"error: invalid operands to binary & (have 'const unsigned char (*)[11]' and 'int')"
After many attempts I ended up in assembly:
.section .rodata.strings
.align 2
ptr0:
ptr3: .short (str3-str0)
ptr4: .short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Compilation passes, but now I have a problem referencing the pointers ptr4 and ptr0 from C code. Trying to pass "ptr4-ptr0" as an 8-bit argument to a C function:
ptr = getStringFromTable (ptr4-ptr0)
declared as:
const unsigned char* getStringFromTable (unsigned char stringIndex)
I got wrong code like this:
ldr r3, [pc, #28] ; (0x8000a78 <main+164>)
ldrb r1, [r3, #0]
ldr r3, [pc, #28] ; (0x8000a7c <main+168>)
ldrb r3, [r3, #0]
subs r1, r1, r3
uxtb r1, r1
bl 0x8000692 <getStringFromTable>
instead of something like this:
movs r0, #2
bl 0x8000692 <getStringFromTable>
I would be grateful for any suggestion.
.....after a few days.....
Following @TonyK's and @old_timer's advice I finally solved the problem in the following way.
In assembly I wrote:
.global str0, ptr0
.section .rodata.strings
.align 2
ptr0: .short (str3-str0)
.short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Then I declared in C:
extern unsigned short ptr0[];
extern const unsigned char str0[];
enum ptrs {ptr3, ptr4}; //automatically: ptr3=0, ptr4=1
const unsigned char* getStringFromTable (enum ptrs index)
{
    return &str0[ptr0[index]];
}
and now this call:
ptr = getStringFromTable (ptr4)
is compiled to the correct code:
08000988: 0x00000120 movs r0, #1
0800098a: 0xfff745ff bl 0x8000818 <getStringFromTable>
I just have to remember to keep enum ptrs in the same order each time I add a string to the assembly and a new item to enum ptrs.
Declare ptr0 and str0 as .global in your assembly language file. Then in C:
extern unsigned short ptr0[];
extern const char str0[];
const char* getStringFromTable (unsigned char index)
{
    return &str0[ptr0[index]];
}
This works as long as the total size of the str0 table is less than 64K.
A pointer is an address, and addresses on ARM cannot be 16 bits; that makes no sense. Other than Acorn-based ARMs (24-bit, if I remember right), addresses are a minimum of 32 bits for ARM, and going to AArch64 they get larger, but never smaller.
This
ptr3: .short (str3-str0)
does not produce an address (so it can't be a pointer); it produces an offset that is only usable when you add it to the base address str0.
You cannot generate 16-bit addresses (in a debugged/usable ARM compiler), but since everything appears to be static here (const/rodata), that makes it even easier to solve. It is solvable at runtime as well, but even simpler pre-computed, based on the information provided thus far.
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned char str3[] ="Third string";
Brute force takes maybe 30 lines of code to produce the header file below, much less if you try to compact it, although ad-hoc programs don't need to be pretty.
The output below is intentionally long, to demonstrate the solution and to allow visually checking the tool; the compiler doesn't care, so it's best to make it long and verbose for readability/validation purposes:
mystrings.h
const unsigned char strs[40]=
{
0x46, // 0 F
0x69, // 1 i
0x72, // 2 r
0x73, // 3 s
0x74, // 4 t
0x20, // 5
0x73, // 6 s
0x74, // 7 t
0x72, // 8 r
0x69, // 9 i
0x6E, // 10 n
0x67, // 11 g
0x00, // 12
0x53, // 13 S
0x65, // 14 e
0x63, // 15 c
0x6F, // 16 o
0x6E, // 17 n
0x64, // 18 d
0x20, // 19
0x73, // 20 s
0x74, // 21 t
0x72, // 22 r
0x69, // 23 i
0x6E, // 24 n
0x67, // 25 g
0x00, // 26
0x54, // 27 T
0x68, // 28 h
0x69, // 29 i
0x72, // 30 r
0x64, // 31 d
0x20, // 32
0x73, // 33 s
0x74, // 34 t
0x72, // 35 r
0x69, // 36 i
0x6E, // 37 n
0x67, // 38 g
0x00, // 39
};
const unsigned short ptrs[3]=
{
0x0000, // 0 0
0x000D, // 1 13
0x001B, // 2 27
};
The compiler then handles all of the address generation when you use it:
&strs[ptrs[n]]
Depending on how you write your tool, you can even have things like
#define FIRST_STRING 0
#define SECOND_STRING 1
and so on, so that your code can find the string with
strs[ptrs[SECOND_STRING]]
making the program that much more readable, all auto-generated from an ad-hoc tool that does this offset work for you.
The main() part of the tool could look like:
add_string(FIRST_STRING,"First string");
add_string(SECOND_STRING,"Second string");
add_string(THIRD_STRING,"Third string");
with that function and some more code to dump the result.
and then you simply include the generated output and use the
strs[ptrs[THIRD_STRING]]
type syntax in the real application.
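For concreteness, a minimal sketch of such a generator tool (hypothetical add_string/dump helpers, as described above; it writes the header contents to stdout, to be redirected into mystrings.h):
#include <stdio.h>
#include <string.h>

static unsigned char strs[65536];   /* string blob; offsets must stay below 64K */
static unsigned short ptrs[256];    /* 16-bit offsets into strs                 */
static unsigned int nstrs, nbytes;

/* Append one string (including its NUL) and record its offset. */
static void add_string(unsigned int index, const char *s)
{
    ptrs[index] = (unsigned short)nbytes;
    memcpy(strs + nbytes, s, strlen(s) + 1);
    nbytes += strlen(s) + 1;
    if (index + 1 > nstrs) nstrs = index + 1;
}

static void dump(void)
{
    printf("const unsigned char strs[%u]=\n{\n", nbytes);
    for (unsigned int i = 0; i < nbytes; i++)
        printf("0x%02X, // %u %c\n", strs[i], i,
               strs[i] >= ' ' ? strs[i] : ' ');
    printf("};\nconst unsigned short ptrs[%u]=\n{\n", nstrs);
    for (unsigned int i = 0; i < nstrs; i++)
        printf("0x%04X, // %u %u\n", ptrs[i], i, ptrs[i]);
    printf("};\n");
}

int main(void)
{
    add_string(0, "First string");   /* FIRST_STRING  */
    add_string(1, "Second string");  /* SECOND_STRING */
    add_string(2, "Third string");   /* THIRD_STRING  */
    dump();
    return 0;
}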
In order to continue down the path you started, if that is what you prefer (it looks like more work, but is still pretty quick to code):
ptr0:
ptr3: .short (str3-str0)
ptr4: .short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Then you need to export str0 and ptr3, ptr4 (as needed, depending on your assembler's assembly language), then access them as a pointer formed from str0+ptr3:
extern unsigned int str0;
extern unsigned short ptr3;
...
... *((unsigned char *)(str0+ptr3))
fixing whatever syntax mistakes I intentionally or unintentionally added to that pseudo code.
That would work as well, and you would have the one base address plus the hundreds of 16-bit offsets to that address.
You could even do some flavor of
const unsigned short ptrs[]={ptr0,ptr1,ptr2,ptr3};
...
(unsigned char *)(str0+ptrs[n])
using some flavor of C syntax to create that array, but it's probably not worth the extra effort...
The solution a few of us have mentioned thus far (one example demonstrated above: 16-bit offsets, which are NOT addresses, which means NOT pointers) is much easier to code, maintain, and use, and maybe easier to read, depending on your implementation. However implemented, it requires one full-sized base address plus the offsets. It might be possible to code this in C without an ad-hoc tool, but the ad-hoc tool literally takes only a few minutes to write.
I write programs to write programs, or programs to compress/manipulate data, almost daily; why not? Compression is a good example of this: want to embed a black-and-white image into your resource-limited MCU flash? Don't put all the pixels in the binary; start with a run-length encoding and go from there. That means a third-party tool, written by you or not, that converts the real data into a structure that fits. It's the same thing here: a tool that prepares/compresses the data for the application. This problem is really just another compression algorithm, since you are trying to reduce the amount of data without losing any.
Also note that, depending on what these strings are, if duplicates or fragments are possible, the tool could be even smarter (a sketch follows the example below):
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned char str3[] ="Third string";
const unsigned char str4[] ="string";
const unsigned char str5[] ="Third string";
creating
const unsigned char strs[40]=
{
0x46, // 0 F
0x69, // 1 i
0x72, // 2 r
0x73, // 3 s
0x74, // 4 t
0x20, // 5
0x73, // 6 s
0x74, // 7 t
0x72, // 8 r
0x69, // 9 i
0x6E, // 10 n
0x67, // 11 g
0x00, // 12
0x53, // 13 S
0x65, // 14 e
0x63, // 15 c
0x6F, // 16 o
0x6E, // 17 n
0x64, // 18 d
0x20, // 19
0x73, // 20 s
0x74, // 21 t
0x72, // 22 r
0x69, // 23 i
0x6E, // 24 n
0x67, // 25 g
0x00, // 26
0x54, // 27 T
0x68, // 28 h
0x69, // 29 i
0x72, // 30 r
0x64, // 31 d
0x20, // 32
0x73, // 33 s
0x74, // 34 t
0x72, // 35 r
0x69, // 36 i
0x6E, // 37 n
0x67, // 38 g
0x00, // 39
};
const unsigned short ptrs[5]=
{
0x0000, // 0 0
0x000D, // 1 13
0x001B, // 2 27
0x0006, // 3 6
0x001B, // 4 27
};
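A sketch of that smarter variant, building on the generator sketch above (before appending, it searches the blob already emitted for an existing copy of the string, including its terminator, which catches both duplicates and shared tails like "string" inside "First string"):
static void add_string_dedup(unsigned int index, const char *s)
{
    size_t len = strlen(s) + 1;                 /* include the NUL */
    for (unsigned int off = 0; off + len <= nbytes; off++) {
        if (memcmp(strs + off, s, len) == 0) {  /* already there: reuse it */
            ptrs[index] = (unsigned short)off;
            if (index + 1 > nstrs) nstrs = index + 1;
            return;
        }
    }
    add_string(index, s);                       /* not found: append as before */
}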

CRC Bluetooth Low Energy 4.2

In the core Bluetooth 4.2 documentation here, it talks about a CRC check for data integrity (p. 2456), with the example below:
4e 01 02 03 04 05 06 07 08 09
Producing CRC: 6d d2
I have tried a number of different methods, but I can't seem to reproduce the example. Can anyone provide some sample code to produce the CRC above?
You left out a key part of the example in the document, which is that the UAP used in the example is 0x47. The CRC needs to be initialized with the UAP. (Oddly, with the bits reversed and placed in the high byte, relative to the data bits coming in: 0x47 bit-reversed is 0xe2, hence the 0xe200 initializer below.)
The code below computes the example. The result is d26d. The CRC is transmitted least significant bit first, so it is sent as 6d d2. On the receive side, the same CRC is computed over the whole message including the CRC, and the result is zero, which is how the receive side is supposed to check what was sent.
#include <stdio.h>

static unsigned crc_blue(unsigned char *payload, size_t len) {
    unsigned crc = 0xe200; // UAP == 0x47
    while (len--) {
        crc ^= *payload++;
        for (int k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ 0x8408 : crc >> 1;
    }
    return crc;
}

int main(void) {
    unsigned char payload[] = {
        0x4e, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09};
    printf("%04x\n", crc_blue(payload, sizeof(payload)));
    unsigned char recvd[] = {
        0x4e, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x6d, 0xd2};
    printf("%04x\n", crc_blue(recvd, sizeof(recvd)));
    return 0;
}
Your code would need to initialize the CRC with the appropriate UAP for the device in question.
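A small sketch of that initialization (crc_init_from_uap is a hypothetical helper name; it just bit-reverses the UAP byte and places it in the high byte, as described above):
static unsigned crc_init_from_uap(unsigned char uap) {
    unsigned char r = 0;
    for (int k = 0; k < 8; k++)
        if (uap & (1 << k))
            r |= 0x80 >> k;      /* mirror bit k to bit 7-k */
    return (unsigned)r << 8;     /* e.g. 0x47 -> 0xe200 */
}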

Reading code from RFID card

I have a problem with reading the code from an RFID card.
Does any conversion algorithm exist?
Examples of codes:
04006d0ba0 -> 00008596950352
0d001c59b3 -> 00047253268956
0d001c5134 -> 00047253268674
0d001c9317 -> 00047253265550
0d001c93ed -> 00047253265531
0d001c1b12 -> 00047253261700
0d001c1b1d -> 00047253261707
e800ef0aac -> 00485339628883
Same RFID card, different outputs from different readers...
I know that a topic like this already exists, but I think it is not the same problem...
The conversion looks quite simple:
Let's assume that you want to convert "04006d0ba0" to "00008596950352".
Take each nibble from the hexadecimal number "04006d0ba0" (i.e. "0", then "4", then "0", then "0", then "6", ...)
Reverse the bits of each nibble (least significant bit becomes most significant bit, second bit becomes second last bit), e.g. "0" (= 0000) remains "0" (= 0000), "4" (= 0100) becomes "2" (= 0010), "6" (= 0110) remains "6" (= 0110), etc.
Convert into decimal number format. For example, "04006d0ba0" nibble-reverses to "02006b0d50", and 0x02006b0d50 is 8596950352 in decimal, which matches the expected output "00008596950352".
In Java, this could look something like this:
import java.math.BigInteger;

private static final byte[] REVERSE_NIBBLE = {
    0x00, 0x08, 0x04, 0x0C, 0x02, 0x0A, 0x06, 0x0E,
    0x01, 0x09, 0x05, 0x0D, 0x03, 0x0B, 0x07, 0x0F
};

private long convert(byte[] input) {
    byte[] output = new byte[input.length];
    for (int i = 0; i < input.length; ++i) {
        // Reverse the bits within each nibble of the input byte
        output[i] = (byte)((REVERSE_NIBBLE[(input[i] >>> 4) & 0x0F] << 4) |
                            REVERSE_NIBBLE[ input[i] & 0x0F]);
    }
    return new BigInteger(1, output).longValue();
}

Setting values of bytearray

I have a BYTE data[3]. The first element, data[0], has to be a BYTE with very specific values, which are as follows:
typedef enum
{
    SET_ACCURACY             = 0x01,
    SET_RETRACT_LIMIT        = 0x02,
    SET_EXTEND_LIMT          = 0x03,
    SET_MOVEMENT_THRESHOLD   = 0x04,
    SET_STALL_TIME           = 0x05,
    SET_PWM_THRESHOLD        = 0x06,
    SET_DERIVATIVE_THRESHOLD = 0x07,
    SET_DERIVATIVE_MAXIMUM   = 0x08,
    SET_DERIVATIVE_MINIMUM   = 0x09,
    SET_PWM_MAXIMUM          = 0x0A,
    SET_PWM_MINIMUM          = 0x0B,
    SET_PROPORTIONAL_GAIN    = 0x0C,
    SET_DERIVATIVE_GAIN      = 0x0D,
    SET_AVERAGE_RC           = 0x0E,
    SET_AVERAGE_ADC          = 0x0F,
    GET_FEEDBACK             = 0x10,
    SET_POSITION             = 0x20,
    SET_SPEED                = 0x21,
    DISABLE_MANUAL           = 0x30,
    RESET                    = 0xFF,
} TYPE_CMD;
As is, if I set data[0]=SET_ACCURACY, it doesn't set it to 0x01, it sets it to 1, which is not what I want. data[0] must take the value 0x01 when I set it equal to SET_ACCURACY. How do I make it so that it does this for not only SET_ACCURACY, but all the other values as well?
EDIT: Actually this works... I had a different error in my code that I attributed to this. Sorry!
Thanks!
"data[0]=SET_ACCURACY doesn't set it to 0x01, it sets it to 1"
It assigns the value of SET_ACCURACY to data[0], which means that the bits 00000001 are stored in memory at the address &data[0]. 0x01 is the hexadecimal representation of that byte; 1 is its decimal representation. They are the same value.
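A minimal C sketch of that point (BYTE is assumed here to be a typedef for unsigned char; the stored byte is identical either way, only the printf rendering differs):
#include <stdio.h>

typedef unsigned char BYTE;

int main(void) {
    BYTE data[3];
    data[0] = 0x01;              /* exactly the same as data[0] = 1 */

    printf("%d\n", data[0]);     /* prints 1    (decimal rendering)     */
    printf("%#04x\n", data[0]);  /* prints 0x01 (hexadecimal rendering) */
    return 0;
}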
