Scatterlist in the Linux crypto API

I'm starting to learn how to work with the Crypto API in Linux. It expects scatterlist structures to be used to pass plaintext to a block cipher function. A scatterlist refers to the plaintext by storing the location of the buffer on a memory page. A simplified definition of struct scatterlist is:
struct scatterlist {
    unsigned long page_link;   // number of the virtual page in kernel space where the data buffer is stored
    unsigned int  offset;      // offset from the page start address to the data buffer start address
    unsigned int  length;      // data buffer length
    dma_addr_t    dma_address; // I don't know the purpose of this variable at the moment
};
To build a scatterlist entry that refers to a plaintext buffer we use the function void sg_init_one(struct scatterlist *, const void *, unsigned int);. To get the buffer start address back from a scatterlist entry we use the function void *sg_virt(struct scatterlist *sg).
For example:
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
u8 plaintext_global[16]={0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f};
static int __init simple_init(void)
{
    u8 *ptr_to_local, *ptr_to_global;
    u8 plaintext_local[16] = {0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f};
    struct scatterlist sg[2];

    sg_init_one(&sg[0], plaintext_local, 16);
    sg_init_one(&sg[1], plaintext_global, 16);
    printk("sg[0].page_link=%lu\n", sg[0].page_link);
    printk("sg[0].offset=%u\n", sg[0].offset);
    printk("sg[0].length=%u\n", sg[0].length);
    printk("sg[1].page_link=%lu\n", sg[1].page_link);
    printk("sg[1].offset=%u\n", sg[1].offset);
    printk("sg[1].length=%u\n", sg[1].length);
    ptr_to_local = sg_virt(&sg[0]);
    ptr_to_global = sg_virt(&sg[1]);
    printk("plaintext_local start address:%p\n", plaintext_local);
    printk("sg_virt(&sg[0]):%p\n", ptr_to_local);
    printk("plaintext_global start address:%p\n", plaintext_global);
    printk("sg_virt(&sg[1]):%p\n", ptr_to_global);
    return 0;
}
And the output in dmesg after insmod-ing this module:
sg[0].page_link=31209922
sg[0].offset=3168
sg[0].length=16
sg[1].page_link=16853378
sg[1].offset=0
sg[1].length=16
plaintext_local start address:ffff8800770e7c60
sg_virt(&sg[0]):ffff8800770e7c60
plaintext_global start address:ffffffffc04a6000
sg_virt(&sg[1]):ffff8800404a6000
First question: why does sg_virt return the same value as the buffer's address for the local plaintext buffer, while for the global plaintext buffer the value returned by sg_virt has a different prefix than the global buffer's address?
Next, I use the crypto API:
#include <linux/init.h>
#include <linux/module.h>
#include <linux/crypto.h>
#include <linux/scatterlist.h>
u8 aes_in[]={0x00, 0x11, 0x22, 0x33, 0x44, 0x55, 0x66, 0x77, 0x88, 0x99, 0xaa, 0xbb, 0xcc, 0xdd, 0xee, 0xff};
u8 aes_key[]={0x00, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x0a, 0x0b, 0x0c, 0x0d, 0x0e, 0x0f};
u8 aes_out[]={0x69, 0xc4, 0xe0, 0xd8, 0x6a, 0x7b, 0x04, 0x30, 0xd8, 0xcd, 0xb7, 0x80, 0x70, 0xb4, 0xc5, 0x5a};
static int __init simple_init(void)
{
    struct crypto_blkcipher *blk;
    struct blkcipher_desc desc;
    struct scatterlist sg[3];
    u8 encrypted[100];
    u8 decrypted[100];

    blk = crypto_alloc_blkcipher("ecb(aes)", 0, 0);
    crypto_blkcipher_setkey(blk, aes_key, 16);
    sg_init_one(&sg[0], aes_in, 16);
    sg_init_one(&sg[1], encrypted, 16);
    sg_init_one(&sg[2], decrypted, 16);
    desc.tfm = blk;
    desc.flags = 0;
    sg_copy_from_buffer(&sg[0], 1, aes_in, 16);
    crypto_blkcipher_encrypt(&desc, &sg[1], &sg[0], 16);
    crypto_blkcipher_decrypt(&desc, &sg[2], &sg[1], 16);
    crypto_free_blkcipher(blk);
    return 0;
}
Encrypted data: 69 c4 e0 d8 6a 7b 04 30 d8 cd b7 80 70 b4 c5 5a
Decrypted data: 00 11 22 33 44 55 66 77 88 99 aa bb cc dd ee ff
Next question: what exactly does the sg_copy_from_buffer function do? Without this call the encrypted data is not right:
Encrypted data without sg_copy_from_buffer : 03 07 23 fc 20 11 42 c6 60 b3 36 07 eb c8 c9 62
Encrypted data without sg_copy_from_buffer : 00 00 00 00 00 00 00 00 58 51 02 a0 f7 7f 00 00

For your first question: the scatterlist saves the buffer you give it as a struct page internally (the page_link is actually a pointer to a struct page), which you can think of as a physical address (not exactly, but a struct page does represent a unique physical page).
That means the scatterlist first converts the buffer's virtual address to the corresponding physical address inside sg_init_one, which eventually uses the macro __pa to do that. When you call sg_virt, it converts the physical address stored in the scatterlist back to a virtual address through another macro, __va.
__pa is meant to convert a virtual address within the linear mapping address range, or within the kernel image address range, to its corresponding physical address. __va converts a physical address to its corresponding virtual address within the linear mapping address range. They will likely give a wrong result when converting an address outside of the aforementioned ranges.
However, the global buffer you give to the scatterlist lies in the kernel module address range, which comes after the kernel image address range, and the local buffer lies in the kernel stack address range, which comes before the kernel image address range. Neither of them is within the kernel linear mapping address range, so after a round trip through __pa and __va they can come out wrong.
According to your test, the local buffer address comes back right but the global buffer address comes back wrong. This is because the local buffer address is before the kernel image address range and the global buffer address is after it, so they are handled differently in __pa but in the same way in __va. You can see it in the following snippet from the kernel sources, /arch/x86/include/asm/page.h and /arch/x86/include/asm/page_64.h.
// This function is the implementation of __pa on x86-64
static inline unsigned long __phys_addr_nodebug(unsigned long x)
{
    // __START_KERNEL_map is the start address of the kernel image address range.
    unsigned long y = x - __START_KERNEL_map;

    // You can see that this function behaves differently depending on x and __START_KERNEL_map.
    /* use the carry flag to determine if x was < __START_KERNEL_map */
    // phys_base is the start of the system's physical address;
    // PAGE_OFFSET is the start of the linear mapping address range.
    x = y + ((x > y) ? phys_base : (__START_KERNEL_map - PAGE_OFFSET));

    return x;
}

// This is the implementation of __va on x86-64
#define __va(x) ((void *)((unsigned long)(x)+PAGE_OFFSET))
For a virtual address before the kernel image address range, __va just adds an offset and __pa just subtracts the same offset, so the local buffer address comes back right. However, for a virtual address after the kernel image address range, __va does the same work but __pa behaves differently, so the global buffer address comes back wrong.
For the x86-64 Linux kernel memory layout, please refer to Documentation/x86/x86_64/mm.rst in the kernel source.
Note that only buffers allocated by kmalloc stay in the kernel linear mapping address range, so you should always use kmalloc to allocate memory for Linux kernel crypto operations.
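As a quick illustration of that advice (a minimal sketch of my own, not code from the question; names and sizes are arbitrary), allocating the buffer with kmalloc keeps it in the linear mapping, so sg_virt round-trips correctly:
#include <linux/slab.h>
#include <linux/scatterlist.h>

static int kmalloc_sg_example(void)
{
    struct scatterlist sg;
    u8 *buf = kmalloc(16, GFP_KERNEL);   /* linear-mapping (direct-mapped) address */

    if (!buf)
        return -ENOMEM;

    sg_init_one(&sg, buf, 16);           /* page_link/offset now refer to a direct-mapped page */
    /* ... hand &sg to the crypto API here ... */

    kfree(buf);
    return 0;
}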
For your second question: a scatterlist can consist of a single struct scatterlist, but it is really designed for managing a list of memory chunks, where every chunk is represented by one struct scatterlist. With sg_copy_from_buffer you can copy data stored in one contiguous buffer into the list of memory chunks managed by several struct scatterlist entries. In short, sg_copy_from_buffer has nothing to do with encryption itself.
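For illustration, a small sketch (my own assumption, not code from the question) of copying one contiguous buffer into a two-entry scatterlist; error handling is omitted:
u8 src[32];                              /* one contiguous source buffer */
u8 *chunk0 = kmalloc(16, GFP_KERNEL);    /* first destination chunk */
u8 *chunk1 = kmalloc(16, GFP_KERNEL);    /* second destination chunk */
struct scatterlist sg[2];

sg_init_table(sg, 2);
sg_set_buf(&sg[0], chunk0, 16);
sg_set_buf(&sg[1], chunk1, 16);

/* copies src[0..15] into chunk0 and src[16..31] into chunk1,
   walking the list entry by entry; no encryption is involved */
sg_copy_from_buffer(sg, 2, src, 32);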
For more details, please refer to the following kernel source code files.
/include/linux/scatterlist.h
/lib/scatterlist.c
/arch/x86/include/asm/page.h
/arch/x86/include/asm/page_64.h

Related

How to convert hex_dump of packets, which were captured in kernel module, to pcap file?

I am writing a kernel module on Linux (Xubuntu x64). The version of the kernel is 5.4.0-52-generic. My kernel module is capturing traffic from an interface and printing it in hex:
Nov 10 14:04:34 ubuntu kernel: [404009.566887] Packet hex dump:
Nov 10 14:04:34 ubuntu kernel: [404009.566889] 000000 00 00 00 00 00 00 00 00 00 00 00 00 08 00 45 00
Nov 10 14:04:34 ubuntu kernel: [404009.566899] 000010 00 54 49 4C 40 00 40 01 A7 EF C0 A8 64 0E C0 A8
Nov 10 14:04:34 ubuntu kernel: [404009.566907] 000020 64 0E 08 00 9E FE 00 03 00 08 72 0E AB 5F 00 00
Nov 10 14:04:34 ubuntu kernel: [404009.566914] 000030 00 00 7B B5 01 00 00 00 00 00 10 11 12 13 14 15
Nov 10 14:04:34 ubuntu kernel: [404009.566922] 000040 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 22 23 24 25
Nov 10 14:04:34 ubuntu kernel: [404009.566929] 000050 26 27 28 29
I got this output using the following command as root: tail -f /var/log/kern.log
The whole problem is that I need to save this output as a pcap file. I know that there is text2pcap, but its library (libpcap) is user-mode only, so I can't use it in a kernel module (or maybe I can? Correct me if I'm wrong).
Is it possible to use text2pcap in a kernel module? Otherwise, how can I save the output as a pcap file while staying in the kernel module?
Source code:
#include <linux/module.h> // included for all kernel modules
#include <linux/kernel.h> // included for KERN_INFO
#include <linux/init.h> // included for __init and __exit macros
#include <linux/skbuff.h> // included for struct sk_buff
#include <linux/if_packet.h> // include for packet info
#include <linux/ip.h> // include for ip_hdr
#include <linux/netdevice.h> // include for dev_add/remove_pack
#include <linux/if_ether.h> // include for ETH_P_ALL
#include <linux/unistd.h>
MODULE_LICENSE("GPL");
MODULE_AUTHOR("Tester");
MODULE_DESCRIPTION("Sample linux kernel module program to capture all network packets");
struct packet_type ji_proto;
void pkt_hex_dump(struct sk_buff *skb)
{
size_t len;
int rowsize = 16;
int i, l, linelen, remaining;
int li = 0;
uint8_t *data, ch;
printk("Packet hex dump:\n");
data = (uint8_t *) skb_mac_header(skb);
if (skb_is_nonlinear(skb)) {
len = skb->data_len;
} else {
len = skb->len;
}
remaining = len;
for (i = 0; i < len; i += rowsize) {
printk("%06d\t", li);
linelen = min(remaining, rowsize);
remaining -= rowsize;
for (l = 0; l < linelen; l++) {
ch = data[l];
printk(KERN_CONT "%02X ", (uint32_t) ch);
}
data += linelen;
li += 10;
printk(KERN_CONT "\n");
}
}
int ji_packet_rcv (struct sk_buff *skb, struct net_device *dev,struct packet_type *pt, struct net_device *orig_dev)
{
printk(KERN_INFO "New packet captured.\n");
/* linux/if_packet.h : Packet types */
// #define PACKET_HOST 0 /* To us */
// #define PACKET_BROADCAST 1 /* To all */
// #define PACKET_MULTICAST 2 /* To group */
// #define PACKET_OTHERHOST 3 /* To someone else */
// #define PACKET_OUTGOING 4 /* Outgoing of any type */
// #define PACKET_LOOPBACK 5 /* MC/BRD frame looped back */
// #define PACKET_USER 6 /* To user space */
// #define PACKET_KERNEL 7 /* To kernel space */
/* Unused, PACKET_FASTROUTE and PACKET_LOOPBACK are invisible to user space */
// #define PACKET_FASTROUTE 6 /* Fastrouted frame */
switch (skb->pkt_type)
{
case PACKET_HOST:
printk(KERN_INFO "PACKET to us − ");
break;
case PACKET_BROADCAST:
printk(KERN_INFO "PACKET to all − ");
break;
case PACKET_MULTICAST:
printk(KERN_INFO "PACKET to group − ");
break;
case PACKET_OTHERHOST:
printk(KERN_INFO "PACKET to someone else − ");
break;
case PACKET_OUTGOING:
printk(KERN_INFO "PACKET outgoing − ");
break;
case PACKET_LOOPBACK:
printk(KERN_INFO "PACKET LOOPBACK − ");
break;
case PACKET_FASTROUTE:
printk(KERN_INFO "PACKET FASTROUTE − ");
break;
}
//printk(KERN_CONT "Dev: %s ; 0x%.4X ; 0x%.4X \n", skb->dev->name, ntohs(skb->protocol), ip_hdr(skb)->protocol);
struct ethhdr *ether = eth_hdr(skb);
//printk("Source: %x:%x:%x:%x:%x:%x\n", ether->h_source[0], ether->h_source[1], ether->h_source[2], ether->h_source[3], ether->h_source[4], ether->h_source[5]);
//printk("Destination: %x:%x:%x:%x:%x:%x\n", ether->h_dest[0], ether->h_dest[1], ether->h_dest[2], ether->h_dest[3], ether->h_dest[4], ether->h_dest[5]);
//printk("Protocol: %d\n", ether->h_proto);
pkt_hex_dump(skb);
kfree_skb (skb);
return 0;
}
static int __init ji_init(void)
{
/* See the <linux/if_ether.h>
When protocol is set to htons(ETH_P_ALL), then all protocols are received.
All incoming packets of that protocol type will be passed to the packet
socket before they are passed to the protocols implemented in the kernel. */
/* Few examples */
//ETH_P_LOOP 0x0060 /* Ethernet Loopback packet */
//ETH_P_IP 0x0800 /* Internet Protocol packet */
//ETH_P_ARP 0x0806 /* Address Resolution packet */
//ETH_P_LOOPBACK 0x9000 /* Ethernet loopback packet, per IEEE 802.3 */
//ETH_P_ALL 0x0003 /* Every packet (be careful!!!) */
//ETH_P_802_2 0x0004 /* 802.2 frames */
//ETH_P_SNAP 0x0005 /* Internal only */
ji_proto.type = htons(ETH_P_IP);
/* NULL is a wildcard */
//ji_proto.dev = NULL;
ji_proto.dev = dev_get_by_name (&init_net, "enp0s3");
ji_proto.func = ji_packet_rcv;
/* Packet sockets are used to receive or send raw packets at the device
driver (OSI Layer 2) level. They allow the user to implement
protocol modules in user space on top of the physical layer. */
/* Add a protocol handler to the networking stack.
The passed packet_type is linked into kernel lists and may not be freed until
it has been removed from the kernel lists. */
dev_add_pack (&ji_proto);
printk(KERN_INFO "Module insertion completed successfully!\n");
return 0; // Non-zero return means that the module couldn't be loaded.
}
static void __exit ji_cleanup(void)
{
dev_remove_pack(&ji_proto);
printk(KERN_INFO "Cleaning up module....\n");
}
module_init(ji_init);
module_exit(ji_cleanup);
The problem was solved by using call_usermodehelper() to call the user-mode text2pcap with the same arguments as if text2pcap were invoked from a terminal.
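Roughly, that call can look like the following sketch (the text2pcap path and the file names here are placeholders, not what was actually used):
#include <linux/kmod.h>

static int run_text2pcap(void)
{
    /* placeholder paths and arguments, adjust to your setup */
    char *argv[] = { "/usr/bin/text2pcap", "/tmp/dump.txt", "/tmp/dump.pcap", NULL };
    char *envp[] = { "HOME=/", "PATH=/sbin:/bin:/usr/sbin:/usr/bin", NULL };

    /* UMH_WAIT_PROC waits for the user-mode helper to finish */
    return call_usermodehelper(argv[0], argv, envp, UMH_WAIT_PROC);
}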
Is it possible to use text2pcap in kernel module?
Not without putting it, and the code it uses to write a pcap file (which isn't from libpcap; it's from a small library that's part of Wireshark, also used by dumpcap to write pcap and pcapng files), into the kernel.
How can I save an output as pcap file while being in kernel module?
You could write your own code to open a file and write to it in the kernel module; "Writing to a file from the Kernel" talks about that.
It also says
A "preferred" technique would be to pass the parameters in via IOCTLs and implement a read() function in your module. Then reading the dump from the module and writing into the file from userspace.
so you might want to consider that; the userspace code could just use libpcap to write the file.
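A bare-bones sketch of that preferred approach (my own illustration; the device name /dev/pktdump and the buffer size are made up): expose the captured bytes through a misc character device and let a user-space program read them and write the pcap file with libpcap.
#include <linux/module.h>
#include <linux/miscdevice.h>
#include <linux/fs.h>

static u8 capture_buf[4096];   /* filled by the packet handler */
static size_t capture_len;

static ssize_t dump_read(struct file *f, char __user *buf, size_t count, loff_t *ppos)
{
    return simple_read_from_buffer(buf, count, ppos, capture_buf, capture_len);
}

static const struct file_operations dump_fops = {
    .owner = THIS_MODULE,
    .read  = dump_read,
};

static struct miscdevice dump_dev = {
    .minor = MISC_DYNAMIC_MINOR,
    .name  = "pktdump",        /* appears as /dev/pktdump */
    .fops  = &dump_fops,
};

/* call misc_register(&dump_dev) in module init and misc_deregister() in exit;
   the packet handler appends raw bytes into capture_buf, and userspace then
   reads /dev/pktdump and writes the pcap file with libpcap */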

How to declare 16-bits pointer to string in GCC C compiler for arm processor

I tried to declare an array of short pointers to strings (16 bits instead of the default 32 bits) in the GNU GCC C compiler for an ARM Cortex-M0 processor to reduce flash consumption. I have about 200 strings in two languages, so reducing the size of a pointer from 32 bits to 16 bits could save 800 bytes of flash. It should be possible because the flash size is less than 64 kB, so the high half-word (16 bits) of pointers into flash is constant and equal to 0x0800:
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned short ptrs[] = {&str1, &str2}; //this line generate error
but I got an error on the 3rd line:
"error: initializer element is not computable at load time"
Then I tried:
const unsigned short ptr1 = (&str1 & 0xFFFF);
and I got:
"error: invalid operands to binary & (have 'const unsigned char (*)[11]' and 'int')"
After many attempts I ended up in assembly:
.section .rodata.strings
.align 2
ptr0:
ptr3: .short (str3-str0)
ptr4: .short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Compilation passes, but now I have a problem trying to reference the pointers ptr4 and ptr0 from C code. Trying to pass "ptr4-ptr0" as an 8-bit argument to a C function:
ptr = getStringFromTable (ptr4-ptr0)
declared as:
const unsigned char* getStringFromTable (unsigned char stringIndex)
I got wrong code like this:
ldr r3, [pc, #28] ; (0x8000a78 <main+164>)
ldrb r1, [r3, #0]
ldr r3, [pc, #28] ; (0x8000a7c <main+168>)
ldrb r3, [r3, #0]
subs r1, r1, r3
uxtb r1, r1
bl 0x8000692 <getStringFromTable>
instead of something like this:
movs r0, #2
bl 0x8000692 <getStringFromTable>
I would be grateful for any suggestion.
.....after a few days.....
Following @TonyK's and @old_timer's advice, I finally solved the problem in the following way.
In assembly I wrote:
.global str0, ptr0
.section .rodata.strings
.align 2
ptr0: .short (str3-str0)
.short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Then I declared in C:
extern unsigned short ptr0[];
extern const unsigned char str0[] ;
enum ptrs {ptr3, ptr4}; //automatically: ptr3=0, ptr4=1
const unsigned char* getStringFromTable (enum ptrs index)
{
return &str0[ptr0[index]] ;
}
and now this call:
ptr = getStringFromTable (ptr4)
is compiled to the correct code:
08000988: 0x00000120 movs r0, #1
0800098a: 0xfff745ff bl 0x8000818 <getStringFromTable>
I just have to remember to keep the order of enum ptrs in sync each time I add a string to the assembly and a new item to enum ptrs.
Declare ptr0 and str0 as .global in your assembly language file. Then in C:
extern unsigned short ptr0[] ;
extern const char str0[] ;
const char* getStringFromTable (unsigned char index)
{
return &str0[ptr0[index]] ;
}
This works as long as the total size of the str0 table is less than 64K.
A pointer is an address, and addresses on ARM cannot be 16 bits; that makes no sense. Other than Acorn-based ARMs (24-bit, if I remember right), addresses are a minimum of 32 bits for ARM, and going into AArch64 they get larger, never smaller.
This
ptr3: .short (str3-str0)
does not produce an address (so it can't be a pointer); it produces an offset that is only usable when you add it to the base address str0.
You cannot generate 16-bit addresses (in a debugged/usable ARM compiler), but since everything appears to be static here (const/rodata), that makes it even easier to solve. It is solvable at runtime as well, but even simpler pre-computed, based on the information provided so far.
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned char str3[] ="Third string";
Brute force takes maybe 30 lines of code to produce the header file below, much less if you try to compact it, although ad-hoc programs don't need to be pretty.
This output is intentionally long to demonstrate the solution (and to be able to visually check the tool), but the compiler doesn't care (so it's best to make it long and verbose for readability/validation purposes):
mystrings.h
const unsigned char strs[39]=
{
0x46, // 0 F
0x69, // 1 i
0x72, // 2 r
0x73, // 3 s
0x74, // 4 t
0x20, // 5
0x73, // 6 s
0x74, // 7 t
0x72, // 8 r
0x69, // 9 i
0x6E, // 10 n
0x67, // 11 g
0x00, // 12
0x53, // 13 S
0x65, // 14 e
0x63, // 15 c
0x6F, // 16 o
0x6E, // 17 n
0x64, // 18 d
0x20, // 19
0x73, // 20 s
0x74, // 21 t
0x72, // 22 r
0x69, // 23 i
0x6E, // 24 n
0x00, // 25
0x54, // 26 T
0x68, // 27 h
0x69, // 28 i
0x72, // 29 r
0x64, // 30 d
0x20, // 31
0x73, // 32 s
0x74, // 33 t
0x72, // 34 r
0x69, // 35 i
0x6E, // 36 n
0x67, // 37 g
0x00, // 38
};
const unsigned short ptrs[3]=
{
0x0000, // 0 0
0x000D, // 1 13
0x001A, // 2 26
};
The compiler then handles all of the address generation when you use it
&strs[ptrs[n]]
Depending on how you write your tool, you can even have things like
#define FIRST_STRING 0
#define SECOND_STRING 1
and so on so that your code could find the string with
strs[ptrs[SECOND_STRING]]
making the program that much more readable. All auto generated from an ad-hoc tool that does this offset work for you.
The main() part of the tool could look like:
add_string(FIRST_STRING,"First string");
add_string(SECOND_STRING,"Second string");
add_string(THIRD_STRING,"Third string");
with that function and some more code to dump the result.
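One possible shape for that function and the dump step, sketched in plain host-side C (purely illustrative; the actual tool is not shown in the post):
#include <stdio.h>
#include <string.h>

static unsigned char strs[4096];     /* concatenated NUL-terminated strings */
static unsigned short ptrs[256];     /* 16-bit offsets into strs[] */
static unsigned int strs_len, nstrings;

static void add_string(unsigned int index, const char *s)
{
    size_t n = strlen(s) + 1;                 /* include the terminating NUL */

    ptrs[index] = (unsigned short)strs_len;   /* record the offset of this string */
    memcpy(strs + strs_len, s, n);
    strs_len += n;
    if (index + 1 > nstrings)
        nstrings = index + 1;
}

static void dump_tables(void)
{
    unsigned int i;

    printf("const unsigned char strs[%u]=\n{\n", strs_len);
    for (i = 0; i < strs_len; i++)
        printf("0x%02X, // %u %c\n", strs[i], i, strs[i] > 0x20 ? strs[i] : ' ');
    printf("};\nconst unsigned short ptrs[%u]=\n{\n", nstrings);
    for (i = 0; i < nstrings; i++)
        printf("0x%04X, // %u %u\n", ptrs[i], i, ptrs[i]);
    printf("};\n");
}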
and then you simply include the generated output and use the
strs[ptrs[THIRD_STRING]]
type syntax in the real application.
To continue down the path you started, if that is what you prefer (it looks like more work but is still pretty quick to code):
ptr0:
ptr3: .short (str3-str0)
ptr4: .short (str4-str0)
str0:
str3: .asciz "3-th string"
str4: .asciz "4-th string"
Then you need to export str0 and ptr3, ptr4 (as needed, depending on your assembler's assembly language), then access them as a pointer formed from str0+ptr3:
extern unsigned int str0;
extern unsigned short ptr3;
...
... *((unsigned char *)(str0+ptr3))
fixing whatever syntax mistakes I intentionally or unintentionally added to that pseudo code.
That would work as well and you would have the one base address then the hundreds of 16 bit offsets to that address.
You could even do some flavor of
const unsigned short ptrs[]={ptr0,ptr1,ptr2,ptr3};
...
(unsigned char *)(str0+ptrs[n])
using some flavor of C syntax to create that array but probably not worth that extra effort...
The solution a few of us have mentioned thus far (one example demonstrated above), with 16-bit offsets which are NOT addresses and therefore NOT pointers, is much easier to code, maintain, use and perhaps read, depending on your implementation. However it is implemented, it requires one full-sized base address plus the offsets. It might be possible to code this in C without an ad-hoc tool, but the ad-hoc tool literally only takes a few minutes to write.
I write programs to write programs, or programs to compress/manipulate data, almost daily, so why not? Compression is a good example of this: want to embed a black-and-white image into your resource-limited MCU flash? Don't put all the pixels in the binary; start with a run-length encoding and go from there, which means a third-party tool (written by you or not) that converts the real data into a structure that fits. Same thing here: a third-party tool that prepares/compresses the data for the application. This problem is really just another compression algorithm, since you are trying to reduce the amount of data without losing any.
Also note that, depending on what these strings are, if duplicates or fragments are possible the tool could be even smarter:
const unsigned char str1[] ="First string";
const unsigned char str2[] ="Second string";
const unsigned char str3[] ="Third string";
const unsigned char str4[] ="string";
const unsigned char str5[] ="Third string";
creating
const unsigned char strs[39]=
{
0x46, // 0 F
0x69, // 1 i
0x72, // 2 r
0x73, // 3 s
0x74, // 4 t
0x20, // 5
0x73, // 6 s
0x74, // 7 t
0x72, // 8 r
0x69, // 9 i
0x6E, // 10 n
0x67, // 11 g
0x00, // 12
0x53, // 13 S
0x65, // 14 e
0x63, // 15 c
0x6F, // 16 o
0x6E, // 17 n
0x64, // 18 d
0x20, // 19
0x73, // 20 s
0x74, // 21 t
0x72, // 22 r
0x69, // 23 i
0x6E, // 24 n
0x00, // 25
0x54, // 26 T
0x68, // 27 h
0x69, // 28 i
0x72, // 29 r
0x64, // 30 d
0x20, // 31
0x73, // 32 s
0x74, // 33 t
0x72, // 34 r
0x69, // 35 i
0x6E, // 36 n
0x67, // 37 g
0x00, // 38
};
const unsigned short ptrs[5]=
{
0x0000, // 0 0
0x000D, // 1 13
0x001A, // 2 26
0x0006, // 3 6
0x001A, // 4 26
};

why does i2cset send extra bytes?

I've been working with a PIC18F55K42 chip for a while. The PIC is set up as a slave and it's receiving bytes correctly, but I've encountered a few problems.
For example, when I do:
i2cset -y 1 0x54 0x80 0x01
It looks correct on the controller side and I can see the data address 0x80 and the byte value 0x01.
When I send in block mode like:
i2cset -y 1 0x54 0x80 0x01 0x02 0x03 0x04 i
I see spurious bytes appearing on the controller. More precisely, it looks like this:
ADDRESS 80 6c 00 2f 01 02 03 04 STOP
At first I thought this had something to do with my controller and even tried digging into its clock settings. I used a Saleae logic analyser too. There's nothing wrong with the controller or its setup. The only place left I can think of is the complex onion of driver layering done by Linux.
I'd like to know why Linux is sending the 3 extra bytes (6c 00 2f). Why does i2c_smbus_write_block_data send extra bytes and how can it be avoided?
It's a bug in the i2cset implementation in Busybox. See miscutils/i2c_tools.c:
/* Prepare the value(s) to be written according to current mode. */
switch (mode) {
case I2C_SMBUS_BYTE_DATA:
    val = xstrtou_range(argv[3], 0, 0, 0xff);
    break;
case I2C_SMBUS_WORD_DATA:
    val = xstrtou_range(argv[3], 0, 0, 0xffff);
    break;
case I2C_SMBUS_BLOCK_DATA:
case I2C_SMBUS_I2C_BLOCK_DATA:
    for (blen = 3; blen < (argc - 1); blen++)
        block[blen] = xstrtou_range(argv[blen], 0, 0, 0xff);
    val = -1;
    break;
default:
    val = -1;
    break;
}
It should be block[blen - 3] = xstrtou_range(argv[blen], 0, 0, 0xff);. The bug results in 3 extra garbage bytes from the stack being sent.
Use i2c_smbus_write_i2c_block_data for raw I2C transfers; i2c_smbus_write_block_data transfers data using the SMBus block protocol.
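For reference, here is a small user-space sketch of the difference, assuming the libi2c helpers shipped with i2c-tools (<i2c/smbus.h>); the bus, slave address 0x54 and register 0x80 are taken from the question, the rest is illustrative:
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>
#include <i2c/smbus.h>

int main(void)
{
    unsigned char data[4] = { 0x01, 0x02, 0x03, 0x04 };
    int file = open("/dev/i2c-1", O_RDWR);

    if (file < 0 || ioctl(file, I2C_SLAVE, 0x54) < 0)
        return 1;

    /* raw I2C block write: the wire sees 0x80 followed by the 4 data bytes */
    i2c_smbus_write_i2c_block_data(file, 0x80, sizeof(data), data);

    /* SMBus block write: additionally inserts a length byte after 0x80 */
    i2c_smbus_write_block_data(file, 0x80, sizeof(data), data);

    close(file);
    return 0;
}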

CRC Bluetooth Low Energy 4.2

In the core Bluetooth 4.2 documentation here it talks about a CRC check for data integrity (p. 2456) and details how the CRC is generated, with the example below:
4e 01 02 03 04 05 06 07 08 09
Producing CRC: 6d d2
I have tried a number of different methods but can't seem to reproduce the example. Can anyone provide some sample code to produce the CRC above?
You left out a key part of the example in the document, which is that the UAP used in the example is 0x47. The CRC needs to be initialized with the UAP. (Oddly, with the bits reversed and in the high byte, relative to the data bits coming in.)
The code below computes the example. The result is d26d. The CRC is transmitted least significant bit first, so it is sent as 6d d2. On the receive side, the same CRC is computed over the whole message including the CRC, and the result is zero, which is how the receive side is supposed to check what was sent.
#include <stdio.h>

static unsigned crc_blue(unsigned char *payload, size_t len) {
    unsigned crc = 0xe200;              // UAP == 0x47
    while (len--) {
        crc ^= *payload++;
        for (int k = 0; k < 8; k++)
            crc = crc & 1 ? (crc >> 1) ^ 0x8408 : crc >> 1;
    }
    return crc;
}

int main(void) {
    unsigned char payload[] = {
        0x4e, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09};
    printf("%04x\n", crc_blue(payload, sizeof(payload)));
    unsigned char recvd[] = {
        0x4e, 0x01, 0x02, 0x03, 0x04, 0x05, 0x06, 0x07, 0x08, 0x09, 0x6d, 0xd2};
    printf("%04x\n", crc_blue(recvd, sizeof(recvd)));
    return 0;
}
Your code would need to initialize the UAP appropriately for that device.
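For instance, a small helper (my sketch, not part of the answer above) that derives the initial CRC value from a device's UAP by bit-reversing it into the high byte; for UAP 0x47 it reproduces the 0xe200 used above:
static unsigned crc_init_from_uap(unsigned char uap) {
    unsigned rev = 0;
    for (int k = 0; k < 8; k++)
        rev = (rev << 1) | ((uap >> k) & 1);   /* bit-reverse the UAP */
    return rev << 8;                           /* 0x47 -> 0xe2 -> 0xe200 */
}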

Linux sys_call_table rip relative addressing x86_64

I am trying to get the offset of sys_call_table on Linux x86_64.
First of all I read the pointer to the system_call entry from MSR_LSTAR, and that part is correct:
static unsigned long read_msr(unsigned int msr)
{
    unsigned low, high;

    asm volatile("rdmsr" : "=a" (low), "=d" (high) : "c" (msr));
    return ((low) | ((u64)(high) << 32));
}
Then I parse it to find the opcode of the call instruction, and that is also correct:
#define CALL_OP 0xFF
#define CALL_MODRM 0x14

static unsigned long find_syscall_table(unsigned char *ptr)
{
    // correct
    for (; (*ptr != CALL_OP) || (*(ptr + 1) != CALL_MODRM); ptr++);
    // not correct
    ptr += *(unsigned int *)(ptr + 3);
    pr_info("%lx", (unsigned long)ptr);
    return (unsigned long)ptr;
}
But I failed to get the address after the call opcode. The first byte at ptr is the opcode, then the ModRM byte, then SIB and then a 32-bit displacement, so I add 3 to ptr, dereference it as an integer value, and then add it to ptr, because the addressing is RIP-relative. But the resulting value is wrong; it doesn't match the value I see in gdb. So where am I wrong?
It's not 0x7e9fed00 but rather -0x7e9fed00, a negative displacement.
That is the sign-magnitude form of the 2's complement negative number 0x81601300,
which is stored by a little-endian processor as "00 13 60 81".
No idea if you will find sys_call_table at the resulting address, however. As an alternative idea, it seems some people find it by searching memory for the known pointers to functions that should be listed in it.
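A minimal sketch of the signedness fix implied by this answer (untested; whether the result actually lands on sys_call_table still depends on the kernel):
static unsigned long find_syscall_table(unsigned char *ptr)
{
    s32 disp;

    for (; (*ptr != CALL_OP) || (*(ptr + 1) != CALL_MODRM); ptr++)
        ;
    /* read the 32-bit displacement as signed, so a negative value such as
       -0x7e9fed00 stays negative instead of being widened into a large
       unsigned offset */
    disp = *(s32 *)(ptr + 3);
    ptr += disp;
    pr_info("%lx", (unsigned long)ptr);
    return (unsigned long)ptr;
}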
