ASLR bits of Entropy of mmap() - linux

I am studying ASLR randomization of mmap() on an x86 system.
I have read in a lot of places that there are 16 bits of randomization on the address loaded with mmap().
But in the source code I have found:
static unsigned long mmap_rnd(void)
{
    unsigned long rnd = 0;

    /*
     *  8 bits of randomness in 32bit mmaps, 20 address space bits
     * 28 bits of randomness in 64bit mmaps, 40 address space bits
     */
    if (current->flags & PF_RANDOMIZE) {
        if (mmap_is_ia32())
            rnd = (long)get_random_int() % (1<<8);
        else
            rnd = (long)(get_random_int() % (1<<28));
    }
    return rnd << PAGE_SHIFT;
}
So, that would be only 8 bits of randomness.
But in fact, running some tests, I get the following addresses (stack-heap-mmap):
bf937000,09a60000,b774b000
bfa86000,090ef000,b76e2000
It's more than 16 bits if it can be b77XX000 and b76XX000!!!!
Any help on this?

PAGE_SHIFT is shifting that randomness to a different bit position. The difference between your mmap addresses is indeed:
  b774b000
- b76e2000
----------
     69000
I don't know what the value of PAGE_SHIFT is, but if it's 12 for example, then you have 0x69 difference which perfectly fits in 8-bits.
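
The subtraction above can be checked mechanically. The following is an added example, not code from the original posts or from the kernel: it models the 32-bit branch of the quoted mmap_rnd() and compares the arithmetic span of the two mmap addresses from the question with the address bits that merely differ between them. PAGE_SHIFT = 12 (4 KiB pages) is an assumption, and the function names are made up for illustration.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT    12   /* assumption: 4 KiB pages */
#define IA32_RND_BITS 8    /* from the quoted mmap_rnd() */

/* Model of the quoted 32-bit branch: 8 random bits, moved above the page offset. */
static unsigned long model_mmap_rnd_ia32(uint32_t random_int)
{
    return (unsigned long)(random_int % (1u << IA32_RND_BITS)) << PAGE_SHIFT;
}

/* Number of bits needed to represent v (0 for v == 0). */
static unsigned bits_needed(unsigned long v)
{
    unsigned n = 0;

    while (v) {
        n++;
        v >>= 1;
    }
    return n;
}

/* Arithmetic view: width in bits of the span of observed page numbers.
 * This is a lower bound on the randomness the samples demonstrate. */
static unsigned span_bits(const unsigned long *addr, size_t n)
{
    unsigned long lo = addr[0], hi = addr[0];

    for (size_t i = 1; i < n; i++) {
        if (addr[i] < lo)
            lo = addr[i];
        if (addr[i] > hi)
            hi = addr[i];
    }
    return bits_needed((hi - lo) >> PAGE_SHIFT);
}

/* Digit view: mask of the address bits that differ between the samples. */
static unsigned long differing_bits(const unsigned long *addr, size_t n)
{
    unsigned long mask = 0;

    for (size_t i = 1; i < n; i++)
        mask |= addr[0] ^ addr[i];
    return mask;
}

/* The two mmap addresses from the question. */
static const unsigned long samples[] = { 0xb774b000UL, 0xb76e2000UL };

int main(void)
{
    size_t n = sizeof samples / sizeof samples[0];

    printf("largest modelled offset: 0x%lx\n", model_mmap_rnd_ia32(0xffffffffu));
    printf("span of the samples    : %u bits of page number\n", span_bits(samples, n));
    printf("bits that differ       : 0x%08lx\n", differing_bits(samples, n));
    return 0;
}
```

The differing bits cover nine positions (bits 12 to 20) although the difference itself is only 0x69 pages, because a small offset combined with a fixed base carries into higher hex digits; comparing digits by eye therefore overstates the entropy.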

Related

Where and When Linux Kernel Setup GDT?

I have some doubts regarding the GDT in Linux. I am trying to get GDT info in kernel space (Ring 0), and my test code is called in system call context. In the test code, I try to print the ss register (segment selector), and get the ss segment descriptor from the GDTR and the ss segment selector.
77 void printGDTInfo(void) {
78     struct desc_ptr pgdt, *pss_desc;
79     unsigned long ssr;
80     struct desc_struct *ss_desc;
81
82     // Get GDTR
83     native_store_gdt(&pgdt);
84     unsigned long gdt_addr = pgdt.address;
85     unsigned long gdt_size = pgdt.size;
86     printk("[GDT] Addr:%lu |Size:%lu\n", gdt_addr, gdt_size);
87
88     // Get SS Register
89     asm("mov %%ss, %%eax"
90         :"=a"(ssr));
91     printk("SSR In Kernel:%lu\n", ssr);
92     unsigned long desc_index = ssr >> 3; // SHIFT for Descriptor Index
93     printk("SSR Shift:%lu\n", desc_index);
94     ss_desc = (struct desc_struct*)(gdt_addr + desc_index * sizeof(struct desc_struct));
95     printk("SSR:Base0:%lu, Base1:%lu,Base2:%lu\n", ss_desc->base0, ss_desc->base1, ss_desc->base2);
96 }
What confused me most is that the "base" fields in the ss descriptor are all zero (line 95 print). I tried to print the __USER_DS segment descriptor, and its "base" fields are also zero.
Is that true? Do all the segments in Linux use the same base address (zero)?
I want to check the GDT initialization in the Linux source code, but I am not sure where and when Linux sets up the GDT.
I found code like this in "arch/x86/kernel/cpu/common.c"; the second parameter of GDT_ENTRY_INIT is zero, which means the base0/base1/base2 fields in the segment descriptor are all zero.
    [GDT_ENTRY_KERNEL32_CS] = GDT_ENTRY_INIT(0xc09b, 0, 0xfffff),
    [GDT_ENTRY_KERNEL_CS] = GDT_ENTRY_INIT(0xa09b, 0, 0xfffff),
    [GDT_ENTRY_KERNEL_DS] = GDT_ENTRY_INIT(0xc093, 0, 0xfffff),
If that is true, all the segments have the same base address (zero). As a result, will the same virtual address in Ring 0 and Ring 1 map to the same linear address?
I appreciate your help :)

Converting from IEEE-754 to Fixed Point with nearest rounding

I am implementing a converter from IEEE 754 32-bit to fixed point with S15.16 in an FPGA. The IEEE-754 standard represents the number as:
Where s represents the sign, exp is the exponent denormalized and m is the mantissa. All these values separately are represented in fixed point.
Well, the simplest way is to take the IEEE-754 value and multiply it by 2**16. Finally, round it to the nearest to get the least error in truncation.
Problem: I'm doing this in an FPGA device, so I can't do it this way.
Solution: Use the binary representations of the values to perform the conversion via bitwise operations.
From the previous expression, and with the condition that the exponent and mantissa are in fixed point, logic tells me that I can perform it like this:
Because powers of two are shifts in fixed point, it is possible to rewrite the expression as (with Verilog notation):
x_fixed = ({1'b1, m[22:7]}) << (exp - 126)
Ok, this works perfectly, but not all the times... The problem here is: How can I apply nearest rounding? I have performed experiments to see what happens, in different ranges. The ranges are contained within powers of 2. I want to say:
For values from 0 < x < 1
For values from 1 <= x < 2
For values from 2 <= x < 4
And so on with the values contained in the following powers of two... When the values are contained from 1 to 2, I have been able to round without problems by looking at the behaviour of the 2 following bits that have been discarded in the mantissa. These bits show that:
if 00: Rounding is not necessary
if 01 or 10: Adding one to the shifted mantissa
if 11: adding two to the shifted mantissa.
To perform the experiments I have implemented a minimal solution in Python using bitwise operations. The code is:
import struct

# Get the bits of sign, exponent and mantissa
def FLOAT_2_BIN(num):
    bits, = struct.unpack('!I', struct.pack('!f', num))
    N = "{:032b}".format(bits)
    a = N[0]         # sign
    b = N[1:9]       # exponent
    c = "1" + N[9:]  # mantissa with hidden bit
    return {'sign': a, 'exp': b, 'mantissa': c}

# Convert the floating point value to fixed via
# bitwise operations
def FLOAT_2_FIXED(x):
    # Get the IEEE-754 bit representation
    IEEE754 = FLOAT_2_BIN(x)
    # Exponent minus 127 to normalize
    shift = int(IEEE754['exp'], 2) - 126
    # Get 16 MSB from mantissa
    MSB_mnts = IEEE754['mantissa'][0:16]
    # Convert value from binary to int
    value = int(MSB_mnts, 2)
    # Get the rounding bits: similar to guard bits???
    rnd_bits = IEEE754['mantissa'][16:18]
    # Shifted value by exponent
    value_shift = value << shift
    # Control to rounding nearest
    # Only works with values from 1 to 2
    if rnd_bits == '00':
        rnd = 0
    elif rnd_bits == '01' or rnd_bits == '10':
        rnd = 1
    else:
        rnd = 2
    return value_shift + rnd
The test with values between 0 and 1 gives the following results:
Test for values from 1 <= x < 2
FLOAT 32 VALUE  16 MSB MANTISSA   THEORICAL FIXED  PRACTICAL FIXED  RND BITS  DIFS  4 LSB MANTISSA
--------------  ----------------  ---------------  ---------------  --------  ----  --------------
1               1000000000000000  65536            65536            00        0     0000
1.1             1000110011001100  72090            72090            11        0     1101
1.2             1001100110011001  78643            78643            10        0     1010
1.3             1010011001100110  85197            85197            01        0     0110
1.4             1011001100110011  91750            91750            00        0     0011
1.5             1100000000000000  98304            98304            00        0     0000
1.6             1100110011001100  104858           104858           11        0     1101
1.7             1101100110011001  111411           111411           10        0     1010
1.8             1110011001100110  117965           117965           01        0     0110
1.9             1111001100110011  124518           124518           00        0     0011
Obviously, if I take values that have a decimal part that is a multiple of a power of two, there is no need for rounding:
In this case the values have an increment of 1/32
FLOAT 32 VALUE  16 MSB MANTISSA   THEORICAL FIXED  PRACTICAL FIXED  RND BITS  DIFS  4 LSB MANTISSA
--------------  ----------------  ---------------  ---------------  --------  ----  --------------
10              1010000000000000  655360           655360           00        0     0000
10.0312         1010000010000000  657408           657408           00        0     0000
10.0625         1010000100000000  659456           659456           00        0     0000
10.0938         1010000110000000  661504           661504           00        0     0000
10.125          1010001000000000  663552           663552           00        0     0000
10.1562         1010001010000000  665600           665600           00        0     0000
10.1875         1010001100000000  667648           667648           00        0     0000
10.2188         1010001110000000  669696           669696           00        0     0000
10.25           1010010000000000  671744           671744           00        0     0000
10.2812         1010010010000000  673792           673792           00        0     0000
10.3125         1010010100000000  675840           675840           00        0     0000
10.3438         1010010110000000  677888           677888           00        0     0000
10.375          1010011000000000  679936           679936           00        0     0000
10.4062         1010011010000000  681984           681984           00        0     0000
10.4375         1010011100000000  684032           684032           00        0     0000
10.4688         1010011110000000  686080           686080           00        0     0000
10.5            1010100000000000  688128           688128           00        0     0000
10.5312         1010100010000000  690176           690176           00        0     0000
10.5625         1010100100000000  692224           692224           00        0     0000
10.5938         1010100110000000  694272           694272           00        0     0000
10.625          1010101000000000  696320           696320           00        0     0000
10.6562         1010101010000000  698368           698368           00        0     0000
10.6875         1010101100000000  700416           700416           00        0     0000
10.7188         1010101110000000  702464           702464           00        0     0000
10.75           1010110000000000  704512           704512           00        0     0000
10.7812         1010110010000000  706560           706560           00        0     0000
10.8125         1010110100000000  708608           708608           00        0     0000
10.8438         1010110110000000  710656           710656           00        0     0000
10.875          1010111000000000  712704           712704           00        0     0000
10.9062         1010111010000000  714752           714752           00        0     0000
10.9375         1010111100000000  716800           716800           00        0     0000
10.9688         1010111110000000  718848           718848           00        0     0000
But, if 2 <= x < 4 and the increment is not a multiple of a power of two:
Test for values from 2 <= x < 4. Increment is 0.1
Here, I am not applying the rounding in order to show how the rounding error
increases with the exponent. e.g: shift**2 - 1, where shift is exponent - 126
FLOAT 32 VALUE  16 MSB MANTISSA   THEORICAL FIXED  PRACTICAL FIXED  RND BITS  DIFS  4 LSB MANTISSA
--------------  ----------------  ---------------  ---------------  --------  ----  --------------
2               1000000000000000  131072           131072           00        0     0000
2.1             1000011001100110  137626           137624           01        -2    0110
2.2             1000110011001100  144179           144176           11        -3    1101
2.3             1001001100110011  150733           150732           00        -1    0011
2.4             1001100110011001  157286           157284           10        -2    1010
2.5             1010000000000000  163840           163840           00        0     0000
2.6             1010011001100110  170394           170392           01        -2    0110
2.7             1010110011001100  176947           176944           11        -3    1101
2.8             1011001100110011  183501           183500           00        -1    0011
2.9             1011100110011001  190054           190052           10        -2    1010
3               1100000000000000  196608           196608           00        0     0000
3.1             1100011001100110  203162           203160           01        -2    0110
3.2             1100110011001100  209715           209712           11        -3    1101
3.3             1101001100110011  216269           216268           00        -1    0011
3.4             1101100110011001  222822           222820           10        -2    1010
3.5             1110000000000000  229376           229376           00        0     0000
3.6             1110011001100110  235930           235928           01        -2    0110
3.7             1110110011001100  242483           242480           11        -3    1101
3.8             1111001100110011  249037           249036           00        -1    0011
3.9             1111100110011001  255590           255588           10        -2    1010
It is clear that the rounding is not correct, and I have also noticed that the maximum rounding error in fixed point is always 2**shift - 1.
Any ideas or suggestions? I have thought that the problem here is that I'm not taking into account the guard bits (GSR), but on the other hand, if that were actually the problem: what happens when the necessary rounding is higher than one, e.g. 2, 3, 4...?
The ISO-C99 code below demonstrates one possible way of doing the conversion. The significand (mantissa) bits of the binary32 argument form the bits of the s15.16 result. The exponent bits tell us whether we need to shift these bits right or left to move the least significant integer bit to bit 16. If a left shift is required, rounding is not needed. If a right shift is required, we need to capture any less significant bits discarded. The most significant discarded bit is the round bit, all others collectively represent the sticky bit. Using the literal definition of the rounding mode, we need to round up if (1) either the round bit and the sticky bit are set, or (2) the round bit is set and the sticky bit clear (i.e., we have a tie case), but the least significant bit of the intermediate result is odd.
Note that real hardware implementations often deviate from such a literal application of the rounding-mode logic. One common scheme is to first increment the result when the round bit is set. Then, if such an increment occurred, clear the least significant bit of the result if the sticky bit is not set. It is easy to see that this achieves the same effect by enumerating all possible combinations of round bit, sticky bit, and result LSB.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

#define USE_LITERAL_RND_DEF  (1)

uint32_t float_as_uint32 (float a)
{
    uint32_t r;
    memcpy (&r, &a, sizeof r);
    return r;
}

#define FP32_MANT_FRAC_BITS  (23)
#define FP32_EXPO_BITS       (8)
#define FP32_EXPO_MASK       ((1u << FP32_EXPO_BITS) - 1)
#define FP32_MANT_MASK       ((1u << FP32_MANT_FRAC_BITS) - 1)
#define FP32_MANT_INT_BIT    (1u << FP32_MANT_FRAC_BITS)
#define FP32_SIGN_BIT        (1u << (FP32_MANT_FRAC_BITS + FP32_EXPO_BITS))
#define FP32_EXPO_BIAS       (127)
#define FX15P16_FRAC_BITS    (16)
#define FRAC_BITS_DIFF       (FP32_MANT_FRAC_BITS - FX15P16_FRAC_BITS)

int32_t fp32_to_fixed (float a)
{
    /* split binary32 operand into constituent parts */
    uint32_t ia = float_as_uint32 (a);
    uint32_t expo = (ia >> FP32_MANT_FRAC_BITS) & FP32_EXPO_MASK;
    uint32_t mant = expo ? ((ia & FP32_MANT_MASK) | FP32_MANT_INT_BIT) : 0;
    int32_t sign = ia & FP32_SIGN_BIT;
    /* compute and clamp shift count */
    int32_t shift = (expo - FP32_EXPO_BIAS) - FRAC_BITS_DIFF;
    shift = (shift < (-31)) ? (-31) : shift;
    shift = (shift > ( 31)) ? ( 31) : shift;
    /* shift left or right so least significant integer bit becomes bit 16;
       each shift is evaluated only for a valid count, because a negative
       count or a count of 32 is undefined behaviour in C */
    uint32_t shifted_right = (shift < 0) ? (mant >> (-shift)) : 0;
    uint32_t shifted_left  = (shift < 0) ? 0 : (mant << shift);
    /* capture discarded bits if right shift */
    uint32_t discard = (shift < 0) ? (mant << (32 + shift)) : 0;
    /* round to nearest or even if right shift */
    uint32_t round  = (discard & 0x80000000) ? 1 : 0;
    uint32_t sticky = (discard & 0x7fffffff) ? 1 : 0;
#if USE_LITERAL_RND_DEF
    uint32_t odd = shifted_right & 1;
    shifted_right = (round & (sticky | odd)) ? (shifted_right + 1) : shifted_right;
#else // USE_LITERAL_RND_DEF
    shifted_right = (round) ? (shifted_right + 1) : shifted_right;
    shifted_right = (round & ~sticky) ? (shifted_right & ~1) : shifted_right;
#endif // USE_LITERAL_RND_DEF
    /* make final selection between left shifted and right shifted */
    int32_t res = (shift < 0) ? shifted_right : shifted_left;
    /* negate if negative */
    return (sign < 0) ? (-res) : res;
}

int main (void)
{
    int32_t res, ref;
    float x;

    printf ("IEEE-754 binary32 to S15.16 fixed-point conversion in RNE mode\n");
    printf ("use %s implementation of round to nearest or even\n",
            USE_LITERAL_RND_DEF ? "literal" : "alternate");

    /* test positive half-plane */
    x = 0.0f;
    while (x < 0x1.0p15f) {
        ref = (int32_t) rint ((double)x * 65536);
        res = fp32_to_fixed(x);
        if (res != ref) {
            printf ("error # x = % 14.6a: res=%08x ref=%08x\n", x, res, ref);
            printf ("Test FAILED\n");
            return EXIT_FAILURE;
        }
        x = nextafterf (x, INFINITY);
    }

    /* test negative half-plane */
    x = -1.0f * 0.0f;
    while (x >= -0x1.0p15f) {
        ref = (int32_t) rint ((double)x * 65536);
        res = fp32_to_fixed(x);
        if (res != ref) {
            printf ("error # x = % 14.6a: res=%08x ref=%08x\n", x, res, ref);
            printf ("Test FAILED\n");
            return EXIT_FAILURE;
        }
        x = nextafterf (x, -INFINITY);
    }

    printf ("Test PASSED\n");
    return EXIT_SUCCESS;
}

How to convert hex_dump of packets, which were captured in kernel module, to pcap file?

I am writing a kernel module on Linux (Xubuntu x64). The version of the kernel is 5.4.0-52-generic. My kernel module is capturing traffic from an interface and printing it in hex:
Nov 10 14:04:34 ubuntu kernel: [404009.566887] Packet hex dump:
Nov 10 14:04:34 ubuntu kernel: [404009.566889] 000000 00 00 00 00 00 00 00 00 00 00 00 00 08 00 45 00
Nov 10 14:04:34 ubuntu kernel: [404009.566899] 000010 00 54 49 4C 40 00 40 01 A7 EF C0 A8 64 0E C0 A8
Nov 10 14:04:34 ubuntu kernel: [404009.566907] 000020 64 0E 08 00 9E FE 00 03 00 08 72 0E AB 5F 00 00
Nov 10 14:04:34 ubuntu kernel: [404009.566914] 000030 00 00 7B B5 01 00 00 00 00 00 10 11 12 13 14 15
Nov 10 14:04:34 ubuntu kernel: [404009.566922] 000040 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 22 23 24 25
Nov 10 14:04:34 ubuntu kernel: [404009.566929] 000050 26 27 28 29
This output I've got using this command under root: tail -f /var/log/kern.log
The whole problem is that I need to save this output as pcap-file. I know that there is text2pcap but its library (libpcap) is user-mode only so I can't use it in kernel module (or maybe not? Correct me if I'm wrong).
Is it possible to use text2pcap in kernel module? Otherwise, How can I save an output as pcap file while being in kernel module?
Source code:
#include <linux/module.h>     // included for all kernel modules
#include <linux/kernel.h>     // included for KERN_INFO
#include <linux/init.h>       // included for __init and __exit macros
#include <linux/skbuff.h>     // included for struct sk_buff
#include <linux/if_packet.h>  // include for packet info
#include <linux/ip.h>         // include for ip_hdr
#include <linux/netdevice.h>  // include for dev_add/remove_pack
#include <linux/if_ether.h>   // include for ETH_P_ALL
#include <linux/unistd.h>

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Tester");
MODULE_DESCRIPTION("Sample linux kernel module program to capture all network packets");

struct packet_type ji_proto;

void pkt_hex_dump(struct sk_buff *skb)
{
    size_t len;
    int rowsize = 16;
    int i, l, linelen, remaining;
    int li = 0;
    uint8_t *data, ch;

    printk("Packet hex dump:\n");
    data = (uint8_t *) skb_mac_header(skb);

    if (skb_is_nonlinear(skb)) {
        len = skb->data_len;
    } else {
        len = skb->len;
    }

    remaining = len;
    for (i = 0; i < len; i += rowsize) {
        /* Offset column in real hex; counting in decimal steps of 10 only
           looks like hex up to 000090 and then produces wrong offsets. */
        printk("%06X\t", li);
        linelen = min(remaining, rowsize);
        remaining -= rowsize;
        for (l = 0; l < linelen; l++) {
            ch = data[l];
            printk(KERN_CONT "%02X ", (uint32_t) ch);
        }
        data += linelen;
        li += rowsize;
        printk(KERN_CONT "\n");
    }
}

int ji_packet_rcv (struct sk_buff *skb, struct net_device *dev, struct packet_type *pt, struct net_device *orig_dev)
{
    printk(KERN_INFO "New packet captured.\n");

    /* linux/if_packet.h : Packet types */
    // #define PACKET_HOST 0 /* To us */
    // #define PACKET_BROADCAST 1 /* To all */
    // #define PACKET_MULTICAST 2 /* To group */
    // #define PACKET_OTHERHOST 3 /* To someone else */
    // #define PACKET_OUTGOING 4 /* Outgoing of any type */
    // #define PACKET_LOOPBACK 5 /* MC/BRD frame looped back */
    // #define PACKET_USER 6 /* To user space */
    // #define PACKET_KERNEL 7 /* To kernel space */
    /* Unused, PACKET_FASTROUTE and PACKET_LOOPBACK are invisible to user space */
    // #define PACKET_FASTROUTE 6 /* Fastrouted frame */
    switch (skb->pkt_type)
    {
    case PACKET_HOST:
        printk(KERN_INFO "PACKET to us - ");
        break;
    case PACKET_BROADCAST:
        printk(KERN_INFO "PACKET to all - ");
        break;
    case PACKET_MULTICAST:
        printk(KERN_INFO "PACKET to group - ");
        break;
    case PACKET_OTHERHOST:
        printk(KERN_INFO "PACKET to someone else - ");
        break;
    case PACKET_OUTGOING:
        printk(KERN_INFO "PACKET outgoing - ");
        break;
    case PACKET_LOOPBACK:
        printk(KERN_INFO "PACKET LOOPBACK - ");
        break;
    case PACKET_FASTROUTE:
        printk(KERN_INFO "PACKET FASTROUTE - ");
        break;
    }

    //printk(KERN_CONT "Dev: %s ; 0x%.4X ; 0x%.4X \n", skb->dev->name, ntohs(skb->protocol), ip_hdr(skb)->protocol);
    struct ethhdr *ether = eth_hdr(skb);
    //printk("Source: %x:%x:%x:%x:%x:%x\n", ether->h_source[0], ether->h_source[1], ether->h_source[2], ether->h_source[3], ether->h_source[4], ether->h_source[5]);
    //printk("Destination: %x:%x:%x:%x:%x:%x\n", ether->h_dest[0], ether->h_dest[1], ether->h_dest[2], ether->h_dest[3], ether->h_dest[4], ether->h_dest[5]);
    //printk("Protocol: %d\n", ether->h_proto);

    pkt_hex_dump(skb);
    kfree_skb (skb);
    return 0;
}

static int __init ji_init(void)
{
    /* See the <linux/if_ether.h>
       When protocol is set to htons(ETH_P_ALL), then all protocols are received.
       All incoming packets of that protocol type will be passed to the packet
       socket before they are passed to the protocols implemented in the kernel. */
    /* Few examples */
    //ETH_P_LOOP 0x0060 /* Ethernet Loopback packet */
    //ETH_P_IP 0x0800 /* Internet Protocol packet */
    //ETH_P_ARP 0x0806 /* Address Resolution packet */
    //ETH_P_LOOPBACK 0x9000 /* Ethernet loopback packet, per IEEE 802.3 */
    //ETH_P_ALL 0x0003 /* Every packet (be careful!!!) */
    //ETH_P_802_2 0x0004 /* 802.2 frames */
    //ETH_P_SNAP 0x0005 /* Internal only */
    ji_proto.type = htons(ETH_P_IP);

    /* NULL is a wildcard */
    //ji_proto.dev = NULL;
    ji_proto.dev = dev_get_by_name (&init_net, "enp0s3");
    if (!ji_proto.dev) {
        /* A missing device would otherwise silently become the NULL wildcard. */
        printk(KERN_ERR "Device enp0s3 not found.\n");
        return -ENODEV;
    }
    ji_proto.func = ji_packet_rcv;

    /* Packet sockets are used to receive or send raw packets at the device
       driver (OSI Layer 2) level. They allow the user to implement
       protocol modules in user space on top of the physical layer. */
    /* Add a protocol handler to the networking stack.
       The passed packet_type is linked into kernel lists and may not be freed until
       it has been removed from the kernel lists. */
    dev_add_pack (&ji_proto);

    printk(KERN_INFO "Module insertion completed successfully!\n");
    return 0; // Non-zero return means that the module couldn't be loaded.
}

static void __exit ji_cleanup(void)
{
    dev_remove_pack(&ji_proto);
    dev_put(ji_proto.dev); /* drop the reference taken by dev_get_by_name() */
    printk(KERN_INFO "Cleaning up module....\n");
}

module_init(ji_init);
module_exit(ji_cleanup);
The problem was solved using call_usermodehelper() to call user-mode text2pcap with arguments as if text2pcap was called using terminal.
Is it possible to use text2pcap in kernel module?
Not without putting it and the code it uses to write a pcap file (which isn't from libpcap, it's from a small library that's part of Wireshark, also used by dumpcap to write pcap and pcapng files) into the kernel.
How can I save an output as pcap file while being in kernel module?
You could write your own code to open a file and write to it in the kernel module; "Writing to a file from the Kernel" talks about that.
It also says
A "preferred" technique would be to pass the parameters in via IOCTLs and implement a read() function in your module. Then reading the dump from the module and writing into the file from userspace.
so you might want to consider that; the userspace code could just use libpcap to write the file.
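
The userspace half of that suggestion, as an added example that is not part of the original posts: it parses the kern.log lines shown in the question and builds a one-packet capture in the classic pcap layout. It writes the 24-byte file header and 16-byte record header by hand instead of calling libpcap so that it has no dependencies; the names parse_dump_line and build_pcap are made up here, link type 1 (Ethernet) is assumed because the dump starts at the MAC header, and the timestamp is supplied by the caller because the bracketed log value is seconds since boot.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Constructed input: the kern.log lines shown in the question. */
static const char *sample_log[] = {
    "Nov 10 14:04:34 ubuntu kernel: [404009.566887] Packet hex dump:",
    "Nov 10 14:04:34 ubuntu kernel: [404009.566889] 000000 00 00 00 00 00 00 00 00 00 00 00 00 08 00 45 00",
    "Nov 10 14:04:34 ubuntu kernel: [404009.566899] 000010 00 54 49 4C 40 00 40 01 A7 EF C0 A8 64 0E C0 A8",
    "Nov 10 14:04:34 ubuntu kernel: [404009.566907] 000020 64 0E 08 00 9E FE 00 03 00 08 72 0E AB 5F 00 00",
    "Nov 10 14:04:34 ubuntu kernel: [404009.566914] 000030 00 00 7B B5 01 00 00 00 00 00 10 11 12 13 14 15",
    "Nov 10 14:04:34 ubuntu kernel: [404009.566922] 000040 16 17 18 19 1A 1B 1C 1D 1E 1F 20 21 22 23 24 25",
    "Nov 10 14:04:34 ubuntu kernel: [404009.566929] 000050 26 27 28 29",
};
#define SAMPLE_LINES (sizeof sample_log / sizeof sample_log[0])
#define PCAP_HDR 24   /* pcap file header size */
#define REC_HDR  16   /* pcap per-packet record header size */

static int hexval(int c)
{
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    return -1;
}

/* Append the data bytes of one log line to out; returns the number of bytes
 * added, which is 0 for lines that are not hex-dump rows. */
static size_t parse_dump_line(const char *line, uint8_t *out, size_t cap)
{
    const char *p = strchr(line, ']');   /* end of the printk timestamp */
    size_t n = 0;

    if (!p)
        return 0;
    for (p++; *p == ' ' || *p == '\t'; p++)
        ;
    for (int i = 0; i < 6; i++)          /* offset column: six hex digits */
        if (hexval(p[i]) < 0)
            return 0;
    if (p[6] != ' ' && p[6] != '\t')
        return 0;
    for (p += 6; n < cap; p += 2) {
        while (*p == ' ' || *p == '\t')
            p++;
        if (hexval(p[0]) < 0 || hexval(p[1]) < 0)
            break;
        out[n++] = (uint8_t)(hexval(p[0]) << 4 | hexval(p[1]));
    }
    return n;
}

static void put32(uint8_t *p, uint32_t v) { memcpy(p, &v, 4); }
static void put16(uint8_t *p, uint16_t v) { memcpy(p, &v, 2); }

/* Build a one-packet pcap image in out (host byte order, which the magic
 * number announces). Returns the image size, or 0 if nothing was parsed. */
static size_t build_pcap(const char **lines, size_t nlines,
                         uint32_t ts_sec, uint32_t ts_usec,
                         uint8_t *out, size_t cap)
{
    size_t len = 0;
    uint8_t *pkt = out + PCAP_HDR + REC_HDR;

    if (cap < PCAP_HDR + REC_HDR)
        return 0;
    for (size_t i = 0; i < nlines; i++)
        len += parse_dump_line(lines[i], pkt + len, cap - PCAP_HDR - REC_HDR - len);
    if (len == 0)
        return 0;
    put32(out, 0xa1b2c3d4);              /* magic, microsecond timestamps */
    put16(out + 4, 2);                   /* version major */
    put16(out + 6, 4);                   /* version minor */
    put32(out + 8, 0);                   /* thiszone */
    put32(out + 12, 0);                  /* sigfigs */
    put32(out + 16, 65535);              /* snaplen */
    put32(out + 20, 1);                  /* link type: Ethernet (assumed) */
    put32(out + 24, ts_sec);
    put32(out + 28, ts_usec);
    put32(out + 32, (uint32_t)len);      /* bytes stored in the file */
    put32(out + 36, (uint32_t)len);      /* bytes on the wire, as far as the dump shows */
    return PCAP_HDR + REC_HDR + len;
}

int main(int argc, char **argv)
{
    static uint8_t image[PCAP_HDR + REC_HDR + 65535];
    size_t n = build_pcap(sample_log, SAMPLE_LINES, 0, 0, image, sizeof image);
    FILE *f = (argc > 1) ? fopen(argv[1], "wb") : NULL;

    if (f) {
        fwrite(image, 1, n, f);
        fclose(f);
    }
    printf("pcap image: %zu bytes\n", n);
    return n ? 0 : 1;
}
```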

Microphone has too large component of lower frequency

I use a Knowles SPH0645LM4H-B microphone to acquire data, which is in 24-bit PCM format with 18 bits of data precision. Then the 24-bit PCM data is truncated to 18-bit data, because the last 6 bits are always 0 according to the specification. After that, the 18-bit data is stored as a 32-bit unsigned integer. When the MSB bit is 0, which means it's a positive integer, and the MSB is 0, which means it's a negative integer.
After that, I find all data is positive, no matter which sound I used to test. I tested it with a dual frequency, and did an FFT; then I found the result is almost right except that the lower frequencies, about 0-100Hz, are larger. But I reconstructed the sound with the data which I used for the FFT algorithm. The reconstructed sound is almost right but with noise.
I use a buffer to store the microphone data, which is transmitted using DMA. The buffer is
uint16_t fft_buffer[FFT_LENGTH*4]
The DMA configuration is as follows:
DMA_InitStructure.DMA_Channel = DMA_Channel_0;
DMA_InitStructure.DMA_PeripheralBaseAddr = (uint32_t)&(SPI2->DR);
DMA_InitStructure.DMA_Memory0BaseAddr = (uint32_t)fft_buffer;
DMA_InitStructure.DMA_DIR = DMA_DIR_PeripheralToMemory;
DMA_InitStructure.DMA_PeripheralInc = DMA_PeripheralInc_Disable;
DMA_InitStructure.DMA_MemoryInc = DMA_MemoryInc_Enable;
DMA_InitStructure.DMA_PeripheralDataSize =DMA_PeripheralDataSize_HalfWord;
DMA_InitStructure.DMA_MemoryDataSize = DMA_MemoryDataSize_HalfWord;
DMA_InitStructure.DMA_BufferSize = FFT_LENGTH*4;
DMA_InitStructure.DMA_Mode = DMA_Mode_Normal;
DMA_InitStructure.DMA_Priority = DMA_Priority_VeryHigh;
DMA_InitStructure.DMA_FIFOMode = DMA_FIFOMode_Disable;
DMA_InitStructure.DMA_FIFOThreshold = DMA_FIFOThreshold_Full;
DMA_InitStructure.DMA_MemoryBurst = DMA_MemoryBurst_Single;
DMA_InitStructure.DMA_PeripheralBurst = DMA_PeripheralBurst_Single;
Extract the data from the buffer, truncate it to 18 bits, extend it to 32 bits and then store it in fft_integer:
int32_t fft_integer[FFT_LENGTH];
fft_buffer stores the original data from one channel and redundant data from the other channel. The original data is stored in two elements of the array, like fft_buffer[4] and fft_buffer[5], which are both 16 bits. And fft_integer stores just the data from one channel, and each data item takes 32 bits. This is why the size of the fft_buffer array is [FFT_LENGTH*4]: 2 elements are used for data from one channel and 2 elements are used for the other channel. But for fft_integer, the size of the array is FFT_LENGTH, because only data from one channel is stored and 18 bits can be stored in one element of type int32_t.
for (t=0;t<FFT_LENGTH*4;t=t+4){
    uint8_t first_8_bits, second_8_bits, last_2_bits;
    uint32_t store_int;
    /* get the first 8 bits, middle 8 bits and last 2 bits, combine it to a new value */
    first_8_bits = fft_buffer[t]>>8;
    second_8_bits = fft_buffer[t]&0xFF;
    last_2_bits = (fft_buffer[t+1]>>8)>>6;
    store_int = ((first_8_bits <<10)+(second_8_bits <<2)+last_2_bits);
    /* convert it to signed integer number according to the MSB of value
     * if MSB is 1, then set all the bits before MSB to 1
     */
    const uint8_t negative = ((store_int & (1 << 17)) != 0);
    int32_t nativeInt;
    if (negative)
        nativeInt = store_int | ~((1 << 18) - 1);
    else
        nativeInt = store_int;
    fft_integer[cnt] = nativeInt;
    cnt++;
}
The microphone is using I2S Interface and it's a single mono microphone, which means that there is just half of the data is effective at half of the transmission time. It works for about 128ms, and then will stop working.
This picture shows the data, which I convert to an integer.
My question is why there are large components of lower frequency although it can reconstruct a similar sound. I'm sure there is no problem in the hardware configuration.
I have done an experiment to see which original data is stored in the buffer. I have done the following test:
uint8_t a, b, c, d;
for (t=0;t<FFT_LENGTH*4;t=t+4){
    a = (fft_buffer[t]&0xFF00)>>8;
    b = fft_buffer[t]&0x00FF;
    c = (fft_buffer[t+1]&0xFF00)>>8;
    /* set the tri-state to 0 */
    d = fft_buffer[t+1]&0x0000;
    printf("%.2x",a);
    printf("%.2x",b);
    printf("%.2x",c);
    printf("%.2x\n",d);
}
The PCM data is shown like following:
0ec40000
0ec48000
0ec50000
0ec60000
0ec60000
0ec5c000
...
0cf28000
0cf20000
0cf10000
0cf04000
0cef8000
0cef0000
0cedc000
0ced4000
0cee4000
0ced8000
0cec4000
0cebc000
0ceb4000
....
0b554000
0b548000
0b538000
0b53c000
0b524000
0b50c000
0b50c000
...
Raw data in Memory:
c4 0e ff 00
c5 0e ff 40
...
52 0b ff c0
50 0b ff c0
I use it as little endian.
The large low-frequency component starting from DC in the original data is due to the large DC offset caused by incorrectly translating the 24 bit two's complement samples to int32_t. DC offset is inaudible unless it caused clipping or arithmetic overflow to occur. There are not really any low frequencies up to 100Hz, that is merely an artefact of the FFT's response to the strong DC (0Hz) element. That is why you cannot hear any low frequencies.
Below I have stated a number of assumptions as clearly as possible so that the answer may perhaps be adapted to match the actualité.
Given:
Raw data in Memory:
c4 0e ff 00
c5 0e ff 40
...
52 0b ff c0
50 0b ff c0
I use it as little endian.
and
2 elements are used for data from one channel and 2 element is used for the other channel
and given the subsequent comment:
fft_buffer[0] stores the higher 16 bits, fft_buffer[1] stores the lower 16 bits
Then the data is in fact cross-endian such that for example, for:
c4 0e ff 00
then
fft_buffer[n] = 0x0ec4 ;
fft_buffer[n+1] = 0x00ff ;
and the reconstructed sample should be:
0x00ff0ec4
then the translation is a matter of reinterpreting fft_buffer as a 32 bit array, swapping the 16 bit word order, then a shift to move the sign-bit to the int32_t sign-bit position and (optionally) a re-scale, e.g.:
c4 0e ff 00 => 0x00ff0ec4
0x00ff0ec4<< 8 = 0xff0ec400
0xff0ec400/ 16384 = 0xffff0ec4(-61756)
thus:
// Reinterpret DMA buffer as 32bit samples
int32_t* fft_buffer32 = (int32_t*)fft_buffer ;

// For each even numbered DMA buffer sample...
for( t = 0; t < FFT_LENGTH * 2; t += 2 )
{
    // ... swap 16 bit word order
    int32_t sample = fft_buffer32 [t] << 16 |
                     fft_buffer32 [t] >> 16 ;

    // ... from 24 to 32 bit 2's complement and rescale to
    // maintain original magnitude. Copy to single channel
    // fft_integer array.
    fft_integer[t / 2] = (sample << 8) / 16384 ;
}

Converting hex values in buffer to integer

Background: I'm using node.js to get the volume setting from a device via serial connection. I need to obtain this data as an integer value.
I have the data in a buffer ('buf'), and am using readInt16BE() to convert to an int, as follows:
console.log( buf )
console.log( buf.readInt16BE(0) )
Which gives me the following output as I adjust the external device:
<Buffer 00 7e>
126
<Buffer 00 7f>
127
<Buffer 01 00>
256
<Buffer 01 01>
257
<Buffer 01 02>
258
Problem: All looks well until we reach 127, then we take a jump to 256. Maybe it's something to do with signed and unsigned integers - I don't know!
Unfortunately I have very limited documentation about the external device, I'm having to reverse engineer it! Is it possible it only sends a 7-bit value? Hopefully there is a way around this?
Regarding a solution - I must also be able to convert back from int to this format!
Question: How can I create a sequential range of integers when 7F seems to be the largest value my device sends, which causes a big jump in my integer scale?
Thanks :)
127 is the maximum value of a signed 8-bit integer. If the integer is overflowing into the next byte at 128 it would be safe to assume you are not being sent a 16 bit value, but rather 2 signed 8-bit values, and reading the value as a 16-bit integer would be incorrect.
I would start by using the first byte as a multiplier of 128 and add the second byte, this will give the series you are seeking.
buf = Buffer.from([0,127]) //<Buffer 00 7f>
buf.readInt8(0) * 128 + buf.readInt8(1)
>127
buf = Buffer.from([1,0]) //<Buffer 01 00>
buf.readInt8(0) * 128 + buf.readInt8(1)
>128
buf = Buffer.from([1,1]) //<Buffer 01 01>
buf.readInt8(0) * 128 + buf.readInt8(1)
>129
The way to get back is to divide by 128, round it down to the nearest integer for the first byte, and the second byte contains the remainder.
i = 129
buf = Buffer.from([Math.floor(i / 128), i % 128])
<Buffer 01 01>
Needed to treat the data as two signed 8-bit values. As per #forrestj the solution is to do:
valueInt = buf.readInt8(0) * 128 + buf.readInt8(1)
We can also convert the int value into the original format by doing the following:
byte1 = Math.floor(valueInt / 128)
byte2 = valueInt % 128
