I am trying to send a command to my LAC board using Visual C++. On page 6 of the LAC Config manual it says that the buffer is sent as a 3-byte packet:
Buffer[0]=Control
Buffer[1]=Data Low
Buffer[2]=Data High
What does this mean and how do I figure out what I should set each of these values to?
Thanks
If you read on, you will see that a list of all control codes comes next, followed by a detailed description of each of them. The manual also mentions that sample code is available, probably somewhere on their website.
In general, setting the values is a bit tricky. BYTE is probably a typedef or macro that resolves to an unsigned 8-bit data type, meaning it can only hold values from 0 to 255. Two bytes together can represent values up to 65535. So if you want to store a value greater than 255 in that buffer, you have to decompose it into its high and low byte. You can do that the following way:
unsigned int value = 512;
BYTE low_byte  = value & 0xff;         // keep only the lowest 8 bits
BYTE high_byte = (value >> 8) & 0xff;  // shift the next 8 bits down
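Putting the pieces together, here is a minimal sketch of how the 3-byte packet could be filled in. The control code value 0x20 and the name SET_POSITION are placeholders; the real values come from the manual's list of control codes.

#include <cstdint>

typedef uint8_t BYTE;

// Hypothetical control code -- take the real value from the
// manual's command list.
const BYTE SET_POSITION = 0x20;

// Fills the 3-byte packet described in the manual:
// Buffer[0] = Control, Buffer[1] = Data Low, Buffer[2] = Data High.
void build_packet(BYTE buffer[3], BYTE control, unsigned int value)
{
    buffer[0] = control;
    buffer[1] = value & 0xff;           // Data Low
    buffer[2] = (value >> 8) & 0xff;    // Data High
}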
I need to convert a prefix length in CIDR notation to the actual subnetwork mask. For example, in 192.168.0.1/30, the 30 is the number of bits that are set in the network mask (in this example, 255.255.255.252). I do it by converting the number of bits to a string and then using the u32::from_str_radix method to get the actual number (which I can use for the Ipv4Addr struct, since it has the From<u32> trait implemented). Here is the code:
let bit_length = 30;
let bits = format!("{:0<32}", "1".repeat(bit_length as usize));
let net_mask = IpAddr::V4(u32::from_str_radix(&bits, 2)?.into());
I'm wondering whether there is another, maybe more elegant, way to convert the number of bits to an integer when we know the size of the target number (32-bit in this example). Maybe some bit magic I'm not aware of?
If you want the first 30 bits to be 1, that means the last 2 bits should be 0. So we can get a number where every bit is 1 by taking the bitwise NOT of 0, then shift it left by two bits to produce the two 0s.
let bit_length = 30;
let bits: u32 = (!0) << (32 - bit_length);
let net_mask = IpAddr::V4(bits.into());
println!("{:?}", net_mask);
I am writing a DPI checker (a .cpp file). The checker reads a 128-bit value on every line, and I want to mask it with a 128-bit mask and compare it with the RTL value. The issue I am seeing is that the data type from which I am creating the mask only holds a 32-bit value, and I need to do a bitwise AND with the original data. Can anyone provide any suggestions?
typedef struct {
    BitVector<128> data;
} read_line;

svBitVecVal mask;
mask = 0xffffffffffffffffffffffffffffffff;  // intended 128-bit mask

BitVector<128> data_masked;
data_masked = read_line->data & mask;
Here svBitVecVal can only hold a maximum of a 32-bit value, so data_masked will not show the correct value if the mask is wider than 32 bits.
I don't know where you got your BitVector template from, but svBitVecVal is usually used as an svBitVecVal* pointing to an array of 32-bit chunks.
mask needs to be a 4-element array to hold 128 bits.
And you'll need to either convert mask to a BitVector type first, or make sure BitVector has the right overloaded operator to do it for you.
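As a rough sketch of what that looks like, assuming the standard svdpi.h header from the SystemVerilog DPI; the function name and the comparison body are placeholders:

#include "svdpi.h"   // defines svBitVecVal as a 32-bit chunk

// 128 bits = 4 chunks of 32 bits each.
void mask_and_compare(const svBitVecVal rtl[4], const svBitVecVal expected[4])
{
    // All-ones 128-bit mask, expressed one 32-bit chunk at a time.
    const svBitVecVal mask[4] = { 0xffffffffu, 0xffffffffu,
                                  0xffffffffu, 0xffffffffu };
    for (int i = 0; i < 4; ++i) {
        svBitVecVal masked = expected[i] & mask[i];
        // ... compare masked against rtl[i] here ...
    }
}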
I have the following code. I have just copied some data from external RAM into a buffer called "data" on the MCU:

double p32 = 4.294967296e+009; // equal to 2^32 in decimal notation
int32_t longhigh;
uint32_t longlow;
offset = mapdata(); // Points to the data I want, 55-bit fixed point on HW
longhigh = data[2*offset+1]; // Gets upper part of data
longlow = data[2*offset]; // Gets lower part
double floating = (longhigh*p32 + longlow); // What is this doing? How does it work?
Can someone explain that last line of code for me? Why are we multiplying by p32? Thanks.
Multiplying by p32 is equivalent to a left shift by 32 bits. It also results in a type conversion for the product (from int to double), as well as for the sum. This way you can essentially keep 64-bit ints in the buffer and convert them to doubles when required.
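To make the mechanics concrete, here is a small self-contained illustration (the data values are made up): multiplying the upper word by 2^32 moves it up 32 binary places, exactly like an integer left shift, and adding the lower word fills in the low 32 bits.

#include <cstdint>
#include <cstdio>

int main()
{
    double p32 = 4.294967296e+009;   // 2^32
    int32_t  longhigh = 5;           // upper 32 bits of the sample
    uint32_t longlow  = 7;           // lower 32 bits of the sample

    // Floating-point reassembly, as in the question.
    double floating = longhigh * p32 + longlow;

    // The same reassembly with integer shifts, for comparison.
    int64_t integer = ((int64_t)longhigh << 32) | longlow;

    printf("%.0f == %lld\n", floating, (long long)integer);
    return 0;
}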
On the heart rate measurement characteristic:
http://developer.bluetooth.org/gatt/characteristics/Pages/CharacteristicViewer.aspx?u=org.bluetooth.characteristic.heart_rate_measurement.xml
EDIT
Link is now at
https://www.bluetooth.com/specifications/gatt/characteristics/
and look for "heart rate measurement".
They no longer offer an XML viewer; instead you need to view the XML directly.
Services are likewise on that page.
END EDIT
I want to make sure I'm reading it correctly. Does that actually say 5 fields: the mandatory one, C1, C2, C3, and C4? The mandatory field is in the first byte, C4 is in the last two bytes, C1 and C2 are 8-bit fields, and C3 and C4 are 16 bits each. That's a total of 8 bytes. Am I reading this document correctly?
EDIT:
I'm informed that if a bit in the mandatory flags field is 0, it means the corresponding field is just not there. For example, if the first bit is 0, C1 is the next field; if it is 1, C2 follows instead.
END EDIT
In Apple's OSX heart rate monitor example:
- (void) updateWithHRMData:(NSData *)data
{
    const uint8_t *reportData = [data bytes];
    uint16_t bpm = 0;
    if ((reportData[0] & 0x01) == 0)
    {
        /* uint8 bpm */
        bpm = reportData[1];
    }
    else
    {
        /* uint16 bpm */
        bpm = CFSwapInt16LittleToHost(*(uint16_t *)(&reportData[1]));
    }
    ... // I ignore rest of the code for simplicity
}
It checks whether the first bit is zero, and if it isn't, it converts from little-endian to whatever the host byte order is by applying CFSwapInt16LittleToHost to the 16-bit value at reportData[1].
How does that bit checking work? I'm not entirely certain about endianness. Is it saying that, whether it's little- or big-endian, the first byte is always the mandatory field, the second byte is C1, and so on? And since reportData is an 8-bit pointer (a typedef of unsigned char), is it checking bit 0 or bit 8 of the mandatory field?
If that bit is bit 8, which is reserved for future use, why is it reading there?
If that bit is 0, does that mean it's little-endian and no transformation is required? But if it's little-endian, the first bit could be 1 according to the spec, where 1 means "Heart Rate Value Format is set to UINT16. Units: beats per minute (bpm)". Couldn't that be misread?
I don't understand how it does the checking.
EDIT:
I kept saying there was a C5; that was a blunder. It goes up to C4 only, and I have edited the above accordingly.
Am I reading this document correctly?
IMHO, you are reading it a little wrong.
C1 to C4 should be read as Conditional 1 to Conditional 4. In the table for org.bluetooth.characteristic.heart_rate_measurement, if the lowest bit of the flags byte is 0, then condition C1 is met; otherwise, C2 is.
You can think of it as a run-time configurable union type in the C programming language, discriminated by the flag (beware that this is not always accurate, because the situation is complicated by C3 and C4).
// Note: this struct is only for you to better understand a simplified case.
// You should still stick to the profile documentation to implement.
typedef struct {
    uint8_t flag;
    union {
        uint8_t  bpm1;
        uint16_t bpm2;
    } bpm;
} MEASUREMENT_CHAR;
How does that bit checking work?
if ((reportData[0] & 0x01) == 0) effectively checks that bit with the bitwise AND operator. Go and find a C/C++ programming intro book if in any doubt.
The first byte is always the flags byte, in this case. The value of the flags dynamically determines how the rest of the bytes should be dealt with. C3 and C4 are both optional, and can be omitted if the corresponding bits in the flags are set to zero. C1 and C2 are mutually exclusive.
There is no endianness ambiguity in the Bluetooth standard, as it is well established that little-endian is used all the time. You should always assume that those uint16_t fields are transferred as little-endian. Apple's precaution just ensures maximum portability of the code, since they do not guarantee the endianness of the architectures used in their future products.
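For illustration, here is a sketch of how the flags-driven parsing of the BPM field can look in plain C++. Only bit 0 of the flags byte is handled, and the function names are mine:

#include <cstdint>
#include <cstddef>

// Reads a little-endian uint16 regardless of the host byte order.
uint16_t read_u16_le(const uint8_t *p)
{
    return static_cast<uint16_t>(p[0] | (p[1] << 8));
}

// Extracts the BPM value from a heart_rate_measurement payload.
// Bit 0 of the flags byte selects between a uint8 and a uint16 value.
uint16_t parse_bpm(const uint8_t *report, size_t len)
{
    if (len < 2)
        return 0;                      // malformed report
    if (report[0] & 0x01) {            // bit 0 set: uint16 bpm
        if (len < 3)
            return 0;                  // malformed report
        return read_u16_le(&report[1]);
    }
    return report[1];                  // bit 0 clear: uint8 bpm
}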
I see how it goes. It's not testing for endianness. Rather, it's testing whether the field is 8-bit or 16-bit, and in the 16-bit case it converts from little-endian to the host byte order. But I see that the number is the same before and after the conversion, so I guess the system is little-endian to begin with, and I don't see the point.
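That is exactly what happens on little-endian hosts (such as x86 and ARM in its usual configuration): the little-to-host swap is a no-op there, and it only does work on big-endian hardware. A quick way to probe the host order, if you're curious:

#include <cstdint>
#include <cstdio>

int main()
{
    uint16_t probe = 0x0102;
    const uint8_t *bytes = reinterpret_cast<const uint8_t *>(&probe);
    // On a little-endian host the low byte (0x02) is stored first.
    printf("host is %s-endian\n", bytes[0] == 0x02 ? "little" : "big");
    return 0;
}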
Solved
My code had never before been used for processing signed values, and as such the bytes-to-short conversion was incorrectly handling the sign bit. Doing that properly solved the issue.
The question was...
I'm trying to change the volume of a PCM data stream. I can extract single-channel data from a stereo file and do various silly experimental effects with the samples by skipping/duplicating them, inserting zeros, etc., but I can't seem to find a way to modify the actual sample values in any way and get a sensible output.
My attempts are really simple: http://i.imgur.com/FZ1BP.png

1. source audio data
2. values - 10000
3. values + 10000
4. values * 0.9
5. values * 1.1

(value = -value works fine -- it reverses the wave and it sounds the same)
The code to do this is equally simple (I/O uses unsigned values in the range 0-65535) <-- that was the problem; reading properly signed values solved the issue:
// NOTE: INVALID CODE
int sample = ...read unsigned 16-bit value from a stream...
sample -= 32768;
sample = (int)(sample * 0.9f);
sample += 32768;
...write unsigned 16-bit value to a stream...

// NOTE: VALID CODE
int sample = ...read *signed* 16-bit value from a stream...
sample = (int)(sample * 0.9f);
...write 16-bit value to a stream...
I'm trying to make the sample quieter. I'd imagine making the amplitude smaller (sample * 0.9) would result in a quieter file, but both 4. and 5. above are clearly invalid. There is a similar question on SO where MusiGenesis says he got correct results with 'sample *= 0.75'-type code (and yes, I did experiment with other values besides 0.9 and 1.1).
The question is: am I doing something stupid or is the whole idea of multiplying by a constant wrong? I'd like the end result to be something like this: http://i.imgur.com/qUL10.png
Your 4th attempt is definitely the correct approach. Assuming your sample range is centered around 0, multiplying each sample by another value is how you change the volume or gain of a signal.
In this case, though, I'd guess something funny is happening behind the scenes when you multiply an int by a float and cast back to int. It's hard to say without knowing what language you're using, but that might be what's causing the problem.
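For reference, a minimal sketch of the fix described at the top, in C++: read the bytes as a signed 16-bit sample before scaling. The little-endian byte order and the function name are my assumptions:

#include <cstdint>

// Reassembles two little-endian bytes into a *signed* 16-bit sample,
// scales it, and clamps the result back into the signed 16-bit range.
int16_t scale_sample(uint8_t lo, uint8_t hi, float gain)
{
    int16_t sample = static_cast<int16_t>(lo | (hi << 8));  // sign bit preserved
    int scaled = static_cast<int>(sample * gain);
    if (scaled >  32767) scaled =  32767;
    if (scaled < -32768) scaled = -32768;
    return static_cast<int16_t>(scaled);
}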