System Verilog DPI Checker masking 128-bit value - verilog

I am writing a DPI checker (.cpp file). The checker reads a 128-bit value on every line, and I want to mask it with a 128-bit mask and compare the result with the RTL value. The issue I am seeing is that the data type I am creating the mask from only holds a 32-bit value, and I need to do a bitwise AND with the original data. Can anyone provide any suggestions?
typedef struct {
    BitVector<128> data;
} read_line;
svBitVecVal mask;
mask = 0xffffffffffffffffffffffffffffffff;
BitVector<128> data_masked;
data_masked = read_line->data & mask;
Here svBitVecVal can only hold a maximum of 32 bits. data_masked will not show the correct value if the mask is wider than 32 bits.

I don't know where you got your BitVector template from, but svBitVecVal is usually used as an svBitVecVal* pointing to an array of 32-bit chunks.
mask needs to be a 4-element array to hold 128 bits.
And you'll need to either convert mask to a BitVector type first, or make sure BitVector has the right overloaded operator to do the conversion for you.
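A minimal sketch of that chunk-wise approach, assuming the values are handled as plain svBitVecVal arrays rather than through the (unknown) BitVector template:

#include "svdpi.h"   // provides svBitVecVal, a 32-bit chunk type

// 128 bits are carried as 4 chunks of 32 bits each
enum { NUM_CHUNKS = 128 / 32 };

// Chunk-wise bitwise AND of two 128-bit values.
void mask_128(const svBitVecVal *data, const svBitVecVal *mask,
              svBitVecVal *data_masked)
{
    for (int i = 0; i < NUM_CHUNKS; ++i)
        data_masked[i] = data[i] & mask[i];
}

// An all-ones 128-bit mask is simply four 0xffffffff chunks:
svBitVecVal full_mask[NUM_CHUNKS] = { 0xffffffff, 0xffffffff, 0xffffffff, 0xffffffff };

If the checker really does use a BitVector<128> class, the same loop can live inside an overloaded operator& for that class.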

Related

Vulkan - strange mapping of float shader color values to uchar values in a read buffer

I knew that the range of float color values in a shader, [0..1], is mapped onto the range [0..255] in a UCHAR buffer.
According to this, I was expecting steps of size 1/255 in the shader color values for each change in the UCHAR buffer.
But the results were surprisingly different. Here is for the first two steps:
Red float value in Shader -> UCHAR value in a read Buffer
0.000000 -> 0
0.002197 -> 0
0.002198 -> 1
0.006102 -> 1
0.006105 -> 2
The first two steps occur around 0.002197 and 0.006102, which are different from the expected steps of 0.00392 and 0.00784.
So what is the mapping formula ?
Unsigned integer normalization is based on the formula f = i/INT_MAX, where f is the floating point value (after clamping to [0, 1]), i is the integer value, and INT_MAX is the maximum integer value for the integer's bit depth (255 in this case).
So if you have a float, and want the unsigned, normalized integer value of it, you use i = f * INT_MAX. Of course... integers do not have the same precision as floats. So if the result of f * INT_MAX is 0.5, what is the integer value of that? It could be 0, or it could be 1, depending on how things are rounded.
Implementations are permitted to round integer values in any way they prefer. They are encouraged to use nearest rounding (the post-conversion 0.49 would become 0, and 0.5 would become 1), but that is not a requirement. The only requirements are that it must pick one of the two nearest values (it can't turn 0.5 into 3) and that the exact floating-point values of 0.0 and 1.0 (which includes any values clamped to them) must be exactly represented as integer 0 and INT_MAX.
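As an illustration of the formulas above, here is a host-side C++ sketch of round-to-nearest 8-bit UNORM conversion (not what the GPU literally executes, just the arithmetic):

#include <algorithm>
#include <cmath>
#include <cstdint>

// float -> 8-bit unsigned normalized, round-to-nearest: i = round(f * 255)
uint8_t float_to_unorm8(float f)
{
    f = std::min(std::max(f, 0.0f), 1.0f);                // clamp to [0, 1]
    return static_cast<uint8_t>(std::lround(f * 255.0f)); // 0.0 -> 0, 1.0 -> 255
}

// 8-bit unsigned normalized -> float: f = i / 255
float unorm8_to_float(uint8_t i)
{
    return static_cast<float>(i) / 255.0f;
}

With round-to-nearest, the cutoff between 0 and 1 would sit at 0.5/255 ≈ 0.00196; an implementation is free to put it elsewhere, which is why the observed thresholds above match neither that value nor 1/255.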
If you have an explicit need to have direct rounding, you can always do the normalization yourself. In fact, GLSL has specific functions to help you. The following assumes that you are trying to write to a texture with the Vulkan format R8G8B8A8_UNORM, and we're assuming you're writing to a storage image, not via outputs from the fragment shader (you can do that too, but you lose blending).
So, step 1 is to change your layout format to be r32ui. That is, you are now writing an unsigned 32-bit value, rather than 4 unsigned 8-bit normalized values. That's perfectly valid.
Step 2 is to employ the packUnorm4x8 function. This function does float-to-integer normalization, and the specification requires it to perform the rounding correctly. Use the return value of that function in your imageStore call, and you're fine.
If you want to write to a fragment shader output, that's a bit more complex. There, you will need to use a different image view, one that uses the R32_UINT format. So you're creating a 32-bit unsigned integer view of a 4x8-bit normalized texture. That has to become a render target, so you're going to have to do subpass surgery. From there, just write the result of packUNorm4x8.
Of course, you immediately lose blending and similar operations, since you're writing integer values. And since you had to do that subpass surgery, it's likely that any shader writing to it will need to do this too.
Also, note that in both cases, you will likely need to adjust the order of the components of the value you write. packUnorm4x8 is explicitly defined to be little endian, whereas (I believe?) R8G8B8A8 is specified to be in that order, most significant to least. So you'll probably need to do what amounts to an endian swap with packUnorm4x8(value.abgr).
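For reference, a CPU-side C++ mimic of the packing order (the GLSL built-in puts the first component into the least significant byte, which is why the .abgr swizzle may be needed); this is only an illustration, not a replacement for the built-in:

#include <algorithm>
#include <cmath>
#include <cstdint>

// Pack four floats in [0, 1] into one 32-bit word, first component in the
// least significant byte, mirroring GLSL's packUnorm4x8 ordering.
uint32_t pack_unorm_4x8(float r, float g, float b, float a)
{
    auto to8 = [](float f) -> uint32_t {
        f = std::min(std::max(f, 0.0f), 1.0f);
        return static_cast<uint32_t>(std::lround(f * 255.0f));
    };
    return to8(r) | (to8(g) << 8) | (to8(b) << 16) | (to8(a) << 24);
}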

sizeof a structure vs the offset of last element of the structure

I have a C structure comprising a couple of elements. When I call offsetof() on the last element of the structure, it shows 144, and sizeof() of that last element is 4. So I was assuming the size of the structure is 148. However, running sizeof() on the structure itself returns a value of 152. Is there something I am misinterpreting about the offset? Shouldn't the size of the structure be 148 instead of 152? Is there some sort of padding being applied? I am running on a 64-bit platform on Ubuntu 14.
struct A {
    // few elements
};
struct B {
    struct A a;
    <type> C;
    <type> D;
    // few more elements
};
Element C is at an offset of 148 (maybe because padding is not applied there), but sizeof(struct A) is 152 (quite possibly due to padding), and hence, when memcpy is performed, the value assigned to element C gets zeroed out.
UPDATE: Just verified that sizeof() of element C is 4 bytes, and hence it makes sense that it sits immediately after the 148th offset.
There is no padding before the structure. However, if you use different data types within the same structure, the compiler sometimes inserts alignment padding for performance, including at the end of the structure so that its size is a multiple of its alignment. This is controlled by compiler settings, and you can disable it if you want.
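A small self-contained example (with made-up members, not the asker's actual struct) shows how that trailing padding makes sizeof larger than offsetof(last) + sizeof(last):

#include <cstddef>
#include <cstdint>
#include <cstdio>

struct Example {
    double  d;     // 8-byte member forces 8-byte alignment for the whole struct
    int32_t last;  // last member: offsetof == 8, sizeof == 4
};                 // sizeof(Example) == 16, not 12: 4 bytes of trailing padding

int main() {
    std::printf("offsetof(last)  = %zu\n", offsetof(Example, last)); // 8
    std::printf("sizeof(last)    = %zu\n", sizeof(Example::last));   // 4
    std::printf("sizeof(Example) = %zu\n", sizeof(Example));         // 16 on a typical 64-bit ABI
}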

Fixed point to Floating point

I have the following code; I have just copied some data from external RAM to the MCU into a buffer called "data":
double p32 = 4.294967296e+009; /// equals to 2^32 in decimal notation
int32_t longhigh;
uint32_t longlow;
offset = mapdata(); //Points to the data I want, 55 bit fixed point on HW
longhigh = data[2*offset+1]; //Gets upperpart of data
longlow = data[2*offset]; //Gets lower part
double floating = (longhigh*p32 + longlow); // What is this doing? How does it work?
Can someone explain that last line of code for me? Why are we multiplying by p32? Thanks.
Multiplying by p32 is equivalent to a left shift by 32 bits. It also results in a type conversion for the product (from int to double), as well as for the sum. This way you can essentially keep 64-bit ints in the buffer and convert them to doubles when required.
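A sketch of that equivalence in 64-bit integer arithmetic (assuming the two words really form one signed 64-bit quantity, high word first):

#include <cstdint>

int32_t  longhigh = /* data[2*offset+1] */ -1;          // upper 32 bits
uint32_t longlow  = /* data[2*offset]   */ 0x80000000u; // lower 32 bits

// Exactly the same number as longhigh * p32 + longlow, computed as an integer:
int64_t combined = static_cast<int64_t>(longhigh) * 4294967296LL   // i.e. << 32
                 + static_cast<int64_t>(longlow);

// Converting to double gives the value the original line produces. Note that a
// double has a 53-bit mantissa, so very large 64-bit values lose precision, and
// any fixed-point scaling (where the binary point sits in those 55 bits) still
// has to be applied afterwards.
double floating = static_cast<double>(combined);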

Apple's heart rate monitoring example and byte order of bluetooth heart rate measurement characteristics

On the heart rate measurement characteristics:
http://developer.bluetooth.org/gatt/characteristics/Pages/CharacteristicViewer.aspx?u=org.bluetooth.characteristic.heart_rate_measurement.xml
EDIT
Link is now at
https://www.bluetooth.com/specifications/gatt/characteristics/
and look for "heart rate measurement".
They no longer offer an XML viewer; instead you need to view the XML directly.
Also for services it's on this page.
END EDIT
I want to make sure I'm reading it correctly. Does that actually say 5 fields? The mandatory one, C1, C2, C3, and C4? And the mandatory field is in the first byte, C4 is in the last two bytes, C1 and C2 are 8-bit fields, and C3 and C4 are 16-bit each. That's a total of 8 bytes. Am I reading this document correctly?
EDIT:
I'm informed that the mandatory flags field indicates which fields are present: if a bit is 0, the corresponding field is just not there. For example, if the first bit is 0, C1 is the next field; if it is 1, C2 follows instead.
END EDIT
In Apple's OSX heart rate monitor example:
- (void) updateWithHRMData:(NSData *)data
{
    const uint8_t *reportData = [data bytes];
    uint16_t bpm = 0;
    if ((reportData[0] & 0x01) == 0)
    {
        /* uint8 bpm */
        bpm = reportData[1];
    }
    else
    {
        /* uint16 bpm */
        bpm = CFSwapInt16LittleToHost(*(uint16_t *)(&reportData[1]));
    }
    ... // I ignore the rest of the code for simplicity
}
It checks whether the first bit is zero, and if it isn't, it converts from little-endian to the host byte order by applying CFSwapInt16LittleToHost to the two bytes starting at reportData[1].
How does that bit checking work? I'm not entirely certain of endianess. Is it saying that whether it's little or big, the first byte is always the mandatory field, the second byte is the C1, etc? And since reportData is an 8-bit pointer (typedef to unsigned char), it's checking either bit 0 or bit 8 of the mandatory field.
If that bit is bit 8, the bit is reserved for future use, why is it reading in there?
If that bit is 0, it's little-endian and no transformation is required? But if it's little-endian, the first bit could be 1 according to the spec (1 means "Heart Rate Value Format is set to UINT16. Units: beats per minute (bpm)"); couldn't that be misread?
I don't understand how it does the checking.
EDIT:
I kept on saying there was C5, that was a blunder. It's up to C4 only and I edited above.
Am I reading this document correctly?
IMHO, you are reading it a little wrong.
C1 to C4 should be read as Conditional 1 to Conditional 4. And in the table for org.bluetooth.characteristic.heart_rate_measurement, if the lowest bit of the flag byte is 0, then C1 is met, otherwise, C2 is.
You can think of it as a run-time configurable union type in the C programming language, selected by the flag (beware this is not always true, because the situation is complicated by C3 and C4).
// Note: this struct is only to help you understand a simplified case.
// You should still stick to the profile documentation when implementing.
typedef struct {
    uint8_t flag;
    union {
        uint8_t  bpm1;
        uint16_t bpm2;
    } bpm;
} MEASUREMENT_CHAR;
How does that bit checking work?
if ((reportData[0] & 0x01) == 0) effectively checks the bit with the bitwise AND operator. Go and find a C/C++ programming intro book if in any doubt.
The first byte is always the flags field, in this case. The value of the flags dynamically determines how the rest of the bytes should be dealt with. C3 and C4 are both optional, and can be omitted if the corresponding bits in the flags are set to zero. C1 and C2 are mutually exclusive.
There is no endianness ambiguity in the Bluetooth standard, as it is well specified that little-endian is used all the time. You should always assume that those uint16_t fields are transferred as little endian. Apple's precaution is just to ensure maximum portability of the code, since they do not guarantee the endianness of the architectures used in their future products.
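A minimal C-style sketch of the same parsing logic (a hypothetical helper, not Apple's code), showing how the flag byte selects between the 8-bit and 16-bit BPM encodings; assembling the bytes manually also sidesteps any host-endianness question:

#include <cstddef>
#include <cstdint>

// Extract the BPM value from a raw heart-rate-measurement packet.
// 'report' must point to at least 'len' valid bytes, little-endian per the spec.
uint16_t parse_bpm(const uint8_t *report, size_t len)
{
    if (len < 2)
        return 0;
    if ((report[0] & 0x01) == 0) {
        // Flag bit 0 clear: Heart Rate Value Format is UINT8 (the C1 case)
        return report[1];
    }
    // Flag bit 0 set: Heart Rate Value Format is UINT16, little-endian (the C2 case)
    if (len < 3)
        return 0;
    return static_cast<uint16_t>(report[1] | (report[2] << 8));
}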
I see how it goes. It's not testing for endianness; rather, it's testing whether the field is 8-bit or 16-bit, and in the 16-bit case it converts from little-endian to host order. But I see that before and after conversion it's the same number, so I guess the system is little-endian to begin with, and I don't see what the point is.

Sending data in 3-byte packet

I am trying to send a command to my LAC board using visual c++. On page 6 of LAC Config it says that the Buffer is sent in a 3-byte packet.
Buffer[0]=Control
Buffer[1]=Data Low
Buffer[2]=Data High
What does this mean and how do I figure out what I should set each of these values to?
Thanks
If you read on, you will see that next comes a list of all control codes, followed by a detailed description of each of them. The manual also mentions that sample code is available, probably somewhere on their website.
In general, setting the values is a bit tricky. BYTE is probably a typedef or macro that resolves to an unsigned 8-bit data type, meaning it can only hold values from 0 to 255. Two bytes can represent values up to 65535. However, if you want to store a value greater than 255 in that buffer, you have to decompose it into its high and low bytes. You can do this the following way:
unsigned int value = 512;
BYTE low_byte = 0xff & value;
BYTE high_byte = value >> 8;
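Putting it together, filling the 3-byte packet could look like the sketch below. CONTROL_CODE is a placeholder; take the real value from the control-code table in the LAC manual for the command you want to issue:

typedef unsigned char BYTE;

const BYTE CONTROL_CODE = 0x00;   // placeholder: see the manual's control-code list
unsigned int value = 512;         // the 16-bit payload you want to send

BYTE Buffer[3];
Buffer[0] = CONTROL_CODE;              // Control
Buffer[1] = 0xff & value;              // Data Low  (least significant byte)
Buffer[2] = (value >> 8) & 0xff;       // Data High (most significant byte)

// Buffer is then written to the device over whatever interface the board exposes.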
