In the RISC-V architecture, what do the bits returned by the mulh[[s]u] operation look like? - 64-bit

TLDR: given 64-bit registers rs1 (signed) = 0xffff'ffff'ffff'fff6 and rs2 (unsigned) = 0x10, does the RISC-V mulhsu instruction write 0x0000'0000'0000'000f, 0xffff'ffff'ffff'ffff, or something else entirely to rd?
I am working on implementing a simulated version of the RISC-V architecture and have run into a snag when implementing the RV64M mulh[[s]u] instructions. I'm not sure whether mulhsu returns a signed or an unsigned number. If it does return a signed number, then what is the difference between mulhsu and mulh?
Here is some pseudocode demonstrating the problem (s64 and u64 denote signed and unsigned 64-bit registers, respectively):
rs1.s64 = 0xffff'ffff'ffff'fff6; //-10
rs2.u64 = 0x10; // 16
execute(mulhsu(rs1, rs2));
// which of these is correct? Note: rd only returns the upper 64 bits of the product
EXPECT_EQ(0x0000'0000'0000'000f, rd);
EXPECT_EQ(0xffff'ffff'ffff'ffff, rd);
EXPECT_EQ(<some other value>, rd);
Should rd be signed? unsigned?
From the instruction manual:
MUL performs an XLEN-bit × XLEN-bit multiplication of rs1 by rs2 and places the lower XLEN bits
in the destination register. MULH, MULHU, and MULHSU perform the same multiplication but return
the upper XLEN bits of the full 2 × XLEN-bit product, for signed × signed, unsigned × unsigned,
and signed rs1×unsigned rs2 multiplication, respectively. If both the high and low bits of the same
product are required, then the recommended code sequence is: MULH[[S]U] rdh, rs1, rs2; MUL
rdl, rs1, rs2 (source register specifiers must be in same order and rdh cannot be the same as rs1 or
rs2). Microarchitectures can then fuse these into a single multiply operation instead of performing
two separate multiplies.

The answer to your question is EXPECT_EQ(0xffff'ffff'ffff'ffff, rd);.
mulhsu multiplies the sign extension of rs1.s64 by the zero extension of rs2.u64.
You can see that in the compiler machine description riscv.md.
So mulhsu (64-bit) returns the equivalent of ((s128) rs1.s64 * (u128) rs2.u64) >> 64, where s128 is a signed 128-bit integer and u128 an unsigned 128-bit integer.
The difference between the three high multiplies is:
mulhsu is a multiplication of a sign-extended register (rs1) by a zero-extended register (rs2).
mulh is a multiplication of two sign-extended registers.
mulhu is a multiplication of two zero-extended registers.
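For a simulator, here is a minimal sketch of all three high-multiply variants, assuming a compiler with 128-bit integer support such as GCC or Clang (__int128); the function names are purely illustrative, not from any existing codebase:
#include <cassert>
#include <cstdint>
// RV64M high multiplies via 128-bit intermediates.
uint64_t mulh(uint64_t rs1, uint64_t rs2) {
    __int128 prod = (__int128)(int64_t)rs1 * (int64_t)rs2;           // signed x signed
    return (uint64_t)((unsigned __int128)prod >> 64);
}
uint64_t mulhu(uint64_t rs1, uint64_t rs2) {
    unsigned __int128 prod = (unsigned __int128)rs1 * rs2;           // unsigned x unsigned
    return (uint64_t)(prod >> 64);
}
uint64_t mulhsu(uint64_t rs1, uint64_t rs2) {
    unsigned __int128 a = (unsigned __int128)(__int128)(int64_t)rs1; // sign-extend rs1 to 128 bits
    unsigned __int128 b = rs2;                                       // zero-extend rs2 to 128 bits
    return (uint64_t)((a * b) >> 64);                                // upper half of the 128-bit product
}
int main() {
    // rs1 = -10 (signed), rs2 = 16 (unsigned), as in the question.
    assert(mulhsu(0xffff'ffff'ffff'fff6ULL, 0x10ULL) == 0xffff'ffff'ffff'ffffULL);
    assert(mulh  (0xffff'ffff'ffff'fff6ULL, 0x10ULL) == 0xffff'ffff'ffff'ffffULL);
    assert(mulhu (0xffff'ffff'ffff'fff6ULL, 0x10ULL) == 0x0000'0000'0000'000fULL);
    return 0;
}
Note that all three write only a 64-bit pattern to rd; whether that pattern is later interpreted as signed or unsigned is up to the consuming code, since RISC-V registers themselves carry no signedness.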

Related

NtQueryObject returns wrong insufficient required size via WOW64, why?

I am using the NT native API NtQueryObject()/ZwQueryObject() from user mode (and I am aware of the risks in general and I have written kernel mode drivers for Windows in the past in my professional capacity).
Generally when one uses the typical "query information" function (of which there are a few) the protocol is first to ask with a too small buffer to retrieve the required size with STATUS_INFO_LENGTH_MISMATCH, then allocate a buffer of said size and query again -- this time using the buffer and previously returned size.
In order to get the list of object types (67 on my build) on the system I am doing just that:
ULONG Size = 0;
NTSTATUS Status = NtQueryObject(NULL, ObjectTypesInformation, &Size, sizeof(Size), &Size);
And in Size I get 8280 (WOW64) and 8968 (x64). I then proceed to allocate the buffer with calloc() and query again:
ULONG Size2 = 0;
BYTE* Buf = (BYTE*)::calloc(1, Size);
Status = NtQueryObject(NULL, ObjectTypesInformation, Buf, Size, &Size2);
NB: ObjectTypesInformation is 3. It isn't declared in winternl.h, but Nebbett (as ObjectAllTypesInformation) and others describe it. Since I am not querying for a particular object's traits but the system-wide list of object types, I pass NULL for the object handle.
Curiously on WOW64, i.e. 32-bit, the value in Size2 upon return from the second query is 16 bytes bigger (8296) than the previously returned required size.
As far as alignment is concerned, I'd expect at most 8 bytes for this sort of thing, and indeed neither 8280 nor 8296 is on a 16-byte alignment boundary, but both are on an 8-byte one.
Certainly I can add some slack space on top of the returned required size (e.g. ALIGN_UP to the next 32-byte alignment boundary), but this seems highly irregular, to be honest. And I'd rather understand what's going on than implement a workaround that breaks because I'm missing something crucial.
The practical issue for the code is that in Debug configurations it tells me there's a corrupted heap somewhere upon freeing Buf. Which suggests that NtQueryObject() was indeed writing these extra 16 bytes beyond the buffer I provided.
Question: Any idea why it is doing that?
As usual for NT native API the sources of information are scarce. The x64 version of the exact same code returns the exact number of bytes required. So my thinking here is that WOW64 is the issue. A somewhat cursory look into wow64.dll with IDA didn't reveal any immediate points for suspicion regarding what goes wrong in translating the results to 32-bit here.
PS: Windows 10 (10.0.19043, ntdll.dll "timestamp" 77755782)
PPS: this may be related: https://wj32.org/wp/2012/11/30/obquerytypeinfo-and-ntqueryobject-buffer-overrun-in-windows-8/ Tested it, by checking that OBJECT_TYPE_INFORMATION::TypeName.Length + sizeof(WCHAR) == OBJECT_TYPE_INFORMATION::TypeName.MaximumLength in all returned items, which was the case.
The only part of ObjectTypesInformation that's public is the first field, defined in the winternl.h header in the Windows SDK:
typedef struct __PUBLIC_OBJECT_TYPE_INFORMATION {
    UNICODE_STRING TypeName;
    ULONG Reserved [22];    // reserved for internal use
} PUBLIC_OBJECT_TYPE_INFORMATION, *PPUBLIC_OBJECT_TYPE_INFORMATION;
For x86 this is 96 bytes, and for x64 this is 104 bytes (assuming you have the right packing mode enabled). The difference is the pointer in UNICODE_STRING which changes the alignment in x64.
Any additional memory space should be related to the TypeName buffer.
UNICODE_STRING accounts for 8 bytes of the difference between 8280 and 8296. The function uses sizeof(ULONG_PTR) for alignment of the returned string, plus an extra WCHAR, so that could easily account for the remaining 8 bytes.
AFAIK: The public use of NtQueryObject is supposed to be limited to kernel-mode use, which of course means it always matches the OS's native bitness (x86 code can't run as kernel in an x64 native OS), so it's probably just a quirk of using the NT functions via the WOW64 thunk.
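If you want to double-check those sizes locally, here is a small sketch (assuming only the Windows SDK headers; this is not part of the original code):
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <winternl.h>
#include <cstdio>
int main()
{
    // Expected output: 8 and 96 when built for x86, 16 and 104 when built for x64.
    printf("sizeof(UNICODE_STRING) = %zu\n", sizeof(UNICODE_STRING));
    printf("sizeof(PUBLIC_OBJECT_TYPE_INFORMATION) = %zu\n", sizeof(PUBLIC_OBJECT_TYPE_INFORMATION));
    return 0;
}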
Alright, I think I figured out the issue with the help of WinDbg and a thorough look at wow64.dll using IDA.
NB: the wow64.dll I have has the same build number, but differs slightly in data only (checksum, security directory entry, pieces from version resources). The code is identical, which was to be expected, given deterministic builds and how they affect the PE timestamp.
There's an internal function called whNtQueryObject_SpecialQueryCase (according to PDBs), which covers the ObjectTypesInformation class queries.
For the above wow64.dll I used the following points of interest in WinDbg, from a 32 bit program which calls NtQueryObject(NULL, ObjectTypesInformation, ...) (the program itself is irrelevant, though):
0:000> .load wow64exts
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B0E0
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B14E
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B1A7
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B24A
0:000> bp wow64!whNtQueryObject_SpecialQueryCase+B252
Explanation of the above points of interest:
+B0E0: computing length required for 64 bit query, based on passed length for 32 bit
+B14E: call to NtQueryObject()
+B1A7: loop body for copying 64 to 32 bit buffer contents, after successful NtQueryObject() call
+B24A: computing written length by subtracting current (last + 1) entry from base buffer address
+B252: downsizing returned (64 bit) required length to 32 bit
The logic of this function, with regard to just ObjectTypesInformation, is roughly as follows:
Common steps
Take the ObjectInformationLength (32 bit query!) argument and size it up to fit the 64 bit info
Align the retrieved size up to the next 16 byte boundary
If necessary, allocate the resulting amount from some PEB::ProcessHeap and store it in TLS slot 3; otherwise reuse what is already there, using this as scratch space
Call NtQueryObject() passing the buffer and length from the two previous steps
The length passed to NtQueryObject() is the one from step 1, not the one aligned to a 16 byte boundary. There seems to be some sort of header to this scratch space, so perhaps that's where the 16 byte alignment comes from?
Case 1: buffer size too small (here: 4), just querying required length
The up-sized length in this case equals 4, which is too small and consequently NtQueryObject() returns STATUS_INFO_LENGTH_MISMATCH. Required size is reported as 8968.
Down-size from the 64 bit required length to 32 bit and end up 16 bytes too short
Return the status from NtQueryObject() and the down-sized required length from the previous step
Case 2: buffer size supposedly (!) sufficient
Copy OBJECT_TYPES_INFORMATION::NumberOfTypes from queried buffer to 32 bit one
Step to the first entry (OBJECT_TYPE_INFORMATION) of source (64 bit) and target (32 bit) buffer, 8 and 4 byte aligned respectively
For each entry, up to OBJECT_TYPES_INFORMATION::NumberOfTypes:
Copy UNICODE_STRING::Length and UNICODE_STRING::MaximumLength for TypeName member
memcpy() UNICODE_STRING::Length bytes from the source to the target UNICODE_STRING::Buffer (target entry + sizeof(OBJECT_TYPE_INFORMATION32))
Add terminating zero (WCHAR) past the memcpy'd string
Copy the individual members past the TypeName from 64 to 32 bit struct
Compute pointer of next entry by aligning UNICODE_STRING::MaximumLength up to an 8 byte boundary (i.e. the ULONG_PTR alignment mentioned in the other answer) + sizeof(OBJECT_TYPE_INFORMATION64) (already 8 byte aligned!)
The next target entry (32 bit) gets 4 byte aligned instead
At the end compute required (32 bit) length by subtracting the value we arrived at for the "next" entry (i.e. one past the last) from the base address of the buffer passed by the WOW64 program (32 bit) to NtQueryObject()
In my debugged scenario these were: 0x008ce050 - 0x008cbfe8 = 0x00002068 (= 8296), which is 16 bytes larger than the buffer length we were told during case 1 (8280)!
The issue
That crucial last step differs between merely querying and actually getting the buffer filled. There is no further bounds checking in that loop I described for case 2.
And this means it will just overrun the passed buffer and return a written length bigger than the buffer length passed to it.
Possible solutions and workarounds
I'll have to approach this mathematically after some sleep; the workaround is obviously to top up the required length returned from case 1 in order to avoid the buffer overrun. The easiest method is to use my up_size_from_32bit() from the example below and apply it to the returned required size. This way you are allocating enough for the 64 bit buffer, while querying the 32 bit one. This should never overrun during the copy loop.
However, the fix in wow64.dll is a little more involved, I guess. While adding bounds checking to the loop would help avert the overrun, it would mean that the caller would have to query for the required size twice, because the first time around it lies to us.
Which means the query-only case (1) would have to allocate that internal buffer after querying the required length for 64 bit, then get it filled and then walk the entries (just like the copy loop), skipping over the last entry to compute the required length the same as it is now done after the copy loop.
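For illustration, here is one way to apply that workaround to the query code from the question. This is a hedged sketch: it assumes NtQueryObject, STATUS_INFO_LENGTH_MISMATCH and an ObjectTypesInformation value of 3 are declared as in the question, and it reuses up_size_from_32bit() from the example program below.
// Workaround sketch: over-allocate using up_size_from_32bit() so the WOW64
// copy loop cannot overrun the buffer, even though it may report a written
// length larger than the size returned by the first query.
ULONG Size = 0;
NTSTATUS Status = NtQueryObject(NULL, ObjectTypesInformation, &Size, sizeof(Size), &Size);
if (Status == STATUS_INFO_LENGTH_MISMATCH)
{
    ULONG const Padded = up_size_from_32bit(Size); // room for the 64 bit layout
    BYTE* Buf = (BYTE*)::calloc(1, Padded);
    if (Buf)
    {
        ULONG Size2 = 0;
        Status = NtQueryObject(NULL, ObjectTypesInformation, Buf, Padded, &Size2);
        // ... walk the returned OBJECT_TYPE_INFORMATION entries here ...
        ::free(Buf);
    }
}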
Example program demonstrating the "static" computation by wow64.dll
Build for x64, just the way wow64.dll was!
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <cstdio>
typedef struct
{
    ULONG JustPretending[24];
} OBJECT_TYPE_INFORMATION32;

typedef struct
{
    ULONG JustPretending[26];
} OBJECT_TYPE_INFORMATION64;

constexpr ULONG size_delta_3264 = sizeof(OBJECT_TYPE_INFORMATION64) - sizeof(OBJECT_TYPE_INFORMATION32);

constexpr ULONG down_size_to_32bit(ULONG len)
{
    return len - size_delta_3264 * ((len - 4) / sizeof(OBJECT_TYPE_INFORMATION64));
}

constexpr ULONG up_size_from_32bit(ULONG len)
{
    return len + size_delta_3264 * ((len - 4) / sizeof(OBJECT_TYPE_INFORMATION32));
}

// Trying to mimic the wdm.h macro
constexpr size_t align_up_by(size_t address, size_t alignment)
{
    return (address + (alignment - 1)) & ~(alignment - 1);
}
constexpr auto u32 = 8280UL;
constexpr auto u64 = 8968UL;
constexpr auto from_64 = down_size_to_32bit(u64);
constexpr auto from_32 = up_size_from_32bit(u32);
constexpr auto from_32_16_byte_aligned = (ULONG)align_up_by(from_32, 16);
int wmain()
{
    wprintf(L"32 to 64 bit: %u -> %u -(16-byte-align)-> %u\n", u32, from_32, from_32_16_byte_aligned);
    wprintf(L"64 to 32 bit: %u -> %u\n", u64, from_64);
    return 0;
}
static_assert(sizeof(OBJECT_TYPE_INFORMATION32) == 96, "Size for 32 bit struct does not match.");
static_assert(sizeof(OBJECT_TYPE_INFORMATION64) == 104, "Size for 64 bit struct does not match.");
static_assert(u32 == from_64, "Must match (from 64 to 32 bit)");
static_assert(u64 == from_32, "Must match (from 32 to 64 bit)");
static_assert(from_32_16_byte_aligned % 16 == 0, "16 byte alignment failed");
static_assert(from_32_16_byte_aligned > from_32, "We're aligning up");
This does not mimic the computation that happens in case 2, though.

How to use an arithmetic expression in an enum in SystemVerilog?

`define REG_WIDTH 48
`define FIELD_WIDTH 32
typedef enum bit [`REG_WIDTH-1:0]
{
    BIN_MIN = 'h0,
    BIN_MID = BIN_MIN + `REG_WIDTH'(((1<<`FIELD_WIDTH)+2)/3),
    BIN_MAX = BIN_MID + `REG_WIDTH'(((1<<`FIELD_WIDTH)+2)/3)
} reg_cover;
In the above code I get a compilation error about duplicate enum values, because BIN_MID also evaluates to zero (48'h0). But when I $display "BIN_MIN + `REG_WIDTH'(((1<<`FIELD_WIDTH)+2)/3)", I don't get zero.
Since I have cast each enum value to 48 bits, why am I getting zero? I am new to SystemVerilog.
Typically, integer constants like 1 are treated as 32-bit values (the SystemVerilog LRM specifies them to be at least 32 bits, but most simulators/synthesis tools use exactly 32 bits). As such, since you are performing a shift by 32 first, you are shifting the one out completely and are left with 0 during compilation (32'd1 << 32 is zero). By extending the size of the integer constant to 48 bits first, you will not lose the value due to the shift:
`define REG_WIDTH 48
`define FIELD_WIDTH 32
typedef enum bit [`REG_WIDTH-1:0] {
    BIN_MIN = 'h0,
    BIN_MID = BIN_MIN + (((`REG_WIDTH'(1))<<`FIELD_WIDTH)+2)/3,
    BIN_MAX = BIN_MID + (((`REG_WIDTH'(1))<<`FIELD_WIDTH)+2)/3
} reg_cover;
As to why this prints a non-zero value when put in a $display, I'm not sure. Some simulators I tried did print non-zero values, others printed 0. There might be some differences in compile-time optimizations and how they run the code, but casting first is the best thing to do.

SystemVerilog Array of Bits to Int Casting

I have the following SystemVerilog variable:
bit [5:0] my_bits = 6'h3E; // my_bits == 6'd62
I want to take the bit-wise inverse of it and then get that result into an int variable, treating the underlying bits as unsigned, so first I did this:
bit [5:0] my_bits_inv = ~my_bits; // my_bits_inv = 6'b00_0001
int my_int = int'(my_bits_inv); // my_int = 1
That gave me what I wanted. However, if I combine the inversion and casting into a single step, I get -63:
int my_int2 = int'(~my_bits); // my_int2 = -63 ???
Presumably it is treating my_bits as 32 bits, then taking the inverse of that to give int'(~32'h0000_003E) = int'(32'hFFFF_FFC1) = -63.
Can someone explain why this happens? Does it have to do with self-determination rules?
Your diagnosis is correct. This is explained in IEEE Std 1800-2017, section 11.6.1 Rules for expression bit lengths. In your case, casting with int' expands my_bits to match the width of int (32) before the bitwise inversion.
Consider also:
$displayb(~my_bits);
$displayb(int'(~my_bits));
Outputs:
000001
11111111111111111111111111000001

RISCV RV32IM: MULHSU - which operand is the signed one?

Question
In the risc-v RV32IM, for instruction MULHSU, which one of operands rs1 and rs2 is the signed operand?
Background
The RISC-V Instruction Set Manual
Volume I: Unprivileged ISA
Document Version 20190608-Base-Ratified
says the following (near the bottom of page 43):
MULH, MULHU, and MULHSU perform the same multiplication but return the upper XLEN bits of the full 2 × XLEN-bit product, for signed × signed, unsigned × unsigned, and signed rs1 × unsigned rs2 multiplication, respectively.
So this states that the signed operand is rs1.
But the explicatory note (bottom of page 43) says:
MULHSU is used in multi-word signed multiplication to multiply the most-significant word of the multiplier (which contains the sign bit) with the less-significant words of the multiplicand (which are unsigned).
From the definition of the instruction (also page 43):
 31      25 24        20 19          15 14            12 11   7 6      0
+----------+------------+--------------+----------------+------+--------+
|  funct7  |    rs2     |     rs1      |     funct3     |  rd  | opcode |
+----------+------------+--------------+----------------+------+--------+
     7           5             5               3            5       7
   MULDIV    multiplier   multiplicand   MUL/MULH[[S]U]   dest     OP
I see that the multiplier is rs2. So the explicatory note states that the signed operand is rs2.
I believe either the diagram or "explicatory note" has a typo. All of my testing has shown rs1 to be signed and rs2 to be unsigned for MULHSU.
A much more comprehensive summary of instruction formats & pseudo-codes can be found here. More detail of pseudo-instructions and other things to help write assembly code for RISC-V can be found here (same website). Its documentation specifically expresses MULHSU as follows :
MULHSU rd, rs1, rs2 #rd ← (sx(rs1) × ux(rs2)) » xlen
where sx(r) means signed version, and ux(r) means unsigned version.
If you find any evidence that this isn't the case, please let me know.
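As a hedged cross-check of that pseudo-code (a standalone sketch, not taken from any particular simulator), the same semantics for xlen = 32 can be expressed with 64-bit intermediates:
#include <cassert>
#include <cstdint>
// sx(rs1) x ux(rs2), upper 32 bits of the 64-bit product (RV32 MULHSU).
uint32_t mulhsu32(uint32_t rs1, uint32_t rs2) {
    int64_t  a = (int64_t)(int32_t)rs1; // sx(rs1): sign-extend
    uint64_t b = rs2;                   // ux(rs2): zero-extend
    return (uint32_t)(((uint64_t)a * b) >> 32);
}
int main() {
    // rs1 = -10 (signed), rs2 = 16 (unsigned): the product is -160,
    // so the upper word is all ones -- consistent with rs1 being the signed operand.
    assert(mulhsu32(0xfffffff6u, 0x10u) == 0xffffffffu);
    return 0;
}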
Following communication with the RISC-V Foundation: rs1 is the signed operand.
See https://github.com/riscv/riscv-isa-manual/issues/463

Atomicity of a read on SPARC

I'm writing a multithreaded application and having a problem on the SPARC platform. Ultimately my question comes down to atomicity of this platform and how I could be obtaining this result.
Some pseudocode to help clarify my question:
// Global variable
typedef struct pkd_struct {
    uint16_t a;
    uint16_t b;
} __attribute__((packed)) pkd_struct_t;
pkd_struct_t shared;
Thread 1:
swap_value() {
pkd_struct_t prev = shared;
printf("%d%d\n", prev.a, prev.b);
...
}
Thread 2:
use_value() {
pkd_struct_t next;
next.a = 0; next.b = 0;
shared = next;
printf("%d%d\n", shared.a, shared.b);
...
}
Thread 1 and 2 are accessing the shared variable "shared". One is setting, the other is getting. If Thread 2 is setting "shared" to zero, I'd expect Thread 1 to read the value either before OR after the setting -- since "shared" is aligned on a 4-byte boundary. However, I will occasionally see Thread 1 reading a value of the form 0xFFFFFF00. That is, the high-order 24 bits are OLD, but the low-order byte is NEW. It appears I've gotten an intermediate value.
Looking at the disassembly, the use_value function simply does an "ST" instruction. Given that the data is aligned and isn't crossing a word boundary, is there any explanation for this behavior? If ST is indeed NOT atomic to use this way, does this explain the result I see (only 1 byte changed?!?)? There is no problem on x86.
UPDATE 1:
I've found the problem, but not the cause. GCC appears to be generating assembly that reads the shared variable byte-by-byte (thus allowing a partial update to be obtained). Comments added, but I am not terribly comfortable with SPARC assembly. %i0 is a pointer to the shared variable.
xxx+0xc: ldub [%i0], %g1 // ld unsigned byte g1 = [i0] -- 0 padded
xxx+0x10: ...
xxx+0x14: ldub [%i0 + 0x1], %g5 // ld unsigned byte g5 = [i0+1] -- 0 padded
xxx+0x18: sllx %g1, 0x18, %g1 // g1 = [i0+0] left shifted by 24
xxx+0x1c: ldub [%i0 + 0x2], %g4 // ld unsigned byte g4 = [i0+2] -- 0 padded
xxx+0x20: sllx %g5, 0x10, %g5 // g5 = [i0+1] left shifted by 16
xxx+0x24: or %g5, %g1, %g5 // g5 = g5 OR g1
xxx+0x28: sllx %g4, 0x8, %g4 // g4 = [i0+2] left shifted by 8
xxx+0x2c: or %g4, %g5, %g4 // g4 = g4 OR g5
xxx+0x30: ldub [%i0 + 0x3], %g1 // ld unsigned byte g1 = [i0+3] -- 0 padded
xxx+0x34: or %g1, %g4, %g1 // g1 = g4 OR g1
xxx+0x38: ...
xxx+0x3c: st %g1, [%fp + 0x7df] // store g1 on the stack
Any idea why GCC is generating code like this?
UPDATE 2: Adding more info to the example code. Apologies -- I'm working with a mix of new and legacy code and it's difficult to separate what's relevant. Also, I understand that sharing a variable like this is highly discouraged in general. However, this is actually in a lock implementation where higher-level code will use it to provide atomicity, and using pthreads or platform-specific locking is not an option here.
Because you've declared the type as packed, it gets one byte alignment, which means it must be read and written one byte at a time, as SPARC does not allow unaligned loads/stores. You need to give it 4-byte alignment if you want the compiler to use word load/store instructions:
typedef struct pkd_struct {
    uint16_t a;
    uint16_t b;
} __attribute__((packed, aligned(4))) pkd_struct_t;
Note that packed is essentially meaningless for this struct, so you could leave that out.
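As a hedged illustration (a standalone sketch using the GCC/Clang attribute syntax, not code from the project in question), the resulting size and alignment can be checked at compile time:
#include <cstdint>
typedef struct pkd_struct {
    uint16_t a;
    uint16_t b;
} __attribute__((packed, aligned(4))) pkd_struct_t;
// With aligned(4) the struct is still 4 bytes, but now has 4-byte alignment,
// so the compiler is free to access it with a single word load or store.
static_assert(sizeof(pkd_struct_t) == 4, "expected a 4-byte struct");
static_assert(alignof(pkd_struct_t) == 4, "expected 4-byte alignment");
int main() { return 0; }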
Answering my own question here -- this has bugged me for too long and hopefully I can save someone a bit of frustration at some point.
The problem is that, although the shared data is aligned, GCC reads it byte-by-byte because it is packed.
There is some discussion here on how packing leads to load/store bloat on SPARC (and other RISC platforms, I'd assume...), but in my case it has led to a race.
