Given this simple code:
lib.rs:
#[no_mangle]
pub extern "C" fn testing() -> bool {
false
}
Cargo.toml:
...
[lib]
crate-type = ["cdylib"]
...
and I end up with:
File Type: DLL
Section contains the following exports for main.dll
00000000 characteristics
FFFFFFFF time date stamp
0.00 version
1 ordinal base
2 number of functions
2 number of names
ordinal hint RVA name
1 0 000014F0 rust_eh_personality = rust_eh_personality
2 1 00001000 testing = testing
How can I stop rust from exporting rust_eh_personality?
Related
Given I want to sum the first n terms of the series 1, 2, 3, … with the following function in Rust
fn sum_sequence(x: u64) -> u64
{
let mut s: u64 = 0;
for n in 1..=x
{
s = s + n;
}
return s;
}
When I compile it for x64 architecture
cargo build --release
and run it with x=10000000000 the result is 13106511857580896768 - fine.
But when I compile this very function to Webassembly (WASM)
cargo build --target wasm32-unknown-unknown --release
and run it with the same argument as before, x=10000000000,
wasmtime ./target/wasm32-unknown-unknown/release/sum_it.wasm --invoke sum_sequence 10000000000
Then the result is -5340232216128654848.
I would not have expected any deviation in results between Rust being compiled to x64 in comparison to Rust being compiled to WASM. Also, from the WASM text file (below), I do not see why I should get a negative result when I run it with WASM.
Why does WASM produce a different result, and what can I do to correct the calculation?
(module
(type (;0;) (func (param i64) (result i64)))
(func $sum_sequence (type 0) (param i64) (result i64)
(local i64 i64 i32)
block ;; label = @1
local.get 0
i64.eqz
i32.eqz
br_if 0 (;@1;)
i64.const 0
return
end
i64.const 1
local.set 1
i64.const 0
local.set 2
block ;; label = @1
loop ;; label = @2
local.get 1
local.get 2
i64.add
local.set 2
local.get 1
local.get 1
local.get 0
i64.lt_u
local.tee 3
i64.extend_i32_u
i64.add
local.tee 1
local.get 0
i64.gt_u
br_if 1 (;@1;)
local.get 3
br_if 0 (;@2;)
end
end
local.get 2)
(table (;0;) 1 1 funcref)
(memory (;0;) 16)
(global (;0;) (mut i32) (i32.const 1048576))
(global (;1;) i32 (i32.const 1048576))
(global (;2;) i32 (i32.const 1048576))
(export "memory" (memory 0))
(export "sum_sequence" (func $sum_sequence))
(export "__data_end" (global 1))
(export "__heap_base" (global 2)))
It's because WASM has no separate unsigned 64-bit type: there is a single i64 type, and signedness is carried by the individual operations (note the i64.lt_u and i64.gt_u above), which is why i64 appears as the type of the arithmetic. The addition overflows 64 bits on both targets, since the correct output, n * (n + 1) / 2 = 50000000005000000000, doesn't fit in 64 bits, and the resulting bit pattern is identical on x64 and on WASM. The difference is purely in printing: the native build prints the result as a u64, while wasmtime interprets the returned i64 as signed, so you see a negative value.
Just for reference, Σ n=1 to N of n = N * (N + 1) / 2, which I use from here on out since it's much faster to compute and correct for our purposes.
The result, 50000000005000000000, takes about 65.4 bits to represent exactly, which is why you get wrapping behavior on both x86_64 and WASM; only the interpretation of the wrapped value differs.
Using NumPy, we can clearly confirm this:
>>> import numpy as np
>>> a = np.uint64(10000000000)
>>> b = np.uint64(10000000001)
>>> (a >> np.uint64(1)) * b
13106511857580896768
>>> import numpy as np
>>> a = np.int64(10000000000)
>>> b = np.int64(10000000001)
>>> (a >> np.int64(1)) * b
-5340232216128654848
The values you are getting are the unsigned and signed (two's complement) interpretations of the same overflowed 64-bit value. (Note: I'm using a right bit-shift to simulate division by two; the // operator would work just as well.)
EDIT: Also, a good point was raised in the comments by Herohtar: it clearly overflows if run in debug mode, panicking with 'attempt to add with overflow'.
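To see the two interpretations side by side, here is a minimal Rust sketch run on the host, using u128 to hold the exact sum before truncating to 64 bits:

```rust
fn main() {
    let n: u128 = 10_000_000_000;
    // Exact closed-form sum; too large for 64 bits.
    let exact = n * (n + 1) / 2; // 50000000005000000000
    // Truncating to 64 bits reproduces the wrapped value.
    let wrapped = exact as u64;
    assert_eq!(wrapped, 13106511857580896768); // what the x64 build prints
    // Reinterpreting the same bits as signed gives wasmtime's output.
    assert_eq!(wrapped as i64, -5340232216128654848);
    println!("u64: {}  i64: {}", wrapped, wrapped as i64);
}
```

The same 64 bits underlie both printed numbers; no extra computation differs between the two targets.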
This code works for setting the program counter to the address of the vector_table on the ARM architecture:
static mut JUMP: Option<extern "C" fn()> = None;
JUMP = Some(core::mem::transmute(vector_table));
(JUMP.unwrap())();
I calculate the vector table address using let vector_table = *((address + 4) as *const u32);
Is there any way of expressing the same in pure Rust code?
The equivalent C code is:
((void (*)(void))address[1])();
address is declared as uint32_t *address, so address[1] reads the word 4 bytes past address, which is the vector table entry.
I have compiled a DLL from a C++ Library I have written according to this tutorial.
The dumpbin for the dll is as follows:
Section contains the following exports for HelloDLL.dll
00000000 characteristics
FFFFFFFF time date stamp
0.00 version
1 ordinal base
6 number of functions
6 number of names
ordinal hint RVA name
1 0 00011032 ?Add@Functions@MathLibrary@@SANNN@Z = @ILT+45(?Add@Functions@MathLibrary@@SANNN@Z)
2 1 00011037 ?AddMultiply@Functions@MathLibrary@@SANNN@Z = @ILT+50(?AddMultiply@Functions@MathLibrary@@SANNN@Z)
3 2 000112EE ?Multiply@Functions@MathLibrary@@SANNN@Z = @ILT+745(?Multiply@Functions@MathLibrary@@SANNN@Z)
4 3 000110F5 Add = @ILT+240(_Add)
5 4 00011073 AddMultiply = @ILT+110(_AddMultiply)
6 5 0001105F Multiply = @ILT+90(_Multiply)
Now I want to use the Functions in an Excel-VBA Project like this:
Declare Function LIB_AddMultiply Lib "C:\Users\xxxx\source\repos\HelloDLL\Debug\HelloDLL.dll" Alias "AddMultiply" (ByVal a As Double, ByVal b As Double) As Double
Public Sub test()
Dim a As Double
Dim b As Double
a = 3
b = 4
Dim c As Double
c = LIB_AddMultiply(a, b)
MsgBox "hi " & c
End Sub
But whenever I run test() I get a "Bad DLL calling convention" error (Error 49).
I already looked at the following (and some other) resources, but couldn't resolve my problem:
Runtime Error 49, Bad DLL calling convention
Error 49
Declare Statement MSDN
Do you have any advice?
Thanks a lot...
UPDATE:
This is the code for the header file:
#pragma once
#ifdef MATHLIBRARY_EXPORTS
#define MATHLIBRARY_API __declspec(dllexport)
#else
#define MATHLIBRARY_API __declspec(dllimport)
#endif
namespace MathLibrary
{
class Functions
{
public:
static MATHLIBRARY_API double Add(double a, double b);
//[...]
};
extern "C" MATHLIBRARY_API double Add(double a, double b)
{
return MathLibrary::Functions::Add(a, b);
}
//[...]
}
Thank you Hans Passant for your help.
I changed the calling convention in the project properties,
then ran dumpbin again and saw my functions exported as
_Add@16
and so on, and then just changed the Alias in the VBA code accordingly.
I have the following code in a .dll:
namespace MyNamespace
{
extern "C" __declspec(dllexport) int __stdcall GetOptionID(unsigned long num)
{
return 0;
}
}
This is compiled on Visual C++ 2010, so I also have a .def file containing GetOptionID. I can see that the function is exported, and decorated as _GetOptionID@4, using dumpbin /exports:
File Type: DLL
Section contains the following exports for MyLibrary.dll
00000000 characteristics
53D269CB time date stamp Fri Jul 25 15:29:31 2014
0.00 version
1 ordinal base
13 number of functions
13 number of names
ordinal hint RVA name
1 0 0006F030 CmdOne = _CmdOne@16
2 1 0006F510 CmdUnimpl = _CmdUnimpl@16
3 2 0006EBB0 DefineThing = _DefineThing@32
4 3 0006E0C0 GetOptionID = _GetOptionID@4
In a separate executable, I attempt to check for the presence of GetOptionID:
HINSTANCE hinst = LoadLibraryEx(file_name, NULL, DONT_RESOLVE_DLL_REFERENCES);
if(!hinst)
return FALSE;
FARPROC_IDI lp = (FARPROC_IDI) GetProcAddress(hinst, "_GetOptionID@4");
auto e = GetLastError();
Running through this code in the debugger, I can see that:
LoadLibraryEx succeeds - I have a valid-looking hinst
GetProcAddress fails - lp is 0x00000000
GetLastError returns 127 (ERROR_PROC_NOT_FOUND)
I can see the function has been exported, and I can see its name matches the entry point I'm looking for. How come GetProcAddress is failing?
Ah, solved it myself. Listing the function in the .def file causes its name to be exported completely undecorated, so the correct target for GetProcAddress was simply "GetOptionID".
However, since I have other .dlls that undergo the same check and really do export _GetOptionID@4, the actual solution was to remove GetOptionID from the .def file.
At the moment I'm working on implementing a managed wrapper in C++/CLI, for a native C library.
I have no hard experience with the language, and this week has been a sort of crash course while trying to complete my project. Although it has a few surprising and frustrating quirks, there are plenty of good articles floating around.
Background:
I have a class (WrappedAvType) declared in a Common.h and the class members are implemented in a corresponding Common.cpp, and they are both in the project root directory.
Other files in the project are organized in their own directories and #include "..\Common.h" to access the class declaration it contains.
Environment: Visual Studio 2012 on Windows 8.1 x64, targeting .NET Framework 4.5, generating x86 binaries
Common.h:
#pragma once
namespace LibavDotNet {
template<class T>
public ref class WrappedAvType abstract
{
public:
property bool IsDirty;
WrappedAvType();
WrappedAvType(T avObject);
~WrappedAvType();
virtual T Unwrap();
private:
T _avObject;
protected:
virtual void SetAvObject(T avObject);
};
}
Common.cpp:
#include "Common.h"
namespace LibavDotNet {
template<class T>
WrappedAvType<T>::WrappedAvType()
{
IsDirty = false;
}
template<class T>
WrappedAvType<T>::WrappedAvType(T avObject)
{
_avObject = avObject;
}
template<class T>
WrappedAvType<T>::~WrappedAvType()
{ }
template<class T>
T WrappedAvType<T>::Unwrap()
{
return _avObject;
}
template<class T>
void WrappedAvType<T>::SetAvObject(T avObject)
{
_avObject = avObject;
}
}
However I'm currently stumped (maybe blinded from staring at the code for too damn long...) by a peculiar problem: whenever I compile the project in its described state, I receive error LNK2020 indicating that no token can be found for WrappedAvType.
To confirm that the symbols were available in the generated binary, I used dumpbin /symbols Common.obj and in fact there don't appear to be any:
Microsoft (R) COFF/PE Dumper Version 11.00.61030.0
Copyright (C) Microsoft Corporation. All rights reserved.
Dump of file Common.obj
File Type: COFF OBJECT
COFF SYMBOL TABLE
000 00CFEE66 ABS notype Static | #comp.id
001 80000191 ABS notype Static | #feat.00
002 00000000 SECT1 notype Static | .drectve
Section length 78, #relocs 0, #linenums 0, checksum 0
004 00000000 SECT2 notype Static | .debug$S
Section length 35C, #relocs 0, #linenums 0, checksum 0
006 00000000 SECT3 notype Static | .debug$T
Section length 7C, #relocs 0, #linenums 0, checksum 0
008 00000000 SECT4 notype Static | .cormeta
Section length 45C, #relocs 0, #linenums 0, checksum 0
String Table Size = 0x0 bytes
Summary
45C .cormeta
35C .debug$S
7C .debug$T
78 .drectve
If I include the class implementation directly with #include "..\Common.cpp" in the other source files that need it (only one other so far), then the project compiles as expected.
At this point I have no idea what else to try and have searched for a solution for hours. I'd very much like to keep my headers with class declarations and source files with class implementations separate.
I fully expect this to be something minor that I've overlooked or that I'm just too green to notice. What am I missing?
template<class T>
This is a standard C++ problem; it doesn't have anything to do with C++/CLI. Templates do not have external linkage, so the entire implementation of the template class must be present in the .h file. Which is why the #include worked.
Do keep the strong code smell in mind: templates are a pure C++ feature, and your wrapper isn't actually usable by any other .NET language, since they don't support C++ templates. Only ref classes that are declared with the generic keyword are usable from other languages.