I'm trying to do some dynamic library loading in Rust, and I'm getting a segmentation fault when passing a large Vec from a dynamically loaded library function. It's a basic function that creates a Vec<i32> of a specified size. If the Vec gets much bigger than 8MB, the program hits a segfault on OSX. I haven't had the same problem when running on Linux. Can anyone look at this and tell me if I'm doing something wrong here? I'm running it with:
$ cargo build --release
$ ./target/release/linkfoo
8281
[1] 84253 segmentation fault ./target/release/linkfoo
Cargo.toml
[package]
name = "linkfoo"
version = "0.1.0"
authors = ["Nate Mara <nathan.mara#kroger.com>"]
[dependencies]
libloading = "0.3.0"
[lib]
name = "foo"
crate-type = ["dylib"]
main.rs
extern crate libloading as lib;

fn main() {
    let dylib = lib::Library::new("./target/release/libfoo.dylib").expect("Failed to load library");

    let func = unsafe {
        let wrapped_func: lib::Symbol<fn(&[i32]) -> Vec<i32>> = dylib.get(b"alloc")
            .expect("Failed to load function");
        wrapped_func.into_raw()
    };

    let args = vec![8182];
    println!("{}", func(&args).len());
}
lib.rs
#[no_mangle]
pub fn alloc(args: &[i32]) -> Vec<i32> {
    let size = args[0] as usize;
    let mut mat = Vec::with_capacity(size);

    for _ in 0..size {
        mat.push(0);
    }

    mat
}
Rust uses the system allocator for dynamic libraries, and jemalloc for all other code. This difference in allocators was causing the segfault, and I was able to fix it by adding this to the top of main.rs:
#![feature(alloc_system)]
extern crate alloc_system;
I want to build a no_std static library with Rust. I have the following Cargo.toml:
[package]
name = "nostdlb"
version = "0.1.0"
edition = "2021"
[lib]
crate-type = ["staticlib"]
[profile.dev]
panic = "abort"
[profile.release]
panic = "abort"
lib.rs:
#![no_std]

pub fn add(left: usize, right: usize) -> usize {
    left + right
}
Despite setting the panic behaviour for both the dev and release profiles to abort, cargo gives the following error:
error: `#[panic_handler]` function required, but not found
error: could not compile `nostdlb` due to previous error
I thought the panic handler is only required when there is no stack unwinding provided by std?
No, you need to write your own. If you just need to abort, consider using the following one:
#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
    loop {}
}
I also highly recommend disabling name mangling on the function and using the C calling convention if you're going to use this symbol somewhere (that is, not import the library as a crate, but link it manually):
#[no_mangle]
pub extern "C" fn add(left: usize, right: usize) -> usize {
    left + right
}
I am working on a library that is configurable with cargo features, and I can't figure out how to get the bencher crate (for benchmarking) to work. In lib.rs I have
#[cfg(feature = "single_threaded")]
pub fn my_function() {...}
In benches/bench.rs I have
#[macro_use]
extern crate bencher;
extern crate my_crate;
use bencher::Bencher;
use my_crate::*;
fn single_thread(bench: &mut Bencher) {
    bench.iter(|| {
        for _ in 0..10 {
            my_function();
        }
    })
}
benchmark_group!(benches, single_thread);
benchmark_main!(benches);
As is, the compiler says that it cannot find my_function because I haven't specified a configuration. If I add #[cfg(feature = "single_threaded")] above fn single_thread(), it can then find my_function, but that seems to put single_thread in a different context from everything else, such that the two macros at the bottom cannot find single_thread().
If I add #[cfg(feature = "single_threaded")] above each of the two macros, the compiler says to "consider adding a main function to benches/bench.rs," but a main function is added by benchmark_main!. If I put the entire file into a module and declare #[cfg(feature = "single_threaded")] once for the whole module, I get the same error about not having a main function. Any suggestions?
Oh, and my Cargo.toml looks like this:
[package]
name = "my_crate"
version = "0.1.0"
edition = "2021"
authors = ["Me"]
[dependencies]
[dev-dependencies]
bencher = "0.1.5"
[features]
single_threaded = []
[[bench]]
name = "benches"
harness = false
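One possible workaround (my own sketch, not an answer from this thread): keep a #[cfg] attribute on every benchmark item, including the two macro invocations, and add an empty main that is only compiled when the feature is off, so the bench target always has an entry point.
// benches/bench.rs (sketch)
#[macro_use]
extern crate bencher;
extern crate my_crate;

#[cfg(feature = "single_threaded")]
use bencher::Bencher;
#[cfg(feature = "single_threaded")]
use my_crate::*;

#[cfg(feature = "single_threaded")]
fn single_thread(bench: &mut Bencher) {
    bench.iter(|| {
        for _ in 0..10 {
            my_function();
        }
    })
}

#[cfg(feature = "single_threaded")]
benchmark_group!(benches, single_thread);
#[cfg(feature = "single_threaded")]
benchmark_main!(benches);

// Without the feature, benchmark_main! is compiled out and no main is
// generated, so provide an empty one to keep the bench target building.
#[cfg(not(feature = "single_threaded"))]
fn main() {}
The benchmark itself would then be run with cargo bench --features single_threaded.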
I am writing a library which will leverage LLVM (via inkwell) to JIT compile some functions. The functions need to be able to call back into some rust functions in my code.
I have it working, but my unit tests don't work, because it seems that the callback functions are optimized away when building the tests. These callback functions are not called by the Rust code itself - only by the dynamically generated JIT functions - so I guess the linker thinks they are unused and drops them.
If I call them from the Rust code within a unit test, then they are not removed, but that is not a desirable workaround. Also note that the functions are not removed when the package is built as a library, only when building the tests.
Below is an MVCE.
// lib.rs
use inkwell::{OptimizationLevel, context::Context};
use inkwell::execution_engine::JitFunction;

#[no_mangle]
pub extern "C" fn my_callback(x: i64) {
    println!("Called Back x={}", x);
}

type FuncType = unsafe extern "C" fn();

pub fn compile_fn() {
    let context = Context::create();
    let module = context.create_module("test");
    let execution_engine = module.create_jit_execution_engine(OptimizationLevel::None).unwrap();
    let builder = context.create_builder();

    let func_type = context.void_type().fn_type(&[], false);
    let function = module.add_function("test", func_type, None);
    let basic_block = context.append_basic_block(function, "entry");
    builder.position_at_end(basic_block);

    let cb_fn_type = context.void_type().fn_type(&[context.i64_type().into()], false);
    let cb_fn = module.add_function("my_callback", cb_fn_type, None);
    let x = context.i64_type().const_int(42, false);
    builder.build_call(cb_fn, &[x.into()], "callback");
    builder.build_return(None);

    function.print_to_stderr();

    let jit_func: JitFunction<FuncType> = unsafe { execution_engine.get_function("test").unwrap() };
    unsafe { jit_func.call() };
}

#[cfg(test)]
mod tests {
    #[test]
    fn it_works() {
        // If I uncomment this line, it works, otherwise it segfaults
        //super::my_callback(1);
        super::compile_fn();
    }
}
# Cargo.toml
[package]
name = "so_example_lib"
version = "0.1.0"
authors = ["harmic <harmic#no-reply.com>"]
edition = "2018"
[dependencies]
inkwell = { git = "https://github.com/TheDan64/inkwell", branch = "master", features = ["llvm7-0"] }
// build.rs
fn main() {
    println!("cargo:rustc-link-search=/usr/lib64/llvm7.0/lib");
    println!("cargo:rustc-link-lib=LLVM-7");
}
You can tell the function has been removed by running nm on the resulting test binary. When you run the test, it segfaults. If you run it in gdb, you can see it is trying to call a function at 0x0000000000000000.
#0 0x0000000000000000 in ?? ()
#1 0x00007ffff7ff201f in test ()
How can I instruct Rust not to drop these functions?
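One way around this (my own sketch, not from the original post) is to stop relying on the JIT finding the symbol in the test binary and instead register the callback's address with the execution engine explicitly. This assumes inkwell's ExecutionEngine exposes add_global_mapping (a wrapper around LLVM's LLVMAddGlobalMapping); the relevant change inside compile_fn would look roughly like:
// Sketch only: explicitly map the declared "my_callback" function to the
// address of the Rust callback, so the JIT does not need to resolve the
// symbol from the (possibly stripped) test binary.
let cb_fn = module.add_function("my_callback", cb_fn_type, None);
execution_engine.add_global_mapping(&cb_fn, my_callback as usize);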
As the title indicates, I'm confused as to how shared libraries work with thread locals in Rust. I have a minimal example below:
In a crate called minimal_thread_local_example:
Cargo.toml:
[package]
name = "minimal_thread_local_example"
version = "0.1.0"
edition = "2018"
[dependencies]
has_thread_local = {path ="./has_thread_local"}
libloading = "0.5"
[workspace]
members = ["shared_library","has_thread_local"]
src/main.rs:
extern crate libloading;

use libloading::{Library, Symbol};
use has_thread_local::{set_thread_local, get_thread_local};

fn main() {
    let lib = Library::new("libshared_library.so").unwrap();
    set_thread_local(10);
    unsafe {
        let func: Symbol<unsafe extern fn() -> u32> = lib.get(b"print_local").unwrap();
        func();
    };
    println!("From static executable:{}", get_thread_local());
}
In a crate called has_thread_local:
Cargo.toml:
[package]
name = "has_thread_local"
version = "0.1.0"
edition = "2018"
[lib]
[dependencies]
src/lib.rs:
use std::cell::RefCell;
use std::ops::Deref;

thread_local! {
    pub static A_THREAD_LOCAL: RefCell<u64> = RefCell::new(0);
}

pub fn set_thread_local(val: u64) {
    A_THREAD_LOCAL.with(|refcell| { refcell.replace(val); })
}

pub fn get_thread_local() -> u64 {
    A_THREAD_LOCAL.with(|refcell| *refcell.borrow().deref())
}
In a crate called shared_library:
Cargo.toml:
[package]
name = "shared-library"
version = "0.1.0"
edition = "2018"
[lib]
crate-type = ["cdylib"]
[dependencies]
has_thread_local = {path = "../has_thread_local"}
src/lib.rs:
use has_thread_local::get_thread_local;

#[no_mangle]
unsafe extern "system" fn print_local() {
    println!("From shared library:{}", get_thread_local());
}
Here's a github link for the above.
In essence I have a static executable and a shared library, with a thread local variable declared in the static executable. I then set that variable to 10 and access it from the shared library and static executable.
This outputs:
From shared library:0
From static executable:10
I'm confused as to why this is the output (it occurs on both stable and nightly). I would have imagined that both would be 10, since the thread local is declared in the static executable and is only accessed via functions also located in that executable. I'm looking for an explanation of why I am observing this behavior, and for a way to make my thread local hold the same value across the entire thread, i.e. the same value in both the shared library and the static executable.
This behavior occurs because the shared library contains its own copy of the code of the crates it depends on, so there end up being two distinct thread-local variables: one in the executable and one in the shared library.
The solution is to pass a reference to the thread local in question instead of accessing the thread local directly, as sketched below. See this question for more information on how to obtain a reference to a thread local: How to create a thread local variable inside of a Rust struct?
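A minimal sketch of that idea (my own illustration, not code from the linked answer), assuming both binaries are built by the same compiler, since a RefCell crosses the boundary:
// In src/main.rs: borrow the executable's thread local and hand a raw pointer
// to the dynamically loaded function for the duration of the call.
use std::cell::RefCell;
use has_thread_local::A_THREAD_LOCAL;

A_THREAD_LOCAL.with(|cell| unsafe {
    let func: Symbol<unsafe extern "system" fn(*const RefCell<u64>)> =
        lib.get(b"print_local").unwrap();
    func(cell as *const RefCell<u64>);
});

// In shared_library/src/lib.rs: read through the pointer the caller passed in
// rather than this library's own copy of the thread local.
#[no_mangle]
unsafe extern "system" fn print_local(cell: *const RefCell<u64>) {
    println!("From shared library:{}", *(*cell).borrow());
}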
I'm trying to link a Rust program with libsoundio. I'm using Windows and there's a GCC binary download available. I can link it like this if I put it in the same folder as my project:
#[link(name = ":libsoundio-1.1.0/i686/libsoundio.a")]
#[link(name = "ole32")]
extern {
fn soundio_version_string() -> *const c_char;
}
But I really want to specify #[link(name = "libsoundio")] or even #[link(name = "soundio")], and then provide a linker path somewhere else.
Where can I specify that path?
I tried the rustc-link-search suggestion as follows:
#[link(name = "libsoundio")]
#[link(name = "ole32")]
extern {
fn soundio_version_string() -> *const c_char;
}
And in .cargo/config:
[target.i686-pc-windows-gnu.libsoundio]
rustc-link-search = ["libsoundio-1.1.0/i686"]
rustc-link-lib = ["libsoundio.a"]
[target.x86_64-pc-windows-gnu.libsoundio]
rustc-link-search = ["libsoundio-1.1.0/x86_64"]
rustc-link-lib = ["libsoundio.a"]
But it still only passes "-l" "libsoundio" to gcc and fails with the same ld: cannot find -llibsoundio. Am I missing something really obvious? The docs seem to suggest this should work.
As stated in the documentation for a build script:
All the lines printed to stdout by a build script [... starting] with cargo: are interpreted directly by Cargo [...] rustc-link-search indicates the specified value should be passed to the compiler as a -L flag.
In your Cargo.toml:
[package]
name = "link-example"
version = "0.1.0"
authors = ["An Devloper <an.devloper#example.com>"]
build = "build.rs"
And your build.rs:
fn main() {
    println!(r"cargo:rustc-link-search=C:\Rust\linka\libsoundio-1.1.0\i686");
}
Note that your build script can use all the power of Rust and can output different values depending on target platform (e.g. 32- and 64-bit).
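For example, a build script along these lines (a sketch; the directory names are taken from the question, and the environment variable is the one Cargo sets for build scripts) could pick the search path per target architecture:
// build.rs (sketch)
use std::env;

fn main() {
    // Cargo exposes the target architecture to build scripts.
    let arch = env::var("CARGO_CFG_TARGET_ARCH").unwrap();
    let dir = match arch.as_str() {
        "x86" => r"libsoundio-1.1.0\i686",
        "x86_64" => r"libsoundio-1.1.0\x86_64",
        other => panic!("unsupported target arch: {}", other),
    };
    println!("cargo:rustc-link-search={}", dir);
}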
Finally, your code:
extern crate libc;

use libc::c_char;
use std::ffi::CStr;

#[link(name = "soundio")]
extern {
    fn soundio_version_string() -> *const c_char;
}

fn main() {
    let v = unsafe { CStr::from_ptr(soundio_version_string()) };
    println!("{:?}", v);
}
The proof is in the pudding:
$ cargo run
Finished debug [unoptimized + debuginfo] target(s) in 0.0 secs
Running `target\debug\linka.exe`
"1.0.3"
Ideally, you will create a soundio-sys package, using the convention for *-sys packages. That simply has a build script that links to the appropriate libraries and exposes the C methods. It will use the Cargo links key to uniquely identify the native library and prevent linking to it multiple times. Other libraries can then include this new crate and not worry about those linking details.
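A rough outline of what such a hypothetical soundio-sys crate could look like (the names and paths are illustrative, not an existing published crate):
# soundio-sys/Cargo.toml (sketch)
[package]
name = "soundio-sys"
version = "0.1.0"
links = "soundio"
build = "build.rs"

[dependencies]
libc = "0.2"
// soundio-sys/build.rs (sketch)
fn main() {
    println!(r"cargo:rustc-link-search=C:\Rust\linka\libsoundio-1.1.0\i686");
}
// soundio-sys/src/lib.rs (sketch): raw FFI declarations only, no safe wrappers.
extern crate libc;
use libc::c_char;

extern {
    pub fn soundio_version_string() -> *const c_char;
}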
Another possible way is setting the RUSTFLAGS like:
RUSTFLAGS='-L my/lib/location' cargo build # or cargo run
I don't know if this is the most organized and recommended approach, but it worked for my simple project.
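If you'd rather not set the environment variable on every invocation, the same flags can be made persistent with Cargo's build.rustflags configuration key in .cargo/config (standard Cargo configuration, not something from the answers above):
[build]
rustflags = ["-L", "my/lib/location"]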
I found something that works OK: you can specify links in your Cargo.toml:
[package]
links = "libsoundio"
build = "build.rs"
This specifies that the project links to libsoundio. Now you can specify the search path and library name in the .cargo/config file:
[target.i686-pc-windows-gnu.libsoundio]
rustc-link-search = ["libsoundio-1.1.0/i686"]
rustc-link-lib = [":libsoundio.a"]
[target.x86_64-pc-windows-gnu.libsoundio]
rustc-link-search = ["libsoundio-1.1.0/x86_64"]
rustc-link-lib = [":libsoundio.a"]
(The : prefix tells GCC to use the actual filename and not to do all its idiotic lib-prepending and extension magic.)
You also need to create an empty build.rs:
fn main() {}
This file is never run, because the values in .cargo/config override its output, but for some reason Cargo still requires it - any time you use links = you have to have build =, even if it isn't used.
Finally in main.rs:
#[link(name = "libsoundio")]
#[link(name = "ole32")]
extern {
fn soundio_version_string() -> *const c_char;
}