A subset of my tests needs a resource that may only be created once.
This resource also has to be dropped at some point, but not before the last test that needs it has finished.
Currently I use a static OnceCell:
struct MyResource {}

impl Drop for MyResource {
    fn drop(&mut self) {
        println!("Dropped MyResource");
    }
}

impl MyResource {
    fn new() -> MyResource {
        println!("Created MyResource");
        MyResource {}
    }
}

fn main() {
    let _my_resource = MyResource::new();
    println!("Hello, world!");
}
#[cfg(test)]
mod tests {
    use super::*;
    use tokio::sync::OnceCell;

    static GET_RESOURCE: OnceCell<MyResource> = OnceCell::const_new();

    async fn init_my_resource() {
        GET_RESOURCE.get_or_init(|| async {
            MyResource::new()
        }).await;
    }

    #[tokio::test]
    async fn some_test() {
        init_my_resource().await;
        assert_eq!("some assert", "some assert");
    }
}
This works fine except that it never drops the resource.
Is there a better way to do this that also drops the resource?
The answer to How to run setup code before any tests run in Rust? does not answer this question, as it only covers how to initialize something once, not how to drop something after the tests have run.
The crate ctor provides the dtor attribute macro, which lets you run teardown code (and drop the resource there) once all your tests are done.
The most straightforward solution I found is offered by the crate static_init:
struct MyResource {}

impl Drop for MyResource {
    fn drop(&mut self) {
        println!("Dropped MyResource");
    }
}

impl MyResource {
    fn new() -> MyResource {
        println!("Created MyResource");
        MyResource {}
    }
}

fn main() {
    let _my_resource = MyResource::new();
    println!("Hello, world!");
}

#[cfg(test)]
mod tests {
    use super::*;
    use static_init::dynamic;

    #[dynamic(drop)]
    static mut RES: MyResource = MyResource::new();

    #[tokio::test]
    async fn some_test() {
        println!("Running test");
        assert_eq!("some assert", "some assert");
    }
}
Another proposed solution is to use the ctor and dtor macros from the ctor crate. However, using them with a plain static mut RES: MyResource requires unsafe code, since we need to mutate a static.
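For illustration only (this sketch is not from the original answers, and the setup/teardown names are mine), the unsafe block can be avoided by keeping the resource in a Mutex<Option<...>> that the #[ctor] and #[dtor] functions fill and drain. Note that the ctor crate documents caveats about what code may safely run before main and after it returns, for example regarding stdio:

#[cfg(test)]
mod tests {
    use super::*;
    use ctor::{ctor, dtor};
    use std::sync::Mutex;

    // Shared slot for the resource; Mutex<Option<...>> avoids a `static mut`.
    static RES: Mutex<Option<MyResource>> = Mutex::new(None);

    #[ctor]
    fn setup() {
        // Runs once before the test harness starts.
        *RES.lock().unwrap() = Some(MyResource::new());
    }

    #[dtor]
    fn teardown() {
        // Runs once after all tests have finished; dropping the Option's
        // contents runs MyResource's Drop impl.
        RES.lock().unwrap().take();
    }

    #[tokio::test]
    async fn some_test() {
        assert_eq!("some assert", "some assert");
    }
}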
Within the crate we can happily do something like this:
mod boundary {
    pub struct EventLoop;

    impl EventLoop {
        pub fn run(&self) {
            for _ in 0..2 {
                self.handle("bundled");
                self.foo();
            }
        }

        pub fn handle(&self, message: &str) {
            println!("{} handling", message)
        }
    }

    pub trait EventLoopExtend {
        fn foo(&self);
    }
}

use boundary::EventLoopExtend;

impl EventLoopExtend for boundary::EventLoop {
    fn foo(&self) {
        self.handle("extended")
    }
}

fn main() {
    let el = boundary::EventLoop {};
    el.run();
}
But if mod boundary were a crate boundary, we would get error[E0117]: only traits defined in the current crate can be implemented for arbitrary types.
I gather that a potential solution to this could be the newtype idiom, so something like this:
mod boundary {
    pub struct EventLoop;

    impl EventLoop {
        pub fn run(&self) {
            for _ in 0..2 {
                self.handle("bundled");
                self.foo();
            }
        }

        pub fn handle(&self, message: &str) {
            println!("{} handling", message)
        }
    }

    pub trait EventLoopExtend {
        fn foo(&self);
    }

    impl EventLoopExtend for EventLoop {
        fn foo(&self) {
            self.handle("unimplemented")
        }
    }
}

use boundary::{EventLoop, EventLoopExtend};

struct EventLoopNewType(EventLoop);

impl EventLoopExtend for EventLoopNewType {
    fn foo(&self) {
        self.0.handle("extended")
    }
}

fn main() {
    let el = EventLoopNewType(EventLoop {});
    el.0.run();
}
But then the problem here is that the extended trait behaviour isn't accessible from the underlying EventLoop instance.
I'm still quite new to Rust, so I'm sure I'm missing something obvious, I wouldn't be surprised if I need to take a completely different approach.
Specifically in my case, the event loop is actually from wgpu, and I'm curious if it's possible to build a library where end users can provide their own "render pass" stage.
Thanks to @AlexN's comment I dug deeper into the Strategy Pattern and found a solution:
mod boundary {
    pub struct EventLoop<'a, T: EventLoopExtend> {
        extension: &'a T
    }

    impl<'a, T: EventLoopExtend> EventLoop<'a, T> {
        pub fn new(extension: &'a T) -> Self {
            Self { extension }
        }

        pub fn run(&self) {
            for _ in 0..2 {
                self.handle("bundled");
                self.extension.foo(self);
            }
        }

        pub fn handle(&self, message: &str) {
            println!("{} handling", message)
        }
    }

    pub trait EventLoopExtend {
        fn foo<T: EventLoopExtend>(&self, el: &EventLoop<T>) {
            el.handle("unimplemented")
        }
    }
}

use boundary::{EventLoop, EventLoopExtend};

struct EventLoopExtension;

impl EventLoopExtend for EventLoopExtension {
    fn foo<T: EventLoopExtend>(&self, el: &EventLoop<T>) {
        el.handle("extended")
    }
}

fn main() {
    let el = EventLoop::new(&EventLoopExtension {});
    el.run();
}
The basic idea is to use generics with a trait bound. I think the first time I looked into this approach I was worried about type recursion. But it turns out passing the EventLoop object as an argument to EventLoopExtend trait methods is perfectly reasonable.
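For comparison only (this variant is not from the original answer), essentially the same design can be written with dynamic dispatch instead of generics, so EventLoop needs no type parameter:

mod boundary {
    pub trait EventLoopExtend {
        fn foo(&self, el: &EventLoop) {
            el.handle("unimplemented")
        }
    }

    pub struct EventLoop<'a> {
        extension: &'a dyn EventLoopExtend,
    }

    impl<'a> EventLoop<'a> {
        pub fn new(extension: &'a dyn EventLoopExtend) -> Self {
            Self { extension }
        }

        pub fn run(&self) {
            for _ in 0..2 {
                self.handle("bundled");
                // Dispatch through the trait object instead of a generic parameter.
                self.extension.foo(self);
            }
        }

        pub fn handle(&self, message: &str) {
            println!("{} handling", message)
        }
    }
}

use boundary::{EventLoop, EventLoopExtend};

struct EventLoopExtension;

impl EventLoopExtend for EventLoopExtension {
    fn foo(&self, el: &EventLoop) {
        el.handle("extended")
    }
}

fn main() {
    let extension = EventLoopExtension;
    let el = EventLoop::new(&extension);
    el.run();
}

The trait-object version trades a little runtime indirection for simpler type signatures, which can matter when the extension point is exposed in a public API.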
I'm creating a simple app with Rust and wasm-bindgen. I want to be able to call the initialize function from JavaScript, call functions that control the state from the Rust program, and then read a value from the state with another function. I cannot pass the entire struct; it isn't thread-safe or serializable.
Here's a simplified example of what I would want:
use wasm_bindgen::prelude::*;

#[wasm_bindgen]
pub fn initialize_state(value: i32) {
    state = State { a_value: value };
}

#[wasm_bindgen]
pub fn increment_value() {
    state.a_value += 1;
}

#[wasm_bindgen]
pub fn get_value() -> i32 {
    state.a_value
}

struct State {
    a_value: i32,
}
I tried using lazy_static and some other tricks to create a global variable for this, but I couldn't figure it out.
Eventually I ended up with this. It's unsafe, but it seems to work:
use wasm_bindgen::prelude::*;

static mut STATE: *mut State = std::ptr::null_mut::<State>();

#[wasm_bindgen]
pub fn initialize_state(value: i32) {
    unsafe {
        STATE = Box::leak(Box::new(State { a_value: value }));
    }
}

#[wasm_bindgen]
pub fn increment_value() {
    unsafe {
        (*STATE).a_value += 1;
    }
}

#[wasm_bindgen]
pub fn get_value() -> i32 {
    unsafe { (*STATE).a_value }
}

struct State {
    a_value: i32,
}
Is there a way of doing this safely?
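One commonly suggested direction, sketched here under the assumption that the module is compiled for single-threaded wasm32 (this is not from the original post; the function and field names just mirror the example above): hold the state in a thread_local! RefCell, which needs no unsafe.

use std::cell::RefCell;
use wasm_bindgen::prelude::*;

struct State {
    a_value: i32,
}

// Single-threaded WASM: a thread-local RefCell gives safe interior mutability.
thread_local! {
    static STATE: RefCell<Option<State>> = RefCell::new(None);
}

#[wasm_bindgen]
pub fn initialize_state(value: i32) {
    STATE.with(|s| *s.borrow_mut() = Some(State { a_value: value }));
}

#[wasm_bindgen]
pub fn increment_value() {
    STATE.with(|s| {
        if let Some(state) = s.borrow_mut().as_mut() {
            state.a_value += 1;
        }
    });
}

#[wasm_bindgen]
pub fn get_value() -> i32 {
    // Returns 0 if the state has not been initialized yet.
    STATE.with(|s| s.borrow().as_ref().map(|st| st.a_value).unwrap_or(0))
}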
How do I inject dependencies into my route handlers in Warp? A trivial example is as follows: I have a route that should serve a static value determined at startup time, but it is the filter that passes values into the final handler. How do I pass additional data in without creating global variables? This would be useful for dependency injection.
pub fn root_route() -> BoxedFilter<()> {
    warp::get().and(warp::path::end()).boxed()
}

pub async fn root_handler(git_sha: String) -> Result<impl warp::Reply, warp::Rejection> {
    Ok(warp::reply::json(
        json!({
            "sha": git_sha
        })
        .as_object()
        .unwrap(),
    ))
}

#[tokio::main]
async fn main() {
    let git_sha = "1234567890".to_string();
    let api = root_route().and_then(root_handler);
    warp::serve(api).run(([0, 0, 0, 0], 8080)).await;
}
Here is a simple example: by using .and() in conjunction with .map(move ||), you can add parameters to the tuple that is passed into the final handler function.
use warp::filters::BoxedFilter;
use warp::Filter;

#[macro_use]
extern crate serde_json;

pub fn root_route() -> BoxedFilter<()> {
    warp::get().and(warp::path::end()).boxed()
}

pub async fn root_handler(git_sha: String) -> Result<impl warp::Reply, warp::Rejection> {
    Ok(warp::reply::json(
        json!({
            "sha": git_sha
        })
        .as_object()
        .unwrap(),
    ))
}

pub fn with_sha(git_sha: String) -> impl Filter<Extract = (String,), Error = std::convert::Infallible> + Clone {
    warp::any().map(move || git_sha.clone())
}

#[tokio::main]
async fn main() {
    let git_sha = "1234567890".to_string();
    let api = root_route().and(with_sha(git_sha)).and_then(root_handler);
    warp::serve(api).run(([0, 0, 0, 0], 8080)).await;
}
I'm using the Iron framework to create a simple endpoint. I have stateful, mutable data that the endpoint needs access to.
Here's some code that shows my intention:
extern crate iron;
extern crate mount;

use iron::{Iron, Request, Response, IronResult};
use iron::status;
use mount::Mount;

static mut s_counter: Option<Counter> = None;

struct Counter {
    pub count: u8
}

impl Counter {
    pub fn new() -> Counter {
        Counter {
            count: 0
        }
    }

    pub fn inc(&mut self) {
        self.count += 1;
    }
}

fn main() {
    unsafe { s_counter = Some(Counter::new()); }

    let mut mount = Mount::new();
    mount.mount("/api/inc", inc);

    println!("Server running on http://localhost:3000");
    Iron::new(mount).http("127.0.0.1:3000").unwrap();
}

fn inc(req: &mut Request) -> IronResult<Response> {
    let mut counter: Counter;
    unsafe {
        counter = match s_counter {
            Some(counter) => counter,
            None => { panic!("counter not initialized"); }
        };
    }
    counter.inc();
    let resp = format!("{}", counter.count);
    Ok(Response::with((status::Ok, resp)))
}
This code doesn't compile:
error: cannot move out of static item
I'm hoping that there is a nicer way to do this, not involving any unsafe code or static mut. My question is: what is the idiomatic way to accomplish this?
I'd highly recommend reading the entirety of The Rust Programming Language, especially the chapter on concurrency. The Rust community has put a lot of effort into producing high quality documentation to help people out.
In this case, I'd probably just make the Counter struct an Iron Handler. Then I'd use an atomic variable inside the struct to hold the count without requiring mutability:
extern crate iron;
extern crate mount;

use std::sync::atomic::{AtomicUsize, Ordering};

use iron::{Iron, Request, Response, IronResult};
use iron::status;
use mount::Mount;

struct Counter {
    count: AtomicUsize,
}

impl Counter {
    pub fn new() -> Counter {
        Counter {
            count: AtomicUsize::new(0),
        }
    }
}

fn main() {
    let mut mount = Mount::new();
    mount.mount("/api/inc", Counter::new());

    println!("Server running on http://localhost:3000");
    Iron::new(mount).http("127.0.0.1:3000").unwrap();
}

impl iron::Handler for Counter {
    fn handle(&self, _: &mut Request) -> IronResult<Response> {
        // fetch_add returns the value *before* the increment.
        let old_count = self.count.fetch_add(1, Ordering::SeqCst);
        let resp = format!("{}", old_count);
        Ok(Response::with((status::Ok, resp)))
    }
}
I'm currently trying to understand how drop works. The following code crashes and I don't understand why. From my understanding, the usage of std::ptr::write should prevent the destructor (edit: of the original value, here: Rc) from running (in this case nothing bad should happen besides the memory leak). But it doesn't seem to do that, as this code (playpen, compile with -O0)
use std::rc::Rc;
use std::mem;
use std::ptr;

enum Foo {
    Bar(Rc<usize>),
    Baz
}

use Foo::*;

impl Drop for Foo {
    fn drop(&mut self) {
        match *self {
            Bar(_) => {
                unsafe { ptr::write(self, Foo::Baz) }
                //unsafe { mem::forget(mem::replace(self, Foo::Baz)) }
            }
            Baz => ()
        }
    }
}

fn main() {
    let _ = Foo::Bar(Rc::new(23));
}
gives an overflow error:
thread '<main>' panicked at 'arithmetic operation overflowed', /Users/rustbuild/src/rust-buildbot/slave/nightly-dist-rustc-mac/build/src/liballoc/rc.rs:755
The other variant quits with an illegal instruction. Why does that happen? How can I replace self with a value that will be properly dropped?
I'm not sure yet how to accomplish your goal, but I can show that "From my understanding, the usage of std::ptr::write should prevent the destructor from running" isn't true:
use std::mem;
use std::ptr;

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) { println!("Dropping!") }
}

enum Foo {
    Bar(Noisy),
    Baz
}

use Foo::*;

impl Drop for Foo {
    fn drop(&mut self) {
        println!("1");
        match self {
            &mut Bar(_) => {
                println!("2");
                unsafe { ptr::write(self, Foo::Baz) }
                println!("3");
            }
            &mut Baz => {
                println!("4");
            }
        }
        println!("5");
    }
}

fn main() {
    let _ = Foo::Bar(Noisy);
}
This prints:
1
2
3
5
Dropping!
This indicates that the destructor for Foo::Bar is still being run, including the destructor of Noisy.
A potential solution is to use Option::take:
use std::mem;

struct Noisy;

impl Drop for Noisy {
    fn drop(&mut self) { println!("Dropping!") }
}

enum Foo {
    Bar(Option<Noisy>),
    Baz
}

impl Drop for Foo {
    fn drop(&mut self) {
        match *self {
            Foo::Bar(ref mut x) => {
                unsafe { mem::forget(x.take()) }
            }
            Foo::Baz => {}
        }
    }
}

fn main() {
    let _ = Foo::Bar(Some(Noisy));
}