I wrote this example (not working):
use std::sync::Mutex;
use std::ops::Deref;

pub struct DBAndCFs {
    db: i32,
    cfs: Vec<i32>,
}

fn main() {
    let x: Mutex<Option<DBAndCFs>> = Mutex::new(Some(DBAndCFs { db: 0, cfs: Vec::new() }));
    let DBAndCFs { ref db, ref cfs } = x.lock().unwrap().deref();
}
I have been following the docs but I am still unable to assign the db and cfs fields to the variables on the left.
Mutex::lock() returns Result<MutexGuard, PoisonError>. The first .unwrap() unwraps this Result. You need another .unwrap() to unwrap the Option, and also .as_ref() to not move out of it:
let DBAndCFs { ref db, ref cfs } = x.lock().unwrap().as_ref().unwrap();
You also don't need the ref because of match ergonomics (see What does pattern-matching a non-reference against a reference do in Rust?):
let DBAndCFs { db, cfs } = x.lock().unwrap().as_ref().unwrap();
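Put together, a minimal complete version with that fix applied (a sketch; it keeps the guard in a local binding so the borrowed fields stay valid while they are used):

use std::sync::Mutex;

pub struct DBAndCFs {
    db: i32,
    cfs: Vec<i32>,
}

fn main() {
    let x: Mutex<Option<DBAndCFs>> = Mutex::new(Some(DBAndCFs { db: 0, cfs: Vec::new() }));
    // Keep the guard alive in a local so the references into it remain valid.
    let guard = x.lock().unwrap();
    let DBAndCFs { db, cfs } = guard.as_ref().unwrap();
    println!("db = {}, cfs.len() = {}", db, cfs.len());
}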
Using an async predicate to filter a list of values makes Rust complain about lifetimes. Even though the collection is awaited, meaning the predicate cannot outlive the values being filtered, Rust remains skeptical.
Full repro below, with playground here. Note that it filters on a non-Copy struct we'd rather pass by reference, rather than a simple value we could just copy and forget without incurring overhead.
use futures::stream::iter;
use futures::StreamExt;

#[derive(Debug)]
struct Foo {
    bar: usize,
}

impl Foo {
    fn new(bar: usize) -> Self {
        Self { bar }
    }
}

#[tokio::main]
async fn main() {
    let arr = vec![
        Foo::new(0),
        Foo::new(1),
        Foo::new(2),
    ];
    let filtered = iter(arr)
        .filter(|f| async { compute_baz(f).await > 0 })
        .collect::<Vec<_>>()
        .await;
    // should print Foo { bar: 1 } and Foo { bar: 2 }
    println!("{:?}", filtered);
}

async fn compute_baz(foo: &Foo) -> usize {
    // ...do lengthy task...
    foo.bar
}
Update
As #Ceasar pointed out below, the async functions are not run in parallel. Can that be done?
I'm trying to do something like:
let filter_mask = join_all(items.map(predicate));
let filtered = items.filter(|i| filter_mask[i]).collect::<Vec<_>>();
without the clutter.
An easy workaround is to avoid the closure:
let mut filtered = vec![];
for f in arr.iter() {
    if compute_baz(f).await > 0 {
        filtered.push(f);
    }
}
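For the update's question about evaluating the predicates concurrently, here is a minimal sketch of the mask idea using futures::future::join_all (filter_parallel is a made-up helper name; it assumes Foo and compute_baz from above):

use futures::future::join_all;

async fn filter_parallel(items: Vec<Foo>) -> Vec<Foo> {
    // Evaluate every predicate concurrently; each future only borrows its item.
    let mask = join_all(items.iter().map(|f| compute_baz(f))).await;
    // Zip the results back onto the items and keep the ones that pass.
    items
        .into_iter()
        .zip(mask)
        .filter(|(_, baz)| *baz > 0)
        .map(|(item, _)| item)
        .collect()
}

This runs the calls concurrently on the same task rather than in parallel threads, which is usually enough for I/O-bound predicates.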
I am new to Rust, as will probably be obvious.
Basically, I have the scenario you can see below: I create a new type with a closure attached to it, but this closure needs to access data which has not yet been created. The data will exist by the time the closure is called, but it is not available when the closure is created.
What is the best way to deal with this?
I am also curious: if my closure were not a closure but rather a private function in my implementation, how would I access that data? This closure/function is a callback from WasmTime and requires an explicit signature which does not allow me to add &self to it. So how could I get at the instance fields of the implementation without a reference to &self in the function parameters?
pub struct EmWasmNode {
    wasmStore: Store<WasiCtx>,
    wasmTable: Table,
}

impl EmWasmNode {
    pub fn new(filePath: &str) -> Result<Self> {
        let engine = Engine::default();
        // let module = Module::from_file(&engine, "wasm/index.wast")?;
        let module = Module::from_file(&engine, filePath)?;
        let mut linker = Linker::new(&engine);
        wasmtime_wasi::add_to_linker(&mut linker, |s| s)?;
        let wasi = WasiCtxBuilder::new()
            .inherit_stdio()
            .inherit_args()?
            .build();
        let mut store = Store::new(&engine, wasi);
        linker.func_wrap("env", "emscripten_set_main_loop", |p0: i32, p1: i32, p2: i32| {
            println!("emscripten_set_main_loop {} {} {}", p0, p1, p2);
            /*** How would I access wasmTable and wasmStore from here to execute more methods??? ***/
            //let browserIterationFuncOption: Option<wasmtime::Val> = Self::wasmTable.get(&mut Self::wasmStore, p0 as u32);
            //browserIterationFuncOption.unwrap().unwrap_funcref().call(&store, ());
        })?;
        let instance = linker.instantiate(&mut store, &module)?;
        let table = instance
            .get_export(&mut store, "__indirect_function_table")
            .as_ref()
            .and_then(extern_table)
            .cloned();
        let start = instance.get_typed_func::<(), (), _>(&mut store, "_start")?;
        start.call(&mut store, ())?;
        Ok(EmWasmNode {
            wasmStore: store,
            wasmTable: table.unwrap(),
        })
    }
}
You have to instantiate the struct first. I suggest the simpler code below to show my idea.
struct Atype {
    name: String,
}

impl Atype {
    pub fn new() -> Self {
        Self { name: String::from("zeppi") }
    }

    pub fn test(&self) {
        let func = |x| { println!("{} {}", &self.name, x); };
        func(3)
    }
}

fn main() {
    let o = Atype::new();
    o.test();
}
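The question's "data not yet created" problem can also be handled by capturing a shared, initially empty slot and filling it once the data exists. A minimal sketch of that pattern (not WasmTime-specific; the names are made up):

use std::sync::{Arc, Mutex};

fn main() {
    // Shared slot the closure captures before the data exists.
    let slot: Arc<Mutex<Option<String>>> = Arc::new(Mutex::new(None));

    let slot_for_closure = Arc::clone(&slot);
    let callback = move || {
        // By the time this is called, the slot has been filled.
        if let Some(data) = slot_for_closure.lock().unwrap().as_ref() {
            println!("callback sees: {}", data);
        }
    };

    // ...later, once the data becomes available...
    *slot.lock().unwrap() = Some(String::from("created later"));

    callback();
}

For the WasmTime case specifically, host functions registered with func_wrap can usually take a Caller as their first parameter, which gives access to the store's data and the caller's exports from inside the callback; whether that fits here depends on the wasmtime version in use.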
I'm using the enum_dispatch crate and want to wrap some of my MyBehaviorEnum values in a Mutex, then insert them into a HashMap. Without the Mutex, when I get items from the HashMap I can easily pattern match on the different MyBehaviorEnum variants. But I'm not sure how to do that kind of matching when the MyBehaviorEnum values are wrapped in a Mutex, or what an idiomatic approach might look like.
enum_dispatch = "0.3.7"
use core::convert::TryInto;
use enum_dispatch::enum_dispatch;
use std::sync::Mutex;
use std::collections::HashMap;

struct MyImplementorA {}
impl MyBehavior for MyImplementorA {
    fn my_trait_method(&self) {}
}

struct MyImplementorB {}
impl MyBehavior for MyImplementorB {
    fn my_trait_method(&self) {}
}

#[enum_dispatch]
enum MyBehaviorEnum {
    MyImplementorA,
    MyImplementorB,
}

#[enum_dispatch(MyBehaviorEnum)]
trait MyBehavior {
    fn my_trait_method(&self);
}

fn main() {
    // No Mutex wrapper
    let a: MyBehaviorEnum = MyImplementorA {}.into();
    let a2: MyBehaviorEnum = MyImplementorA {}.into();
    let mut map = HashMap::new();
    map.insert("First", a);
    map.insert("Second", a2);
    match map.get_mut("First") {
        Some(MyBehaviorEnum::MyImplementorA(a_instance)) => {
            a_instance.my_trait_method();
        }
        _ => (),
    }

    // Implementor enum values are wrapped in Mutex, then inserted into HashMap
    let a: MyBehaviorEnum = MyImplementorA {}.into();
    let a2: MyBehaviorEnum = MyImplementorA {}.into();
    let mut map = HashMap::new();
    map.insert("First", Mutex::new(a));
    map.insert("Second", Mutex::new(a2));
    // This attempt doesn't compile: Mutex::into_inner takes the Mutex by value,
    // and the outer match is missing a None arm.
    match map.get_mut("First") {
        Some(mutex_impl_a) => {
            match Mutex::into_inner(mutex_impl_a) {
                Ok(MyBehaviorEnum::MyImplementorA(a_instance)) => {
                    a_instance.my_trait_method();
                }
                _ => (),
            }
        }
    }
}
When working with the Mutex-wrapped values, use:
match map.get_mut("First"){
Some(mutex_impl_a)=>{
match &*mutex_impl_a.lock().unwrap(){
MyBehaviorEnum::MyImplementorA(a_instance)=>{
dbg!("got it!");
}
_=>()
}
}
_=>()
}
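As an aside, since the map is owned here and get_mut already yields a mutable reference to the entry, Mutex::get_mut can borrow the contents without locking at all. A small sketch under that assumption:

if let Some(mutex_impl_a) = map.get_mut("First") {
    // No lock needed: the &mut Mutex proves we have exclusive access.
    if let MyBehaviorEnum::MyImplementorA(a_instance) = mutex_impl_a.get_mut().unwrap() {
        a_instance.my_trait_method();
    }
}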
I've failed to get this code past the borrow-checker:
use std::sync::Arc;
use std::thread::{sleep, spawn};
use std::time::Duration;

#[derive(Debug, Clone)]
struct State {
    count: u64,
    not_copyable: Vec<u8>,
}

fn bar(thread_num: u8, arc_state: Arc<State>) {
    let state = arc_state.clone();
    loop {
        sleep(Duration::from_millis(1000));
        println!("thread_num: {}, state.count: {}", thread_num, state.count);
    }
}

fn main() -> std::io::Result<()> {
    let mut state = State {
        count: 0,
        not_copyable: vec![],
    };
    let arc_state = Arc::new(state);
    for i in 0..2 {
        spawn(move || {
            bar(i, arc_state.clone());
        });
    }
    loop {
        sleep(Duration::from_millis(300));
        state.count += 1;
    }
}
I'm probably trying the wrong thing.
I want one (main) thread which can update state and many threads which can read state.
How should I do this in Rust?
I have read the Rust book on shared state, but that uses mutexes which seem overly complex for a single writer / multiple reader situation.
In C I would achieve this with a generous sprinkling of _Atomic.
Atomics are indeed a proper way; there are plenty of them in std::sync::atomic. Your example needs two fixes.
Arc must be cloned before moving into the closure, so your loop becomes:
for i in 0..2 {
    let arc_state = arc_state.clone();
    spawn(move || { bar(i, arc_state); });
}
Using AtomicU64 is fairly straightforward, though you need to explicitly use its methods with a specified Ordering (Playground):
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc;
use std::thread::{sleep, spawn};
use std::time::Duration;

#[derive(Debug)]
struct State {
    count: AtomicU64,
    not_copyable: Vec<u8>,
}

fn bar(thread_num: u8, arc_state: Arc<State>) {
    let state = arc_state.clone();
    loop {
        sleep(Duration::from_millis(1000));
        println!(
            "thread_num: {}, state.count: {}",
            thread_num,
            state.count.load(Ordering::Relaxed)
        );
    }
}

fn main() -> std::io::Result<()> {
    let state = State {
        count: AtomicU64::new(0),
        not_copyable: vec![],
    };
    let arc_state = Arc::new(state);
    for i in 0..2 {
        let arc_state = arc_state.clone();
        spawn(move || {
            bar(i, arc_state);
        });
    }
    loop {
        sleep(Duration::from_millis(300));
        // you can't use `state` here, because it moved
        arc_state.count.fetch_add(1, Ordering::Relaxed);
    }
}
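If the shared state grows beyond a single counter (the question's not_copyable field, for example), atomics alone won't cover it. A common single-writer / many-reader arrangement in that case is to wrap the whole struct in Arc<RwLock<...>>; a sketch of that alternative, not taken from the answer above:

use std::sync::{Arc, RwLock};
use std::thread::{sleep, spawn};
use std::time::Duration;

#[derive(Debug)]
struct State {
    count: u64,
    not_copyable: Vec<u8>,
}

fn main() {
    let arc_state = Arc::new(RwLock::new(State {
        count: 0,
        not_copyable: vec![],
    }));

    for i in 0..2 {
        let arc_state = arc_state.clone();
        spawn(move || loop {
            sleep(Duration::from_millis(1000));
            // Readers take a shared lock; any number can hold it at once.
            let state = arc_state.read().unwrap();
            println!("thread_num: {}, state.count: {}", i, state.count);
        });
    }

    loop {
        sleep(Duration::from_millis(300));
        // The single writer takes the exclusive lock only for the update.
        arc_state.write().unwrap().count += 1;
    }
}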
Is it possible to borrow a mutable reference to the contents of a HashMap and use it for an extended period of time without impeding read-only access?
This is for trying to maintain a window into the state of various components in a system that are running independently (via Tokio) and need to be monitored.
As an example:
use std::sync::Arc;
use std::collections::HashMap;

struct Container {
    running: bool,
    count: u8,
}

impl Container {
    fn run(&mut self) {
        for i in 1..100 {
            self.count = i;
        }
        self.running = false;
    }
}

fn main() {
    let mut map = HashMap::new();
    let mut container = Arc::new(Box::new(Container {
        running: true,
        count: 0,
    }));
    map.insert(0, container.clone());
    container.run();
    map.remove(&0);
}
This is for a Tokio-driven program where multiple operations will be happening asynchronously and visibility into the overall state of them is required.
There's this question where a temporary mutable reference can be borrowed, but that won't work as the run() function needs time to complete.
Based on suggestions from Jmb and Stargateur, I reworked this to use a RwLock internally. These internals could be tidied up by adding methods that manipulate them, but the basics are here:
use std::sync::Arc;
use std::sync::RwLock;
use std::collections::HashMap;

#[derive(Debug)]
struct ContainerState {
    running: bool,
    count: u8,
}

struct Container {
    state: Arc<RwLock<ContainerState>>,
}

impl Container {
    fn run(&self) {
        for i in 1..100 {
            let mut state = self.state.write().unwrap();
            state.count = i;
        }
        {
            let mut state = self.state.write().unwrap();
            state.running = false;
        }
    }
}

fn main() {
    let mut map = HashMap::new();
    let state = Arc::new(RwLock::new(ContainerState {
        running: true,
        count: 0,
    }));
    map.insert(0, state);
    let container = Container {
        state: map[&0].clone(),
    };
    container.run();
    println!("Final state: {:?}", map[&0]);
    map.remove(&0);
}
The key thing I was missing is that you can have either one mutable reference or multiple immutable references, and the two are mutually exclusive. My initial understanding was that those two limits were independent.
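Since the question mentions this is for a Tokio-driven program, here is a rough sketch of how the same shape might look with the component running as a spawned task and the map serving as the monitoring window (assumes the tokio crate, e.g. with its full feature set; not part of the original answer):

use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::time::Duration;

#[derive(Debug)]
struct ContainerState {
    running: bool,
    count: u8,
}

#[tokio::main]
async fn main() {
    let mut map = HashMap::new();
    map.insert(0, Arc::new(RwLock::new(ContainerState { running: true, count: 0 })));

    // The component runs independently and updates its own state entry.
    let state = map[&0].clone();
    let worker = tokio::spawn(async move {
        for i in 1..100u8 {
            state.write().unwrap().count = i;
            tokio::time::sleep(Duration::from_millis(1)).await;
        }
        state.write().unwrap().running = false;
    });

    // The monitor reads the same entry whenever it likes.
    while map[&0].read().unwrap().running {
        println!("state: {:?}", *map[&0].read().unwrap());
        tokio::time::sleep(Duration::from_millis(10)).await;
    }

    worker.await.unwrap();
}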