I'm trying to use panic::catch_unwind to catch some errors, but it doesn't seem to have any effect. Here is an example:
rust:
use std::sync::Mutex;
use std::sync::PoisonError;
use std::panic;
use wasm_bindgen::prelude::*;
use lazy_static::lazy_static; // brings the lazy_static! macro used below into scope
pub struct CurrentStatus {
pub index: i32,
}
#[wasm_bindgen]
extern {
pub fn alert(s: &str);
}
impl CurrentStatus {
fn new() -> Self {
CurrentStatus {
index: 1,
}
}
fn get_index(&mut self) -> i32 {
self.index += 1;
self.index.clone()
}
fn add_index(&mut self) {
self.index += 2;
}
}
lazy_static! {
pub static ref FOO: Mutex<CurrentStatus> = Mutex::new(CurrentStatus::new());
}
unsafe impl Send for CurrentStatus {}
#[wasm_bindgen]
pub fn add_index() {
FOO.lock().unwrap_or_else(PoisonError::into_inner).add_index();
}
#[wasm_bindgen]
pub fn get_index() -> i32 {
let mut foo = FOO.lock().unwrap_or_else(PoisonError::into_inner);
let result = panic::catch_unwind(|| {
panic!("error happen!"); // original panic! code
});
if result.is_err() {
alert("panic!!!!!panic");
}
return foo.get_index();
}
js:
const js = import("../pkg/hello_wasm.js");
js.then(js => {
window.js = js;
console.log(js.get_index());
js.add_index();
});
I think it should catch the panic so that I can still call add_index afterwards, but in fact I can call neither function after the panic.
What I want is to catch the panic from one function so that when users call the other functions everything still works.
Thanks very much.
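For reference, a rough sketch of the direction that might work instead (untested): as far as I can tell, panics on the wasm32-unknown-unknown target abort rather than unwind, so catch_unwind has nothing to catch; the fallible function could instead return a Result, which wasm-bindgen converts into a JavaScript exception. The function name get_index_checked and the error condition below are made up for illustration:

#[wasm_bindgen]
pub fn get_index_checked() -> Result<i32, JsValue> {
    let mut foo = FOO.lock().unwrap_or_else(PoisonError::into_inner);
    if foo.index < 0 {
        // hypothetical error condition standing in for the panic!() above
        return Err(JsValue::from_str("error happen!"));
    }
    Ok(foo.get_index())
}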
If I have an optional method that has to be called many, many times, which is better if I want to skip it: give it an empty body and always call it, or check the bool/Option before calling it?
The following benchmark makes no sense; it gives all zeroes.
#![feature(test)]
extern crate test;
trait OptTrait: 'static {
fn cheap_call(&mut self, data: u8);
fn expensive_call(&mut self, data: u8);
}
type ExpensiveFnOf<T> = &'static dyn Fn(&mut T, u8);
struct Container<T: OptTrait> {
inner: T,
expensive_fn: Option<ExpensiveFnOf<T>>,
}
impl<T: OptTrait> Container<T> {
fn new(inner: T, expensive: bool) -> Self {
let expensive_fn = {
if expensive {
Some(&T::expensive_call as ExpensiveFnOf<T>)
} else {
None
}
};
Self {
inner,
expensive_fn,
}
}
}
struct MyStruct;
impl OptTrait for MyStruct {
fn cheap_call(&mut self, _data: u8) {
}
fn expensive_call(&mut self, _data: u8) {
}
}
#[cfg(test)]
mod tests {
use super::*;
use test::Bencher;
#[bench]
fn bench_always_call_empty(b: &mut Bencher) {
let mut cont = Container::new(MyStruct, false);
b.iter(|| {
cont.inner.cheap_call(0);
cont.inner.expensive_call(1);
});
}
#[bench]
fn bench_always_skip_empty(b: &mut Bencher) {
let mut cont = Container::new(MyStruct, false);
b.iter(|| {
cont.inner.cheap_call(0);
if let Some(func) = cont.expensive_fn {
func(&mut cont.inner, 1);
}
});
}
}
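A likely explanation for the zeroes is that both trait methods have empty bodies, so the optimizer can remove the entire loop body and there is nothing left to measure. A minimal sketch (to be placed inside the same tests module; the bench name is mine) that routes the inputs and the container through test::black_box so the calls cannot be optimized away:

#[bench]
fn bench_skip_empty_black_box(b: &mut Bencher) {
    let mut cont = Container::new(MyStruct, false);
    b.iter(|| {
        // black_box hides the values from the optimizer so the calls
        // and the Option check are actually measured
        cont.inner.cheap_call(test::black_box(0));
        if let Some(func) = cont.expensive_fn {
            func(&mut cont.inner, test::black_box(1));
        }
        test::black_box(&mut cont.inner);
    });
}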
I'm currently trying to wrap a C library in Rust that has a few requirements. The C library can only be run on a single thread, and can only be initialized / cleaned up once, on the same thread. I want something like the following.
extern "C" {
fn init_lib() -> *mut c_void;
fn cleanup_lib(ctx: *mut c_void);
}
// This line doesn't work.
static mut CTX: Option<(ThreadId, Rc<Context>)> = None;
struct Context(*mut c_void);
impl Context {
fn acquire() -> Result<Rc<Context>, Error> {
// If CTX has a reference on the current thread, clone and return it.
// Otherwise initialize the library and set CTX.
}
}
impl Drop for Context {
fn drop(&mut self) {
unsafe { cleanup_lib(self.0); }
}
}
Does anyone have a good way to achieve something like this? Every solution I come up with involves creating a Mutex / Arc and making the Context type Send and Sync, which I don't want, since I want it to remain single-threaded.
A working solution I came up with was to just implement the reference counting myself, removing the need for Rc entirely.
#![feature(once_cell)]
use std::{error::Error, ffi::c_void, fmt, lazy::SyncLazy, sync::Mutex, thread::ThreadId};
extern "C" {
fn init_lib() -> *mut c_void;
fn cleanup_lib(ctx: *mut c_void);
}
#[derive(Debug)]
pub enum ContextError {
InitOnOtherThread,
}
impl fmt::Display for ContextError {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
match *self {
ContextError::InitOnOtherThread => {
write!(f, "Context already initialized on a different thread")
}
}
}
}
impl Error for ContextError {}
struct StaticPtr(*mut c_void);
unsafe impl Send for StaticPtr {}
static CTX: SyncLazy<Mutex<Option<(ThreadId, usize, StaticPtr)>>> =
SyncLazy::new(|| Mutex::new(None));
pub struct Context(*mut c_void);
impl Context {
pub fn acquire() -> Result<Context, ContextError> {
let mut ctx = CTX.lock().unwrap();
if let Some((id, ref_count, ptr)) = ctx.as_mut() {
if *id == std::thread::current().id() {
*ref_count += 1;
return Ok(Context(ptr.0));
}
Err(ContextError::InitOnOtherThread)
} else {
let ptr = unsafe { init_lib() };
*ctx = Some((std::thread::current().id(), 1, StaticPtr(ptr)));
Ok(Context(ptr))
}
}
}
impl Drop for Context {
fn drop(&mut self) {
let mut ctx = CTX.lock().unwrap();
let (_, ref_count, ptr) = ctx.as_mut().unwrap();
*ref_count -= 1;
if *ref_count == 0 {
unsafe {
cleanup_lib(ptr.0);
}
*ctx = None;
}
}
}
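To make the intended reference-counting semantics concrete, here is a small usage sketch (assuming the extern "C" functions actually link against the real library; main is only for illustration):

fn main() -> Result<(), ContextError> {
    let first = Context::acquire()?;   // first acquire on this thread: init_lib runs, count = 1
    let second = Context::acquire()?;  // same thread: count = 2, same underlying pointer
    drop(second);                      // count drops back to 1, the library stays alive
    drop(first);                       // count hits 0: cleanup_lib runs and CTX is cleared
    Ok(())
}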
I think the most 'rustic' way to do this is with std::sync::mpsc::sync_channel and an enum describing library operations.
The only public-facing elements of this module are launch_lib(), the SafeLibRef struct (but not its internals), and the pub fn that are part of the impl SafeLibRef.
Also, this example strongly represents the philosophy that the best way to deal with global state is to not have any.
I have played fast and loose with the Result::unwrap() calls. It would be more responsible to handle error conditions better.
use std::sync::{ atomic::{ AtomicBool, Ordering }, mpsc::{ SyncSender, Receiver, sync_channel } };
use std::ffi::c_void;
extern "C" {
fn init_lib() -> *mut c_void;
fn do_op_1(ctx: *mut c_void, a: u16, b: u32, c: u64) -> f64;
fn do_op_2(ctx: *mut c_void, a: f64) -> bool;
fn cleanup_lib(ctx: *mut c_void);
}
enum LibOperation {
Op1(u16,u32,u64,SyncSender<f64>),
Op2(f64, SyncSender<bool>),
Terminate(SyncSender<()>),
}
#[derive(Clone)]
pub struct SafeLibRef(SyncSender<LibOperation>);
fn lib_thread(rx: Receiver<LibOperation>) {
static LIB_INITIALIZED: AtomicBool = AtomicBool::new(false);
if LIB_INITIALIZED.compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst).is_err() {
panic!("Tried to double-initialize library!");
}
let libptr = unsafe { init_lib() };
loop {
let op = rx.recv();
if op.is_err() {
unsafe { cleanup_lib(libptr) };
break;
}
match op.unwrap() {
LibOperation::Op1(a,b,c,tx_res) => {
let res: f64 = unsafe { do_op_1(libptr, a, b, c) };
tx_res.send(res).unwrap();
},
LibOperation::Op2(a, tx_res) => {
let res: bool = unsafe { do_op_2(libptr, a) };
tx_res.send(res).unwrap();
}
LibOperation::Terminate(tx_res) => {
unsafe { cleanup_lib(libptr) };
tx_res.send(()).unwrap();
break;
}
}
}
}
/// This needs to be called no more than once.
/// The resulting SafeLibRef can be cloned and passed around.
pub fn launch_lib() -> SafeLibRef {
let (tx,rx) = sync_channel(0);
std::thread::spawn(|| lib_thread(rx));
SafeLibRef(tx)
}
// This is the interface that most of your code will use
impl SafeLibRef {
pub fn op_1(&self, a: u16, b: u32, c: u64) -> f64 {
let (res_tx, res_rx) = sync_channel(1);
self.0.send(LibOperation::Op1(a, b, c, res_tx)).unwrap();
res_rx.recv().unwrap()
}
pub fn op_2(&self, a: f64) -> bool {
let (res_tx, res_rx) = sync_channel(1);
self.0.send(LibOperation::Op2(a, res_tx)).unwrap();
res_rx.recv().unwrap()
}
pub fn terminate(&self) {
let (res_tx, res_rx) = sync_channel(1);
self.0.send(LibOperation::Terminate(res_tx)).unwrap();
res_rx.recv().unwrap();
}
}
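A short usage sketch of the channel-based wrapper (the argument values are arbitrary, and as noted above the unwraps are fast and loose):

fn main() {
    let lib = launch_lib();          // spawns the library thread and runs init_lib once
    let lib_clone = lib.clone();     // clones are cheap and can be handed to other threads
    let x = lib.op_1(1, 2, 3);       // blocks until the library thread replies
    let flag = lib_clone.op_2(x);
    println!("op_1 = {}, op_2 = {}", x, flag);
    lib.terminate();                 // runs cleanup_lib and stops the library thread
}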
I want to write a vscode extension that displays the content of a large binary file, written with bincode:
#[macro_use]
extern crate serde_derive;
use std::collections::HashMap;
use std::fs::File;
use std::io::{BufReader, BufWriter};
#[derive(Serialize, Deserialize)]
pub struct MyValue {
pub name: String,
}
#[derive(Serialize, Deserialize)]
pub struct MyStruct {
pub data: HashMap<String, MyValue>,
}
impl MyStruct {
pub fn dump(&self, filename: &str) -> Result<(), String> {
let file = File::create(filename).map_err(|msg| msg.to_string())?;
let writer = BufWriter::new(file);
bincode::serialize_into(writer, self).map_err(|msg| msg.to_string())
}
pub fn load(filename: &str) -> Result<Self, String> {
let file = File::open(filename).map_err(|msg| msg.to_string())?;
let reader = BufReader::new(file);
bincode::deserialize_from::<BufReader<_>, Self>(reader).map_err(|msg| msg.to_string())
}
}
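As a sanity check, a small round-trip sketch using these two methods in the same file (the file name is made up):

fn main() -> Result<(), String> {
    let mut data = HashMap::new();
    data.insert("key".to_string(), MyValue { name: "value".to_string() });
    let original = MyStruct { data };
    original.dump("dump.spb")?;                  // serialize to disk with bincode
    let restored = MyStruct::load("dump.spb")?;  // read it back
    assert!(restored.data.contains_key("key"));
    Ok(())
}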
For this there is a wasm binding:
#[wasm_bindgen]
#[derive(Clone)]
pub struct PyMyStruct {
inner: Arc<MyStruct>,
}
#[wasm_bindgen]
impl PyMyStruct {
pub fn new(filename: &str) -> Self {
Self {
inner: Arc::new(MyStruct::load(filename).unwrap()),
}
}
pub fn header(self) -> Array {
let keys = Array::new();
for key in self.inner.data.keys() {
keys.push(&JsValue::from_str(key));
}
keys
}
pub fn value(&self, name: &str) -> JsValue {
if let Some(value) = self.inner.data.get(name) {
JsValue::from_serde(value).unwrap_or(JsValue::NULL)
} else {
JsValue::NULL
}
}
}
which provides a simple interface to the JavaScript world for accessing the content of that file.
I'm using Arc to prevent an expensive, unintended memory copy when the data is handled on the JavaScript side.
(It might look strange that keys is not marked as mutable, but the Rust compiler recommended it that way.)
When running the test code:
const {PyMyStruct} = require("./corejs.js");
let obj = new PyMyStruct("../../dump.spb")
console.log(obj.header())
I get the error message:
Error: null pointer passed to rust
Does someone know how to handle this use case?
Thank you!
The issue here is that you are using new PyMyStruct() instead of PyMyStruct.new(). In wasm-bindgen's debug mode you will get an error about this at runtime. Using .new() will fix your issue:
let obj = PyMyStruct.new("../../dump.spb")
If you add the #[wasm_bindgen(constructor)] annotation to the new method, then new PyMyStruct() will work as well:
#[wasm_bindgen]
impl PyMyStruct {
#[wasm_bindgen(constructor)]
pub fn new(filename: &str) -> Self {
Self {
inner: Arc::new(MyStruct::load(filename).unwrap()),
}
}
}
Now this is fine:
let obj = new PyMyStruct("../../dump.spb")
I solved that problem by using https://neon-bindings.com instead of compiling the API to WebAssembly.
The binding here looks as follows:
use core;
use std::rc::Rc;
use neon::prelude::*;
#[derive(Clone)]
pub struct MyStruct {
inner: Rc<core::MyStruct>,
}
declare_types! {
pub class JsMyStruct for MyStruct {
init(mut cx) {
let filename = cx.argument::<JsString>(0)?.value();
match core::MyStruct::load(&filename) {
Ok(inner) => return Ok(MyStruct{
inner: Rc::new(inner)
}),
Err(msg) => {
panic!("{}", msg)
}
}
}
method header(mut cx) {
let this = cx.this();
let container = {
let guard = cx.lock();
let this = this.borrow(&guard);
(*this).clone()
};
let keys = container.inner.data.keys().collect::<Vec<_>>();
let js_array = JsArray::new(&mut cx, keys.len() as u32);
for (i, obj) in keys.into_iter().enumerate() {
let js_string = cx.string(obj);
js_array.set(&mut cx, i as u32, js_string).unwrap();
}
Ok(js_array.upcast())
}
method value(mut cx) {
let key = cx.argument::<JsString>(0)?.value();
let this = cx.this();
let container = {
let guard = cx.lock();
let this = this.borrow(&guard);
(*this).clone()
};
if let Some(data) = container.inner.data.get(&key) {
return Ok(neon_serde::to_value(&mut cx, data)?);
} else {
panic!("No value for key \"{}\" available", key);
}
}
}
}
register_module!(mut m, {
m.export_class::<JsMyStruct>("MyStruct")?;
Ok(())
});
In the __enter__ method I want to return an object which is accessible in Rust and Python, so that Rust is able to update values in the object and Python can read the updated values.
I would like to have something like this:
#![feature(specialization)]
use std::thread;
use pyo3::prelude::*;
use pyo3::types::{PyType, PyAny, PyDict};
use pyo3::exceptions::ValueError;
use pyo3::PyContextProtocol;
use pyo3::wrap_pyfunction;
#[pyclass]
#[derive(Debug, Clone)]
pub struct Statistics {
pub files: u32,
pub errors: Vec<String>,
}
fn counter(
root_path: &str,
statistics: &mut Statistics,
) {
statistics.files += 1;
statistics.errors.push(String::from("Foo"));
}
#[pyfunction]
pub fn count(
py: Python,
root_path: &str,
) -> PyResult<PyObject> {
let mut statistics = Statistics {
files: 0,
errors: Vec::new(),
};
let rc: std::result::Result<(), std::io::Error> = py.allow_threads(|| {
counter(root_path, &mut statistics);
Ok(())
});
let pyresult = PyDict::new(py);
match rc {
Err(e) => { pyresult.set_item("error", e.to_string()).unwrap();
return Ok(pyresult.into())
},
_ => ()
}
pyresult.set_item("files", statistics.files).unwrap();
pyresult.set_item("errors", statistics.errors).unwrap();
Ok(pyresult.into())
}
#[pyclass]
#[derive(Debug)]
pub struct Count {
root_path: String,
exit_called: bool,
thr: Option<thread::JoinHandle<()>>,
statistics: Statistics,
}
#[pymethods]
impl Count {
#[new]
fn __new__(
obj: &PyRawObject,
root_path: &str,
) {
obj.init(Count {
root_path: String::from(root_path),
exit_called: false,
thr: None,
statistics: Statistics {
files: 0,
errors: Vec::new(),
},
});
}
#[getter]
fn statistics(&self) -> PyResult<Statistics> {
Ok(Statistics { files: self.statistics.files,
errors: self.statistics.errors.to_vec(), })
}
}
#[pyproto]
impl<'p> PyContextProtocol<'p> for Count {
fn __enter__(&mut self) -> PyResult<Py<Count>> {
let gil = GILGuard::acquire();
self.thr = Some(thread::spawn(|| {
counter(self.root_path.as_ref(), &mut self.statistics)
}));
Ok(PyRefMut::new(gil.python(), *self).unwrap().into())
}
fn __exit__(
&mut self,
ty: Option<&'p PyType>,
_value: Option<&'p PyAny>,
_traceback: Option<&'p PyAny>,
) -> PyResult<bool> {
self.thr.unwrap().join();
let gil = GILGuard::acquire();
self.exit_called = true;
if ty == Some(gil.python().get_type::<ValueError>()) {
Ok(true)
} else {
Ok(false)
}
}
}
#[pymodule(count)]
fn init(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_class::<Count>()?;
m.add_wrapped(wrap_pyfunction!(count))?;
Ok(())
}
But I'm getting the following error:
error[E0477]: the type `[closure@src/lib.rs:90:39: 92:10 self:&mut &'p mut Count]` does not fulfill the required lifetime
  --> src/lib.rs:90:25
   |
90 |         self.thr = Some(thread::spawn(|| {
   |                         ^^^^^^^^^^^^^
   |
   = note: type must satisfy the static lifetime
I've found a solution. Using a guarded reference (the statistics shared through an Arc<Mutex<...>>) does the trick:
#![feature(specialization)]
use std::{thread, time};
use std::sync::{Arc, Mutex};
extern crate crossbeam_channel as channel;
use channel::{Sender, Receiver, TryRecvError};
use pyo3::prelude::*;
use pyo3::types::{PyType, PyAny};
use pyo3::exceptions::ValueError;
use pyo3::PyContextProtocol;
#[pyclass]
#[derive(Debug, Clone)]
pub struct Statistics {
pub files: u32,
pub errors: Vec<String>,
}
pub fn counter(
statistics: Arc<Mutex<Statistics>>,
cancel: &Receiver<()>,
) {
for _ in 1..15 {
thread::sleep(time::Duration::from_millis(100));
{
let mut s = statistics.lock().unwrap();
s.files += 1;
}
match cancel.try_recv() {
Ok(_) | Err(TryRecvError::Disconnected) => {
println!("Terminating.");
break;
}
Err(TryRecvError::Empty) => {}
}
}
{
let mut s = statistics.lock().unwrap();
s.errors.push(String::from("Foo"));
}
}
#[pyclass]
#[derive(Debug)]
pub struct Count {
exit_called: bool,
statistics: Arc<Mutex<Statistics>>,
thr: Option<thread::JoinHandle<()>>,
cancel: Option<Sender<()>>,
}
#[pymethods]
impl Count {
#[new]
fn __new__(obj: &PyRawObject) {
obj.init(Count {
exit_called: false,
statistics: Arc::new(Mutex::new(Statistics {
files: 0,
errors: Vec::new(),
})),
thr: None,
cancel: None,
});
}
#[getter]
fn statistics(&self) -> PyResult<u32> {
let s = Arc::clone(&self.statistics).lock().unwrap().files;
Ok(s)
}
}
#[pyproto]
impl<'p> PyContextProtocol<'p> for Count {
fn __enter__(&'p mut self) -> PyResult<()> {
let statistics = self.statistics.clone();
let (sender, receiver) = channel::bounded(1);
self.cancel = Some(sender);
self.thr = Some(thread::spawn(move || {
counter(statistics, &receiver)
}));
Ok(())
}
fn __exit__(
&mut self,
ty: Option<&'p PyType>,
_value: Option<&'p PyAny>,
_traceback: Option<&'p PyAny>,
) -> PyResult<bool> {
let _ = self.cancel.as_ref().unwrap().send(());
self.thr.take().map(thread::JoinHandle::join);
let gil = GILGuard::acquire();
self.exit_called = true;
if ty == Some(gil.python().get_type::<ValueError>()) {
Ok(true)
} else {
Ok(false)
}
}
}
#[pyproto]
impl pyo3::class::PyObjectProtocol for Count {
fn __str__(&self) -> PyResult<String> {
Ok(format!("{:?}", self))
}
}
#[pymodule(count)]
fn init(_py: Python, m: &PyModule) -> PyResult<()> {
m.add_class::<Count>()?;
Ok(())
}
Now I can run the following code:
import time
import count
c = count.Count()
with c:
for _ in range(5):
print(c.statistics)
time.sleep(0.1)
As the example shows, thread cancelling also works, although a perhaps nicer solution would be to use the thread_control crate.
Sorry for the newbie question. The error here is:
<anon>:30:5: 30:17 error: cannot borrow immutable borrowed content as mutable
<anon>:30     routing_node.put(3);
              ^^^^^^^^^^^^
I have tried many things to work around this, but I know for sure it is a simple error. Any help is much appreciated.
use std::thread;
use std::thread::spawn;
use std::sync::Arc;
struct RoutingNode {
data: u16
}
impl RoutingNode {
pub fn new() -> RoutingNode {
RoutingNode { data: 0 }
}
pub fn run(&self) {
println!("data : {}", self.data);
}
pub fn put(&mut self, increase: u16) {
self.data += increase;
}
}
fn main() {
let mut routing_node = Arc::new(RoutingNode::new());
let mut my_node = routing_node.clone();
{
spawn(move || {my_node.run(); });
}
routing_node.put(3);
}
Arc doesn't allow mutation of its inner state, even if the container is marked as mutable. You should wrap the inner value in one of Cell, RefCell, or Mutex. Both Cell and RefCell are not thread-safe, so you should use Mutex (see the last paragraph in its docs).
Example:
use std::thread::spawn;
use std::sync::Mutex;
use std::sync::Arc;
struct RoutingNode {
data: u16,
}
impl RoutingNode {
pub fn new() -> Self { RoutingNode { data: 0, } }
pub fn run(&self) { println!("data : {}" , self.data); }
pub fn put(&mut self, increase: u16) { self.data += increase; }
}
fn main() {
let routing_node = Arc::new(Mutex::new(RoutingNode::new()));
let my_node = routing_node.clone();
let thread = spawn(move || { my_node.lock().unwrap().run(); });
routing_node.lock().unwrap().put(3);
let _ = thread.join();
}