I'm using the Actix framework to build a server that should be able to show a user both their age and their balance, given a user_id:
fn show_balance(req: &HttpRequest) -> HttpResponse {
    let client = create_client();
    let user_id = req.match_info().get("user_id").unwrap();
    let balance = client.load_grade(user_id); // Returns a balance as a String
    HttpResponse::Ok()
        .content_type("text/plain")
        .body(format!("Hello! Your balance is {}", balance))
}

fn show_age(req: &HttpRequest) -> HttpResponse {
    let client = create_client();
    let user_id = req.match_info().get("user_id").unwrap();
    let age = client.load_grade(user_id); // Returns an age as a String
    HttpResponse::Ok()
        .content_type("text/plain")
        .body(format!("Hello! Your age is {}", age))
}
fn main() {
    env::set_var("RUST_LOG", "actix_web=debug");
    env::set_var("RUST_BACKTRACE", "1");
    env_logger::init();

    let sys = actix::System::new("basic-example");

    let addr = server::new(|| {
        App::new()
            // enable logger
            .middleware(middleware::Logger::default())
            .resource("/balance/{user_id}", |r| r.method(Method::GET).f(show_balance))
            .resource("/age/{user_id}", |r| r.method(Method::GET).f(show_age))
    })
    .bind("127.0.0.1:8080")
    .expect("Can not bind to 127.0.0.1:8080")
    .start();

    println!("Starting http server: 127.0.0.1:8080");
    let _ = sys.run();
}
fn create_client() -> UserDataClient {
    let environment = grpcio::EnvBuilder::new().build();
    let channel = grpcio::ChannelBuilder::new(environment)
        .connect(API_URL);
    UserDataClient::new(channel)
}
This code works, but my concern is that I have to create a client (and open a channel) for every incoming request, which is inefficient and hurts readability. I think it's a good idea to make a sort of singleton instead, since I can reuse it. I looked through the examples folder and found that the todo example is quite similar to what I'm doing, so I see the following two options for injecting my client object (after creating a single instance of it in main()):
Put it in an app state
Inject it as a middleware (1, 2)?
What's the best/correct one to implement?
I thought about just passing a client object to every handler as an argument, but I didn't manage to make it work (and it doesn't look good anyway).
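For reference, here is a minimal sketch of what option 1 (app state) might look like with the actix-web 0.x API used above; the AppState struct and its field name are illustrative, not part of the original code:

struct AppState {
    client: UserDataClient, // created once and reused by the handlers
}

fn show_balance(req: &HttpRequest<AppState>) -> HttpResponse {
    let client = &req.state().client; // reuse the shared client instead of creating one
    let user_id = req.match_info().get("user_id").unwrap();
    let balance = client.load_grade(user_id);
    HttpResponse::Ok()
        .content_type("text/plain")
        .body(format!("Hello! Your balance is {}", balance))
}

fn main() {
    // ...
    server::new(|| {
        App::with_state(AppState { client: create_client() })
            .middleware(middleware::Logger::default())
            .resource("/balance/{user_id}", |r| r.method(Method::GET).f(show_balance))
    })
    .bind("127.0.0.1:8080")
    .expect("Can not bind to 127.0.0.1:8080")
    .start();
    // ...
}

Note that the closure passed to server::new runs once per worker thread, so this would create one client per worker rather than one per request; if a single shared instance is required, it could be created in main() and moved into the closure behind an Arc.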
Related
I'm new to Rust and encountered an issue while building an API with warp. I'm trying to pass some requests to another thread with a channel (trying to avoid using Arc/Mutex). However, I noticed that when I pass a std::sync::mpsc::Sender to a warp handler, I get this error:
"std::sync::mpsc::Sender cannot be shared between threads safely"
and
"the trait Sync is not implemented for `std::sync::mpsc::Sender"
Can someone lead me in the right direction?
use std::sync::mpsc::Sender;

pub async fn init_server(run_tx: Sender<Packet>) {
    let store = Store::new();
    let store_filter = warp::any().map(move || store.clone());
    let run_tx_filter = warp::any().map(move || run_tx.clone());

    let update_item = warp::get()
        .and(warp::path("v1"))
        .and(warp::path("auth"))
        .and(warp::path::end())
        .and(post_json())
        .and(store_filter.clone())
        .and(run_tx_filter.clone()) // where I'm trying to send "Sender"
        .and_then(request_token);

    let routes = update_item;
    println!("HTTP server started on port 3030");
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
pub async fn request_token(
    req: TokenRequest,
    store: Store,
    run_tx: Sender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    let (tmp_tx, tmp_rx) = std::sync::mpsc::channel();
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .unwrap();
    let logged_in = tmp_rx.recv().unwrap();

    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }

    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}
I've looked through some of the examples for warp, and I was also wondering what some good resources are for getting to know the crate. Thank you!
This is because you're using std::sync::mpsc::Sender, which is !Sync, so it cannot be shared between threads. You don't want to use it here anyway, since its operations are blocking.
When you use async functionality in Rust, you need to provide a runtime for it. The good news is that warp runs on the Tokio runtime, so you already have access to tokio::sync::mpsc. If you take a look at that module, its Sender implements both Send and Sync, so it's safe to share across threads.
TLDR:
Use tokio::sync::mpsc instead of std::sync::mpsc and this won't be an issue.
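A minimal sketch of what that change might look like, based on the code in the question. It assumes Packet is adjusted to carry a tokio::sync::oneshot::Sender<bool> for the reply; the switch to a oneshot channel and the channel buffer size are illustrative choices, not requirements:

use tokio::sync::{mpsc, oneshot};

pub async fn init_server(run_tx: mpsc::Sender<Packet>) {
    // tokio's Sender is Send + Sync + Clone, so it can be cloned into the filter freely;
    // the channel itself could be created by the caller with mpsc::channel::<Packet>(32).
    let run_tx_filter = warp::any().map(move || run_tx.clone());
    // ... build the route exactly as before, using run_tx_filter ...
}

pub async fn request_token(
    req: TokenRequest,
    store: Store,
    run_tx: mpsc::Sender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    // A oneshot channel is a natural fit for a single reply (this assumes Packet now
    // holds a oneshot::Sender<bool> instead of a std Sender).
    let (tmp_tx, tmp_rx) = oneshot::channel();
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .await
        .unwrap();
    let logged_in = tmp_rx.await.unwrap();

    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }
    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}

On the consuming side, a plain worker thread can still read from the channel with Receiver::blocking_recv, or the worker can be turned into a Tokio task and use recv().await.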
I'm reading this Rust code and I can barely understand what's going on with all the mutexes and handles. It's all overhead to keep the Rust gods happy, and it makes it hard to focus on what's actually going on. Take a look:
#[tauri::command]
fn spawn(param: String, window: Window<Wry>) {
    let window_arc = Arc::new(Mutex::new(window));

    // Spawn bin
    let (mut rx, child) = tauri::api::process::Command::new_sidecar("bin")
        .expect("failed to create binary command")
        .args([param])
        .spawn()
        .expect("Failed to spawn sidecar");
    let child_arc = Arc::new(Mutex::new(child));

    // Handle data from bin
    let window = window_arc.clone();
    let (handle, mut handle_rx) = broadcast::channel(1);
    let handle_arc = Arc::new(Mutex::new(handle));
    tauri::async_runtime::spawn(async move {
        loop {
            tokio::select! {
                _ = handle_rx.recv() => {
                    return;
                }
                Some(event) = rx.recv() => {
                    if let CommandEvent::Stdout(line) = &event {
                        let data = decode_and_xor(line.clone());
                        println!("Data from bin: {}", data);
                        window.lock().unwrap().emit("from_bin", data).expect("failed to emit message");
                    }
                    if let CommandEvent::Stderr(line) = &event {
                        println!("Fatal error bin: {}", &line);
                        window.lock().unwrap().emit("bin_fatal_error", line).expect("failed to emit message");
                    }
                }
            };
        }
    });

    let window = window_arc.clone();
    let window_cc = window.clone();
    window_cc.lock().unwrap().listen("kill_bin", move |event| {
        let handle = handle_arc.clone();
        handle.lock().unwrap().send(true).unwrap();
        window.lock().unwrap().unlisten(event.id());
    });

    // Handle data to bin
    let window = window_arc.clone();
    tauri::async_runtime::spawn(async move {
        let child_clone = child_arc.clone();
        let (handle, handle_rx) = broadcast::channel(1);
        let handle_rx_arc = Arc::new(Mutex::new(handle_rx));
        let handle_arc = Arc::new(Mutex::new(handle));

        let window_c = window.clone();
        window.lock().unwrap().listen("to_bin", move |event| {
            let handle_rx = handle_rx_arc.clone();
            if handle_rx.lock().unwrap().try_recv().is_ok() {
                window_c.lock().unwrap().unlisten(event.id());
                return;
            }
            // Send data to bin
            let payload = String::from(event.payload().unwrap());
            let encrypted = xor_and_encode(payload) + "\n";
            println!("Data to send: {}", event.payload().unwrap());
            child_clone.clone().lock().unwrap().write(encrypted.as_bytes()).expect("could not write to child");
        });

        let window_c = window.clone();
        window.lock().unwrap().listen("kill_bin", move |event| {
            let handle = handle_arc.clone();
            handle.lock().unwrap().send(true).unwrap();
            window_c.lock().unwrap().unlisten(event.id());
        });
    });
}
Are all these Arcs, Mutexes and clones necessary? How would I go about cleaning this up in a Rust idiomatic way, making it easier to see what's going on?
Are all these Arcs, Mutexes and clones necessary?
Probably not; you seem to be way over-cloning and re-wrapping structures that are already designed for concurrent use, but you'll have to check the specific APIs.
For example, assuming broadcast::channel is Tokio's, it's designed for concurrent usage (that's kind of the point), so senders are clonable (for multiple producers) and you can create as many receivers as you need from a sender.
There's no need to wrap them in an Arc, and there's especially no need to protect them behind locks; they're designed to work as-is.
Furthermore, in this case it's even less necessary because you have just one sender task and one receiver task, and neither is shared. Nor do you need to clone them when you use them. So, e.g.
let handle_arc = Arc::new(Mutex::new(handle));
[...]
window_cc.lock().unwrap().listen("kill_bin", move |event| {
    let handle = handle_arc.clone();
    handle.lock().unwrap().send(true).unwrap();
    window.lock().unwrap().unlisten(event.id());
});
I'm pretty sure this can just be:
window_cc.lock().unwrap().listen("kill_bin", move |event| {
    handle.send(true).unwrap();
    window.lock().unwrap().unlisten(event.id());
});
That'll move the handle into the closure and send on it directly. The broadcast Sender is internally synchronized, so it needs no locking to send an event (that would rather defeat the point).
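As a rough sketch of that pattern (assuming the channel really is Tokio's tokio::sync::broadcast; the names here are illustrative), the handles can be cloned or subscribed from directly, with no Arc or Mutex at all:

use tokio::sync::broadcast;

// One shutdown channel: the sender is cheaply clonable and extra receivers are
// created from it with subscribe(), so nothing needs Arc<Mutex<...>>.
let (shutdown_tx, mut shutdown_rx) = broadcast::channel::<bool>(1);
let shutdown_tx_for_listener = shutdown_tx.clone(); // clone to move into a listen() closure
let _second_rx = shutdown_tx.subscribe();           // another task would get its own receiver

tauri::async_runtime::spawn(async move {
    // this task owns shutdown_rx outright; no locking needed to receive
    let _ = shutdown_rx.recv().await;
    // ... clean up ...
});

// later, e.g. inside window.listen("kill_bin", ...):
// shutdown_tx_for_listener.send(true).unwrap();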
I am developing a CLI program for rendering template files using the new MiniJinja library by mitsuhiko.
The program is here: https://github.com/benwilber/temple.
I would like to be able to extend the program by allowing the user to load custom Lua scripts for things like custom filters, functions, and tests. However, I am running into Rust lifetime errors that I've not been able to solve.
Basically, I would like to be able to register a Lua function as a custom filter function. But it's showing an error when compiling. Here is the code:
https://github.com/benwilber/temple/compare/0.3.1..lua
Error:
https://gist.github.com/c649a0b240cf299d3dbbe018c24cbcdc
How can I call a Lua function from the MiniJinja add_filter function? I would prefer to try to do this in the regular/safe way. But I'm open to unsafe alternatives if required.
Thanks!
Edit: Posted the same on Reddit and users.rust-lang.org
Lua uses state that is not safe to use from more than one thread.
A consequence of this is that LuaFunction is neither Sync nor Send.
This is reflected in this part of the error message:
help: within `LuaFunction<'_>`, the trait `Sync` is not implemented for `*mut rlua::ffi::lua_State`
In contrast a minijinja::Filter must implement Send + Sync + 'static.
(See https://docs.rs/minijinja/0.5.0/minijinja/filters/trait.Filter.html)
This means we can't share LuaFunctions (or even LuaContext) between calls to the Filters.
One option is to not pass your Lua state into the closure at all, and instead create a new Lua state on every call, something like this:
env.add_filter(
    "concat2",
    |_env: &Environment, s1: String, s2: String|
        -> anyhow::Result<String, minijinja::Error> {
        // Create a fresh Lua state on every filter call so nothing
        // non-Send/Sync is captured by the closure.
        let lua = rlua::Lua::new();
        lua.context(|lua_ctx| {
            lua_ctx.load(include_str!("temple.lua")).exec().unwrap();
            let globals = lua_ctx.globals();
            let temple: rlua::Table = globals.get("temple").unwrap();
            let filters: rlua::Table = temple.get("_filters").unwrap();
            let concat2: rlua::Function = filters.get("concat2").unwrap();
            let res: String = concat2.call::<_, String>((s1, s2)).unwrap();
            Ok(res)
        })
    },
);
This is likely to have relatively high overhead.
Another option is to create your rlua state in one thread and communicate with it via channels. That would look more like this:
use std::sync::Mutex;
use std::sync::mpsc::{channel, sync_channel, SyncSender};
use std::thread;

pub fn test() {
    let mut env = minijinja::Environment::new();

    let (to_lua_tx, to_lua_rx) = channel::<(String, String, SyncSender<String>)>();
    thread::spawn(move || {
        let lua = rlua::Lua::new();
        lua.context(move |lua_ctx| {
            lua_ctx.load("some_code").exec().unwrap();
            let globals = lua_ctx.globals();
            let temple: rlua::Table = globals.get("temple").unwrap();
            let filters: rlua::Table = temple.get("_filters").unwrap();
            let concat2: rlua::Function = filters.get("concat2").unwrap();
            while let Ok((s1, s2, channel)) = to_lua_rx.recv() {
                let res: String = concat2.call::<_, String>((s1, s2)).unwrap();
                channel.send(res).unwrap()
            }
        })
    });

    let to_lua_tx = Mutex::new(to_lua_tx);
    env.add_filter(
        "concat2",
        move |_env: &minijinja::Environment,
              s1: String,
              s2: String|
            -> anyhow::Result<String, minijinja::Error> {
            let (tx, rx) = sync_channel::<String>(0);
            to_lua_tx.lock().unwrap().send((s1, s2, tx)).unwrap();
            let res = rx.recv().unwrap();
            Ok(res)
        },
    );
}
It would even be possible to start multiple lua states this way, but would require a bit more plumbing.
DISCLAIMER: This code is all untested - however, it builds with a stubbed version of minijinja and rlua in the playground. You probably want better error handling and might need some additional code to handle cleanly shutting down all the threads.
I'm figuring out how to use the tokio-proto crate, particularly on the handshake made when a connection is established. I've got the example from the official documentation working:
impl<T: AsyncRead + AsyncWrite + 'static> ClientProto<T> for ClientLineProto {
    type Request = String;
    type Response = String;

    /// `Framed<T, LineCodec>` is the return value of `io.framed(LineCodec)`
    type Transport = Framed<T, line::LineCodec>;
    type BindTransport = Box<Future<Item = Self::Transport, Error = io::Error>>;

    fn bind_transport(&self, io: T) -> Self::BindTransport {
        // Construct the line-based transport
        let transport = io.framed(line::LineCodec);

        // Send the handshake frame to the server.
        let handshake = transport.send("You ready?".to_string())
            // Wait for a response from the server, if the transport errors out,
            // we don't care about the transport handle anymore, just the error
            .and_then(|transport| transport.into_future().map_err(|(e, _)| e))
            .and_then(|(line, transport)| {
                // The server sent back a line, check to see if it is the
                // expected handshake line.
                match line {
                    Some(ref msg) if msg == "Bring it!" => {
                        println!("CLIENT: received server handshake");
                        Ok(transport)
                    }
                    Some(ref msg) if msg == "No! Go away!" => {
                        // At this point, the server is at capacity. There are a
                        // few things that we could do. Set a backoff timer and
                        // try again in a bit. Or we could try a different
                        // remote server. However, we're just going to error out
                        // the connection.
                        println!("CLIENT: server is at capacity");
                        let err = io::Error::new(io::ErrorKind::Other, "server at capacity");
                        Err(err)
                    }
                    _ => {
                        println!("CLIENT: server handshake INVALID");
                        let err = io::Error::new(io::ErrorKind::Other, "invalid handshake");
                        Err(err)
                    }
                }
            });

        Box::new(handshake)
    }
}
But the official docs only mention a handshake without stateful information. Is there a common way to retrieve and store useful data from the handshake?
For example, if during the handshake (in the first message after the connection is established) the server sends some key that should be used later by the client, how should the ClientProto implementation look into that key? And where should it be stored?
You can add fields to ClientLineProto, so this should work:
pub struct ClientLineProto {
    handshakes: Arc<Mutex<HashMap<String, String>>>
}
And then you can reference it and store data as needed:
let mut handshakes = self.handshakes.lock().unwrap();
handshakes.insert(handshake_key, "Blah blah handshake data".to_string());
This sort of access works in bind_transport() for storing things. Then, if you create the Arc<Mutex<HashMap>> in your main() function, you will also have access to it in the serve() method, which means you can pass it into the Service object instantiation so that the handshakes are available during call().
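For example, a rough sketch of bind_transport() that stores something from the handshake, based on the question's code; treating the server's reply line itself as the value to keep and the "server_key" map key are assumptions made for illustration:

fn bind_transport(&self, io: T) -> Self::BindTransport {
    // Clone the shared map so the closure below can own a handle to it.
    let handshakes = self.handshakes.clone();

    let transport = io.framed(line::LineCodec);
    let handshake = transport.send("You ready?".to_string())
        .and_then(|transport| transport.into_future().map_err(|(e, _)| e))
        .and_then(move |(line, transport)| match line {
            Some(key) => {
                // Keep whatever the server sent so the Service can look it up later in call()
                handshakes.lock().unwrap().insert("server_key".to_string(), key);
                Ok(transport)
            }
            None => Err(io::Error::new(io::ErrorKind::Other, "connection closed during handshake")),
        });

    Box::new(handshake)
}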
I'm not able to create a client that tries to connect to a server and:
if the server is down it has to try again in an infinite loop
if the server is up and connection is successful, when the connection is lost (i.e. server disconnects the client) the client has to restart the infinite loop to try to connect to the server
Here's the code to connect to a server; currently when the connection is lost the program exits. I'm not sure what the best way to implement it is; maybe I have to create a Future with an infinite loop?
extern crate tokio_line;
use tokio_line::LineCodec;

fn get_connection(handle: &Handle) -> Box<Future<Item = (), Error = io::Error>> {
    let remote_addr = "127.0.0.1:9876".parse().unwrap();
    let tcp = TcpStream::connect(&remote_addr, handle);

    let client = tcp.and_then(|stream| {
        let (sink, from_server) = stream.framed(LineCodec).split();
        let reader = from_server.for_each(|message| {
            println!("{}", message);
            Ok(())
        });

        reader.map(|_| {
            println!("CLIENT DISCONNECTED");
            ()
        }).map_err(|err| err)
    });

    let client = client.map_err(|_| { panic!() });

    Box::new(client)
}
fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let client = get_connection(&handle);
    let client = client.and_then(|c| {
        println!("Try to reconnect");
        get_connection(&handle);
        Ok(())
    });

    core.run(client).unwrap();
}
Add the tokio-line crate with:
tokio-line = { git = "https://github.com/tokio-rs/tokio-line" }
The key question seems to be: how do I implement an infinite loop using Tokio? By answering this question, we can tackle the problem of reconnecting infinitely upon disconnection. From my experience writing asynchronous code, recursion seems to be a straightforward solution to this problem.
UPDATE: as pointed out by Shepmaster (and the folks of the Tokio Gitter), my original answer leaks memory since we build a chain of futures that grows on each iteration. Here follows a new one:
Updated answer: use loop_fn
There is a function in the futures crate that does exactly what you need. It is called loop_fn. You can use it by changing your main function to the following:
fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let client = future::loop_fn((), |_| {
        // Run the get_connection function and loop again regardless of its result
        get_connection(&handle).map(|_| -> Loop<(), ()> {
            Loop::Continue(())
        })
    });

    core.run(client).unwrap();
}
The function resembles a for loop, which can continue or break depending on the result of get_connection (see the documentation for the Loop enum). In this case, we choose to always continue, so it will infinitely keep reconnecting.
Note that your version of get_connection will panic if there is an error (e.g. if the client cannot connect to the server). If you also want to retry after an error, you should remove the call to panic!.
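As an aside, the same loop_fn API also covers the break case. A sketch (assuming the panic! mapping in get_connection is removed so errors actually propagate) that keeps reconnecting after clean disconnects but stops on a connection error could look like this:

let client = future::loop_fn((), |_| {
    get_connection(&handle).then(|result| -> Result<Loop<(), ()>, io::Error> {
        match result {
            Ok(()) => Ok(Loop::Continue(())), // server disconnected us; try again
            Err(_) => Ok(Loop::Break(())),    // connection error; stop the loop
        }
    })
});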
Old answer: use recursion
Here follows my old answer, in case anyone finds it interesting.
WARNING: using the code below results in unbounded memory growth.
Making get_connection loop infinitely
We want to call the get_connection function each time the client is disconnected, so that is exactly what we are going to do (look at the comment after reader.and_then):
fn get_connection(handle: &Handle) -> Box<Future<Item = (), Error = io::Error>> {
    let remote_addr = "127.0.0.1:9876".parse().unwrap();
    let tcp = TcpStream::connect(&remote_addr, handle);

    let handle_clone = handle.clone();
    let client = tcp.and_then(|stream| {
        let (sink, from_server) = stream.framed(LineCodec).split();
        let reader = from_server.for_each(|message| {
            println!("{}", message);
            Ok(())
        });

        reader.and_then(move |_| {
            println!("CLIENT DISCONNECTED");
            // Attempt to reconnect in the future
            get_connection(&handle_clone)
        })
    });

    let client = client.map_err(|_| { panic!() });

    Box::new(client)
}
Remember that get_connection is non-blocking. It just constructs a Box<Future>. This means that when calling it recursively, we still don't block. Instead, we get a new future, which we can link to the previous one by using and_then. As you can see, this is different from normal recursion, since the stack doesn't grow on each iteration.
Note that we need to clone the handle (see handle_clone), and move it into the closure passed to reader.and_then. This is necessary because the closure is going to live longer than the function (it will be contained in the future we are returning).
Handling errors
The code you provided doesn't handle the case in which the client is unable to connect to the server (nor any other errors). Following the same principle shown above, we can handle errors by changing the end of get_connection to the following:
let handle_clone = handle.clone();
let client = client.or_else(move |err| {
    // Note: this code will infinitely retry, but you could pattern match on the error
    // to retry only on certain kinds of error
    println!("Error connecting to server: {}", err);
    get_connection(&handle_clone)
});

Box::new(client)
Note that or_else is like and_then, but it operates on the error produced by the future.
Removing unnecessary code from main
Finally, it is not necessary to use and_then in the main function. You can replace your main by the following code:
fn main() {
    let mut core = Core::new().unwrap();
    let handle = core.handle();

    let client = get_connection(&handle);

    core.run(client).unwrap();
}