In my sample program I'm trying to do the following:
use futures::StreamExt;
use sqlx::Connection;

#[tokio::main]
async fn main() {
    let mut conn = sqlx::MySqlConnection::connect("mysql://user:pass@localhost/db")
        .await
        .unwrap();
    let mut s = sqlx::query(concat!(
        "UPDATE foo SET bar=false WHERE id=34;",
        "UPDATE foo SET bar=true WHERE id=43;"
    ))
    .execute_many(&mut conn)
    .await;
    while let Some(k) = s.next().await {
        println!("{k:?}");
    }
}
But MySQL complains with "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'UPDATE'". So the question is how one is supposed to pass multiple queries so that they are properly recognized. Needless to say, I am aware that I can run each query individually, but I'd like to batch them all together for performance reasons (server network latency).
I'm new to actix, and I'm trying to understand how I can run a server on one thread and send requests from another.
This is the code I have so far
use actix_web::{web, App, HttpResponse, HttpServer};
use std::{sync::mpsc::channel, thread};

#[actix_web::main]
async fn main() {
    let (tx, rx) = channel();

    thread::spawn(move || {
        let srv =
            HttpServer::new(|| App::new().default_service(web::to(|| HttpResponse::NotFound())))
                .bind("localhost:12347")
                .unwrap()
                .run();
        let _ = tx.send(srv);
    });

    reqwest::get("http://localhost:12347").await.unwrap();

    let srv = rx.recv().unwrap();
    srv.handle().stop(false).await;
}
It compiles just fine, but it gets stuck on sending the request. It seems like the server is running, so I can't figure out why I am not getting a response.
EDIT: As suggested by @Finomnis and @cafce25, I changed the code to use tasks instead of threads, and awaited the result of .run():
use actix_web::{web, App, HttpResponse, HttpServer};
use std::{sync::mpsc::channel, thread};

#[actix_web::main]
async fn main() {
    let (tx, rx) = channel();

    tokio::spawn(async move {
        let srv =
            HttpServer::new(|| App::new().default_service(web::to(|| HttpResponse::NotFound())))
                .bind("localhost:12347")
                .unwrap()
                .run();
        let _ = tx.send(srv.handle());
        srv.await.unwrap();
    });

    reqwest::get("http://localhost:12347").await.unwrap();

    let handle = rx.recv().unwrap();
    handle.stop(false).await;
}
which solves the problem. I'm still curious if it is possible to do it on different threads since I can't use await inside a synchronous function.
There are a couple of things wrong with your code; the biggest one being that you never .await the run() method.
For that fact alone you cannot run it in a normal thread, it has to exist in an async task.
So what happens is:
you create the server
the server never runs because it doesn't get awaited
you query the server for a response
the response never comes because the server doesn't run, so you get stuck in reqwest::get
What you should do instead: start the server as an async task and make sure it actually gets polled, by spawning or awaiting it.
Also:
You don't need to propagate the server object out to stop it. You can create a .handle() first before you move it into the task. The server handle does not contain a reference to the server, it's based on smart pointers instead.
NEVER use synchronous channels with async tasks. It will block the runtime, deadlocking everything. (The only reason it worked in your second example is that it is most likely a multi-threaded runtime and you only deadlocked one of the runtime cores. Still bad.) If you do need a channel, use an async one; see the sketch after the working example below.
(Maybe) don't use tokio::spawn if you use #[actix_web::main]. actix-web has its own runtime, so use actix_web::rt::spawn with it. If you want to use tokio-based tasks, use #[tokio::main] instead; actix-web is compatible with the tokio runtime. (EDIT: actix-web might be compatible with tokio::spawn(), I just didn't find documentation anywhere that says it is.)
With all that fixed, here is a working version:
use actix_web::{rt, web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() {
    let srv = HttpServer::new(|| App::new().default_service(web::to(|| HttpResponse::NotFound())))
        .bind("localhost:12347")
        .unwrap()
        .run();

    let srv_handle = srv.handle();

    rt::spawn(srv);

    let response = reqwest::get("http://localhost:12347").await.unwrap();
    println!("Response code: {:?}", response.status());

    srv_handle.stop(false).await;
}
Response code: 404
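As a side note on the channel point above: if you do want to hand the server handle out of the spawned task, an async channel such as tokio::sync::oneshot won't block the runtime, because awaiting the receiver yields back to the executor. Here is a minimal sketch of that variant, assuming the same actix-web 4 / tokio setup as the code above (the port and handler are just the ones from the example):

use actix_web::{rt, web, App, HttpResponse, HttpServer};
use tokio::sync::oneshot;

#[actix_web::main]
async fn main() {
    let (tx, rx) = oneshot::channel();

    rt::spawn(async move {
        let srv = HttpServer::new(|| {
            App::new().default_service(web::to(|| HttpResponse::NotFound()))
        })
        .bind("localhost:12347")
        .unwrap()
        .run();

        // Hand the handle out before driving the server to completion.
        let _ = tx.send(srv.handle());
        srv.await.unwrap();
    });

    // Awaiting the async receiver does not block the runtime.
    let handle = rx.await.unwrap();

    reqwest::get("http://localhost:12347").await.unwrap();
    handle.stop(false).await;
}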
I am working on this kind of code:
database.rs
use tokio::task::JoinHandle;
use tokio_postgres::{Client, Connection, Error, NoTls, Socket, tls::NoTlsStream};

use crate::secret;

pub struct DatabaseConnection {
    pub client: Client,
    pub connection: Connection<Socket, NoTlsStream>
    // pub connection: JoinHandle<Connection<Socket, NoTlsStream>>
}

impl DatabaseConnection {
    async fn new() -> Result<DatabaseConnection, Error> {
        let (new_client, new_connection) =
            DatabaseConnection::create_connection().await.expect("Error, amazing is amazing");
        Ok(Self {
            client: new_client,
            // connection: tokio::spawn( async move { new_connection } )
            connection: new_connection
        })
    }

    async fn create_connection() -> Result<(Client, Connection<Socket, NoTlsStream>), Error> {
        // Connect to the database.
        let (client, connection) =
            tokio_postgres::connect(
                &format!(
                    "postgres://{user}:{pswd}@localhost/{db}",
                    user = secret::USERNAME,
                    pswd = secret::PASSWORD,
                    db = secret::DATABASE
                )[..],
                NoTls)
            .await?;
        Ok((client, connection))
    }
}

pub async fn init_db() -> Result<(), Error> {
    let database_connection =
        DatabaseConnection::new()
            .await;

    let db_connection = tokio::spawn(async move {
        if let Err(e) = database_connection {
            println!("Connection error: {:?}", e);
        }
    });

    let create = database_connection.unwrap().client
        .query("CREATE TABLE person (
            id SERIAL PRIMARY KEY,
            name VARCHAR NOT NULL
        )", &[]).await?;

    Ok(())
}
main.rs
#[tokio::main]
async fn main() {
    match database::init_db().await {
        Ok(()) => println!("Successfully connected to the database"),
        Err(e) => eprintln!("On main: {}", e)
    }
}
I can't use the database_connection variable to perform my SQL statements because it's moved into the tokio routine.
I already tried returning `connection: tokio::spawn(async move { new_connection })` in my struct, but the task is never spawned until I access the attribute, and by then it returns an already-finished database connection.
How can I solve this?
Thanks in advance
Going with two concurrent tasks (a dedicated event loop for the connection) allows sharing your connection between various program modules and gives you a non-blocking (potentially even non-async) query API.
It is possible to achieve this by implementing a reactor pattern, but it is not as trivial as in some other languages, because Rust makes sure you follow strict multi-threading correctness rules.
Let's say that task 1 is your main program; it spawns task 2, a DatabaseConnection reactor event loop. Since this instance might be accessed by multiple threads, potentially at the same time, to start or process an SQL query, wrap it with Arc<Mutex<DatabaseConnection>>.
To execute a query, task 1 needs to send an SQL command to task 2 and wait for the result. One way to do this is to use an mpsc channel for sending commands and a oneshot channel for the result. oneshot is similar to a promise/future in principle: you can await on one end, and push the value and wake the awaiter from the other end.
For a code example, check out the channels tutorial. The "Spawn manager task" chapter corresponds to your DatabaseConnection task 2 waiting for SQL queries and processing them. The "Receive responses" chapter shows how to use oneshot to send the result back.
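To make that layout concrete, here is a minimal sketch using tokio's mpsc and oneshot channels with tokio_postgres. The DbCommand type, function names, and connection string are made up for illustration; they are not part of tokio_postgres:

use tokio::sync::{mpsc, oneshot};
use tokio_postgres::{NoTls, Row};

// One SQL string plus a oneshot sender for the result (hypothetical type).
struct DbCommand {
    sql: String,
    respond_to: oneshot::Sender<Result<Vec<Row>, tokio_postgres::Error>>,
}

// Task 2: owns the Client and processes incoming queries one by one.
async fn run_db_task(mut rx: mpsc::Receiver<DbCommand>) -> Result<(), tokio_postgres::Error> {
    let (client, connection) =
        tokio_postgres::connect("postgres://user:pass@localhost/db", NoTls).await?;
    // The Connection object does the actual socket I/O; drive it on its own task.
    tokio::spawn(async move {
        if let Err(e) = connection.await {
            eprintln!("connection error: {}", e);
        }
    });
    // Event loop: wait for commands, run them, send each result back through its oneshot.
    while let Some(cmd) = rx.recv().await {
        let result = client.query(cmd.sql.as_str(), &[]).await;
        let _ = cmd.respond_to.send(result);
    }
    Ok(())
}

// Task 1 side: send a command and await the oneshot for the answer.
async fn query_via_channel(tx: &mpsc::Sender<DbCommand>, sql: &str) {
    let (respond_to, response) = oneshot::channel();
    tx.send(DbCommand { sql: sql.to_string(), respond_to }).await.unwrap();
    println!("{:?}", response.await.unwrap());
}

#[tokio::main]
async fn main() {
    let (tx, rx) = mpsc::channel(16);
    tokio::spawn(run_db_task(rx));
    query_via_channel(&tx, "SELECT 1").await;
}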
Also note that async doesn't imply that you have to block on await. If your program is simple enough, there is a possibility to avoid having the reactor while still not blocking. This can be done with tokio::select! inside a loop. An example of such usage can be found in the select tutorial, in the chapter "Resuming an async operation". Imagine that action() is your .query() method. Note that they are calling it, but not await-ing it. Then the select! is able to return when the query operation's results are ready, and if they are not ready yet, you are free to do any other async work.
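For illustration, a minimal self-contained sketch of that select!-in-a-loop pattern; do_query and the interval ticker below are placeholders standing in for your .query() call and your other work (assumes tokio with the macros and time features):

use std::time::Duration;

// Stand-in for e.g. client.query(...).await; the real call would return rows.
async fn do_query() -> &'static str {
    tokio::time::sleep(Duration::from_millis(200)).await;
    "query result"
}

#[tokio::main]
async fn main() {
    // Create the future but do not await it yet.
    let operation = do_query();
    tokio::pin!(operation);

    let mut ticker = tokio::time::interval(Duration::from_millis(50));
    loop {
        tokio::select! {
            // Fires once the query future completes.
            res = &mut operation => {
                println!("{}", res);
                break;
            }
            // Until then, other async work keeps running.
            _ = ticker.tick() => {
                println!("doing other work while the query runs");
            }
        }
    }
}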
I'm trying to write a simple Rust program that reads Docker stats using shiplift and exposes them as Prometheus metrics using rust-prometheus.
The shiplift stats example runs correctly on its own, and I'm trying to integrate it in the server as
fn handle(_req: Request<Body>) -> Response<Body> {
    let docker = Docker::new();
    let containers = docker.containers();
    let id = "my-id";
    let stats = containers
        .get(&id)
        .stats().take(1).wait();
    for s in stats {
        println!("{:?}", s);
    }
    // ...
}

// in main
let make_service = || {
    service_fn_ok(handle)
};

let server = Server::bind(&addr)
    .serve(make_service);
but it appears that the stream hangs forever (I cannot produce any error message).
I've also tried the same refactor (using take and wait instead of tokio::run) in the shiplift example, but in that case I get the error executor failed to spawn task: tokio::spawn failed (is a tokio runtime running this future?). Is tokio somehow required by shiplift?
EDIT:
If I've understood correctly, my attempt does not work because wait will block the tokio executor and stats will never produce results.
shiplift's API is asynchronous: stats() and the other methods return a Future (or Stream) instead of blocking the main thread until a result is ready. A Future won't actually do any I/O until it is passed to an executor. You need to pass the Future to tokio::run as in the example you linked to. You should read the tokio docs to get a better understanding of how to write asynchronous code in Rust.
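As a tiny illustration of that point, here is a sketch assuming futures 0.1 and tokio 0.1 (the crate versions used by the code in the question); nothing happens until the future is handed to the executor:

extern crate futures;
extern crate tokio;

use futures::future;

fn main() {
    let fut = future::lazy(|| {
        println!("this closure only runs once an executor polls the future");
        future::ok::<(), ()>(())
    });

    // tokio::run starts the runtime and drives `fut` to completion.
    tokio::run(fut);
}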
There were quite a few mistakes in my understanding of how hyper works. Basically:
if a service should handle futures, do not use service_fn_ok to create it (it is meant for synchronous services): use service_fn;
do not use wait: all futures use the same executor, so execution will just hang forever (there is a warning in the docs, but oh well...);
as ecstaticm0rse noted, hyper::rt::spawn could be used to read stats asynchronously, instead of doing it in the service
Is tokio somehow required by shiplift?
Yes. It uses hyper, which throws executor failed to spawn task if the default tokio executor is not available (working with futures nearly always requires an executor anyway).
Here is a minimal version of what I ended up with (tokio 0.1.20 and hyper 0.12):
use std::net::SocketAddr;
use std::time::{Duration, Instant};
use tokio::prelude::*;
use tokio::timer::Interval;
use hyper::{
    Body, Response, service::service_fn_ok,
    Server, rt::{spawn, run}
};

fn init_background_task(swarm_name: String) -> impl Future<Item = (), Error = ()> {
    Interval::new(Instant::now(), Duration::from_secs(1))
        .map_err(|e| panic!(e))
        .for_each(move |_instant| {
            futures::future::ok(()) // unimplemented: call shiplift here
        })
}

fn init_server(address: SocketAddr) -> impl Future<Item = (), Error = ()> {
    let service = move || {
        service_fn_ok(|_request| Response::new(Body::from("unimplemented")))
    };
    Server::bind(&address)
        .serve(service)
        .map_err(|e| panic!("Server error: {}", e))
}

fn main() {
    let background_task = init_background_task("swarm_name".to_string());
    let server = init_server(([127, 0, 0, 1], 9898).into());

    run(hyper::rt::lazy(move || {
        spawn(background_task);
        spawn(server);
        Ok(())
    }));
}
I'm trying to get into Rust from a Python background and I'm having an issue with a PoC I'm messing around with. I've read through a bunch of blogs and documentation on how to handle errors in Rust, but I can't figure out how to implement it when I use unwrap and get a panic. Here is part of the code:
fn main() {
    let listener = TcpListener::bind("127.0.0.1:5432").unwrap();

    // The .0 at the end is indexing a tuple, FYI
    loop {
        let stream = listener.accept().unwrap().0;
        stream.set_read_timeout(Some(Duration::from_millis(100)));
        handle_request(stream);
    }
}

// Things change a bit in here
fn handle_request(stream: TcpStream) {
    let address = stream.peer_addr().unwrap();
    let mut reader = BufReader::new(stream);
    let mut payload = "".to_string();
    for line in reader.by_ref().lines() {
        let brap = line.unwrap();
        payload.push_str(&*brap);
        if brap == "" {
            break;
        }
    }
    println!("{0} -> {1}", address, payload);
    send_response(reader.into_inner());
}
The read timeout set on the stream handles the socket not receiving anything as expected, but when it triggers, the unwrap on line in the loop causes a panic. Can someone help me understand how I'm properly supposed to apply a match or Option to this code?
There seems to be a large disconnect here. unwrap or expect handle errors by panicking the thread. You aren't really supposed to "handle" a panic in 99.9% of Rust programs; you just let things die.
If you don't want a panic, don't use unwrap or expect. Instead, pass back the error via a Result or an Option, as described in the Error Handling section of The Rust Programming Language.
You can match (or any other pattern matching technique) on the Result or Option and handle an error appropriately for your case. One example of handling the error in your outer loop:
use std::net::{TcpStream, TcpListener};
use std::time::Duration;
use std::io::prelude::*;
use std::io::BufReader;

fn main() {
    let listener = TcpListener::bind("127.0.0.1:5432")
        .expect("Unable to bind to the port");

    loop {
        if let Ok((stream, _)) = listener.accept() {
            stream
                .set_read_timeout(Some(Duration::from_millis(100)))
                .expect("Unable to set timeout");
            handle_request(stream);
        }
    }
}
Note that I highly recommend using expect instead of unwrap in just about every case.
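As a sketch of the same idea applied inside handle_request: match on each line result instead of unwrapping, so a read timeout ends the loop instead of panicking. Whether you break, retry, or return an error there is up to you; this version just stops reading (it reuses the imports from the example above):

// (uses the same imports as the example above)
fn handle_request(stream: TcpStream) {
    let address = stream.peer_addr().expect("peer address unavailable");
    let mut reader = BufReader::new(stream);
    let mut payload = String::new();

    for line in reader.by_ref().lines() {
        match line {
            Ok(ref l) if l.is_empty() => break, // blank line ends the request head
            Ok(l) => payload.push_str(&l),
            Err(e) => {
                // The read timeout surfaces here as an io::Error; stop reading instead of panicking.
                eprintln!("read error from {}: {}", address, e);
                break;
            }
        }
    }

    println!("{} -> {}", address, payload);
    // send_response(reader.into_inner()); // as in the original code
}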
I'm trying to do a simple extension to the comments example by creating a REST API and committing the post to the database. I'm creating the connection outside the scope of the handler itself which I'm assuming is where my problem lies. I'm just not sure how to fix it.
This is the code for the post handler:
server.get("/comments", middleware! {
let mut stmt = conn.prepare("SELECT * FROM comment").unwrap();
let mut iter = stmt.query_map(&[], |row| {
Comment { id: row.get(0), author: row.get(1), text: row.get(2) }
}).unwrap();
let mut out: Vec<Comment> = Vec::new();
for comment in iter {
out.push(comment.unwrap());
}
json::encode(&out).unwrap()
});
This is the error I get:
<nickel macros>:22:50: 22:66 error: the trait `core::marker::Sync` is not implemented for the type `core::cell::UnsafeCell<rusqlite::InnerConnection>` [E0277]
I assume the error is because I have created the instance and then tried to use it in a closure, and that variable is probably destroyed once my main function completes.
Here's an MCVE that reproduces the problem (you should provide these when asking questions):
extern crate rusqlite;
#[macro_use]
extern crate nickel;

use nickel::{Nickel, HttpRouter};
use rusqlite::Connection;

fn main() {
    let mut server = Nickel::new();
    let conn = Connection::open_in_memory().unwrap();

    server.get("/comments", middleware! {
        let _stmt = conn.prepare("SELECT * FROM comment").unwrap();
        ""
    });

    server.listen("127.0.0.1:6767");
}
The Sync trait says:
Types that are not Sync are those that have "interior mutability" in a non-thread-safe way, such as Cell and RefCell
Which matches the error message you get. Something inside the Connection has interior mutability, which means that the compiler cannot automatically guarantee that sharing it across threads is safe. I had a recent question that might be useful to the implementor of Connection, if they can guarantee it's safe to share (perhaps SQLite itself makes guarantees).
The simplest thing you can do is to ensure that only one thread has access to the database object at a time:
use std::sync::Mutex;

fn main() {
    let mut server = Nickel::new();
    let conn = Mutex::new(Connection::open_in_memory().unwrap());

    server.get("/comments", middleware! {
        let conn = conn.lock().unwrap();
        let _stmt = conn.prepare("SELECT * FROM comment").unwrap();
        ""
    });

    server.listen("127.0.0.1:6767");
}