I am currently learning Rust and Rocket
Using Rust 1.54.0 + Rocket 0.5.0-rc.1 + Diesel 1.4.7 + r2d2 0.8.9.
I created a Postgres DB connection pool with r2d2. I want to share the connection pool between requests/routes, and to do that I am trying to use Rocket managed state: https://rocket.rs/v0.5-rc/guide/state/#managed-state
I created the DB connection pool and saved it in the state, but when I try to access that pool from the route, I get two errors on the same line:
Cell<i32> cannot be shared between threads safely
RefCell<HashMap<StatementCacheKey<Pg>, pg::connection::stmt::Statement>> cannot be shared between threads safely
Here is my code:
pub async fn establish_pooled_connection() -> Result<PooledConnection<ConnectionManager<PgConnection>>, r2d2_diesel::Error> {
    dotenv().ok();

    let database_url = env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");

    let manager = ConnectionManager::<PgConnection>::new(&database_url);
    let pool = r2d2::Pool::builder().build(manager).expect("Failed to create pool.");

    let conn = pool.clone().get().unwrap();

    Ok(conn)
}

struct DBPool {
    db_pool: PooledConnection<ConnectionManager<PgConnection>>,
}

#[rocket::main]
async fn main() {
    let pool = establish_pooled_connection();

    rocket::build()
        .mount("/", routes![callapi])
        .manage(DBPool { db_pool: pool })
        .launch()
        .await.ok();
}
#[post("/callapi", data = "<request>")]
async fn callapi(request: RequestAPI<'_>, _dbpool: &State<DBPool>) -> Json<models::api_response::ApiResponse> {
......
The errors are for this parameter
_dbpool: &State<DBPool>
Thanks in advance.
Finally, I was able to make this work.
I mostly used this GitHub repo as a base:
https://github.com/practical-rust-web-development/mystore/tree/v1.1
The repo uses Actix but I am using Rocket.
I think my main misunderstanding was that the base PostgreSQL connection is a PgConnection, while what you get from the pool is a PgPooledConnection, which in the end works the same way.
Here is my code.
db_connection.rs
use diesel::pg::PgConnection;
use dotenv::dotenv;
use std::env;
use diesel::r2d2::{ Pool, PooledConnection, ConnectionManager, PoolError };
pub type PgPool = Pool<ConnectionManager<PgConnection>>;
pub type PgPooledConnection = PooledConnection<ConnectionManager<PgConnection>>;
// Connects to Postgres and calls init_pool.
pub fn establish_connection() -> PgPool {
    dotenv().ok();
    let database_url = env::var("DATABASE_URL")
        .expect("DATABASE_URL must be set");
    init_pool(&database_url).expect("Failed to create pool")
}
// Creates a default r2d2 Postgres DB pool.
fn init_pool(database_url: &str) -> Result<PgPool, PoolError> {
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    Pool::builder().build(manager)
}
// This function returns a connection from the pool.
pub fn pg_pool_handler(pool: &PgPool) -> Result<PgPooledConnection, PoolError> {
    // Propagate the pool error instead of unwrapping and re-wrapping in Ok.
    pool.get()
}
main.rs
mod db_connection;
use db_connection::{PgPool};

#[rocket::main]
async fn main() {
    rocket::build()
        .mount("/API", routes![demo])
        .manage(db_connection::establish_connection()) // Here is where you pass the pool to the Rocket state.
        .launch()
        .await.ok();
}
#[post("/demo", data = "<request>")]
async fn demo(request: RequestAPI<'_>, _dbpool: &State<PgPool>) -> Json<models::Response> {
    let connection = db_connection::pg_pool_handler(_dbpool).unwrap();
    let results = users.limit(1)
        .load::<User>(&connection)
        .expect("Error loading users");
    ........
That is the basic code; it can be improved to handle errors in a better way.
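As one hedged sketch of that improvement (RequestAPI, models::Response, users and User are the same placeholders used in the demo route above, so this is illustrative rather than a drop-in implementation), the route could map pool and query errors to HTTP statuses instead of unwrapping:

use rocket::http::Status;
use rocket::serde::json::Json;
use rocket::State;

// Sketch only: assumes the PgPool and pg_pool_handler from db_connection.rs above.
#[post("/demo", data = "<request>")]
async fn demo(request: RequestAPI<'_>, dbpool: &State<PgPool>) -> Result<Json<models::Response>, Status> {
    // A pool error (e.g. no connection available) becomes 503 instead of a panic.
    let connection = db_connection::pg_pool_handler(dbpool)
        .map_err(|_| Status::ServiceUnavailable)?;

    // A query error becomes 500 instead of a panic.
    let results = users.limit(1)
        .load::<User>(&connection)
        .map_err(|_| Status::InternalServerError)?;

    // ... build and return Ok(Json(models::Response { .. })) from the results ...
    todo!()
}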
My code seems to be OK, as it compiles properly and is quite simple. But when I run my app with cargo run, even though the program executes properly and outputs some debug printlns, it won't answer any request.
This is my main.rs:
use actix_web::{web, App, HttpServer};
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;
use dotenvy::dotenv;
#[path = "api/books/books_handlers.rs"]
mod books_handlers;
#[path = "api/books_relationships/books_relationships_handlers.rs"]
mod books_relationships_handlers;
mod models;
mod routes;
mod schema;
mod logger;
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    // Load .env file and set initialization variables
    dotenv().ok();
    std::env::set_var("RUST_LOG", "actix_web=debug");
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    // Create db connection pool with SQLite
    let manager = ConnectionManager::<SqliteConnection>::new(database_url);
    let pool: Pool<ConnectionManager<SqliteConnection>> = r2d2::Pool::builder()
        .build(manager)
        .expect("Failed to create pool.");

    // Start HTTP server and register routes
    println!("Starting server at http://localhost:8080");
    HttpServer::new(move || {
        App::new()
            .app_data(pool.clone())
            // Book class
            .route("/create_book", web::post().to(books_handlers::create_book_handler))
            .route("/list_books", web::get().to(books_handlers::list_books_handler))
            .route("/get_book/{id}", web::post().to(books_handlers::read_book_by_id_handler))
            .route("/update_book/{id}", web::put().to(books_handlers::update_book_handler))
            .route("/delete_book/{id}", web::delete().to(books_handlers::delete_book_handler))
            // BookRelationships class
            .route("/create_book_relationship", web::post().to(books_relationships_handlers::create_book_relationship_handler))
            .route("/list_book_relationships", web::get().to(books_relationships_handlers::list_books_handler))
            .route("/get_book_relationship/{id}", web::post().to(books_relationships_handlers::read_book_by_id_handler))
            .route("/update_book_relationship/{id}", web::put().to(books_relationships_handlers::update_book_handler))
            .route("/delete_book_relationship/{id}", web::delete().to(books_relationships_handlers::delete_book_handler))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
This is the first handler, the one I'm trying with Postman:
pub async fn create_book_handler(book_data: web::Json<Book>, pool: web::Data<DbPool>) -> HttpResponse {
    println!("create_book_handler: {:#?}", book_data); // <-- this never gets executed

    let result = books_dao::create_book(book_data, pool).await;

    match result {
        Ok(book) => {
            println!("create_book_handler, OK. Book: {:#?}", book);
            HttpResponse::Ok()
                .content_type(ContentType::json())
                .json(&book)
        },
        Err(err) => {
            println!("create_book_handler, ERROR: {:#?}", err);
            log(LogType::Error, err.to_string());
            HttpResponse::InternalServerError()
                .content_type(ContentType::json())
                .body("{err: 'Unable to insert book into database'}")
        }
    }
}
Then the code executes this function, calling Diesel and altering the DB:
pub async fn create_book(book: web::Json<Book>, pool: web::Data<DbPool>) -> Result<usize, Error> {
    let mut conn = pool
        .get()
        .expect("Failed to get database connection from pool");

    diesel::insert_into(books::table)
        .values(book.into_inner())
        .execute(&mut conn)
}
But the problem seems to occur even earlier: not even the println! at the beginning of the handler gets executed. When I start the app and send a POST request to http://127.0.0.1:8080/create_book, I get the following error in Postman:
Requested application data is not configured correctly. View/enable debug logs for more details.
Am I sending the requests in a wrong way, or is the API malfunctioning?
The DbPool is wrapped incorrectly. It should look like:
...
App::new()
.app_data(actix_web::web::Data::new(pool.clone()))
...
This correctly wraps the DB pool in the smart pointer that the route handlers can then use across your application.
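For completeness, here is a minimal sketch of how the pieces fit together, assuming a DbPool type alias that matches the web::Data<DbPool> the handlers extract (the alias itself is not shown in the question):

use actix_web::{web, App, HttpServer};
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;

// Assumed alias; it must match the type the handlers extract via web::Data<DbPool>.
pub type DbPool = Pool<ConnectionManager<SqliteConnection>>;

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
    let manager = ConnectionManager::<SqliteConnection>::new(database_url);
    let pool: DbPool = Pool::builder().build(manager).expect("Failed to create pool.");

    HttpServer::new(move || {
        App::new()
            // Data::new puts the pool behind an Arc; each handler can then take
            // web::Data<DbPool> and call .get() on it to check out a connection.
            .app_data(web::Data::new(pool.clone()))
        // ... the .route(...) registrations from the question go here ...
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}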
I want to initialize thread local variables for all 4 threads at the beginning of the program.
thread_local! {
    static local: i32
}

#[tokio::main(worker_threads = 4)]
async fn main() {
    // local = get_local().await;
}
Your Tokio runtime is configured to have 4 worker threads; a thread local initialized on the main thread applies only to the main thread, not to the worker threads.
If you intend to initialize the gRPC client just once, a OnceCell could be appropriate:
use once_cell::sync::OnceCell;

pub static CLIENT: OnceCell<hello_world::greeter_client::GreeterClient<tonic::transport::Channel>> =
    OnceCell::new();

pub fn client() -> hello_world::greeter_client::GreeterClient<tonic::transport::Channel> {
    CLIENT.get().unwrap().clone()
}

#[tokio::main]
async fn main() {
    let channel = tonic::transport::Endpoint::new("http://helloworld")
        .unwrap()
        .connect_lazy();
    let client = hello_world::greeter_client::GreeterClient::new(channel);

    CLIENT.set(client).unwrap();

    main_().await;
}

async fn main_() {
    let _ = client()
        .say_hello(hello_world::HelloRequest { name: "foo".into() })
        .await;
}

pub mod hello_world {
    tonic::include_proto!("helloworld");
}
If you want to stick to something more similar to a thread local or you need more control over the client values, then you can use tokio's task local.
It allows you to provide context to tasks, but keep in mind that tokio::spawn introduces new tasks, so this context is lost when you use tokio::spawn.
The following snippet makes a tonic client available through a client() helper function that internally calls .with() on the task local. This function panics if the task local is not set, there is also try_with() which returns a Result if the value is not provided.
use tokio::task_local;

task_local! {
    pub static CLIENT: hello_world::greeter_client::GreeterClient<tonic::transport::Channel>
}

pub fn client() -> hello_world::greeter_client::GreeterClient<tonic::transport::Channel> {
    CLIENT.with(|c| c.clone())
}

#[tokio::main]
async fn main() {
    let channel = tonic::transport::Endpoint::new("http://helloworld")
        .unwrap()
        .connect_lazy();
    let client = hello_world::greeter_client::GreeterClient::new(channel);

    CLIENT.scope(client, main_()).await;
}

async fn main_() {
    let _ = client()
        .say_hello(hello_world::HelloRequest { name: "foo".into() })
        .await;
}

pub mod hello_world {
    tonic::include_proto!("helloworld");
}
I am using redis in actix-web 4
actix-web = "4"
redis = { version = "0.21", features = ["r2d2", "cluster", "connection-manager", "tokio-comp", "tokio-native-tls-comp"] }
I created the redis client in main:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let redis_client = redis::Client::open("redis://127.0.0.1:6379/").expect("err");
    let serve = HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(redis_client.clone()))
    });
}
Then I get this redis client in the route:
pub async fn list(
    query: web::Query<ListRequest>,
    redis_client: web::Data<redis::Client>,
) -> HttpResult {
    let mut con = redis_client.get_tokio_connection().await.map_err(hje)?;
    let _: () = con.set("my_key", 42).await.map_err(hje)?;
}
The above code works fine, but I would like to know how to use r2d2 to create a pool of Redis connections instead of a single connection.
I found in the documentation that r2d2 is supported, but I don't know how to use it. Can you help me?
The "r2d2" feature implements r2d2::ManageConnection for redis::Client, meaning you can create a pool like so:
r2d2::Pool::new(redis_client).unwrap();
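A minimal sketch of how that pool could be wired into the handlers above (the /list route and its body are illustrative, r2d2 must be added as its own dependency, and note that the r2d2-managed connections are blocking redis::Connection values rather than the async connection you get from get_tokio_connection):

use actix_web::{web, App, HttpResponse, HttpServer};
use redis::Commands; // brings set()/get() into scope for the blocking connection

type RedisPool = r2d2::Pool<redis::Client>;

async fn list(pool: web::Data<RedisPool>) -> HttpResponse {
    // Check a connection out of the pool; for heavy traffic consider web::block
    // so the blocking Redis call does not stall the async executor.
    let mut con = pool.get().expect("failed to get redis connection from pool");
    let _: () = con.set("my_key", 42).expect("redis SET failed");
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let redis_client = redis::Client::open("redis://127.0.0.1:6379/").expect("err");
    let pool: RedisPool = r2d2::Pool::new(redis_client).expect("failed to build pool");

    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .route("/list", web::get().to(list))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}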
I'm trying to enable the TLS feature to open an encrypted AMQP connection on a stream.
In the amiquip crate docs there is an example: https://docs.rs/amiquip/0.3.0/amiquip/struct.Connection.html#examples
I copied the example code and implemented the required tcp_stream helper function as simply as possible, so it just returns a mio::net::TcpStream instance as described.
use amiquip::{Auth, Connection, ConnectionOptions, ConnectionTuning, Result};
use mio::net::TcpStream;
use std::net::SocketAddr;
use std::{io::Read, time::Duration};

// Examples below assume a helper function to open a TcpStream from an address string with
// a signature like this is available:
fn tcp_stream() -> mio::net::TcpStream {
    let address: SocketAddr = "127.0.0.0:5671".parse().unwrap();
    mio::net::TcpStream::connect(address).unwrap()
}

fn main() -> Result<()> {
    // Empty amqp URL is equivalent to default options; handy for initial debugging and
    // development.
    // let temp = Connection::open_tls_stream(connector, domain, stream, options, tuning)
    let conn1 = Connection::insecure_open("amqp://")?;
    let conn1 = Connection::insecure_open_stream(
        tcp_stream(),
        ConnectionOptions::<Auth>::default(),
        ConnectionTuning::default(),
    )?;

    // All possible options specified in the URL except auth_mechanism=external (which would
    // override the username and password).
    let conn3 = Connection::insecure_open(
        "amqp://user:pass@example.com:12345/myvhost?heartbeat=30&channel_max=1024&connection_timeout=10000",
    )?;
    let conn3 = Connection::insecure_open_stream(
        tcp_stream(),
        ConnectionOptions::default()
            .auth(Auth::Plain {
                username: "user".to_string(),
                password: "pass".to_string(),
            })
            .heartbeat(30)
            .channel_max(1024)
            .connection_timeout(Some(Duration::from_millis(10_000))),
        ConnectionTuning::default(),
    )?;

    Ok(())
}
However, I get a compilation error (on every version of the crate) as follows:
error[E0277]: the trait bound `mio::net::TcpStream: IoStream` is not satisfied
--> src\main.rs:28:17
|
28 | let conn3 = Connection::insecure_open_stream(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ the trait `IoStream` is not implemented for `mio::net::TcpStream`
|
::: C:\Users\Tamir Cohen\.cargo\registry\src\github.com-1ecc6299db9ec823\amiquip-0.3.3\src\connection.rs:283:48
|
283 | pub fn insecure_open_stream<Auth: Sasl, S: IoStream>(
| -------- required by this bound in `Connection::insecure_open_stream`
and I just can't figure out what went wrong.
Which version of mio are you using?
You should use mio version 0.6, the same as the amiquip crate.
Cargo file:
[package]
name = "message-broker"
version = "0.1.0"
edition = "2018"
[dependencies]
amiquip = "0.4.0"
mio = { version = "0.6.23" }
native-tls = "0.2.8"
Code with TLS:
use amiquip::{Auth, Connection, ConnectionOptions, ConnectionTuning, Exchange, Publish, Result};
use mio::net::TcpStream;
use native_tls::TlsConnector;
use std::time::Duration;

fn tcp_stream(addr: &str) -> Result<TcpStream> {
    Ok(TcpStream::connect(&addr.parse().unwrap()).unwrap())
}

fn main() -> Result<()> {
    let mut connection = Connection::open_tls_stream(
        TlsConnector::new().unwrap(),
        "domain",
        tcp_stream("127.0.0.0:5671")?,
        ConnectionOptions::default()
            .auth(Auth::Plain {
                username: "user".to_string(),
                password: "pass".to_string(),
            })
            .heartbeat(30)
            .channel_max(1024)
            .connection_timeout(Some(Duration::from_millis(10_000))),
        ConnectionTuning::default(),
    )?;

    let channel = connection.open_channel(None)?;
    let exchange = Exchange::direct(&channel);
    exchange.publish(Publish::new("hello there".as_bytes(), "hello"))?;

    connection.close()
}
I have a Tokio client that talks to a remote server and is supposed to keep the connection alive permanently. I've implemented the initial authentication handshake and found that when my test terminates, I get an odd panic:
---- test_connect_without_database stdout ----
thread 'test_connect_without_database' panicked at 'Cannot drop a runtime in a context where blocking is not allowed. This happens when a runtime is dropped from within an asynchronous context.', /playground/.cargo/registry/src/github.com-1ecc6299db9ec823/tokio-0.3.5/src/runtime/blocking/shutdown.rs:51:21
I'm at an absolute loss as to what might be causing this, so I don't know what other bits of code to add for context.
Here's my minimal reproducible example (playground):
use std::cell::RefCell;
use std::net::{IpAddr, SocketAddr};
use tokio::net::TcpStream;
use tokio::prelude::*;
use tokio::runtime;
#[derive(PartialEq, Debug)]
pub struct Configuration {
    /// Database username.
    username: String,
    /// Database password.
    password: String,
    /// Database name.
    db_name: String,
    /// IP address for the remote server.
    address: IpAddr,
    /// Remote server port.
    port: u16,
    /// Number of connections to open.
    connections: u16,
}

impl Configuration {
    pub fn new(
        username: &str,
        password: &str,
        db_name: &str,
        address: &str,
        port: u16,
        connections: u16,
    ) -> Result<Configuration, Box<dyn std::error::Error>> {
        let address = address.to_string().parse()?;

        let configuration = Configuration {
            username: username.to_string(),
            password: password.to_string(),
            db_name: db_name.to_string(),
            address,
            port,
            connections,
        };

        Ok(configuration)
    }

    pub fn socket(&self) -> SocketAddr {
        SocketAddr::new(self.address, self.port)
    }
}
#[derive(Debug)]
pub struct Session {
    configuration: Configuration,
    runtime: RefCell<runtime::Runtime>,
}

impl Session {
    fn new(config: Configuration) -> Result<Session, io::Error> {
        let runtime = runtime::Builder::new_multi_thread()
            .worker_threads(6)
            .enable_all()
            .build()?;

        let session = Session {
            configuration: config,
            runtime: RefCell::new(runtime),
        };

        Ok(session)
    }

    fn configuration(&self) -> &Configuration {
        &self.configuration
    }
}
#[derive(Debug)]
pub(crate) struct Connection<'a> {
    /// Socket used to read and write from.
    session: &'a Session,
    /// Connection to the remote server.
    stream: TcpStream,
    /// Flag that indicates whether the connection is live.
    live: bool,
}

impl<'a> Connection<'a> {
    async fn new(session: &Session) -> Result<Connection<'_>, Box<dyn std::error::Error>> {
        let mut stream = TcpStream::connect(session.configuration().socket()).await?;

        let conn = Connection {
            session,
            stream,
            live: true,
        };

        Ok(conn)
    }

    fn live(&self) -> bool {
        self.live
    }
}
#[tokio::test]
async fn test_connect_without_database() -> Result<(), Box<dyn std::error::Error>> {
    let config = Configuration::new("rust", "", "", "127.0.0.1", 2345, 2).unwrap();
    let session = Session::new(config).unwrap();
    let conn = Connection::new(&session).await?;

    assert!(conn.live());

    Ok(())
}

fn main() {
    println!("{}", 65u8 as char)
}
As the error message states:
This happens when a runtime is dropped from within an asynchronous context
You have created nested runtimes:
From tokio::test
From runtime::Builder::new_multi_thread
The second runtime is owned by Session which is dropped at the end of the asynchronous test. You can observe this by skipping the destructor using mem::forget:
#[tokio::test]
async fn test_connect_without_database() -> Result<(), Box<dyn std::error::Error>> {
    let config = Configuration::new("rust", "", "", "127.0.0.1", 2345, 2).unwrap();
    let session = Session::new(config).unwrap();

    // Note that the assert was removed!
    std::mem::forget(session);

    Ok(())
}
Don't spawn nested runtimes and don't drop one runtime from within another.
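For example, a minimal sketch of one way out (an assumption about your design, not the only possible fix) is to drop the inner runtime and let the runtime created by #[tokio::test] / #[tokio::main] drive the connection:

// Session no longer owns a runtime, so nothing nested gets dropped when the async
// test finishes; Connection::new(&session).await simply runs on the outer runtime.
#[derive(Debug)]
pub struct Session {
    configuration: Configuration,
}

impl Session {
    // Keeps the Result signature so the existing test code still compiles unchanged.
    fn new(config: Configuration) -> Result<Session, io::Error> {
        Ok(Session { configuration: config })
    }

    fn configuration(&self) -> &Configuration {
        &self.configuration
    }
}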
See also:
How can I create a Tokio runtime inside another Tokio runtime without getting the error "Cannot start a runtime from within a runtime"?
"cannot recursively call into `Core`" when trying to achieve nested concurrency using Tokio
How do I synchronously return a value calculated in an asynchronous Future in stable Rust?
Try cargo run --release rather than cargo run.
https://dev.to/scyart/comment/1f88p