actix-web: using a Redis connection pool - Rust

I am using redis with actix-web 4:
actix-web = "4"
redis = { version = "0.21", features = ["r2d2", "cluster", "connection-manager", "tokio-comp", "tokio-native-tls-comp"] }
I created the Redis client in main:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
let redis_client = redis::Client::open("redis://127.0.0.1:6379/").expect("err");
let serve = HttpServer::new(move || {
App::new()
.app_data(web::Data::new(redis_client.clone()))
});
// ... bind and run `serve` ...
}
Then I get the client back in the route handler:
pub async fn list(
query: web::Query<ListRequest>,
redis_client: web::Data<redis::Client>,
) -> HttpResult {
// `set` comes from the `redis::AsyncCommands` trait
let mut con = redis_client.get_tokio_connection().await.map_err(hje)?;
let _: () = con.set("my_key", 42).await.map_err(hje)?;
// ... build and return the response ...
}
The above code works fine, but I would like to know how to use r2d2 to create a pool of Redis connections instead of opening a single connection each time.
I found in the documentation that r2d2 is supported, but I don't know how to use it. Can you help me?

The "r2d2" feature will implement r2d2::ManageConnection for redis::Client meaning you can create a pool like so:
r2d2::Pool::new(redis_client).unwrap();
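r2d2 is a synchronous pool, so a common pattern is to build it once in main, store it in web::Data, and fetch connections inside web::block so the blocking Redis calls stay off the async workers. Below is a minimal sketch of that wiring (it assumes r2d2 itself is also a direct dependency; the route name, bind address, and error mapping are my own illustrative choices, not taken from your project):

use actix_web::{error, web, App, HttpServer};
use redis::Commands; // sync `set`/`get` come from this trait

// With the "r2d2" feature, redis::Client itself is the pool manager.
type RedisPool = r2d2::Pool<redis::Client>;

async fn list(pool: web::Data<RedisPool>) -> actix_web::Result<String> {
    let pool = pool.clone();
    // r2d2 and the sync redis API block, so run them on the blocking thread pool.
    let value = web::block(move || -> redis::RedisResult<i32> {
        let mut con = pool.get().expect("Redis pool exhausted");
        let _: () = con.set("my_key", 42)?;
        con.get("my_key")
    })
    .await
    .map_err(error::ErrorInternalServerError)?
    .map_err(error::ErrorInternalServerError)?;
    Ok(value.to_string())
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let client = redis::Client::open("redis://127.0.0.1:6379/").expect("invalid Redis URL");
    let pool: RedisPool = r2d2::Pool::builder().build(client).expect("failed to create pool");
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .route("/list", web::get().to(list))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}

If you would rather stay fully async, the connection-manager feature you already enabled provides redis::aio::ConnectionManager (a single multiplexed, cloneable connection), and crates such as deadpool-redis offer an async pool; both avoid web::block entirely.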

Related

Actix/Diesel API not responding to requests from Postman

My code seems to be OK: it compiles and is quite simple. But when I run my app with cargo run, even though the program executes properly and outputs some debug printlns, it won't answer any requests.
This is my main.rs:
use actix_web::{web, App, HttpServer};
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;
use dotenvy::dotenv;
#[path = "api/books/books_handlers.rs"]
mod books_handlers;
#[path = "api/books_relationships/books_relationships_handlers.rs"]
mod books_relationships_handlers;
mod models;
mod routes;
mod schema;
mod logger;
#[actix_rt::main]
async fn main() -> std::io::Result<()> {
// Load .env file and set initialization variables
dotenv().ok();
std::env::set_var("RUST_LOG", "actix_web=debug");
let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");
// Create db connection pool with SQLite
let manager = ConnectionManager::<SqliteConnection>::new(database_url);
let pool: Pool<ConnectionManager<SqliteConnection>> = r2d2::Pool::builder()
.build(manager)
.expect("Failed to create pool.");
// Start HTTP server and register routes
println!("Starting server at http://localhost:8080");
HttpServer::new(move || {
App::new()
.app_data(pool.clone())
// Book class
.route("/create_book", web::post().to(books_handlers::create_book_handler))
.route("/list_books", web::get().to(books_handlers::list_books_handler))
.route("/get_book/{id}", web::post().to(books_handlers::read_book_by_id_handler))
.route("/update_book/{id}", web::put().to(books_handlers::update_book_handler))
.route("/delete_book/{id}", web::delete().to(books_handlers::delete_book_handler))
// BookRelationships class
.route("/create_book_relationship", web::post().to(books_relationships_handlers::create_book_relationship_handler))
.route("/list_book_relationships", web::get().to(books_relationships_handlers::list_books_handler))
.route("/get_book_relationship/{id}", web::post().to(books_relationships_handlers::read_book_by_id_handler))
.route("/update_book_relationship/{id}", web::put().to(books_relationships_handlers::update_book_handler))
.route("/delete_book_relationship/{id}", web::delete().to(books_relationships_handlers::delete_book_handler))
})
.bind("127.0.0.1:8080")?
.run()
.await
}
This is the first handler, the one I'm trying with Postman:
pub async fn create_book_handler(book_data: web::Json<Book>, pool: web::Data<DbPool>) -> HttpResponse {
println!("create_book_handler: {:#?}", book_data); // <-- this never gets executed
let result = books_dao::create_book(book_data, pool).await;
match result {
Ok(book) => {
println!("create_book_handler, OK. Book: {:#?}", book);
HttpResponse::Ok()
.content_type(ContentType::json())
.json(&book)
},
Err(err) => {
println!("create_book_handler, ERROR: {:#?}", err);
log(LogType::Error, err.to_string());
HttpResponse::InternalServerError()
.content_type(ContentType::json())
.body("{err: 'Unable to insert book into database'")
}
}
}
Then the code executes this function, calling Diesel and altering the DB:
pub async fn create_book(book: web::Json<Book>, pool: web::Data<DbPool>) -> Result<usize, Error> {
let mut conn = pool
.get()
.expect("Failed to get database connection from pool");
diesel::insert_into(books::table)
.values(book.into_inner())
.execute(&mut conn)
}
But the problem seems to arise even before that: not even the println! at the beginning of the handler gets executed. When I start the app and send a POST request to http://127.0.0.1:8080/create_book, I get the following error in Postman:
Requested application data is not configured correctly. View/enable debug logs for more details.
Am I sending the requests in the wrong way, or is the API malfunctioning?
The DbPool is wrapped incorrectly. It should look like this:
...
App::new()
.app_data(actix_web::web::Data::new(pool.clone()))
...
This correctly wraps the DB pool in the Data smart pointer (an Arc) that the route handlers can then extract as web::Data<DbPool> across your application.
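Here is a minimal sketch of how the corrected wiring fits together (the DbPool alias and the SQLite URL are my guesses based on the question; only one route is shown):

use actix_web::{web, App, HttpResponse, HttpServer};
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;

// The type the handlers extract via web::Data<DbPool>
pub type DbPool = Pool<ConnectionManager<SqliteConnection>>;

async fn list_books(pool: web::Data<DbPool>) -> HttpResponse {
    // Extraction only succeeds because a Data<DbPool> is registered below;
    // .app_data(pool.clone()) alone registers a plain DbPool, which doesn't match.
    // (Diesel is synchronous, so real handlers should run queries in web::block.)
    let _conn = pool.get().expect("couldn't get db connection from pool");
    HttpResponse::Ok().finish()
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let manager = ConnectionManager::<SqliteConnection>::new("books.sqlite");
    let pool: DbPool = Pool::builder().build(manager).expect("Failed to create pool.");
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(pool.clone()))
            .route("/list_books", web::get().to(list_books))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}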

Pass redis connection object to actix web get route in rust

I am using the redis-rs library to read JSON from RedisJSON. The program works fine when I open and create the connection inside the read_db_demo function, but that is not ideal, so I opened and created the connection inside the main function. Now how should I pass the connection variable to the read_db_demo function? Until now, I tried adding
App::new()
.app_data(web::Data::new(connection.clone()))
.route("/", web::get().to(read_db_demo))
})
which didn't work.
My code:
use actix_web::{get, App, HttpResponse, HttpServer, Responder};
use redis::Client;
use redis::JsonCommands;
use redis::RedisResult;
use serde_json::Value;
const TEST_KEY: &str = "results";
#[get("/")]
async fn read_db_demo() -> impl Responder {
let json_response: RedisResult<String> = connection.json_get(TEST_KEY, "$");
match json_response {
Ok(json_string) => {
let json: Value = serde_json::from_str(&json_string).unwrap();
HttpResponse::Ok().json(json)
}
Err(_) => HttpResponse::InternalServerError().body("Error reading from Redis"),
}
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
let client = Client::open("redis://:xx").unwrap();
let mut connection = client.get_connection().unwrap(); // how to pass this connection to read_db_demo
HttpServer::new(|| {
App::new()
.service(read_db_demo)
})
.bind(("127.0.0.1", 8080))?
.run()
.await
}
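One way to make this work, following the same Data::new pattern as the answers above (a sketch, not tested against your setup): redis::Connection is neither Clone nor Sync, so share the Client rather than the connection, and open a connection per request (or use a pool, as in the first question above):

use actix_web::{get, web, App, HttpResponse, HttpServer, Responder};
use redis::{Client, JsonCommands};
use serde_json::Value;

const TEST_KEY: &str = "results";

#[get("/")]
async fn read_db_demo(client: web::Data<Client>) -> impl Responder {
    // Open a (blocking) connection per request; a connection pool or
    // client.get_multiplexed_async_connection() would avoid the handshake cost.
    let mut connection = match client.get_connection() {
        Ok(conn) => conn,
        Err(_) => return HttpResponse::InternalServerError().body("Error connecting to Redis"),
    };
    match connection.json_get::<_, _, String>(TEST_KEY, "$") {
        Ok(json_string) => {
            let json: Value = serde_json::from_str(&json_string).unwrap();
            HttpResponse::Ok().json(json)
        }
        Err(_) => HttpResponse::InternalServerError().body("Error reading from Redis"),
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let client = Client::open("redis://:xx").unwrap();
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(client.clone()))
            .service(read_db_demo)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}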

Actix Web: Requested application data is not configured correctly. View/enable debug logs for more details

I have a simple application with an HTTP endpoint and a connection to a MongoDB database.
use actix_web::{
middleware, post,
web::{self},
App, HttpServer, Responder,
};
use mongodb::{options::ClientOptions, Client};
use serde::Deserialize;
#[derive(Deserialize, Debug)]
struct TestBody {
name: String,
age: u8,
}
#[post("/test")]
async fn test(query: web::Json<TestBody>, db: web::Data<Client>) -> impl Responder {
for db_name in db.list_database_names(None, None).await.unwrap() {
println!("{}", db_name);
}
let res = format!("{} {}", query.name, query.age);
res
}
#[actix_web::main]
async fn main() -> std::io::Result<()> {
let connection_string = "secret-connection-string";
let client_options = ClientOptions::parse(connection_string).await.unwrap();
let client = Client::with_options(client_options).unwrap();
HttpServer::new(move || {
App::new()
.wrap(middleware::Compress::default())
.app_data(client.clone())
.app_data(web::JsonConfig::default())
.service(test)
})
.bind("0.0.0.0:7080")?
.run()
.await
}
It compiles and runs just fine. But when trying to access localhost:7080/test, I get the following response:
Requested application data is not configured correctly. View/enable debug logs for more details.
I don't see any logs in the console. How do I view or enable the Actix Web logs?
To see the logs of Actix Web, add the env_logger dependency to Cargo.toml:
[dependencies]
env_logger = "0.10.0"
You will also have to set the environment variable RUST_LOG to determine the log level. This can be done at runtime using std::env::set_var.
#[actix_web::main]
async fn main() -> std::io::Result<()> {
std::env::set_var("RUST_LOG", "debug");
env_logger::init();
/* ... */
}
This enables debug logging for Rust and Actix Web.
To solve the original issue: data that you want to extract in a handler as web::Data<T> must be passed to app_data() wrapped in Data::new().
This is how I did it before:
HttpServer::new(move || {
App::new()
/* ... */
.app_data(client.clone())
/* ... */
})
How it should be instead:
HttpServer::new(move || {
App::new()
/* ... */
.app_data(Data::new(client.clone())) // <-- Data::new() here
/* ... */
})
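As a side note (my suggestion, not part of the original answer): since the HttpServer::new closure runs once per worker, you can also build the Data once outside the closure and hand each worker a clone of the same Arc. With the mongodb Client, which is already a cheap handle, this is functionally equivalent, but it makes the sharing explicit (the test service is the one from the question):

use actix_web::{middleware, web::Data, App, HttpServer};
use mongodb::{options::ClientOptions, Client};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    std::env::set_var("RUST_LOG", "debug");
    env_logger::init();

    let client_options = ClientOptions::parse("secret-connection-string").await.unwrap();
    let client = Client::with_options(client_options).unwrap();
    // Wrap once; Data is an Arc, so cloning it per worker is cheap.
    let client_data = Data::new(client);

    HttpServer::new(move || {
        App::new()
            .wrap(middleware::Compress::default())
            .app_data(client_data.clone())
            .service(test) // the #[post("/test")] handler from the question
    })
    .bind("0.0.0.0:7080")?
    .run()
    .await
}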

How to add tracing to a Rust microservice?

I built a microservice in Rust. I receive messages, request a document based on the message, and call a REST api with the results. I built the REST api with warp and send out the result with reqwest. We use jaeger for tracing and the "b3" format. I have no experience with tracing and am a Rust beginner.
Question: What do I need to add to the warp / reqwest source below to propagate the tracing information and add my own span?
My version endpoint (for simplicity) looks like:
pub async fn version() -> Result<impl warp::Reply, Infallible> {
Ok(warp::reply::with_status(VERSION, http::StatusCode::OK))
}
I assume I have to extract e.g. the traceid / trace information here.
A reqwest call I do looks like this:
pub async fn get_document_content_as_text(
account_id: &str,
hash: &str,
) -> Result<String, Box<dyn std::error::Error>> {
let client = reqwest::Client::builder().build()?;
let res = client
.get(url)
.bearer_auth(TOKEN)
.send()
.await?;
if res.status().is_success() {}
let text = res.text().await?;
Ok(text)
}
I assume I have to add the traceid / trace information here.
You need to add a tracing filter into your warp filter pipeline.
From the documentation example:
use warp::Filter;
let route = warp::any()
.map(warp::reply)
.with(warp::trace(|info| {
// Create a span using tracing macros
tracing::info_span!(
"request",
method = %info.method(),
path = %info.path(),
)
}));
I'll assume that you're using tracing within your application and using opentelemetry and opentelemetry-jaeger to wire it up to an external service. The specific provider you're using doesn't matter. Here's a super simple setup to get that all working, which I'll assume you're using in both applications:
# Cargo.toml
[dependencies]
opentelemetry = "0.17.0"
opentelemetry-jaeger = "0.16.0"
tracing = "0.1.33"
tracing-subscriber = { version = "0.3.11", features = ["env-filter"] }
tracing-opentelemetry = "0.17.2"
reqwest = "0.11.11"
tokio = { version = "1.21.1", features = ["macros", "rt", "rt-multi-thread"] }
warp = "0.3.2"
use tracing_subscriber::prelude::*; // for .with(...) and .init() on the registry

opentelemetry::global::set_text_map_propagator(opentelemetry_jaeger::Propagator::new());
tracing_subscriber::registry()
.with(tracing_opentelemetry::layer().with_tracer(
opentelemetry_jaeger::new_pipeline()
.with_service_name("client") // or "server"
.install_simple()
.unwrap())
).init();
Let's say the "client" application is set up like so:
#[tracing::instrument]
async fn call_hello() {
let client = reqwest::Client::default();
let _resp = client
.get("http://127.0.0.1:3030/hello")
.send()
.await
.unwrap()
.text()
.await
.unwrap();
}
#[tokio::main]
async fn main() {
// ... initialization above ...
call_hello().await;
}
The traces produced by the client are a bit chatty because of other crates, but fairly simple, and do not include the server side.
Let's say the "server" application is set up like so:
#[tracing::instrument]
fn hello_handler() -> &'static str {
tracing::info!("got hello message");
"hello world"
}
#[tokio::main]
async fn main() {
// ... initialization above ...
let routes = warp::path("hello")
.map(hello_handler);
warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
Likewise, the traces produced by the server are pretty bare-bones.
The key part to marrying these two traces is to declare the client-side trace as the parent of the server-side trace. This can be done over HTTP requests with the traceparent and tracestate headers as designed by the W3C Trace Context standard. There is a TraceContextPropagator available from the opentelemetry crate that can be used to "extract" and "inject" these values (though as you'll see, it's not very easy to work with since it only works on HashMap<String, String>s).
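For reference, a propagated traceparent header carries a version, the trace id, the parent span id, and trace flags. Using the W3C specification's illustrative values (not values produced by this code), it looks like:
traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01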
For the "client" to send these headers, you'll need to:
get the current tracing Span
get the opentelemetry Context from the Span (if you're not using tracing at all, you can skip the first step and use Context::current() directly)
create the propagator and a fields map to propagate into, and "inject" them from the Context
use those fields as headers for reqwest
use std::collections::HashMap;
use opentelemetry::propagation::TextMapPropagator;
use opentelemetry::sdk::propagation::TraceContextPropagator;
use reqwest::header::{HeaderName, HeaderValue};
use tracing_opentelemetry::OpenTelemetrySpanExt; // for span.context()

#[tracing::instrument]
async fn call_hello() {
let span = tracing::Span::current();
let context = span.context();
let propagator = TraceContextPropagator::new();
let mut fields = HashMap::new();
propagator.inject_context(&context, &mut fields);
let headers = fields
.into_iter()
.map(|(k, v)| {(
HeaderName::try_from(k).unwrap(),
HeaderValue::try_from(v).unwrap(),
)})
.collect();
let client = reqwest::Client::default();
let _resp = client
.get("http://127.0.0.1:3030/hello")
.headers(headers)
.send()
.await
.unwrap()
.text()
.await
.unwrap();
}
For the "server" to make use of those headers, you'll need to:
pull them out from the request and store them in a HashMap
use the propagator to "extract" the values into a Context
set that Context as the parent of the current tracing Span (if you didn't use tracing, you could .attach() it instead)
use std::collections::HashMap;
use opentelemetry::propagation::TextMapPropagator;
use opentelemetry::sdk::propagation::TraceContextPropagator;
use tracing_opentelemetry::OpenTelemetrySpanExt; // for span.set_parent()

#[tracing::instrument]
fn hello_handler(traceparent: Option<String>, tracestate: Option<String>) -> &'static str {
let fields: HashMap<_, _> = [
dbg!(traceparent).map(|value| ("traceparent".to_owned(), value)),
dbg!(tracestate).map(|value| ("tracestate".to_owned(), value)),
]
.into_iter()
.flatten()
.collect();
let propagator = TraceContextPropagator::new();
let context = propagator.extract(&fields);
let span = tracing::Span::current();
span.set_parent(context);
tracing::info!("got hello message");
"hello world"
}
#[tokio::main]
async fn main() {
// ... initialization above ...
let routes = warp::path("hello")
.and(warp::header::optional("traceparent"))
.and(warp::header::optional("tracestate"))
.map(hello_handler);
warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
With all that, hopefully your traces have now been associated with one another!
Full code is available here and here.
Please, someone let me know if there is a better way! It seems ridiculous to me that there isn't better integration available. Sure some of this could maybe be a bit simpler and/or wrapped up in some nice middleware for your favorite client and server of choice... But I haven't found a crate or snippet of that anywhere!

Rust - Cannot Access r2d2 pool connection from Rocket State

I am currently learning Rust and Rocket.
Using Rust 1.54.0 + Rocket 0.5.0-rc.1 + Diesel 1.4.7 + r2d2 0.8.9.
I created a Postgres DB connection pool with r2d2. I want to share the connection pool between requests/routes; to do that I am trying to use Rocket managed state: https://rocket.rs/v0.5-rc/guide/state/#managed-state
I created a DB connection pool and saved it in the state, but when I try to access that pool from a route, I get 2 errors on the same line:
Cell<i32> cannot be shared between threads safely
RefCell<HashMap<StatementCacheKey<Pg>, pg::connection::stmt::Statement>> cannot be shared between threads safely
Here is my code:
pub async fn establish_pooled_connection() -> Result<PooledConnection<ConnectionManager<PgConnection>>, r2d2_diesel::Error> {
dotenv().ok();
let database_url = env::var("DATABASE_URL")
.expect("DATABASE_URL must be set");
let manager = ConnectionManager::<PgConnection>::new(&database_url);
let pool = r2d2::Pool::builder().build(manager).expect("Failed to create pool.");
let conn = pool.clone().get().unwrap();
Ok(conn)
}
struct DBPool{
db_pool: PooledConnection<ConnectionManager<PgConnection>>
}
#[rocket::main]
async fn main() {
let pool = establish_pooled_connection();
rocket::build()
.mount("/",routes![callapi])
.manage(DBPool{db_pool: pool})
.launch()
.await.ok();
}
#[post("/callapi", data = "<request>")]
async fn callapi(request: RequestAPI<'_>, _dbpool: &State<DBPool>) -> Json<models::api_response::ApiResponse> {
......
The errors are for this parameter:
_dbpool: &State<DBPool>
Thanks in advance.
Finally, I was able to make this work.
I mostly used this GitHub repo as a base:
https://github.com/practical-rust-web-development/mystore/tree/v1.1
The repo uses Actix, but I am using Rocket.
I think my main misunderstanding was that the base PostgreSQL connection is a PgConnection, while what you get from the pool is a PgPooledConnection, which in the end derefs to the same connection.
Here is my code:
db_connection.rs
use diesel::pg::PgConnection;
use dotenv::dotenv;
use std::env;
use diesel::r2d2::{ Pool, PooledConnection, ConnectionManager, PoolError };
pub type PgPool = Pool<ConnectionManager<PgConnection>>;
pub type PgPooledConnection = PooledConnection<ConnectionManager<PgConnection>>;
// Connects to Postgres and calls init_pool
pub fn establish_connection() -> PgPool {
dotenv().ok();
let database_url = env::var("DATABASE_URL")
.expect("DATABASE_URL must be set");
init_pool(&database_url).expect("Failed to create pool")
}
//Creates a default R2D2 Postgres DB Pool
fn init_pool(database_url: &str) -> Result<PgPool, PoolError> {
let manager = ConnectionManager::<PgConnection>::new(database_url);
Pool::builder().build(manager)
}
// This function returns a connection from the pool
pub fn pg_pool_handler(pool: &PgPool) -> Result<PgPooledConnection, PoolError> {
let _pool = pool.get().unwrap();
Ok(_pool)
}
main.rs
mod db_connection;
use db_connection::{PgPool};
#[rocket::main]
async fn main() {
rocket::build()
.mount("/API",routes![demo])
.manage(db_connection::establish_connection()) //here is where you pass the pool to Rocket state.
.launch()
.await.ok();
}
#[post("/demo", data = "<request>")]
async fn demo(request: RequestAPI<'_>, _dbpool: &State<PgPool>) -> Json<models::Response> {
let connection = db_connection::pg_pool_handler(_dbpool).unwrap();
let results = users.limit(1)
.load::<User>(&connection)
.expect("Error loading users");
........
That is the basic code; it can be improved to handle errors in a better way, as sketched below.
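For example (my suggestion, not part of the original answer), pg_pool_handler can simply propagate the pool error instead of unwrapping it, and the route can then map that error to an HTTP status:

use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool, PooledConnection, PoolError};

pub type PgPool = Pool<ConnectionManager<PgConnection>>;
pub type PgPooledConnection = PooledConnection<ConnectionManager<PgConnection>>;

// Returns a connection from the pool, propagating the error instead of panicking
pub fn pg_pool_handler(pool: &PgPool) -> Result<PgPooledConnection, PoolError> {
    pool.get()
}

In the route, the handler can then return Result<Json<models::Response>, Status> and convert the error with something like .map_err(|_| Status::ServiceUnavailable)? instead of calling .unwrap().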
