Why is actix-session not correctly assigning values? - rust

I am a newbie to Rust, and I am developing a Rust HTTP API with Actix Web. To maintain the session between requests, I wanted to use the officially provided Actix Session crate. Currently, this is my implementation in main.rs:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let session_secret_key = CookieKeyHelper::generate();
    let server = HttpServer::new(move || {
        App::new()
            .wrap(Cors::permissive())
            .wrap(
                SessionMiddleware::builder(
                    CookieSessionStore::default(),
                    session_secret_key.clone(),
                )
                .cookie_secure(false)
                .build(),
            )
            // [...]
    })
And in one of my handlers I'm writing values into the session:
// [...]
return match token_result {
    Ok(token) => {
        println!("Received token: {:?}", token);
        println!("Access token: {:?}", token.access_token().secret());
        session
            .insert::<StandardTokenResponse<EmptyExtraTokenFields, BasicTokenType>>(
                "token", token,
            )
            .unwrap();
        session
            .insert::<AuthorizationCode>(
                "auth_code",
                AuthorizationCode::new(query["code"].to_string()),
            )
            .unwrap();
        // [...]
No errors are raised, but when I then try to read the session values with session.entries(), the result is always empty: {}.
What am I doing wrong here?
Thanks in advance!! :)

Related

Actix/Diesel API not responding to requests from Postman

My code seems to be OK: it compiles properly and is quite simple. But when I run my app with cargo run, even though the program executes properly and outputs some debug printlns, it won't answer any request.
This is my main.rs:
use actix_web::{web, App, HttpServer};
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;
use dotenvy::dotenv;

#[path = "api/books/books_handlers.rs"]
mod books_handlers;
#[path = "api/books_relationships/books_relationships_handlers.rs"]
mod books_relationships_handlers;
mod models;
mod routes;
mod schema;
mod logger;

#[actix_rt::main]
async fn main() -> std::io::Result<()> {
    // Load .env file and set initialization variables
    dotenv().ok();
    std::env::set_var("RUST_LOG", "actix_web=debug");
    let database_url = std::env::var("DATABASE_URL").expect("DATABASE_URL must be set");

    // Create db connection pool with SQLite
    let manager = ConnectionManager::<SqliteConnection>::new(database_url);
    let pool: Pool<ConnectionManager<SqliteConnection>> = r2d2::Pool::builder()
        .build(manager)
        .expect("Failed to create pool.");

    // Start HTTP server and register routes
    println!("Starting server at http://localhost:8080");
    HttpServer::new(move || {
        App::new()
            .app_data(pool.clone())
            // Book class
            .route("/create_book", web::post().to(books_handlers::create_book_handler))
            .route("/list_books", web::get().to(books_handlers::list_books_handler))
            .route("/get_book/{id}", web::post().to(books_handlers::read_book_by_id_handler))
            .route("/update_book/{id}", web::put().to(books_handlers::update_book_handler))
            .route("/delete_book/{id}", web::delete().to(books_handlers::delete_book_handler))
            // BookRelationships class
            .route("/create_book_relationship", web::post().to(books_relationships_handlers::create_book_relationship_handler))
            .route("/list_book_relationships", web::get().to(books_relationships_handlers::list_books_handler))
            .route("/get_book_relationship/{id}", web::post().to(books_relationships_handlers::read_book_by_id_handler))
            .route("/update_book_relationship/{id}", web::put().to(books_relationships_handlers::update_book_handler))
            .route("/delete_book_relationship/{id}", web::delete().to(books_relationships_handlers::delete_book_handler))
    })
    .bind("127.0.0.1:8080")?
    .run()
    .await
}
This is the first handler, the one I'm trying with Postman:
pub async fn create_book_handler(book_data: web::Json<Book>, pool: web::Data<DbPool>) -> HttpResponse {
    println!("create_book_handler: {:#?}", book_data); // <-- this never gets executed
    let result = books_dao::create_book(book_data, pool).await;
    match result {
        Ok(book) => {
            println!("create_book_handler, OK. Book: {:#?}", book);
            HttpResponse::Ok()
                .content_type(ContentType::json())
                .json(&book)
        },
        Err(err) => {
            println!("create_book_handler, ERROR: {:#?}", err);
            log(LogType::Error, err.to_string());
            HttpResponse::InternalServerError()
                .content_type(ContentType::json())
                .body("{err: 'Unable to insert book into database'}")
        }
    }
}
Then the code executes this function, calling Diesel and altering the DB:
pub async fn create_book(book: web::Json<Book>, pool: web::Data<DbPool>) -> Result<usize, Error> {
    let mut conn = pool
        .get()
        .expect("Failed to get database connection from pool");
    diesel::insert_into(books::table)
        .values(book.into_inner())
        .execute(&mut conn)
}
But the problem seems to start even earlier: not even the println! at the beginning of the handler gets executed. When I start the app and send a POST request to http://127.0.0.1:8080/create_book, I get the following error in Postman:
Requested application data is not configured correctly. View/enable debug logs for more details.
Am I sending the requests the wrong way, or is the API malfunctioning?
The DbPool is wrapped incorrectly. It should look like this:
...
App::new()
    .app_data(actix_web::web::Data::new(pool.clone()))
...
This correctly wraps the DB pool in the smart pointer that the route handlers can then use across your application.
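For reference, a minimal sketch of how a handler can then extract the pool (assuming DbPool is the type alias behind the web::Data<DbPool> parameters shown in the question):
use actix_web::{web, HttpResponse};
use diesel::r2d2::{ConnectionManager, Pool};
use diesel::sqlite::SqliteConnection;

// Assumed alias, matching the `web::Data<DbPool>` parameters in the question.
type DbPool = Pool<ConnectionManager<SqliteConnection>>;

// With `.app_data(actix_web::web::Data::new(pool.clone()))` registered,
// this extractor resolves and the handler actually runs.
pub async fn list_books_handler(pool: web::Data<DbPool>) -> HttpResponse {
    // `Data<T>` derefs to `T`, so the pool can be used directly.
    let _conn = pool.get().expect("Failed to get database connection from pool");
    // ... run Diesel queries with `_conn` ...
    HttpResponse::Ok().finish()
}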

Pass redis connection object to actix web get route in rust

I am using the redis-rs library to read JSON from RedisJSON. The program works fine when I open and create the connection inside the read_db_demo function, but that is not ideal, so I opened and created the connection inside the main function instead. Now, how should I pass the connection variable to the read_db_demo function? Until now, I tried adding
App::new()
    .app_data(web::Data::new(connection.clone()))
    .route("/", web::get().to(read_db_demo))
})
which didn't work.
My code:
use actix_web::{get, App, HttpResponse, HttpServer, Responder};
use redis::Client;
use redis::JsonCommands;
use redis::RedisResult;
use serde_json::Value;

const TEST_KEY: &str = "results";

#[get("/")]
async fn read_db_demo() -> impl Responder {
    let json_response: RedisResult<String> = connection.json_get(TEST_KEY, "$");
    match json_response {
        Ok(json_string) => {
            let json: Value = serde_json::from_str(&json_string).unwrap();
            HttpResponse::Ok().json(json)
        }
        Err(_) => HttpResponse::InternalServerError().body("Error reading from Redis"),
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let client = Client::open("redis://:xx").unwrap();
    let mut connection = client.get_connection().unwrap(); // how to pass this connection to read_db_demo?
    HttpServer::new(|| {
        App::new()
            .service(read_db_demo)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
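One way to approach this: redis::Connection is not Clone, and every command needs &mut access to it, so instead of sharing the connection itself, share the Client through web::Data and open a connection inside the handler (only the Arc inside Data is cloned per worker). A minimal sketch under that assumption:
use actix_web::{get, web, App, HttpResponse, HttpServer, Responder};
use redis::{Client, JsonCommands, RedisResult};
use serde_json::Value;

const TEST_KEY: &str = "results";

#[get("/")]
async fn read_db_demo(client: web::Data<Client>) -> impl Responder {
    // Open a connection per request; the shared app data is the Client.
    // (These redis calls are blocking; for production, consider `web::block`
    // or an async connection instead.)
    let mut connection = match client.get_connection() {
        Ok(conn) => conn,
        Err(_) => return HttpResponse::InternalServerError().body("Error connecting to Redis"),
    };
    let json_response: RedisResult<String> = connection.json_get(TEST_KEY, "$");
    match json_response {
        Ok(json_string) => {
            let json: Value = serde_json::from_str(&json_string).unwrap();
            HttpResponse::Ok().json(json)
        }
        Err(_) => HttpResponse::InternalServerError().body("Error reading from Redis"),
    }
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let client = Client::open("redis://:xx").unwrap();
    let client_data = web::Data::new(client);
    HttpServer::new(move || {
        App::new()
            .app_data(client_data.clone())
            .service(read_db_demo)
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}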

Actix Web: Requested application data is not configured correctly. View/enable debug logs for more details

I have a simple application with an HTTP endpoint and a connection to a MongoDB database.
use actix_web::{
    middleware, post,
    web::{self},
    App, HttpServer, Responder,
};
use mongodb::{options::ClientOptions, Client};
use serde::Deserialize;

#[derive(Deserialize, Debug)]
struct TestBody {
    name: String,
    age: u8,
}

#[post("/test")]
async fn test(query: web::Json<TestBody>, db: web::Data<Client>) -> impl Responder {
    for db_name in db.list_database_names(None, None).await.unwrap() {
        println!("{}", db_name);
    }
    let res = format!("{} {}", query.name, query.age);
    res
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let connection_string = "secret-connection-string";
    let client_options = ClientOptions::parse(connection_string).await.unwrap();
    let client = Client::with_options(client_options).unwrap();
    HttpServer::new(move || {
        App::new()
            .wrap(middleware::Compress::default())
            .app_data(client.clone())
            .app_data(web::JsonConfig::default())
            .service(test)
    })
    .bind("0.0.0.0:7080")?
    .run()
    .await
}
It compiles and runs just fine. But when trying to access localhost:7080/test, I get the following response:
Requested application data is not configured correctly. View/enable debug logs for more details.
I don't see any logs in the console. How do I view or enable the Actix Web logs?
To see the logs of Actix Web, add the env_logger dependency to your Cargo.toml:
[dependencies]
env_logger = "0.10.0"
You will also have to set the environment variable RUST_LOG to determine the log level. This can be done at runtime using std::env::set_var.
#[actix_web::main]
async fn main() -> std::io::Result<()> {
    std::env::set_var("RUST_LOG", "debug");
    env_logger::init();
    /* ... */
}
This enables debug logging for Rust and Actix Web.
To solve the original issue: You always need to wrap data passed to app_data() with Data::new().
This is how I did it before:
HttpServer::new(move || {
    App::new()
        /* ... */
        .app_data(client.clone())
        /* ... */
})
How it should be instead:
HttpServer::new(move || {
    App::new()
        /* ... */
        .app_data(Data::new(client.clone())) // <-- Data::new() here
        /* ... */
})
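A closely related variant (just a sketch of the same idea): build the Data wrapper once, outside the factory closure, so every worker clones the same Data handle instead of wrapping a fresh clone of the client:
use actix_web::web::Data;

let client_data = Data::new(client);

HttpServer::new(move || {
    App::new()
        .wrap(middleware::Compress::default())
        .app_data(client_data.clone()) // clones the Arc inside Data
        .app_data(web::JsonConfig::default())
        .service(test)
})
.bind("0.0.0.0:7080")?
.run()
.await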

How to use middleware on actix web 4?

I am trying to use a simple middleware with Actix Web 4:
HttpServer::new(move || {
    let app_state = AppState {
        db_helper: external_db.clone(),
        client: Client::new(),
    };
    App::new()
        .wrap_fn(|req, srv| {
            let header = req.headers().get("Test").unwrap().to_str().unwrap().to_owned();
            let fut = srv.call(req);
            async move {
                let res = fut.await?;
                println!("{:#?}", header);
                Ok(res)
            }
        })
        .app_data(web::Data::new(app_state))
        .service(web::scope(API_PATH)
            .service(user_controller::user_scope())
        )
})
.bind(SERVER_URL)?
.run();
It's a very simple sample from their tutorial. However, I always get an error:
let fut = srv.call(req);
              ^^^^ method cannot be called on `&actix_web::app_service::AppRouting` due to unsatisfied trait bounds
How can I solve this?
You need to bring the trait into scope with:
use actix_web::dev::Service;
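For example, a minimal self-contained version (the handler and addresses here are illustrative, not from the question) compiles once the trait is imported:
use actix_web::dev::Service; // brings `call` into scope for `srv`
use actix_web::{web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new()
            .wrap_fn(|req, srv| {
                // `srv.call(req)` resolves because `Service` is in scope.
                let fut = srv.call(req);
                async move {
                    let res = fut.await?;
                    println!("request handled");
                    Ok(res)
                }
            })
            .route("/", web::get().to(|| async { HttpResponse::Ok().body("ok") }))
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}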

Why does reading from a Rusoto S3 stream inside an Actix Web handler cause a deadlock?

I'm writing an application using actix_web and rusoto_s3.
When I run the request directly from main, outside of an Actix handler, it works fine and get_object behaves as expected. When the same call is made inside an actix_web handler, the stream blocks forever.
I have a client that is shared across all requests, encapsulated in an Arc (this happens inside Actix's Data internals).
Full code:
fn index(
    _req: HttpRequest,
    path: web::Path<String>,
    s3: web::Data<S3Client>,
) -> impl Future<Item = HttpResponse, Error = actix_web::Error> {
    s3.get_object(GetObjectRequest {
        bucket: "my_bucket".to_owned(),
        key: path.to_owned(),
        ..Default::default()
    })
    .and_then(move |res| {
        info!("Response {:?}", res);
        let mut stream = res.body.unwrap().into_blocking_read();
        let mut body = Vec::new();
        stream.read_to_end(&mut body).unwrap();
        match process_file(body.as_slice()) {
            Ok(result) => Ok(result),
            Err(error) => Err(RusotoError::from(error)),
        }
    })
    .map_err(|e| match e {
        RusotoError::Service(GetObjectError::NoSuchKey(key)) => {
            actix_web::error::ErrorNotFound(format!("{} not found", key))
        }
        error => {
            error!("Error: {:?}", error);
            actix_web::error::ErrorInternalServerError("error")
        }
    })
    .from_err()
    .and_then(move |img| HttpResponse::Ok().body(Body::from(img)))
}

fn health() -> HttpResponse {
    HttpResponse::Ok().finish()
}

fn main() -> std::io::Result<()> {
    let name = "rust_s3_test";
    env::set_var("RUST_LOG", "debug");
    pretty_env_logger::init();
    let sys = actix_rt::System::builder().stop_on_panic(true).build();
    let prometheus = PrometheusMetrics::new(name, "/metrics");
    let s3 = S3Client::new(Region::Custom {
        name: "eu-west-1".to_owned(),
        endpoint: "http://localhost:9000".to_owned(),
    });
    let s3_client_data = web::Data::new(s3);
    Server::build()
        .bind(name, "0.0.0.0:8080", move || {
            HttpService::build().keep_alive(KeepAlive::Os).h1(App::new()
                .register_data(s3_client_data.clone())
                .wrap(prometheus.clone())
                .wrap(actix_web::middleware::Logger::default())
                .service(web::resource("/health").route(web::get().to(health)))
                .service(web::resource("/{file_name}").route(web::get().to_async(index))))
        })?
        .start();
    sys.run()
}
In stream.read_to_end, the thread is blocked and the read never resolves.
I have tried cloning the client per request and also creating a new client per request, but I've got the same result in all scenarios.
Am I doing something wrong?
It works if I don't use it asynchronously:
let mut stream = s3
    .get_object(GetObjectRequest {
        bucket: "my_bucket".to_owned(),
        key: path.to_owned(),
        ..Default::default()
    })
    .sync()
    .unwrap()
    .body
    .unwrap()
    .into_blocking_read();
let mut body = Vec::new();
io::copy(&mut stream, &mut body);
Is this an issue with Tokio?
let mut stream = res.body.unwrap().into_blocking_read();
Check the implementation of into_blocking_read(): it calls .wait(). You shouldn't call blocking code inside a Future.
Since Rusoto's body is a Stream, there is a way to read it asynchronously:
.and_then(move |res| {
    info!("Response {:?}", res);
    let stream = res.body.unwrap();
    stream
        .concat2()
        .map(move |file| process_file(&file[..]).unwrap())
        .map_err(RusotoError::from)
})
process_file should not block the enclosing Future. If it needs to block, you may consider running it on a new thread or wrapping it with tokio_threadpool's blocking.
Note: you can use tokio_threadpool's blocking in your implementation, but I recommend you understand how it works first.
If you are not aiming to load the whole file into memory, you can use for_each:
stream.for_each(|part| {
    // Process each part here.
    // Warning! Do not add blocking code here either.
    Ok(())
})
See also:
What is the best approach to encapsulate blocking I/O in future-rs?
Why does Future::select choose the future with a longer sleep period first?
