Actix-web integration tests: reusing the main thread application

I am using actix-web to write a small service. I'm adding integration tests to assess the functionality, and I've noticed that in every test I have to repeat the same definitions as in my main App, except that they are wrapped by the test service:
let app = test::init_service(App::new().service(health_check)).await;
This is easy enough to extend for simple services, but once middleware and more configuration start to be added, the tests get bulky. It is also easy to miss something and end up not testing the same setup as the main App.
I've been trying to extract the App out of main so that I can reuse it in my tests, without success.
Specifically what I'd like is to create a "factory" for the App:
pub fn get_app() -> App<????> {
    App::new()
        .wrap(Logger::default())
        .wrap(IdentityService::new(policy))
        .service(health_check)
        .service(login)
}
So that I can write this in my tests
let app = get_app();
let service = test::init_service(app).await;
But the compiler needs a concrete return type, which turns out to be a mouthful composed of several traits and structs, some of them private.
Has anyone experience with this?
Thanks!

Define a declarative macro app! that builds the App, and define the routes using the procedural API rather than Actix's built-in macros such as #[get("/")].
This example uses a database pool as state; your application might have a different kind of state, or none at all.
#[macro_export]
macro_rules! app (
    ($pool: expr) => ({
        App::new()
            .wrap(middleware::Logger::default())
            .app_data(web::Data::new($pool.clone()))
            .route("/health", web::get().to(health_get))
            .service(
                web::resource("/items")
                    .route(web::get().to(items_get))
                    .route(web::post().to(items_post)),
            )
    });
);
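For reference, the handlers named in the macro are plain async functions registered through the procedural API. A hedged sketch, with signatures assumed (not from the original answer) to match the sqlx pool used as state:
use actix_web::{web, HttpResponse, Responder};
use sqlx::{Pool, Postgres};

async fn health_get() -> impl Responder {
    HttpResponse::Ok().finish()
}

async fn items_get(_pool: web::Data<Pool<Postgres>>) -> impl Responder {
    // query the database through the pool here
    HttpResponse::Ok().finish()
}

async fn items_post(_pool: web::Data<Pool<Postgres>>) -> impl Responder {
    // insert into the database through the pool here
    HttpResponse::Created().finish()
}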
The macro can then be used in the tests as:
#[cfg(test)]
mod tests {
    // more code here for get_test_pool
    #[actix_web::test]
    async fn test_health() {
        let app = test::init_service(app!(get_test_pool().await)).await;
        let req = test::TestRequest::get().uri("/health").to_request();
        let resp = test::call_service(&app, req).await;
        assert!(resp.status().is_success());
    }
}
and in the main app as:
// More code here for get_main_pool
#[actix_web::main]
async fn main() -> Result<(), std::io::Error> {
    let pool = get_main_pool().await?;
    HttpServer::new(move || app!(pool))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
In this context, get_main_pool must return, say, Result<sqlx::Pool<sqlx::Postgres>, std::io::Error> to be compatible with the signature requirements of actix_web::main. On the other hand, get_test_pool can simply return sqlx::Pool<sqlx::Postgres>.
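A possible shape for these helpers, assuming sqlx with Postgres (the connection strings and error mapping below are illustrative, not from the original answer):
use sqlx::postgres::PgPoolOptions;
use sqlx::{Pool, Postgres};

async fn get_main_pool() -> Result<Pool<Postgres>, std::io::Error> {
    PgPoolOptions::new()
        .max_connections(5)
        .connect("postgres://localhost/app_db")
        .await
        // map the sqlx error into the io::Error expected by main's signature
        .map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e))
}

async fn get_test_pool() -> Pool<Postgres> {
    PgPoolOptions::new()
        .connect("postgres://localhost/test_db")
        .await
        .expect("failed to create test pool")
}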

I was struggling with the same issue using actix-web 4, but I came up with a possible solution. It may not be ideal, but it works for my needs. I also needed to bring actix-service 2.0.2 and actix-http 3.2.2 into Cargo.toml.
I created a test.rs file with an initializer that I can use in all my tests. Here is what that file could look like for you:
use actix_web::{test, web, App, Error, dev::{HttpServiceFactory, ServiceResponse}};
use actix_service::Service;
use actix_http::Request;

#[cfg(test)]
pub async fn init(service_factory: impl HttpServiceFactory + 'static) -> impl Service<Request, Response = ServiceResponse, Error = Error> {
    // connect to your database or other things to pass to AppState
    test::init_service(
        App::new()
            .app_data(web::Data::new(crate::AppState { db }))
            .service(service_factory)
    ).await
}
I use this in my API services to reduce boilerplate in my integration tests. Here is an example:
// ...
#[get("/")]
async fn get_index() -> impl Responder {
    HttpResponse::Ok().body("Hello, world!")
}

#[cfg(test)]
mod tests {
    use actix_web::test::TestRequest;
    use super::get_index;

    #[actix_web::test]
    async fn test_get_index() {
        let app = crate::test::init(get_index).await;
        let resp = TestRequest::get().uri("/").send_request(&app).await;
        assert!(resp.status().is_success(), "Something went wrong");
    }
}
I believe the issue you ran into is trying to create a factory for App (which is a bit of an anti-pattern in Actix) instead of init_service. If you want to create a function that returns App I believe the preferred convention is to use configure instead. See this issue for reference: https://github.com/actix/actix-web/issues/2039.
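For completeness, here is a minimal sketch of that configure approach (the handler and route below are illustrative, not taken from the question):
use actix_web::{web, HttpResponse};

// illustrative handler, not from the question
async fn health_check() -> HttpResponse {
    HttpResponse::Ok().finish()
}

// shared setup, reusable from both the real App and the test App
pub fn configure_app(cfg: &mut web::ServiceConfig) {
    cfg.route("/health", web::get().to(health_check));
}

// in main():  HttpServer::new(|| App::new().configure(configure_app))
// in a test:  let app = test::init_service(App::new().configure(configure_app)).await;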

Related

How to create an overload API or function in Actix-web?

Background
I need to create a couple of endpoints for an API service project. These APIs accept both encrypted parameters (for production) and un-encrypted ones (for development). Both kinds of parameters are then passed to the same function. For example:
/api/movie/get with encrypted parameter (for production)
/dev/movie/get with un-encrypted parameter (for development)
Current implementation in actix_web
I use actix_web and routes (not macros) to provide path routing, modified from the sample code below:
use actix_web::{web, App, HttpServer, Responder};

async fn index() -> impl Responder {
    "Hello world!"
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().service(
            // prefixes all resources and routes attached to it...
            web::scope("/app")
                // ...so this handles requests for `GET /app/index.html`
                .route("/index.html", web::get().to(index)),
        )
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
For each API endpoint, I had to create 2 functions, such as:
/// For the encrypted variant
/// PATH: /api/movie/get
pub async fn handle_get_movie(
    app_data: Data<AppState>,
    Json(payload): Json<EncryptedPayload>,
) -> Result<impl Responder, Error> {
    // decrypt the content
    // this is the only difference between the un-encrypted and encrypted endpoints
    let params = payload
        .unload::<GetMovieInf>()
        .map_err(|_| MainError::Malformatted)?;
    // other code here...
    Ok(Json(ReplyInf { ok: true }))
}

/// For the un-encrypted variant
/// PATH: /dev/movie/get
pub async fn handle_get_movie(
    app_data: Data<AppState>,
    Json(payload): Json<UnencryptedPayload>,
) -> Result<impl Responder, Error> {
    // other code here. This code is exactly identical to the function above...
    Ok(Json(ReplyInf { ok: true }))
}
Questions
Since both functions are similar, is it possible to combine them into a single function? A function "overload", maybe?
The problem is the line Json(payload): Json<UnencryptedPayload> in the function parameters. I tried to use generics like Json<T>, but this doesn't work.
I could then use an environment variable to control which payload type is active (EncryptedPayload or UnencryptedPayload), use one path per endpoint (e.g. /api/movie/get), and avoid writing the same functionality twice.
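For illustration only (this is not from the original thread): one direction that can make the generic approach work is to put the payload-specific step behind a trait and register the same generic handler on both paths. All non-actix names below are stand-ins for the question's own types:
use actix_web::web::Json;
use actix_web::Responder;
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct GetMovieInf { title: String }

#[derive(Serialize)]
struct ReplyInf { ok: bool }

// Both payload kinds expose the same "unload" step behind a trait.
trait Unload {
    fn unload(self) -> Result<GetMovieInf, ()>;
}

#[derive(Deserialize)]
struct EncryptedPayload { data: String }

impl Unload for EncryptedPayload {
    fn unload(self) -> Result<GetMovieInf, ()> {
        // real code would decrypt `self.data` first; here it is just parsed as JSON
        serde_json::from_str(&self.data).map_err(|_| ())
    }
}

#[derive(Deserialize)]
struct UnencryptedPayload { title: String }

impl Unload for UnencryptedPayload {
    fn unload(self) -> Result<GetMovieInf, ()> {
        Ok(GetMovieInf { title: self.title })
    }
}

// One generic handler; extra extractors such as Data<AppState> can be added alongside.
async fn handle_get_movie<P: Unload>(Json(payload): Json<P>) -> impl Responder {
    let _params = match payload.unload() {
        Ok(p) => p,
        Err(_) => return Json(ReplyInf { ok: false }),
    };
    // shared logic here...
    Json(ReplyInf { ok: true })
}

// Registration inside App::new():
// .route("/api/movie/get", web::post().to(handle_get_movie::<EncryptedPayload>))
// .route("/dev/movie/get", web::post().to(handle_get_movie::<UnencryptedPayload>))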

Can I update the Bevy app state or a resource outside of a Bevy system?

I already have a Bevy app that runs in the browser.
What I want to do is have some functions on the JS/TS side that can create or destroy an entity in the Bevy world; is this possible? I tried let app = App::new();, then bound one function to run the app (app.run();) and another to override a resource (app.insert_resource(...);), but when I call the function that overrides the resource after running the app, it fails with the message: recursive use of an object detected which would lead to unsafe aliasing in rust.
Thanks to #kmdreko's advice, I tried to use an Arc to update the resource, but there seems to be another problem before that: after I initialize the Bevy app, the rest of the code is never reached. Here is my code:
<script type="module">
  import init, { BevyApp } from '../pkg/wasm_bevy_demo.js';
  init().then(() => {
    // new() creates and runs a Bevy app, and returns an Arc<Mutex> inside BevyApp {}
    const bevyCode = BevyApp.new();
    // this log line never shows up in the console
    console.log("reach after run bevy app");
    bevyCode.update_scroll_rate(10, 10);
  })
</script>
Let's look at the implementation of Bevy's run() function: when run() is executed, the app you created is replaced with an empty one:
pub fn run(&mut self) {
    // ...
    let mut app = std::mem::replace(self, App::empty());
    let runner = std::mem::replace(&mut app.runner, Box::new(run_once));
    (runner)(app);
}
So you cannot update the Bevy app's state or resources from outside Bevy systems.
But if you still want to update the Bevy app more freely, you need to implement your own runner. That's what I did in bevy-in-app.
Yes absolutely! A prerequisite is some understanding of Shared-State Concurrency.
As you noted we need to tap into some of the capabilities of wasm-bindgen, specifically its ability to create a stateful struct, containing an Arc<Mutex<App>>.
Example
In this case the SimplePlugin just creates a camera and a cube with a Speed component.
We are manually calling update() because App::run() is blocking and consumes the app. I'm not sure if doing this breaks some winit capabilities.
use std::sync::{Arc, Mutex};
use bevy::prelude::*;
use wasm_bindgen::prelude::*;
use wasm_bindgen::JsCast;

fn main() {}

#[wasm_bindgen]
pub struct Runner {
    app: Arc<Mutex<App>>,
}

#[wasm_bindgen]
impl Runner {
    #[wasm_bindgen(constructor)]
    pub fn new() -> Runner {
        core::set_panic_hook();
        let mut app = App::new();
        app.add_plugins(DefaultPlugins)
            .add_plugin(SimplePlugin);

        let app_arc = Arc::new(Mutex::new(app));
        let app_update = Arc::clone(&app_arc);

        // drive the app manually: call update() roughly every 16 ms
        let update = Closure::<dyn FnMut()>::new(move || {
            app_update.lock().unwrap().update();
        });
        web_sys::window()
            .unwrap()
            .set_interval_with_callback_and_timeout_and_arguments_0(
                update.as_ref().unchecked_ref(),
                16,
            )
            .unwrap();
        update.forget(); // memory leak, use carefully!

        Runner {
            app: Arc::clone(&app_arc),
        }
    }

    pub fn set_speed(&mut self, speed: f32) {
        self.app.lock().unwrap().insert_resource(Speed::new(speed));
    }
}
Now we can call set_speed from html:
<script type="module">
  import init, { Runner } from './MY_FILE.js'
  init().then(() => {
    var runner = new Runner()
    setInterval(() => {
      runner.set_speed(performance.now() * 0.0001)
    }, 10);
  })
</script>
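For reference, a rough sketch of the pieces this example assumes but does not show (Speed and SimplePlugin); the exact API depends on your Bevy version, and this matches the 0.9/0.10-era add_plugin style used above:
use bevy::prelude::*;

#[derive(Resource)]
pub struct Speed(pub f32);

impl Speed {
    pub fn new(v: f32) -> Self {
        Speed(v)
    }
}

pub struct SimplePlugin;

impl Plugin for SimplePlugin {
    fn build(&self, app: &mut App) {
        app.insert_resource(Speed::new(1.0))
            .add_startup_system(setup);
    }
}

fn setup(mut commands: Commands) {
    commands.spawn(Camera3dBundle::default());
    // a cube entity would be spawned here and moved each frame by a
    // system that reads the Speed resource
}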

Custom openapi schema with rust, rocket and okapi

I am developing an API with Rust, using Rocket as main framework.
To create the Swagger docs I use Okapi, which allows me to create the docs automatically.
use rocket_okapi::swagger_ui::*;
extern crate dotenv;
use rocket_okapi::{openapi, openapi_get_routes};

#[openapi] // Let okapi know that we want to document this endpoint
#[get("/test_route")]
fn test_route() -> &'static str {
    "test_route"
}

#[rocket::main]
pub async fn main() {
    rocket::build()
        // Mount routes with `openapi_get_routes` instead of rocket's `routes!`
        .mount("/", openapi_get_routes![test_route])
        // Mount swagger-ui on this route
        .mount(
            "/docs",
            make_swagger_ui(&SwaggerUIConfig {
                // The automatically generated openapi.json will be here
                url: "../openapi.json".to_owned(),
                ..Default::default()
            }),
        )
        .launch()
        .await
        .unwrap();
}
That's good. But I would also like to provide okapi with the settings for the API, and I wasn't able to do that. I know there is an example at https://github.com/GREsau/okapi/blob/master/examples/custom_schema/src/main.rs, but I couldn't load a custom OpenApi schema in my API.
Also, I would like to load a custom openapi.json, but I don't know how to achieve that.
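One possible approach for the custom openapi.json part (a hedged sketch, not a confirmed okapi feature: it simply serves a hand-written file with Rocket's FileServer and points Swagger UI at it; the paths are illustrative):
use rocket::fs::FileServer;
use rocket_okapi::swagger_ui::{make_swagger_ui, SwaggerUIConfig};

#[rocket::main]
async fn main() {
    rocket::build()
        // serve a hand-maintained static/openapi.json
        .mount("/static", FileServer::from("static"))
        .mount(
            "/docs",
            make_swagger_ui(&SwaggerUIConfig {
                // Swagger UI now loads the custom schema instead of the generated one
                url: "/static/openapi.json".to_owned(),
                ..Default::default()
            }),
        )
        .launch()
        .await
        .unwrap();
}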

Testing with Rocket and Diesel

I am trying to work on a web app with Diesel and Rocket by following the Rocket guide. I have not been able to figure out how to test this app.
// in main.rs
#[database("my_db")]
struct PgDbConn(diesel::PgConnection);

#[post("/users", format = "application/json", data = "<user_info>")]
fn create_new_user(conn: PgDbConn, user_info: Json<NewUser>) {
    use schema::users;
    diesel::insert_into(users::table).values(&*user_info).execute(&*conn).unwrap();
}

fn main() {
    rocket::ignite()
        .attach(PgDbConn::fairing())
        .mount("/", routes![create_new_user])
        .launch();
}
// in test.rs
use crate::PgDbConn;

#[test]
fn test_user_creation() {
    let rocket = rocket::ignite().attach(PgDbConn::fairing());
    let client = Client::new(rocket).unwrap();
    let response = client
        .post("/users")
        .header(ContentType::JSON)
        .body(r#"{"username": "xyz", "email": "temp#abc.com"}"#)
        .dispatch();
    assert_eq!(response.status(), Status::Ok);
}
But this modifies the database. How can I make sure that the test does not alter the database?
I tried to create two databases and use them in the following way (I am not sure if this is recommended):
#[cfg(test)]
#[database("test_db")]
struct PgDbConn(diesel::PgConnection);
#[cfg(not(test))]
#[database("live_db")]
struct PgDbConn(diesel::PgConnection);
Now I thought I could use the test_transaction method of the diesel::connection::Connection trait in the following way:
use crate::PgDbConn;

#[test]
fn test_user_creation() {
    // !! This statement is wrong, as PgDbConn is an Fn object instead of a struct.
    // !! I am not sure how it works, but it seems that this Fn object is resolved
    // !! into the struct only when used as a request guard.
    let conn = PgDbConn;
    // The Deref trait is implemented for PgDbConn, so I thought that dereferencing
    // it would return a diesel::PgConnection
    (*conn).test_transaction::<_, (), _>(|| {
        let rocket = rocket::ignite().attach(PgDbConn::fairing());
        let client = Client::new(rocket).unwrap();
        let response = client
            .post("/users")
            .header(ContentType::JSON)
            .body(r#"{"username": "Tushar", "email": "temp#abc.com"}"#)
            .dispatch();
        assert_eq!(response.status(), Status::Ok);
        Ok(())
    });
}
The above code obviously fails to compile. Is there a way to resolve this Fn object into the struct and obtain the PgConnection inside it? And I am not even sure whether this is the right way to do things.
Is there a recommended way to do testing while using both Rocket and Diesel?
This will fundamentally not work as you imagined, because conn will be a different connection than whatever Rocket creates for you. The test_transaction pattern assumes that you use the same connection for everything.
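To illustrate that point, a minimal sketch of test_transaction used on a connection the test creates itself (Diesel 1.x-era API; the connection string is illustrative):
use diesel::prelude::*;
use diesel::pg::PgConnection;

#[test]
fn user_insert_is_rolled_back() {
    // connect to the test database directly, outside of Rocket
    let conn = PgConnection::establish("postgres://localhost/test_db")
        .expect("failed to connect to the test database");

    conn.test_transaction::<_, diesel::result::Error, _>(|| {
        // run inserts/queries against `conn` here; everything is rolled back
        // when the closure returns, leaving the database unchanged
        Ok(())
    });
}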

Use precomputed big object in actix-web route handler

Is there a way to make an actix-web route handler aware of a pre-computed heavy object that is needed to compute the result?
What I intend to do, in the end, is avoid recomputing my_big_heavy_object every time a request comes along, and instead compute it once and for all in main and access it from the index method.
extern crate actix_web;
use actix_web::{server, App, HttpRequest, HttpResponse};

fn index(_req: HttpRequest) -> HttpResponse {
    // how can I access my_big_heavy_object from here?
    let result = do_something_with(my_big_heavy_object);
    HttpResponse::Ok()
        .content_type("application/json")
        .body(result)
}

fn main() {
    let my_big_heavy_object = compute_big_heavy_object();
    server::new(|| App::new().resource("/", |r| r.f(index)))
        .bind("127.0.0.1:8088")
        .unwrap()
        .run();
}
First, create a struct which is the shared state for your application:
struct AppState {
    my_big_heavy_object: HeavyObject,
}
It's better to create a context wrapper like this, rather than just using HeavyObject, so you can add other fields to it later if necessary.
A few objects in Actix will now need to interact with this, so you can make them aware of it by overriding the application state parameter in those types, for example HttpRequest<AppState>.
Your index handler can access the state through HttpRequest's state() method, and now looks like this:
fn index(req: HttpRequest<AppState>) -> HttpResponse {
    let result = do_something_with(&req.state().my_big_heavy_object);
    HttpResponse::Ok()
        .content_type("application/json")
        .body(result)
}
When building the App, use the with_state constructor instead of new:
server::new(|| {
    let app_state = AppState { my_big_heavy_object: compute_big_heavy_object() };
    App::with_state(app_state).resource("/", |r| r.f(index))
})
.bind("127.0.0.1:8088")
.unwrap()
.run();
Note that the application state is assumed to be immutable. It sounds like you don't need any handlers to mutate it but if you did then you would have to use something like Cell or RefCell for interior mutability.
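As a sketch of that last point (actix-web 0.7-era API, illustrative only), the state field can hold a RefCell so a handler can mutate it; a simple counter stands in for the heavy object here:
extern crate actix_web;
use std::cell::RefCell;
use actix_web::{server, App, HttpRequest, HttpResponse};

struct AppState {
    // a counter stands in for a mutable heavy object
    counter: RefCell<u64>,
}

fn index(req: HttpRequest<AppState>) -> HttpResponse {
    let mut counter = req.state().counter.borrow_mut();
    *counter += 1;
    HttpResponse::Ok()
        .content_type("application/json")
        .body(format!("{{\"hits\": {}}}", *counter))
}

fn main() {
    // each worker thread runs this closure and gets its own state copy,
    // so RefCell is enough here; no cross-thread sharing is involved
    server::new(|| {
        let app_state = AppState { counter: RefCell::new(0) };
        App::with_state(app_state).resource("/", |r| r.f(index))
    })
    .bind("127.0.0.1:8088")
    .unwrap()
    .run();
}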
