My Rust Actix Web application provides multiple routes to the same resource with different content types. The example below works fine with curl localhost:8080/index -H "Accept: text/html" but not in the browser (tested with Firefox Developer Edition), because the browser sends Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8.
Is there a way to comfortably handle real-world accept headers such as those, including wildcards and priority scores with the "q" attribute, or do I have to implement this logic myself?
use actix_web::{guard, web, App, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| {
        App::new().service(
            web::resource("/index")
                .route(
                    web::get()
                        .guard(guard::Header("Accept", "text/html"))
                        .to(|| async {
                            HttpResponse::Ok()
                                .content_type("text/html")
                                .body("<html><body>hello html</body></html>")
                        }),
                )
                .route(
                    web::get()
                        .guard(guard::Header("Accept", "text/plain"))
                        .to(|| async {
                            HttpResponse::Ok()
                                .content_type("text/plain")
                                .body("hello text")
                        }),
                ),
        )
    })
    .bind(("127.0.0.1", 8080))?
    .run()
    .await
}
To clarify the expected behaviour:
- the application defines multiple routes with contents C1...Cn and content types T1...Tn, in order of preference
- the client (e.g. a web browser) sends a GET request
- if there is no Accept header in the request, return C1 with content type T1
- if there is an Accept header but no matching content type (including wildcards), return an error response
- if there are multiple matching content types, return the one with the highest q score (assume 1 if not given)
- if there are multiple matching content types with the same highest q score, return the one with the lowest index (highest preference by the application)
I could implement this logic myself but given that this seems to be a common case, I was wondering if there is some established method in place that already handles this (or a similar) use case with Actix Web.
I guess you have to write your own guard, but you don't have to parse the header yourself. https://docs.rs/actix-web/latest/actix_web/guard/struct.GuardContext.html#method.header
// from https://docs.rs/actix-web/latest/actix_web/guard/struct.GuardContext.html#method.header
use actix_web::{guard::fn_guard, http::header};
let image_accept_guard = fn_guard(|ctx| {
    match ctx.header::<header::Accept>() {
        Some(hdr) => hdr.preference() == "image/*",
        None => false,
    }
});
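If you also need the q-score selection described in the question (highest q wins, server order breaks ties), that part still has to be written by hand as far as I know. A minimal sketch using only the standard library — the function name negotiate and the exact tie-breaking are my own assumptions, not an actix-web API — could look like this:

```rust
/// Pick the index of the preferred content type from `offered` (listed in
/// server preference order) given a raw Accept header value. Returns None
/// when nothing matches. Simplified sketch: it handles wildcards and
/// q-values but ignores media-type parameters other than q.
fn negotiate(accept: &str, offered: &[&str]) -> Option<usize> {
    let mut best: Option<(f32, usize)> = None; // (q, index into offered)
    for part in accept.split(',') {
        let mut pieces = part.trim().split(';');
        let mtype = pieces.next()?.trim();
        // q defaults to 1.0 when absent
        let q: f32 = pieces
            .filter_map(|p| p.trim().strip_prefix("q="))
            .next()
            .and_then(|v| v.parse().ok())
            .unwrap_or(1.0);
        for (i, offer) in offered.iter().enumerate() {
            let matches = mtype == *offer
                || mtype == "*/*"
                || mtype.strip_suffix("/*").map_or(false, |main| {
                    offer.starts_with(main)
                        && offer.as_bytes().get(main.len()) == Some(&b'/')
                });
            if matches {
                // higher q wins; on a tie, the lower index (server preference) wins
                let better = match best {
                    None => true,
                    Some((bq, bi)) => q > bq || (q == bq && i < bi),
                };
                if better {
                    best = Some((q, i));
                }
            }
        }
    }
    best.map(|(_, i)| i)
}
```

You could call something like this from a fn_guard per route, or from a single handler that picks the body itself; the "no Accept header at all" case (return C1) would be handled by the caller before invoking it.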
Related
What I want to do is return custom headers in hyper, but return them along with the response body as well. As an example, I'll take the code from the hyper documentation:
async fn handle(_: Request<Body>) -> Result<Response<Body>, Infallible> {
    Ok(Response::new("Hello, World!".into()))
}
This code only displays Hello, World! on every request and always returns a 200 status code. But how can I send custom headers as well? What I tried was to use Response::builder instead of Response::new (as you can see, my example uses ::new; separately I tried ::builder), but it gave errors since that type cannot be returned directly. So how can I return as many headers as I want while keeping the body?
In general your idea of using Response::builder is correct. Note however that it returns a Builder, whose body method must then be used to create a Response. As body's documentation states:
“Consumes” this builder, using the provided body to return a constructed Response.
A working example of setting custom headers on a response (as many as you like) can be seen in the documentation of Builder::header. It looks like this:
let body = "Hello, world!";

let response = Response::builder()
    .header("Content-Type", "text/html")
    .header("X-Custom-Foo", "bar")
    .header("content-length", body.len())
    .body(body.into())
    .unwrap();
I am trying to write a discovery function in Rust that finds a device's IP on the LAN. The function should send HTTP requests to all possible IPs in the address space and, once it receives a 200 status, return the value and cancel all other requests (by dropping the futures, in case reqwest is used).
async fn discover() -> Result<String, ()> {
    for i in 0..256 {
        tokio::spawn(async move {
            let url = format!("http://192.168.0.{i}:8000/ping");
            reqwest::get(url).await;
        });
    }
    // how do I return the first successful response here and cancel the rest?
    todo!()
}
Edit 1
once receives 200 status return the value and cancels all others requests
We would also like to validate the response content to make sure that this is the correct device and not someone trying to spy on the network. We will validate the response by verifying the JWT returned from the /ping endpoint. How can we do this?
This may be achievable with the futures crate and select_ok().
You can do the following:
- convert the range into an Iterator and map each element i into a reqwest request; select_ok needs the futures in the Vec to implement Unpin, which is why I used .boxed() here
- pass the Vec of futures into select_ok
- the result will contain the first successful future's output
- we can drop remaining_futures to cancel the remaining requests (I am not sure if that works; you need to test this)
use futures::{future::select_ok, FutureExt};

async fn discover() {
    let futures = (0..256)
        .map(|i| {
            let url = format!("http://192.168.0.{i}:8000/ping");
            reqwest::get(url).boxed()
        })
        .collect::<Vec<_>>();

    let (response, remaining_futures) = select_ok(futures).await.unwrap();

    // dropping the Vec drops (and thereby cancels) the in-flight requests
    drop(remaining_futures);
}
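Regarding Edit 1: the validation can happen inside each mapped future, so a future only resolves as Ok once the body passes the JWT check, and select_ok skips impostors. Real verification must check the signature, e.g. with a crate such as jsonwebtoken; the sketch below is only a hypothetical std-only shape check (looks_like_jwt is my own name) you could run before the cryptographic part:

```rust
/// Quick structural check that a /ping response body looks like a JWT:
/// three non-empty base64url segments separated by dots. This is NOT
/// cryptographic verification -- it only filters out responses that
/// cannot possibly be a token before you verify the signature properly.
fn looks_like_jwt(token: &str) -> bool {
    let segments: Vec<&str> = token.trim().split('.').collect();
    segments.len() == 3
        && segments.iter().all(|s| {
            !s.is_empty()
                && s.bytes()
                    .all(|b| b.is_ascii_alphanumeric() || b == b'-' || b == b'_')
        })
}
```

Inside the map closure you would then await the response body and return Err for anything that fails this check (and the real signature verification), so select_ok keeps waiting for a genuine device.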
I'm testing the engine's register and poll (health check at regular intervals) behavior by making a mock server that replaces the admin server.
let admin_mock_server = admin_server.mock(|when, then| {
    when.path("/register")
        .header("content-type", "application/json")
        .header_exists("content-type")
        .json_body_partial(
            r#"
            {
                "engineName": "engine_for_mock"
            }
            "#,
        );
    then.status(200)
        .header("content-type", "application/json")
        .json_body(get_sample_register_response_body());
});
After performing the register operation, a poll message is sent to the same admin server. Therefore, to test this behavior, I must send a poll message to the same mock server.
Is there a way to set up two request-response (when-then) pairs on one mock server?
when.path("/poll");
then.status(200)
    .header("content-type", "application/json")
    .json_body(get_sample_poll_response_body());
If you look at the docs here:
https://docs.rs/httpmock/latest/httpmock/struct.MockServer.html
you can see that the mock function returns a Mock object on the mock server.
So I guess you can add more mocks to the same server by simply repeating the operations you did for the register endpoint.
let mock_register = admin_server.mock(|when, then| {
    ...
});

let mock_poll = admin_server.mock(|when, then| {
    ...
});
In this way you would have two different mocks on the same admin_server, and then you can put in your test
mock_register.assert()
...
mock_poll.assert()
to interact with those two different endpoints.
I'm trying to block all access using Basic HTTP Authentication, but wish to avoid accidentally forgetting any routes.
I thought that perhaps it could be implemented through a Fairing, but I cannot see how that would work given that "Fairings cannot respond to an incoming request directly."
I saw that I can get an Outcome from a request guard, but I do not see how it could be used.
#[rocket::async_trait]
impl Fairing for TheFairing {
    fn info(&self) -> Info {
        Info {
            name: "TheFairing",
            kind: Kind::Request | Kind::Response,
        }
    }

    async fn on_request(&self, request: &mut Request<'_>, _: &mut Data<'_>) {
        let outcome = request.guard::<BasicAuthentication>().await;
    }
}
I also know that it's possible to block data by modifying the response in on_response with something like response.set_status(Status::Unauthorized) but this still generates a response, requires erasing its content and more generally, seems like it would cause unintended consequences.
I am using v0.5-rc.
I'm running into an issue with my http-proxy-middleware setup. I'm using it to proxy requests to another service which might, for example, resize images and the like.
The problem is that multiple clients might call the same method simultaneously and thus create a stampede on the original service. I'm now looking for a solution (what some services, e.g. Varnish, call request coalescing) that would call the backend once, wait for the response, 'queue' incoming requests with the same signature until the first one is done, and then answer them all in a single go. This is different from caching results: I want to prevent calling the backend multiple times simultaneously, not necessarily cache the results.
I'm trying to find out whether something like this goes by a different name, or whether I'm missing something that others have already solved somehow, but I can't find anything.
As the use case seems pretty basic for a reverse-proxy setup, I would have expected a lot of hits in my searches, but since the problem space is pretty generic, I'm not getting anything.
Thanks!
A colleague of mine has helped me hack together my own answer. It's currently used as an (Express) middleware for specific GET endpoints: it hashes the request into a map and starts a single separate backend request. Concurrent incoming requests are hashed, found in the map, and answered from that one request's callback, so the in-flight request is reused. This also means that if the first response is particularly slow, all coalesced requests are too.
This seemed easier than to hack it into the http-proxy-middleware, but oh well, this got the job done :)
const axios = require('axios');

const responses = {};

module.exports = (req, res) => {
  const queryHash = `${req.path}/${JSON.stringify(req.query)}`;
  if (responses[queryHash]) {
    console.log('re-using request', queryHash);
    responses[queryHash].push(res);
    return;
  }
  console.log('new request', queryHash);

  const axiosConfig = {
    method: req.method,
    url: `[the original backend url]${req.path}`,
    params: req.query,
    headers: {}
  };
  if (req.headers.cookie) {
    axiosConfig.headers.Cookie = req.headers.cookie;
  }

  responses[queryHash] = [res];
  axios.request(axiosConfig).then((axiosRes) => {
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.json(axiosRes.data);
    });
    responses[queryHash] = undefined;
  }).catch((err) => {
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.status(500).json(false);
    });
    responses[queryHash] = undefined;
  });
};