How to catch all routes in Rocket?

I have looked everywhere, but I only found this GitHub issue. It's from 5 years ago, though, and Rocket has changed a lot since then.
Is there a way to catch all methods (GET, POST, HEAD, OPTIONS, whatever...) and all routes (/abc, /abc/xyz, /xyz, /whatever) in Rocket?
I tried using the code from the GitHub issue, but the Rocket API has changed so much that I can't figure out how to adapt it.

What you want to do is part of "manual routing" in Rocket. The code linked in the issue you found is now a dead link, but there's a similar example in the current version.
It uses the Handler trait, via its impl for functions:
use rocket::{route, Data, Request, Route};
use rocket::http::Method::Get;

fn hi<'r>(req: &'r Request, _: Data<'r>) -> route::BoxFuture<'r> {
    route::Outcome::from(req, "Hello!").pin()
}

#[rocket::launch]
fn rocket() -> _ {
    let hello = Route::ranked(2, Get, "/", hi);
    rocket::build().mount("/", vec![hello])
}
If you need multiple methods, you can loop through them in the same way as described in the issue and create the routes:
#[rocket::launch]
fn rocket() -> _ {
    use rocket::http::Method::*;

    let mut routes = vec![];
    for method in &[Get, Put, Post, Delete, Options, Head, Trace, Connect, Patch] {
        routes.push(Route::new(*method, "/git/<path..>", git_http));
    }
    rocket::build().mount("/", routes)
}
The <path..> segment captures all path segments (query parameters are passed through by default, from my testing), and you can get at them using Request's methods.
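For example, inside the handler you can recover the method and the matched path from the Request. A minimal sketch against Rocket 0.5's manual-routing API (the handler name `catch_all` is made up):

```rust
use rocket::{route, Data, Request};

// sketch, assuming Rocket 0.5: inspect the request inside a catch-all handler
fn catch_all<'r>(req: &'r Request, _data: Data<'r>) -> route::BoxFuture<'r> {
    let method = req.method();   // e.g. Get, Post, ...
    let path = req.uri().path(); // the full matched path
    route::Outcome::from(req, format!("caught {} {}", method, path)).pin()
}
```

Registering `catch_all` for every method, as in the loop above, then gives you a single handler that sees every request.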

Related

Trouble passing mpsc sync channel in warp handler

I'm new to Rust and encountered an issue while building an API with warp. I'm trying to pass some requests to another thread with a channel (trying to avoid using Arc/Mutex). However, I noticed that when I pass a std::sync::mpsc::Sender to a warp handler, I get this error:
"std::sync::mpsc::Sender cannot be shared between threads safely"
and
"the trait Sync is not implemented for std::sync::mpsc::Sender"
Can someone point me in the right direction?
use std::sync::mpsc::Sender;

pub async fn init_server(run_tx: Sender<Packet>) {
    let store = Store::new();
    let store_filter = warp::any().map(move || store.clone());
    let run_tx_filter = warp::any().map(move || run_tx.clone());
    let update_item = warp::get()
        .and(warp::path("v1"))
        .and(warp::path("auth"))
        .and(warp::path::end())
        .and(post_json())
        .and(store_filter.clone())
        .and(run_tx_filter.clone()) // where I'm trying to send "Sender"
        .and_then(request_token);
    let routes = update_item;
    println!("HTTP server started on port 8080");
    warp::serve(routes).run(([127, 0, 0, 1], 3030)).await;
}
pub async fn request_token(
    req: TokenRequest,
    store: Store,
    run_tx: Sender<Packet>,
) -> Result<impl warp::Reply, warp::Rejection> {
    let (tmp_tx, tmp_rx) = std::sync::mpsc::channel();
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .unwrap();
    let logged_in = tmp_rx.recv().unwrap();
    if logged_in {
        return Ok(warp::reply::with_status(
            "Already logged in",
            http::StatusCode::BAD_REQUEST,
        ));
    }
    Ok(warp::reply::with_status("some token", http::StatusCode::OK))
}
I've looked through some of the examples for warp, and was also wondering what some good resources are for getting knowledgeable about the crate. Thank you!
This is because you're using std::sync::mpsc::Sender. That type is !Sync, so you won't be able to share it this way. You don't want to use it anyway, since its operations are blocking.
When you use async functionality in Rust, you need to provide a runtime for it. The good news is that warp runs on the tokio runtime, so you have access to tokio::sync::mpsc. If you take a look at those docs, the Sender for that mpsc implementation implements Sync and Send, so it's safe to share across threads.
TLDR:
Use tokio::sync::mpsc instead of std::sync::mpsc and this won't be an issue.
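A sketch of what the swap looks like against the question's code (assumptions: tokio with the `sync` feature, and `Packet` changed to carry a tokio oneshot sender for the reply instead of an mpsc one):

```rust
use tokio::sync::mpsc::Sender;

// tokio's Sender is Send + Sync, so the clone-per-request closure below
// satisfies warp's filter bounds
pub async fn init_server(run_tx: Sender<Packet>) {
    let run_tx_filter = warp::any().map(move || run_tx.clone());
    // ... build the routes exactly as before ...
}

pub async fn request_token(req: TokenRequest, run_tx: Sender<Packet>) {
    // a oneshot channel is a natural fit for the single reply
    let (tmp_tx, tmp_rx) = tokio::sync::oneshot::channel();
    // tokio's send is async, so it awaits instead of blocking the worker
    run_tx
        .send(Packet::IsPlayerLoggedIn(req.address, tmp_tx))
        .await
        .unwrap();
    let _logged_in = tmp_rx.await.unwrap();
}
```

Note the `.await` on `send` and `recv`: the handler yields to the runtime instead of blocking a worker thread the way std's channel would.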

How to handle a failure to get a connection from the database pool?

I'm building an API using Rocket and Diesel, and I'm managing the DbPool using Rocket's State. A search request handler might look like this, then:
#[get("/search?<term>")]
pub fn general_privileged(
    db_pool: &State<Pool>,
    _key: PrivilegedApiKey,
    term: &str,
) -> Json<Vec<SearchResult>> {
    let dao1_connection = db_pool.get().unwrap();
    let dao2_connection = db_pool.get().unwrap();
    let company_dao = CompanyDao::new(dao1_connection);
    let user_dao = UserDao::new(dao2_connection);
    let results = lib::search::general_privileged::search(term, company_dao, user_dao);
    Json(results)
}
As you can see, I'm unwrapping the connections here, which is not good practice. In the case of a panic, it takes the service a long time to recover, which degrades performance. Obviously I could just return a 503 status instead of panicking (better!), but it's still not a great experience to have an API that frequently cannot talk to the DB.
I could use some help understanding a few things here:
Under what circumstances should I expect a failure to retrieve a connection from the pool; i.e. can I plan for and avoid these situations?
What are the recommended best practices for handling such failures?
What are the recommended best practices for responding to and recovering from such failures?
Yeah, I've definitely been 'starving' my pool by mismanaging my database connections. It's worth noting that r2d2 defaults to a connection pool with a max size of just 10; I upped mine to 32.
As @kmdreko alluded to, as a general rule of thumb it's actually a bad idea to keep connections alive for a long time across multiple operations. The code example above is better implemented as:
#[get("/search?<term>")]
pub fn general_privileged(
    db_pool: &State<Pool>,
    _key: PrivilegedApiKey,
    term: &str,
) -> Json<Vec<SearchResult>> {
    let org_dao = OrgDao::new(db_pool);
    let user_dao = UserDao::new(db_pool);
    let results = lib::search::general_privileged::search(term, org_dao, user_dao);
    Json(results)
}
I didn't realize that connections are automatically returned to the pool when they fall out of scope (Rust's Drop semantics sure are handy!), so an example DAO implementation should look something like this:
pub struct UserDao<'r> {
    pool: &'r Pool,
}

impl<'r> UserDao<'r> {
    pub fn new(pool: &Pool) -> UserDao {
        UserDao { pool }
    }
}

impl<'r> IDaoSearch<User> for UserDao<'r> {
    fn search_all(&mut self, search_term: &str) -> Vec<User> {
        let connection = &mut self.pool.get().unwrap();
        User::search_by_name(connection, search_term)
    }
}
The connection is automatically returned to the pool at the end of this function, instead of being persisted throughout the lifetime of the DAO instance.
The exact details depend on the Pool type. For example, looking at the docs for r2d2 it says:
Waits for at most the configured connection timeout before returning an error.
What should you do if you're timing out getting a database connection? It depends. You could:
increase the number of connections in the pool
return a 5xx error
try again
Unfortunately, the "correct" behaviour depends a lot on your particular application.
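For reference, both the pool size and the timeout mentioned above are configured on the r2d2 builder. A sketch, assuming diesel's r2d2 integration with Postgres (the function name and URL argument are placeholders):

```rust
use diesel::pg::PgConnection;
use diesel::r2d2::{ConnectionManager, Pool};
use std::time::Duration;

fn build_pool(database_url: &str) -> Pool<ConnectionManager<PgConnection>> {
    let manager = ConnectionManager::<PgConnection>::new(database_url);
    Pool::builder()
        .max_size(32)                                // r2d2's default is 10
        .connection_timeout(Duration::from_secs(5))  // default is 30 seconds
        .build(manager)
        .expect("failed to build connection pool")
}
```

A shorter `connection_timeout` makes `pool.get()` fail fast under load, which pairs well with returning a 5xx instead of hanging the request.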

Some errors E0425 & E0599 write_fmt

mod loginfo {
    use std::io::Error;
    use chrono::prelude::*;
    use std::io::prelude::*;
    use std::fs::OpenOptions;

    const LOG_SYS: &'static str = "log.txt";
    const LOG_ERR: &'static str = "log_error.txt";

    pub fn set_log_error(info: String) -> Result<(), String> {
        let mut handler = OpenOptions::new().append(true)
            .open(LOG_ERR);
        if handler.is_err() {
            create_file(LOG_ERR.to_owned()).unwrap();
            set_log_error(info).unwrap();
        }
        if let Err(_errno) = handler.write_fmt(
            format_args!("{:?}\t{:?} ->[Last OS error({:?})]\n",
                Utc::now().to_rfc2822().to_string(), info,
                Error::last_os_error())) {
            panic!(
                "\nCannot write info log error\t Info\t:{:?}\n",
                Error::last_os_error());
        }
        Ok(())
    }

    pub fn set_log(info: String) -> Result<(), String> {
        let mut handler = OpenOptions::new().append(true)
            .open(LOG_SYS);
        if handler.is_err() {
            set_log_error("Cannot write info log".to_owned())
                .unwrap();
        }
        if let Err(_errno) = handler.write_fmt(
            format_args!("{:?}\t{:?}\n",
                Utc::now().to_rfc2822().to_string(), info)) {
            set_log_error("Cannot write data log file".to_owned())
                .unwrap();
        }
        Ok(())
    }

    pub fn create_file(filename: String) -> Result<(), String> {
        let handler = OpenOptions::new().write(true)
            .create(true).open(filename);
        if handler.is_err() {
            panic!(
                "\nCannot create log file\t Info\t:{:?}\n",
                Error::last_os_error());
        }
        Ok(())
    }
}
When compiling, I get the following error: "error[E0599]: no method named write_fmt found for enum std::result::Result<std::fs::File, std::io::Error> in the current scope --> src/loginfo.rs:19:38".
Despite using the right imports, I still get the same errors. Is this due to a bad implementation of the module?
Thank you in advance for your answers and remarks.
+1 @Masklinn. OK, I think I understand; it would be easier to just write:
pub fn foo_write_log(info: String) {
    let mut handler = OpenOptions::new().append(true)
        .create(true).open(LOG_SYS).expect("Cannot create log");
    handler.write_fmt(
        format_args!("{:?}\t{:?} ->[Last OS error({:?})]\n",
            Utc::now().to_rfc2822().to_string(), info,
            Error::last_os_error())).unwrap();
}
but despite using the right imports, I still get the same errors. Is this due to a bad implementation of the module?
Kind-of? If you look at the type specified in the error, handler is a Result<File, Error>. And while io::Write is implemented on File, it's not implemented on Result.
The problem is that while you're checking whether handler.is_err() you never get the file out of it, nor do you ever return in the error case. Normally you'd use something like match or if let or one of the higher-order methods (e.g. Result::map, Result::and_then) in order to handle or propagate the various cases.
And to be honest, the entire thing is rather odd and awkward:
your functions can fail, but they panic instead (you never actually return an Err);
if you're going to try and create a file when opening it for writing fails, why not just do that directly[0];
you're manually calling write_fmt and format_args, why not just use write!;
write_fmt already returns an io::Error, so why discard it and then ask for it again via Error::last_os_error; etc.
It's also a bit strange to hand-roll your own logger thing when the rust ecosystem already has a bunch of them though you do you; and the naming is also somewhat awkward e.g. I'd expect something called set_X to actually set the X, so to me set_log would be a way to set the file being logged to.
[0] .create(true).append(true) should open the file in append mode if it exists and create it otherwise; not to mention your version has a concurrency issue: if the open-for-append fails you create the file in write mode, but someone else could have created the file -- with content -- between the two calls, in which case you're going to partially overwrite the file
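Putting those suggestions together, the whole module collapses to something like this (a sketch using only std; the chrono timestamp is left out, and `append_log` is a made-up name):

```rust
use std::fs::OpenOptions;
use std::io::Write;

// create(true) + append(true) opens the file for appending and creates it if
// missing, in one call: no separate create_file step, and no race between a
// failed open and a later create
fn append_log(path: &str, info: &str) -> std::io::Result<()> {
    let mut file = OpenOptions::new().create(true).append(true).open(path)?;
    // writeln! wraps write_fmt; the io::Error is propagated with `?` instead
    // of being discarded and re-fetched via Error::last_os_error
    writeln!(file, "{info}")?;
    Ok(())
}
```

The caller decides whether an io::Error is fatal, rather than the logger panicking on its own.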

Why don't I get traces when sending OpenTelemetry to Jaeger?

I'm learning tracing and open-telemetry in Rust. I feel there are too many concepts and too many crates involved (at least in Rust) just to see traces.
I wrote a simple lib app that adds two u32s:
use std::ops::Add;

pub fn add(f: u32, s: u32) -> u32 {
    let span = tracing::info_span!("Add function", ?f, ?s);
    let _guard = span.enter();
    tracing::info!("Info event");
    f.add(s)
}
And then I'm using the lib in my binary app:
use TracedLibrary::add;
use tracing_opentelemetry::OpenTelemetryLayer;
use tracing_subscriber::util::SubscriberInitExt;
use opentelemetry::{global, sdk::propagation::TraceContextPropagator};
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::Registry;

fn main() {
    setup_global_subscriber();
    let sum = add::add(1, 2);
}

fn setup_global_subscriber() {
    global::set_text_map_propagator(TraceContextPropagator::new());
    let (tracer, _uninstall) = opentelemetry_jaeger::new_pipeline()
        .with_service_name("trace_demo_2")
        .install()
        .expect("Error initializing Jaeger exporter");
    let telemetry = tracing_opentelemetry::layer().with_tracer(tracer);
    Registry::default().with(telemetry).init();
}
The most confusing part is my app's Cargo.toml, which looks like:
tracing-subscriber = { version = "0.2.15" }
tracing-opentelemetry = { version = "0.11.0" }
opentelemetry = "0.12.0"
opentelemetry-jaeger = { version = "0.11.0" }
What on earth are those different crates for? The only crate that makes sense to me is opentelemetry-jaeger. Are the others even required?
And to my main question: I'm running Jaeger's all-in-one docker container. But when I visit http://localhost:16686, I see no traces.
Does anyone know what's happening?
Turns out that when I create the Jaeger pipeline in setup_global_subscriber(), the _uninstall guard being returned gets dropped at the end of the function.
And when it gets dropped, the collector shuts down.
To get traces, I had to move the contents of setup_global_subscriber() into main().
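In other words, the guard has to outlive the spans you want exported. A sketch of the fix, assuming the same opentelemetry-jaeger 0.11 API and imports as the question:

```rust
fn main() {
    global::set_text_map_propagator(TraceContextPropagator::new());
    // bind the guard in main so it is not dropped until the program ends
    let (tracer, _uninstall) = opentelemetry_jaeger::new_pipeline()
        .with_service_name("trace_demo_2")
        .install()
        .expect("Error initializing Jaeger exporter");
    let telemetry = tracing_opentelemetry::layer().with_tracer(tracer);
    Registry::default().with(telemetry).init();

    // _uninstall now lives until the end of main, so the exporter stays up
    // and pending spans are flushed when it finally drops
    let _sum = add::add(1, 2);
}
```

Any setup function that returns such a guard should hand it back to the caller instead of letting it fall out of scope.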
I think this is what you need:
tracing::subscriber::set_global_default(subscriber);
This sets a global Dispatch whose inner value is your subscriber.

How to inject an immutable object into service code in actix-web?

I'm using the actix framework to build a server that should support showing both age and balance to a user, given a user_id:
fn show_balance(req: &HttpRequest) -> HttpResponse {
    let client = create_client();
    let user_id = req.match_info().get("user_id").unwrap();
    let balance = client.load_grade(user_id); // Returns a balance as a String
    HttpResponse::Ok()
        .content_type("text/plain")
        .body(format!("Hello! Your balance is {}", balance))
}

fn show_age(req: &HttpRequest) -> HttpResponse {
    let client = create_client();
    let user_id = req.match_info().get("user_id").unwrap();
    let age = client.load_grade(user_id); // Returns an age as a String
    HttpResponse::Ok()
        .content_type("text/plain")
        .body(format!("Hello! Your age is {}", age))
}

fn main() {
    env::set_var("RUST_LOG", "actix_web=debug");
    env::set_var("RUST_BACKTRACE", "1");
    env_logger::init();
    let sys = actix::System::new("basic-example");
    let addr = server::new(|| {
        App::new()
            // enable logger
            .middleware(middleware::Logger::default())
            .resource("/balance/{user_id}", |r| r.method(Method::GET).f(show_balance))
            .resource("/age/{user_id}", |r| r.method(Method::GET).f(show_age))
    })
    .bind("127.0.0.1:8080")
    .expect("Can not bind to 127.0.0.1:8080")
    .start();
    println!("Starting http server: 127.0.0.1:8080");
    let _ = sys.run();
}

fn create_client() -> UserDataClient {
    let enviroment = grpcio::EnvBuilder::new().build();
    let channel = grpcio::ChannelBuilder::new(enviroment)
        .connect(API_URL);
    UserDataClient::new(channel)
}
This code works, but my concern is that I have to create a client (and open a channel) for every incoming request, which is inefficient and unreadable. I think it's a good idea to make a singleton of sorts instead (since I can reuse it). I looked through the examples folder and found that the todo example is kind of similar to what I'm doing. So I found the following two options to inject my client object (after I create a single instance of it in main()):
Put it in an app state
Inject it as a middleware (1, 2)?
What's the best/correct one to implement?
I thought about just passing a client object to every handler as an argument, but I didn't manage to make it work (and it doesn't look good anyway).
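For reference, the app-state option from the question's era of actix-web (0.7) looks roughly like this. A sketch only: `AppState` is a made-up name, and note that the app factory closure runs once per worker thread, so you get one client per worker rather than a true global singleton:

```rust
// sketch, assuming actix-web 0.7's typed-state API
struct AppState {
    client: UserDataClient,
}

fn show_balance(req: &HttpRequest<AppState>) -> HttpResponse {
    // the client is created once per worker and reused across requests
    let client = &req.state().client;
    let user_id = req.match_info().get("user_id").unwrap();
    let balance = client.load_grade(user_id);
    HttpResponse::Ok()
        .content_type("text/plain")
        .body(format!("Hello! Your balance is {}", balance))
}

fn main() {
    server::new(|| {
        App::with_state(AppState { client: create_client() })
            .resource("/balance/{user_id}", |r| r.method(Method::GET).f(show_balance))
    })
    .bind("127.0.0.1:8080")
    .expect("Can not bind to 127.0.0.1:8080")
    .start();
    // system setup and sys.run() as in the question
}
```

If a single shared instance is required (rather than one per worker), wrapping the client in an Arc and cloning it into the closure is the usual approach.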
