Periodic tasks in Rust Rocket

I'm trying to build a REST API with Rocket that allows the caller to configure times at which they will be unavailable. Depending on the request, the unavailability either has to start immediately, or I have to store it in a database for later execution.
I can implement all this with a normal setup of Rocket, rocket_sync_db_pools and Diesel:
#[database("postgres_diesel")]
pub struct DbConn(diesel::PgConnection);

#[rocket::main]
async fn main() {
    rocket::build()
        .attach(DbConn::fairing())
        .mount("/v1", routes![post_block])
        .launch()
        .await
        .expect("could not launch rocket");
}
Within the handler function for the incoming requests I can get the database connection:
#[post("/block", data = "<new_block>")]
pub async fn post_block(
    new_block: Json<NewBlock>,
    conn: DbConn,
) -> Result<Json<Block>, BadRequest<&'static str>> {
    Ok(Json(conn.run(move |c| configure_block(new_block.0, c)).await))
}
Within the implementation of configure_block() I either execute the configuration immediately or write it as a job to the database.
But how can I execute the configuration changes written to the database at a later moment? I need something like a cron job within Rocket, in which I can check the database for pending jobs.
I tried to do it using tokio::time::interval, which will run my task repeatedly. But I cannot get access to my DbPool within this tokio timer. I cannot use DbPool::get_one(&Rocket) as I don't have access to my rocket instance there.
So I wonder: how can I run a repeated task within rocket?
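One possible pattern (a sketch, not tested against a real database): attach an AdHoc::on_liftoff fairing, obtain a connection handle via DbConn::get_one() while the Rocket instance is still in scope, and move it into a spawned tokio task that ticks on an interval. The function check_pending_jobs is a hypothetical stand-in for your job-executing logic, and the 60-second period is an arbitrary choice.

```rust
use rocket::fairing::AdHoc;
use rocket::tokio;
use std::time::Duration;

#[rocket::main]
async fn main() {
    rocket::build()
        .attach(DbConn::fairing())
        .attach(AdHoc::on_liftoff("Job runner", |rocket| {
            Box::pin(async move {
                // Grab an owned handle to the pool while `rocket` is in scope,
                // so the spawned task no longer needs the Rocket instance.
                let conn = DbConn::get_one(rocket)
                    .await
                    .expect("database connection for job runner");
                tokio::spawn(async move {
                    let mut interval = tokio::time::interval(Duration::from_secs(60));
                    loop {
                        interval.tick().await;
                        // `check_pending_jobs` is a hypothetical function that
                        // looks up due jobs and executes them.
                        conn.run(|c| check_pending_jobs(c)).await;
                    }
                });
            })
        }))
        .mount("/v1", routes![post_block])
        .launch()
        .await
        .expect("could not launch rocket");
}
```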

Related

In a Rust Tonic gRPC server, how to close the network connection after receiving a malicious request?

Rust Tonic generates the following interface for a simple "hello-world" application:
pub trait HelloworldService: Send + Sync + 'static {
    async fn sayhello(
        &self,
        request: tonic::Request<super::UserInput>,
    ) -> Result<tonic::Response<super::UserInputResponse>, tonic::Status>;
}
After implementing function sayhello and starting a tonic server, everything works as expected.
My question is:
If I check the input UserInput object and decide that the current user input is malicious (say, it contains an empty security token), I'd like to immediately close the network connection without sending any response (not even an error message/code) to the client side. How can I do that?
Tonic doesn't seem to have an API to access the underlying network connection, so I had to dig all the way down to hyper's HTTP/2 server (hyper/src/proto/h2/server.rs). In the implementation of struct H2Stream (impl<F, B, E> H2Stream<F, B> where ...), there is a function poll2(), which is called to poll the state of the H2Stream. That function contains the following piece of code, which inserts the current time into the response header before sending it over to the client:
// set Date header if it isn't already set...
res.headers_mut()
    .entry(::http::header::DATE)
    .or_insert_with(date::update_and_header_value);
This leads to the following idea: add a flag to the response returned from HelloworldService::sayhello() when a malicious request is detected, then check that flag in H2Stream::poll2(). If the flag is present, return from H2Stream::poll2() immediately with return Poll::Ready(Ok(())); this drops the stream and hence the connection.
HTTP response Extensions are used to carry that flag. It is set by sayhello() in the Extensions field of tonic::Response, and that field is carried over to the Extensions field of http::response::Response, which is then checked by H2Stream::poll2().
This apparent hack seems to work. But is there a better idea?
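For reference, the flag-setting half of that hack might look roughly like this. This is a sketch: DropConnection is a marker type invented here, MyService is a placeholder implementor, and it assumes tonic's Response::extensions_mut() API and a prost-generated UserInputResponse that implements Default.

```rust
// Hypothetical marker type carried in the response extensions; the patched
// H2Stream::poll2() would check for its presence and drop the stream.
#[derive(Clone, Copy)]
struct DropConnection;

struct MyService;

#[tonic::async_trait]
impl HelloworldService for MyService {
    async fn sayhello(
        &self,
        request: tonic::Request<UserInput>,
    ) -> Result<tonic::Response<UserInputResponse>, tonic::Status> {
        let input = request.into_inner();
        if input.security_token.is_empty() {
            // Malicious request: tag an otherwise empty response with the
            // marker so the modified hyper code closes the connection
            // instead of sending anything back.
            let mut resp = tonic::Response::new(UserInputResponse::default());
            resp.extensions_mut().insert(DropConnection);
            return Ok(resp);
        }
        // Normal path.
        Ok(tonic::Response::new(UserInputResponse::default()))
    }
}
```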

Making channels constructed in a function last the lifetime of the program in Rust

I'm trying to write a basic multithreaded application using gtk3-rs, where the main thread sends a message to a child thread when a button is clicked, and the child thread sends a message back in response, after doing some calculations, the results of which are displayed by the main thread in a dialog box.
This seems simple enough conceptually, but I'm running into a problem: the channels I create in the callback passed to gtk::Application::connect_activate (which builds the user interface) are getting closed before the child thread (also created in that callback, then detached) can use them even once, let alone continually throughout the life of the application, as I intended.
These are glib channels on the MainContext, not MPSC channels, so instead of busy-waiting for input as with normal channels, I was able to attach a listener to both receivers: one in the main thread (attached in the UI builder callback) and one in the spawned thread. But apparently that's not enough to keep the channels alive, because when I try to send a message to the thread's channel, it errors out saying that the channel is closed.
So the basic structure of my code is like this:
fn connect_events(/* event box, channel a */) {
    event_box.connect_button_release_event(move |_, _| {
        a.send("foo").unwrap();
    });
}

fn build_ui(app: &gtk::Application) {
    let (a, b) = glib::MainContext::channel(glib::PRIORITY_DEFAULT);
    let (c, d) = glib::MainContext::channel(glib::PRIORITY_DEFAULT);
    let event_box = /* GTK event box to capture events */;
    connect_events(&event_box, a.clone());
    thread::spawn(move || {
        b.attach(/* handle receiving a message from the main thread by sending a message back on c */);
    });
    d.attach(/* pop up a dialog box with whatever was sent back */);
}

fn main() {
    let application = gtk::Application::new(
        Some("com.example.aaaaaaaa"),
        Default::default(),
    );
    application.connect_activate(build_ui);
    application.run();
}
fn main() {
let application = gtk::Application::new(
Some("com.example.aaaaaaaa"),
Default::default(),
);
application.connect_activate(build_ui);
application.run();
}
So, how do I convince Rust to keep the channels alive? I tried doing some lazy_static magic and using .leak(), but neither of those seemed to work, and moving all of this code out of the UI builder is unfortunately not an option.
My pragmatic answer is: Don't use glib channels.
I'm using async Rust channels for things like this. In your case, a oneshot channel could be useful. Many crates provide async channels, for example async-std or tokio.
You can spawn a function via glib::MainContext::default().spawn_local() that .awaits the message(s) from the channel and shows the dialog there.
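A sketch of that suggestion, restructuring the question's build_ui (assumptions: the async-std channel API, including the blocking send/recv helpers re-exported from async-channel, and glib's spawn_local; all names are illustrative):

```rust
use async_std::channel;

fn build_ui(app: &gtk::Application) {
    // Async channels stay open as long as a sender and receiver exist; here
    // the receivers are owned by the worker thread and the spawned future.
    let (to_worker, from_main) = channel::unbounded::<&'static str>();
    let (to_main, from_worker) = channel::unbounded::<String>();

    let event_box = /* GTK event box to capture events */;
    connect_events(&event_box, to_worker.clone());

    // Worker thread: do the calculations off the main thread.
    std::thread::spawn(move || {
        while let Ok(msg) = from_main.recv_blocking() {
            let result = format!("handled {msg}");
            let _ = to_main.send_blocking(result);
        }
    });

    // On the glib main loop: await results and show the dialog.
    glib::MainContext::default().spawn_local(async move {
        while let Ok(result) = from_worker.recv().await {
            /* pop up a dialog box with `result` */
        }
    });
}
```

Note that inside connect_events the GTK callback is synchronous, so it would use to_worker.send_blocking("foo") (or try_send) rather than the async send.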

Tokio thread is not starting / spawning

I'm trying to start a new task to read from a socket client. I'm using the same method, shown below, on both the websocket server and the client to receive from the connection.
The problem is that on the server side the task is started (both log lines are printed), but on the client side the task does not start (only the first line is printed).
If I await the spawn(), I can receive from the client, but then the parent task cannot proceed.
Any pointers for solving this problem?
pub async fn receive_message_from_peer(
    mut receiver: PeerReceiver,
    sender: Sender<IoEvent>,
    peer_index: u64,
) {
    debug!("starting new task for reading from peer : {:?}", peer_index);
    tokio::task::spawn(async move {
        debug!("new thread started for peer receiving");
        // ....
    }); // not awaiting or join!()
}
Do you use tokio::TcpListener::from_std(...) to create the listener object? I had the same problem as you: my std_listener object was created based on net2, so there was a scheduling incompatibility problem. From the description in the newer official documentation, https://docs.rs/tokio/latest/tokio/net/struct.TcpListener.html#method.from_std, it seems that tokio currently has better support for socket2.
So I think the issue was that I was using std::thread::sleep() in async code in some places. After switching to tokio::time::sleep(), I didn't need to yield the thread.
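A minimal sketch of the difference described above (assuming a single-threaded tokio runtime): std::thread::sleep() blocks the whole worker thread, so other tasks scheduled on it cannot make progress, while tokio::time::sleep() only suspends the current task.

```rust
use std::time::Duration;

#[tokio::main(flavor = "current_thread")]
async fn main() {
    tokio::spawn(async {
        println!("spawned task runs");
    });

    // BAD: this would block the only worker thread, so the spawned task
    // above could not run until the sleep finishes:
    // std::thread::sleep(Duration::from_secs(1));

    // GOOD: yields to the runtime, letting other tasks make progress
    // while this task is suspended.
    tokio::time::sleep(Duration::from_secs(1)).await;
}
```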

Not able to debug inside checkpoint_set.try_for_each in Opentelemetry Rust example

I was looking into Dynatrace's example, which exports metrics using OpenTelemetry:
https://github.com/open-telemetry/opentelemetry-rust/blob/main/opentelemetry-dynatrace/src/metric.rs
fn export(&self, checkpoint_set: &mut dyn CheckpointSet) -> Result<()> {
    // I am able to put console logs up to here
    checkpoint_set.try_for_each(self.export_kind_selector.as_ref(), &mut |record| {
        ..................
        // not able to put console logs inside this block
    })
}
The try_for_each method seems to be working in a separate thread. In order to debug, how can I put console logs inside that particular block?
Also, I'm unsure whether the try_for_each loop is running at all, as there is no change in the data before and after the loop. Am I supposed to call it explicitly?

How to get the current thread (worker) in Rust Rocket

I am using Rust Rocket (rocket = { version = "0.5.0-rc.1", features = ["json"] }) as a web server. I am facing a problem when requests come in quickly: some requests may time out. My server-side code looks like this:
#[post("/v1", data = "<record>")]
pub fn word_search(
    record: Json<WordSearchRequest>,
    login_user_info: LoginUserInfo,
) -> content::Json<String> {
    // some logic to fetch data from database
}
I was wondering why the requests timed out, so I want to print the server-side thread and the request handling time. Is it possible to get the current thread id in Rust Rocket? I seriously suspect the server has only one thread.
I finally found from the log output that the server had only one worker; adding more workers in the Rocket.toml config file fixed the timeout problem:
[release]
workers = 12
log_level = "normal"
keep_alive = 5
port = 8000
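As for the thread id itself: it comes from the standard library, not from Rocket, so inside any handler you can log std::thread::current().id(). A minimal std-only sketch of the idea (the handler context is implied, not shown):

```rust
use std::thread;

// Returns a printable label for the thread executing this call.
fn current_thread_label() -> String {
    format!("{:?}", thread::current().id())
}

fn main() {
    // Inside a Rocket handler you could log this the same way to see
    // which worker handled a given request.
    println!("handling request on {}", current_thread_label());

    // A different OS thread reports a different id.
    let from_other = thread::spawn(current_thread_label).join().unwrap();
    println!("spawned thread was on {}", from_other);
}
```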
