I have the code below, which is repeated in many places in my application.
The only thing that differs on a per-command basis is the if let Role { whatever } = role line:
#[command]
pub async fn protect(ctx: &Context, msg: &Message, mut args: Args) -> CommandResult {
let target_tag: String = args.single()?;
let (user_id, guild_id) = msg.get_ids();
let (target_id, _) = LupusCtxHelper::parse_tag_to_target_id(ctx, Tag(target_tag))
.await
.ok_or(MyError)?;
let player = {
let dt = ctx.data.read().await;
dt.get_player(&guild_id, &user_id).await
};
if let Some(p) = player {
// this line is the problem
if let LupusRole::BODYGUARD { .. } = *p.current_role() {
LupusCtxHelper::send_lupus_command(ctx, msg, LupusAction::Protect(target_id)).await?
} else {
msg.channel_id
.say(&ctx.http, "check your role dude")
.await?;
}
}
Ok(())
}
How would you suggest I go about cleaning up this code and refactoring it into an external function? It seems like I can't pattern match dynamically on the left-hand side. (A rough sketch of the kind of helper I have in mind follows the second example below.)
PS: be aware that the enum LupusRole has struct-like variants.
Example of another file:
#[command]
pub async fn frame(ctx: &Context, msg: &Message, mut args: Args) -> CommandResult {
let target_tag: String = args.single()?;
let (user_id, guild_id) = msg.get_ids();
let (target_id, _) = LupusCtxHelper::parse_tag_to_target_id(ctx, Tag(target_tag))
.await
.ok_or(MyError)?;
let player = {
let dt = ctx.data.read().await;
dt.get_player(&guild_id, &user_id).await
};
if let Some(p) = player {
if let LupusRole::GUFO = *p.current_role() {
LupusCtxHelper::send_lupus_command(ctx, msg, LupusAction::Frame(target_id)).await?
} else {
msg.channel_id
.say(&ctx.http, "fra... ruolo sbagliato")
.await?;
}
}
Ok(())
}
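For reference, this is a rough sketch of the kind of helper I have in mind (UserId is only a guess for the type returned by parse_tag_to_target_id, and the parameter names are made up):

// Hypothetical shared helper: everything that used to differ per command is now a parameter.
async fn run_role_command(
    ctx: &Context,
    msg: &Message,
    mut args: Args,
    action: fn(UserId) -> LupusAction,      // e.g. LupusAction::Protect
    role_matches: fn(&LupusRole) -> bool,   // e.g. |r| matches!(r, LupusRole::BODYGUARD { .. })
    wrong_role_msg: &str,
) -> CommandResult {
    let target_tag: String = args.single()?;
    let (user_id, guild_id) = msg.get_ids();
    let (target_id, _) = LupusCtxHelper::parse_tag_to_target_id(ctx, Tag(target_tag))
        .await
        .ok_or(MyError)?;
    let player = {
        let dt = ctx.data.read().await;
        dt.get_player(&guild_id, &user_id).await
    };
    if let Some(p) = player {
        if role_matches(p.current_role()) {
            LupusCtxHelper::send_lupus_command(ctx, msg, action(target_id)).await?
        } else {
            msg.channel_id.say(&ctx.http, wrong_role_msg).await?;
        }
    }
    Ok(())
}

// Each command would then shrink to a thin wrapper:
#[command]
pub async fn protect(ctx: &Context, msg: &Message, args: Args) -> CommandResult {
    run_role_command(
        ctx,
        msg,
        args,
        LupusAction::Protect,
        |r| matches!(r, LupusRole::BODYGUARD { .. }),
        "check your role dude",
    )
    .await
}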
The test code below considers a situation in which there are three different threads.
Each thread has to do certain asynchronous tasks that may take some time to finish.
This is "simulated" in the code below with a sleep.
On top of that, two of the threads collect information that they have to send to the third one for further processing. This is done using mpsc channels.
Because some of the information comes from outside the Rust application and is out of our control, the threads may get interrupted. This is emulated by generating a random number, and the loop in each thread breaks when a particular value comes up.
What I'm trying to achieve is a system in which, whenever one of the threads hits an error (simulated with the random number being 9), every other thread is cancelled too.
use std::sync::mpsc::channel;
use std::sync::mpsc::{Sender, Receiver, TryRecvError};
use std::thread::sleep;
use tokio::time::Duration;
use rand::distributions::{Uniform, Distribution};
#[tokio::main]
async fn main() {
execution_cycle().await;
}
async fn execution_cycle() {
let (tx_first, rx_first) = channel::<Message>();
let (tx_second, rx_second) = channel::<Message>();
let handle_sender_first = tokio::spawn(sender_thread(tx_first));
let handle_sender_second = tokio::spawn(sender_thread(tx_second));
let handle_receiver = tokio::spawn(receiver_thread(rx_first, rx_second));
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..10);
let mut cancel_from_cycle = rng_generator.sample(&mut thread_rng);
while !handle_sender_first.is_finished() && !handle_sender_second.is_finished() && !handle_receiver.is_finished() {
cancel_from_cycle = rng_generator.sample(&mut thread_rng);
if cancel_from_cycle == 9 {
println!("Aborting from the execution cycle.");
handle_receiver.abort();
handle_sender_first.abort();
handle_sender_second.abort();
}
}
if handle_sender_first.is_finished() {
println!("handle_sender_first finished.");
} else {
println!("handle_sender_first ongoing.");
}
if handle_sender_second.is_finished() {
println!("handle_sender_second finished.");
} else {
println!("handle_sender_second ongoing.");
}
if handle_receiver.is_finished() {
println!("handle_receiver finished.");
} else {
println!("handle_receiver ongoing.");
}
}
async fn sender_thread(tx: Sender<Message>) {
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..20);
let mut random_id = rng_generator.sample(&mut thread_rng);
while random_id != 9 {
let msg = Message {
id: random_id,
text: "hello".to_owned()
};
println!("Sending message {}.", msg.id);
random_id = rng_generator.sample(&mut thread_rng);
println!("Generated id {}.", random_id);
let result = tx.send(msg);
match result {
Ok(_) => {},
Err(error) => {
println!("Sending error {:?}", error);
random_id = 9;
}
}
sleep(Duration::from_millis(2000));
}
}
async fn receiver_thread(rx_first: Receiver<Message>, rx_second: Receiver<Message>) {
let mut channel_open_first = true;
let mut channel_open_second = true;
let mut thread_rng = rand::thread_rng();
let rng_generator = Uniform::from(1..15);
let mut random_event = rng_generator.sample(&mut thread_rng);
while channel_open_first && channel_open_second && random_event != 9 {
channel_open_first = receiver_inner(&rx_first);
channel_open_second = receiver_inner(&rx_second);
random_event = rng_generator.sample(&mut thread_rng);
println!("Generated event {}.", random_event);
sleep(Duration::from_millis(800));
}
}
fn receiver_inner(rx: &Receiver<Message>) -> bool {
let value = rx.try_recv();
match value {
Ok(msg) => {
println!("Message {} received: {}", msg.id, msg.text);
},
Err(error) => {
if error != TryRecvError::Empty {
println!("{}", error);
return false;
} else { /* Channel is empty.*/ }
}
}
return true;
}
struct Message {
id: usize,
text: String,
}
The working example above does exactly that, but only from inside the threads themselves. I would like to add a "kill switch" in the execution_cycle() method that cancels all three threads when a certain event takes place (the random number cancel_from_cycle == 9), in the simplest way possible... I tried drop(handle_sender), and also panic!() from execution_cycle(), but the spawned tasks keep running, preventing the application from finishing. I also tried handle_receiver.abort() without success.
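For context, in isolation the kind of cancellation I'm after looks like the sketch below (dummy tasks only, not my real code; note that these tasks await tokio::time::sleep, so abort() has an await point at which it can take effect, whereas my real code uses the blocking std::thread::sleep and std::sync::mpsc, which may be part of the problem):

use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() {
    let task_a = tokio::spawn(async {
        loop {
            // simulated asynchronous work; tokio's sleep yields back to the runtime
            sleep(Duration::from_millis(500)).await;
            println!("task A tick");
        }
    });
    let task_b = tokio::spawn(async {
        loop {
            sleep(Duration::from_millis(500)).await;
            println!("task B tick");
        }
    });

    // the "kill switch": once some condition is met, abort every task
    sleep(Duration::from_secs(2)).await;
    task_a.abort();
    task_b.abort();

    // awaiting an aborted task returns a JoinError whose is_cancelled() is true
    assert!(task_a.await.unwrap_err().is_cancelled());
    assert!(task_b.await.unwrap_err().is_cancelled());
    println!("all tasks cancelled");
}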
How can I achieve the desired result?
I send an HTTP GET request to a server and receive a response:
let resp = reqwest::blocking::get(req)?.text()?;
resp holds a String like this:
<?xml version=\"1.0\" encoding=\"UTF-8\">\n<Document xmlns=...
<datetime>202207102300</datetime>\n\t\t\t\t\t\t<value>320.08</value>\n\t\t\t\t\t<datetime>202207110000</datetime>\n\t\t\t\t\t\t<value>278.00</value>
...</Document>
What is the best way to get this text parsed into a vector containing tuple elements, as follows:
let mut tuple_items: (chrono::DateTime, f32)
This is the code I created with the quick_xml crate:
use chrono::NaiveDateTime;
use quick_xml::events::Event;
use quick_xml::Reader;
pub struct DatetimeValue {
pub dt: NaiveDateTime,
pub val: f32,
}
pub fn parse_xml_string(&self, xml_string: String) -> Vec<DatetimeValue> {
let mut response_vector: Vec<DatetimeValue> = vec![];
let mut reader = Reader::from_str(&xml_string[..]);
reader.trim_text(true);
let mut val_flag = false;
let mut dt_flag = false;
let mut buf = Vec::new();
let mut count = 0;
let mut actual_dt: Option<NaiveDateTime> = None; // Option so it is provably initialized before use
loop {
match reader.read_event(&mut buf) {
Ok(Event::Start(ref e)) => {
if let b"value" = e.name() { val_flag = true }
else if let b"datetime" = e.name() { dt_flag = true }
}
Ok(Event::Text(e)) => {
if dt_flag {
actual_dt = Some(NaiveDateTime::parse_from_str(
&e.unescape_and_decode(&reader).unwrap(),
"%Y%m%d%H%M").unwrap());
dt_flag = false;
}
else if val_flag {
response_vector.push(DatetimeValue {
dt: actual_dt.expect("<datetime> should appear before <value>"),
val: e
.unescape_and_decode(&reader)
.unwrap()
.parse::<f32>()
.unwrap(),
});
val_flag = false;
}
}
Ok(Event::Eof) => break,
Err(e) => panic!("Error at position {}: {:?}", reader.buffer_position(), e),
_ => (),
}
buf.clear();
}
response_vector
}
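For completeness, turning the Vec<DatetimeValue> that this function returns into the tuple form mentioned above would just be a map (minimal sketch; to_tuples is a hypothetical helper name):

// flatten the parsed structs into (NaiveDateTime, f32) tuples
fn to_tuples(items: Vec<DatetimeValue>) -> Vec<(chrono::NaiveDateTime, f32)> {
    items.into_iter().map(|item| (item.dt, item.val)).collect()
}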
I am using the Redis stream functionality of the redis crate in actix-web 4, and I want to create the consumer in the main function. This is my current code:
[dependencies]
actix-web = "4"
tokio = { version = "1", features = ["full"] }
redis = { version = "0.21", features = [
# "cluster",
"tokio-comp",
"tokio-native-tls-comp",
] }
#[actix_web::main]
async fn main() -> std::io::Result<()> {
utils::init::init_envfile();
env_logger::init_from_env(env_logger::Env::new());
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
let redis_stream_consumer = web::block(redis_stream_group);
HttpServer::new(move || {
App::new()
.app_data(web::Data::new(redis_pool.clone()))
.app_data(web::Data::new(mysql_pool.clone()))
.service(web::scope("/api").configure(controller::api::config))
})
.bind(("0.0.0.0", 7777))?
.run()
.await?;
redis_stream_consumer.await.unwrap();
Ok(())
}
fn redis_stream_group() {
let client = redis::Client::open("redis://127.0.0.1/").expect("client");
let mut con = client.get_connection().expect("con");
let key = "s.order";
let group_name = "g1";
let consumer_name = "c1";
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$");
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(0);
loop {
let read_reply: StreamReadReply =
con.xread_options(&[key], &[">"], &opts).expect("read err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!("id:{} | key:{} | data:{:?}", id, key, map);
}
let id_strs: Vec<&String> = ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con.xack(key, group_name, &id_strs).expect("ack err");
}
}
}
When I use cargo r, I can run the program normally and receive the sent messages, but when I press ctrl+c, I can't exit the program.
Also, I'm not sure whether using web::block in the main function is correct, or whether there is a better way to run this background work.
UPDATE: I tried using tokio::spawn, and it seems to work:
#[actix_web::main]
async fn main() -> std::io::Result<()> {
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
for consumer_index in 1..=2 {
let c_redis_pool = redis_pool.clone();
tokio::spawn(async move {
let mut con = c_redis_pool.get().await.unwrap();
let key = "s.order";
let group_name = "g1";
let consumer_name = &format!("c{consumer_index}");
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$").await;
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(5000);
loop {
let read_reply: StreamReadReply = con
.xread_options(&[key], &[">"], &opts)
.await
.expect("err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!(
"consumer: {} | id:{} | key:{} | data:{:?}",
consumer_name,
id,
key,
map
);
}
let id_strs: Vec<&String> =
ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con
.xack(key, group_name, &id_strs)
.await
.expect("ack err");
}
}
});
}
let serve = HttpServer::new(move || {
...
}
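For reference, the shutdown order I'm aiming for with the tokio::spawn approach looks roughly like the sketch below (the Redis consumer is replaced by a dummy loop, so this is not my real code):

use actix_web::{App, HttpServer};
use tokio::time::{sleep, Duration};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // stand-in for the Redis stream consumer loop above
    let consumer = tokio::spawn(async {
        loop {
            sleep(Duration::from_secs(1)).await;
            println!("consumer tick");
        }
    });

    HttpServer::new(|| App::new())
        .bind(("0.0.0.0", 7777))?
        .run()
        .await?;

    // run() has returned (e.g. after ctrl+c), so stop the background task as well
    consumer.abort();
    Ok(())
}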
This can be done with the standard library by using std::thread: you create the thread and put whatever you want the other thread to do in a closure.
use std::thread;

fn main() {
thread::spawn(|| {
println!("doing things in the thread!");
});
println!("doing things outside the thread.... how boring");
}
If you want to pass data between the threads, you can use std::sync::mpsc to transfer it safely; create a channel with let (sender, receiver) = mpsc::channel(), like so:
use std::sync::mpsc;
use std::thread;

fn main() {
let (sender,receiver) = mpsc::channel();
thread::spawn(move || {
let message = String::from("This message is from the thread");
sender.send(message).unwrap();
});
let letter = receiver.recv().unwrap();
println!("{}", letter);
}
Note that the main thread proceeds as normal until it reaches the .recv() call, at which point it either receives the data from the other thread or waits until that thread is done.
In your example you could do something like:
use std::sync::mpsc;
use std::thread;
#[actix_web::main]
async fn main() -> std::io::Result<()> {
utils::init::init_envfile();
env_logger::init_from_env(env_logger::Env::new());
let port = get_env!("ACTIX_PORT", u16);
log::info!(
"starting HTTP server at http://{}:{}",
local_ipaddress::get().unwrap_or("localhost".to_string()),
port
);
let redis_pool = utils::init::init_redis_pool();
let mysql_pool = utils::init::init_mysql_pool();
let (consumer_sender,consumer_listener) = mpsc::channel();
thread::spawn(move || {
consumer_sender.send(redis_stream_group()).expect("You probably want to handle this case, but I'm too lazy");
});
let serve = HttpServer::new(move || {
let app_state = utils::init::AppState {
app_name: get_env!("APP_NAME", String),
pwd_secret: get_env!("PWD_SECRET", String),
jwt_secret: get_env!("JWT_SECRET", String),
jwt_exp: get_env!("JWT_EXP", i64),
};
App::new()
.app_data(web::Data::new(awc::Client::default()))
.app_data(web::Data::new(app_state))
.app_data(web::Data::new(redis_pool.clone()))
.app_data(web::Data::new(mysql_pool.clone()))
.wrap(actix_cors::Cors::default().allowed_origin_fn(|_, _| true))
.service(web::scope("/chat").configure(controller::chat::config))
.service(web::scope("/ws").configure(controller::ws::config))
.service(web::scope("/api").configure(controller::api::config))
});
if cfg!(debug_assertions) {
serve.bind(("0.0.0.0", port))?
} else {
let p = format!("/tmp/{}.socket", get_env!("APP_NAME", String));
let r = serve.bind_uds(&p)?;
let mut perms = std::fs::metadata(&p)?.permissions();
perms.set_readonly(false);
std::fs::set_permissions(&p, perms)?;
r
}
.run()
.await?;
let consumer = consumer_listener.recv().unwrap();
//then put things to do with the consumer here, or not idc
Ok(())
}
fn redis_stream_group() {
let client = redis::Client::open("redis://127.0.0.1/").expect("client");
let mut con = client.get_connection().expect("con");
let key = "s.order";
let group_name = "g1";
let consumer_name = "c1";
let _: Result<(), _> = con.xgroup_create_mkstream(key, group_name, "$");
let opts = StreamReadOptions::default()
.group(group_name, consumer_name)
.count(1)
.block(0);
loop {
let read_reply: StreamReadReply =
con.xread_options(&[key], &[">"], &opts).expect("read err");
for StreamKey { key, ids } in read_reply.keys {
for StreamId { id, map } in &ids {
log::info!("id:{} | key:{} | data:{:?}", id, key, map);
}
let id_strs: Vec<&String> = ids.iter().map(|StreamId { id, map: _ }| id).collect();
let _: usize = con.xack(key, group_name, &id_strs).expect("ack err");
}
}
}
I am trying to write a derive macro that gives me a method returning a Vec with the names of the fields of a struct.
Code snippet below:
// Don't forget about the dependencies if you try to reproduce this locally
use proc_macro2::{Span, Ident};
use quote::quote;
use syn::{
punctuated::Punctuated, token::Comma, Attribute, DeriveInput, Fields, Meta, NestedMeta,
Variant, Visibility,
};
#[proc_macro_derive(StructFieldNames, attributes(struct_field_names))]
pub fn derive_field_names(input: proc_macro::TokenStream) -> proc_macro::TokenStream {
let ast: DeriveInput = syn::parse(input).unwrap();
let (vis, ty, generics) = (&ast.vis, &ast.ident, &ast.generics);
let names_struct_ident = Ident::new(&(ty.to_string() + "FieldStaticStr"), Span::call_site());
let fields = filter_fields(match ast.data {
syn::Data::Struct(ref s) => &s.fields,
_ => panic!("FieldNames can only be derived for structs"),
});
let names_struct_fields = fields.iter().map(|(vis, ident)| {
quote! {
#vis #ident: &'static str
}
});
let mut vec_fields: Vec<String> = Vec::new();
let names_const_fields = fields.iter().map(|(_vis, ident)| {
let ident_name = ident.to_string();
vec_fields.push(ident_name);
quote! {
#vis #ident: -
}
});
let names_const_fields_as_vec = fields.iter().map(|(_vis, ident)| {
let ident_name = ident.to_string();
// vec_fields.push(ident_name)
});
let (impl_generics, ty_generics, where_clause) = generics.split_for_impl();
let tokens = quote! {
#[derive(Debug)]
#vis struct #names_struct_ident {
#(#names_struct_fields),*
}
impl #impl_generics #ty #ty_generics
#where_clause
{
#vis fn get_field_names() -> &'static str {
// stringify!(
[ #(#vec_fields),* ]
.map( |s| s.to_string())
.collect()
// )
}
}
};
tokens.into()
}
fn filter_fields(fields: &Fields) -> Vec<(Visibility, Ident)> {
fields
.iter()
.filter_map(|field| {
if field
.attrs
.iter()
.find(|attr| has_skip_attr(attr, "struct_field_names"))
.is_none()
&& field.ident.is_some()
{
let field_vis = field.vis.clone();
let field_ident = field.ident.as_ref().unwrap().clone();
Some((field_vis, field_ident))
} else {
None
}
})
.collect::<Vec<_>>()
}
const ATTR_META_SKIP: &'static str = "skip";
fn has_skip_attr(attr: &Attribute, path: &'static str) -> bool {
if let Ok(Meta::List(meta_list)) = attr.parse_meta() {
if meta_list.path.is_ident(path) {
for nested_item in meta_list.nested.iter() {
if let NestedMeta::Meta(Meta::Path(path)) = nested_item {
if path.is_ident(ATTR_META_SKIP) {
return true;
}
}
}
}
}
false
}
The code is taken from here. Basically, I just want to get those values as Strings rather than accessing them via Foo::FIELD_NAMES.some_random_field, because I need them for another process.
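To make the goal concrete, this is roughly what using the derive should be equivalent to, written out by hand for a hypothetical struct (the struct and field names are just examples):

// hypothetical example struct
struct Foo {
    bar: i32,
    baz: String,
}

// what I would like the derive to generate for it, written by hand:
impl Foo {
    pub fn get_field_names() -> Vec<String> {
        vec!["bar".to_string(), "baz".to_string()]
    }
}

fn main() {
    assert_eq!(Foo::get_field_names(), vec!["bar", "baz"]);
}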
How can I achieve that?
Thanks
I use nickel.rs:
router.get("/api/movies", middleware! { |request, response|
let mut test_movies =
r#"[
{ "title": "Ironman"},
{ "title": "The Walk"},
{ "title": "Paddington"}
]
"#;
let json = Json::from_str(test_movies);
format!("{}", json.unwrap())
});
I want to produce JSON output. The definitions I use for connecting to PostgreSQL and converting to JSON are below:
extern crate rustc_serialize;
use rustc_serialize::json::{Json, Parser};
#[derive(RustcDecodable, RustcEncodable)]
struct Movie {
title: String,
}
Then I tried to run a SELECT query and build the JSON:
router.get("/api/movies", middleware! { |request, response|
let conn = Connection::connect("postgres://postgres@localhost", SslMode::None).unwrap();
let stmt = match conn.prepare("select title from movie") {
Ok(stmt) => stmt,
Err(e) => {
return response.send(format!("Preparing query failed: {}", e));
}
};
let res = match stmt.execute(&[]) {
Ok(v) => println!("Selecting movie was Success."),
Err(e) => println!("Selecting movie failed. => {:?}", e)
};
// ???
// let movies = Json::from_obj(res);
// let movies = request.json_as::<&[Movie]>().unwrap();
// let movies = request.json_as::Vec<Movie>().unwrap();
format!("{}", movies)
});
However, I have no idea how to convert the result to JSON.
let conn = conn.clone();
gives this error:
error: no method named `clone` found for type `postgres::Connection` in the current scope
I added
use nickel::status::StatusCode;
//use rustc_serialize::json::{Json, Parser};
use rustc_serialize::{json};
json::encode(&movies).unwrap();
That worked, but null was returned...
Finally, I changed execute to query and also used a Vec<Movie>:
let mut v: Vec<Movie> = vec![];
let movies = &conn.query("select title from movie", &[]).unwrap();
for row in movies {
let movie = Movie {
title: row.get(0),
};
v.push(movie);
}
let json_obj = json::encode(&v).unwrap();
response.set(MediaType::Json);
response.set(StatusCode::Ok);
return response.send(json_obj);
I also defined the Movie struct as a model:
struct Movie {
// id: i32,
title: String,
}
Hmm, this is all quite troublesome.
However, I still can't conn.clone().
Try the following:
let json = json::encode(&res).unwrap();
response.set(MediaType::Json);
response.set(StatusCode::Ok);
return response.send(json);
Also, it's not efficient to create a new connection for each request. You can create one connection in the main() function, wrap it in an Arc, and clone the Arc inside each request handler.
fn main(){
let mut server = Nickel::new();
let mut router = Nickel::router();
let conn = Connection::connect("postgres://postgres@localhost", SslMode::None).unwrap();
let shared_connection = Arc::new(conn);
{
let conn = shared_connection.clone();
router.get("/api/movies", middleware! ( |request, mut response|{
let mut v: Vec<Movie> = vec![];
let movies = &conn.query("select title", &[]).unwrap();
for row in movies {
let movie = Movie {
title: row.get(0),
};
v.push(movie);
}
let json_obj = json::encode(&v).unwrap();
response.set(MediaType::Json);
response.set(StatusCode::Ok);
return response.send(json_obj);
}));
}
{
let conn = shared_connection.clone();
router.post("/api/movies",middleware!(|request, mut response|{
//...
}));
}
server.utilize(router);
server.listen("127.0.0.1:6767");
}